Open Virtual Switch

Open vSwitch (OVS) is an open-source multilayer virtual switch designed for virtualized environments, where it forwards traffic between VMs on the same host and between VMs and the physical network. Native OVS forwarding is handled by two major components: a user-space daemon called ovs-vswitchd and a kernel module that accelerates the data path ("fastpath"). The kernel module handles packets received on the NIC by consulting a flow table with corresponding action rules (e.g. to forward the packet or modify its headers). If no matching entry is found in the flow table, the packet is copied to user space and sent to the ovs-vswitchd daemon, which determines how it should be handled ("slowpath"). The packet is then passed back to the kernel module together with the desired action, and the flow table is updated so that subsequent packets in the same flow can be handled in the fastpath without any user-space interaction. In this way, OVS eliminates much of the context switching between kernel space and user space, but throughput is still limited by the capacity of the Linux kernel network stack.
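The fastpath flow cache can be inspected from the command line, which is a convenient way to see this mechanism at work. A minimal sketch, assuming OVS is running with the native kernel datapath (exact output varies between OVS versions):

# Dump the flow entries currently installed in the kernel datapath (the fastpath cache)
ovs-dpctl dump-flows

# Equivalent query through the running ovs-vswitchd daemon
ovs-appctl dpctl/dump-flows

Each entry lists match fields and associated actions; packets that miss in this table are the ones handed to ovs-vswitchd over the slowpath.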
OVS-DPDK

To improve performance, OVS supports integration with the Intel DPDK libraries to operate entirely in user space (OVS-DPDK). DPDK Poll Mode Drivers (PMDs) enable direct transfers of packets between the physical NIC and user space, thereby eliminating the overhead of interrupt handling and Linux kernel network stack processing. OVS-DPDK provides DPDK-backed vhost-user ports as the primary way to connect guests to this datapath. The vhost-user interfaces are transparent to the guest.
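As a rough illustration of how a guest consumes a vhost-user port, the sketch below shows QEMU options that attach a vhost-user socket to a VM as a virtio-net device; the socket path and identifiers are placeholders, and the guest memory must be hugepage-backed and shared for vhost-user to work:

qemu-system-x86_64 ... \
  -object memory-backend-file,id=mem0,size=1G,mem-path=/dev/hugepages,share=on \
  -numa node,memdev=mem0 \
  -chardev socket,id=char0,path=/var/run/openvswitch/vhost-user1 \
  -netdev type=vhost-user,id=net0,chardev=char0,vhostforce \
  -device virtio-net-pci,netdev=net0

Inside the guest, the interface appears as an ordinary virtio-net device, which is why the vhost-user backend is transparent to the guest.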
OVS commands

OVS provides a rich set of command-line management tools, most importantly:

ovs-vsctl: Used to manage and inspect switch configurations, e.g. to create bridges and to add/remove ports.
ovs-ofctl: Used to configure and monitor OpenFlow flows.

For more information about Open vSwitch, see http://openvswitch.org.
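For example, a typical inspection session with these tools might look as follows (the bridge name ovsbr0 is a placeholder):

# Show the current switch configuration: bridges, ports and their options
ovs-vsctl show

# List the OpenFlow flows installed on the bridge ovsbr0
ovs-ofctl dump-flows ovsbr0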
Configuring OVS-DPDK for improved performance
dpdk-lcore-mask

Specifies the CPU core affinity for the DPDK lcore threads, which are used for DPDK library tasks. For performance it is best to set this to a single core on the system, and it should not overlap with pmd-cpu-mask, as seen in the examples below.

Example: to use core 1:

ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x1
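One way to confirm the setting is to read it back from the OVS database:

# Print the other_config keys set on the Open_vSwitch table, including dpdk-lcore-mask
ovs-vsctl get Open_vSwitch . other_config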
pmd-cpu-mask

The DPDK PMD threads that poll for incoming packets are CPU bound and should be pinned to isolated cores for optimal performance. If OVS-DPDK receives traffic on multiple ports, for example when DPDK and vhost-user ports are used for bi-directional traffic, performance can be improved significantly by creating multiple PMD threads and affinitizing them to separate cores so that they share the workload, each being responsible for an individual port. The cores should not be hyperthreads on the same physical CPU. The PMD core affinity is specified by setting an appropriate core mask.

Example: to use cores 2 and 3:

ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0xc
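Once ovs-vswitchd is running with DPDK enabled, the PMD thread placement and load can be inspected with ovs-appctl (available in recent OVS releases; output format varies by version):

# Show which rx queues are assigned to which PMD thread/core
ovs-appctl dpif-netdev/pmd-rxq-show

# Show per-PMD processing statistics, useful for spotting an overloaded core
ovs-appctl dpif-netdev/pmd-stats-show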
How to set up OVS-DPDK

DPDK must be configured prior to setting up OVS-DPDK. See [FIXME] for DPDK setup instructions, then follow these steps:

Clean up the environment:

killall ovsdb-server ovs-vswitchd
rm -f /var/run/openvswitch/vhost-user*
rm -f /etc/openvswitch/conf.db

Start the ovsdb-server:

export DB_SOCK=/var/run/openvswitch/db.sock
ovsdb-tool create /etc/openvswitch/conf.db /usr/share/openvswitch/vswitch.ovsschema
ovsdb-server --remote=punix:$DB_SOCK \
  --remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach

Start ovs-vswitchd with DPDK support enabled:

ovs-vsctl --no-wait init
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x1
ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0xc
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
ovs-vswitchd unix:$DB_SOCK --pidfile --detach \
  --log-file=/var/log/openvswitch/ovs-vswitchd.log

Create the OVS bridge and attach ports:

ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
ovs-vsctl add-port ovsbr0 dpdk0 -- set Interface dpdk0 type=dpdk \
  options:dpdk-devargs=<PCI device>

Add DPDK vhost-user ports:

ovs-vsctl add-port ovsbr0 vhost-user1 -- set Interface vhost-user1 type=dpdkvhostuser

This command creates a socket at /var/run/openvswitch/vhost-user1, which can be provided to the VM on the QEMU command line. See [FIXME] for details.

Define flows:

ovs-ofctl del-flows ovsbr0
ovs-ofctl show ovsbr0
ovs-ofctl add-flow ovsbr0 in_port=1,action=output:2
ovs-ofctl add-flow ovsbr0 in_port=2,action=output:1
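After traffic starts flowing, a quick way to confirm that packets are being forwarded is to watch the counters; the bridge and port names below are the ones created in the steps above:

# Per-OpenFlow-port rx/tx packet counters on the bridge
ovs-ofctl dump-ports ovsbr0

# Interface-level statistics for the physical DPDK port
ovs-vsctl list Interface dpdk0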