From 9d3e535588314681acd6e46e3de640290bd6ef0c Mon Sep 17 00:00:00 2001 From: Gabriel Ionescu Date: Fri, 8 Dec 2017 11:56:28 +0100 Subject: Chapter 7: Update benchmarks chapter for Cavium board Signed-off-by: Gabriel Ionescu --- doc/book-enea-nfv-access-guide/doc/benchmarks.xml | 1645 ++++++++++----------- 1 file changed, 742 insertions(+), 903 deletions(-) (limited to 'doc/book-enea-nfv-access-guide/doc') diff --git a/doc/book-enea-nfv-access-guide/doc/benchmarks.xml b/doc/book-enea-nfv-access-guide/doc/benchmarks.xml index 3279601..def0f89 100644 --- a/doc/book-enea-nfv-access-guide/doc/benchmarks.xml +++ b/doc/book-enea-nfv-access-guide/doc/benchmarks.xml @@ -14,7 +14,7 @@ Hardware Setup - + @@ -28,27 +28,25 @@ Server Platform - Supermicro X10SDV-4C-TLN2F - http://www.supermicro.com/products/motherboard/xeon/d/X10SDV-4C-TLN2F.cfm + Cavium CN8304 ARCH - x86-64 + aarch64 Processor - 1 x Intel Xeon D-1521 (Broadwell), 4 cores, 8 - hyper-threaded cores per processor + Cavium OcteonTX CN83XX CPU freq - 2.40 GHz + 1.8 GHz @@ -60,13 +58,13 @@ Network - Dual integrated 10G ports + 3x10G ports Storage - Samsung 850 Pro 128GB SSD + Seagate 500GB HDD @@ -82,155 +80,6 @@ -
- BIOS Settings - - The table below details the BIOS settings for which the default - values were changed when doing performance measurements. - - - BIOS Settings - - - - - - - Menu Path - - Setting Name - - Enea NFV Access value - - BIOS Default value - - - - - - CPU Configuration - - Direct Cache Access (DCA) - - Enable - - Auto - - - - CPU Configuration / Advanced Power Management - Configuration - - EIST (P-States) - - Disable - - Enable - - - - CPU Configuration / Advanced Power Management Configuration - / CPU C State Control - - CPU C State - - Disable - - Enable - - - - CPU Configuration / Advanced Power Management Configuration - / CPU Advanced PM Turning / Energy Perf BIAS - - Energy Performance Tuning - - Disable - - Enable - - - - CPU Configuration / Advanced Power Management Configuration - / CPU Advanced PM Turning / Energy Perf BIAS - - Energy Performance BIAS Setting - - Performance - - Balanced Performance - - - - CPU Configuration / Advanced Power Management Configuration - / CPU Advanced PM Turning / Energy Perf BIAS - - Power/Performance Switch - - Disable - - Enable - - - - CPU Configuration / Advanced Power Management Configuration - / CPU Advanced PM Turning / Program PowerCTL _MSR - - Energy Efficient Turbo - - Disable - - Enable - - - - Chipset Configuration / North Bridge / IIO - Configuration - - EV DFX Features - - Enable - - Disable - - - - Chipset Configuration / North Bridge / Memory - Configuration - - Enforce POR - - Disable - - Enable - - - - Chipset Configuration / North Bridge / Memory - Configuration - - Memory Frequency - - 2400 - - Auto - - - - Chipset Configuration / North Bridge / Memory - Configuration - - DRAM RAPL Baseline - - Disable - - DRAM RAPL Mode 1 - - - -
-
-
Use Cases @@ -251,7 +100,7 @@ - pktgen TX, RX in packets per second (pps) and Mbps + pktgen TX, RX in packets per second (pps) and MBps @@ -269,25 +118,21 @@ Start by following the steps below: - SSD boot using the following grub.cfg - entry: linux (hd0,gpt3)/boot/bzImage root=/dev/sda3 ip=dhcp nohz_full=1-7 / -isolcpus=1-7 rcu-nocbs=1-7 rcu_nocb_poll intel_pstate=disable / -clocksource=tsc tsc=reliable nohpet nosoftlockup intel_idle.max_cstate=0 / -processor.max_cstate=0 mce=ignore_ce audit=0 nmi_watchdog=0 iommu=pt / -intel_iommu=on hugepagesz=1GB hugepages=8 default_hugepagesz=1GB / -hugepagesz=2M hugepages=2048 vfio_iommu_type1.allow_unsafe_interrupts=1 + Boot the board using the following U-Boot commands: + setenv boot_board 'setenv userbootparams nohz_full=1-23 isolcpus=1-23 \ +rcu-nocbs=1-23 rcu_nocb_poll clocksource=tsc tsc=reliable nohpet \ +nosoftlockup audit=0 nmi_watchdog=0; setenv satapart 2; run bootsata' +run boot_board - Kill unnecessary services:killall ovsdb-server ovs-vswitchd -rm -rf /etc/openvswitch/* -mkdir -p /var/run/openvswitchMount hugepages and configure - DPDK:mkdir -p /mnt/huge -mount -t hugetlbfs nodev /mnt/huge -modprobe igb_uio -dpdk-devbind --bind=igb_uio 0000:03:00.0Run + Configure hugepages and set up DPDK:echo 4 > /proc/sys/vm/nr_hugepages +modprobe vfio-pci +ifconfig enP1p1s0f1 down +dpdk-devbind -b vfio-pci 0001:01:00.1Run pktgen:cd /usr/share/apps/pktgen/ -./pktgen -c 0xF -n 1 -- -P -m "[3:2].0"In the pktgen console - run:strTo change framesize for - pktgen, from [64, 128, 256, 512]:set 0 size &lt;number&gt; +./pktgen -v -c 0x7 -n 4 --proc-type auto -d /usr/lib/librte_pmd_thunderx_nicvf.so.1.1 \ +-w 0001:01:00.1 -- -P -m "[1:2].0"In the pktgen console + run:strChoose one of the values + from [64, 128, 256, 512] to change the packet size:set 0 size <number>
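
As an optional sanity check before launching pktgen (not part of the measured procedure), the hugepage and interface binding state can be read back with standard tools; 0001:01:00.1 is the port used throughout this chapter:

# Confirm that the hugepages requested above were actually reserved
cat /proc/sys/vm/nr_hugepages
grep -i huge /proc/meminfo

# Confirm that 0001:01:00.1 is now listed under the DPDK-compatible (vfio-pci) drivers
dpdk-devbind --status
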
@@ -295,64 +140,64 @@ dpdk-devbind --bind=igb_uio 0000:03:00.0Run Start by following the steps below: - SSD boot using the following grub.cfg - entry: + Boot the board using the following U-Boot commands: - linux (hd0,gpt3)/boot/bzImage root=/dev/sda3 ip=dhcp nohz_full=1-7 / -isolcpus=1-7 rcu-nocbs=1-7 rcu_nocb_poll intel_pstate=disable / -clocksource=tsc tsc=reliable nohpet nosoftlockup intel_idle.max_cstate=0 / -processor.max_cstate=0 mce=ignore_ce audit=0 nmi_watchdog=0 iommu=pt / -intel_iommu=on hugepagesz=1GB hugepages=8 default_hugepagesz=1GB / -hugepagesz=2M hugepages=2048 vfio_iommu_type1.allow_unsafe_interrupts=1 + setenv boot_board 'setenv userbootparams nohz_full=1-23 isolcpus=1-23 \ +rcu-nocbs=1-23 rcu_nocb_poll clocksource=tsc tsc=reliable nohpet \ +nosoftlockup audit=0 nmi_watchdog=0; setenv satapart 2; run bootsata' +run boot_board + + It is expected that a NFV Access guest image is present on the + target. - It is expected to have Docker/guest image on target. Configure - the OVS bridge:# OVS old config clean-up + Set up DPDK and configure the OVS bridge:# Clean up old OVS old config killall ovsdb-server ovs-vswitchd rm -rf /etc/openvswitch/* +rm -rf /var/run/openvswitch/* +rm -rf /var/log/openvswitch/* mkdir -p /var/run/openvswitch -# Mount hugepages and bind interfaces to dpdk -mkdir -p /mnt/huge -mount -t hugetlbfs nodev /mnt/huge -modprobe igb_uio -dpdk-devbind --bind=igb_uio 0000:03:00.0 +# Configure hugepages and bind interfaces to dpdk +echo 20 > /proc/sys/vm/nr_hugepages +modprobe vfio-pci +ifconfig enP1p1s0f1 down +dpdk-devbind --b vfio-pci 0001:01:00.1 # configure openvswitch with DPDK export DB_SOCK=/var/run/openvswitch/db.sock -ovsdb-tool create /etc/openvswitch/conf.db / -/usr/share/openvswitch/vswitch.ovsschema -ovsdb-server --remote=punix:$DB_SOCK / ---remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach +ovsdb-tool create /etc/openvswitch/conf.db /usr/share/openvswitch/vswitch.ovsschema +ovsdb-server --remote=punix:$DB_SOCK \ + --remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach ovs-vsctl --no-wait init ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x10 ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0xc ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem=2048 ovs-vsctl --no-wait set Open_vSwitch . 
other_config:dpdk-init=true -ovs-vswitchd unix:$DB_SOCK --pidfile --detach / ---log-file=/var/log/openvswitch/ovs-vswitchd.log +ovs-vswitchd unix:$DB_SOCK --pidfile --detach \ + --log-file=/var/log/openvswitch/ovs-vswitchd.log ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev -ovs-vsctl add-port ovsbr0 vhost-user1 / --- set Interface vhost-user1 type=dpdkvhostuser ofport_request=1 -ovs-vsctl add-port ovsbr0 dpdk0 -- set Interface / -dpdk0 type=dpdk options:dpdk-devargs=0000:03:00.0 ofport_request=2 +ovs-vsctl add-port ovsbr0 vhost-user1 -- set Interface vhost-user1 \ + type=dpdkvhostuser -- set Interface vhost-user1 ofport_request=2 +ovs-vsctl add-port ovsbr0 dpdk0 -- set Interface dpdk0 type=dpdk \ + options:dpdk-devargs=0001:01:00.1 -- set Interface dpdk0 ofport_request=1 # configure static flows ovs-ofctl del-flows ovsbr0 ovs-ofctl add-flow ovsbr0 in_port=1,action=output:2 ovs-ofctl add-flow ovsbr0 in_port=2,action=output:1Import a - Docker container:docker import enea-nfv-access-guest-qemux86-64.tar.gz el7_guestStart - the Docker container:docker run -it --rm -v /var/run/openvswitch/:/var/run/openvswitch/ / --v /mnt/huge:/mnt/huge el7_guest /bin/bashStart the testpmd - application in Docker:testpmd -c 0x30 -n 2 --file-prefix prog1 --socket-mem 512 --no-pci / ---vdev=virtio_user0,path=/var/run/openvswitch/vhost-user1 / --d /usr/lib/librte_pmd_virtio.so.1.1 -- --burst 64 --disable-hw-vlan / ---disable-rss -i --portmask=0x1 --coremask=0x20 --nb-cores=1 / ---rxq=1 --txq=1 --txd=512 --rxd=512 --txqflags=0xf00 --port-topology=chainedTo + Docker container:docker import enea-nfv-access-guest-qemuarm64.tar.gz nfv_containerStart + the Docker container:docker run -v /var/run/openvswitch/:/var/run/openvswitch/ \ + -v /dev/hugepages/:/dev/hugepages/ -p nfv_container /bin/bashStart + the testpmd application in Docker:testpmd -c 0x30 -n 2 --file-prefix prog1 --socket-mem 512 \ + --no-pci --vdev=virtio_user0,path=/var/run/openvswitch/vhost-user1 \ + -d /usr/lib/librte_pmd_virtio.so.1.1 -- --burst 64 --disable-hw-vlan \ + --disable-rss -i --portmask=0x1 --coremask=0x20 --nb-cores=1 --rxq=1 \ + --txq=1 --txd=512 --rxd=512 --txqflags=0xf00 --port-topology=chainedTo start traffic forwarding, run the following command in testpmd CLI:startTo start traffic but in termination - mode (no traffic sent on TX), run following command in testpmd + mode (no traffic sent on TX), run the following commands in testpmd CLI:set fwd rxonly startResults in forwarding mode @@ -389,56 +234,92 @@ start
64 - 14877658 + 14682363 - 9997 + 9867 - 7832352 + 1666666 - 5264 + 1119 - 7831250 + 1976488 - 7831250 + 1666694 - 52,65% + 13.46% 128 - 8441305 + 8445993 - 9994 + 10000 - 7533893 + 1600567 - 8922 + 1894 - 7535127 + 1886851 - 7682007 + 1600573 - 89,27% + 22.34% 256 - 4528831 + 4529011 - 9999 + 10000 + + 1491449 + + 3292 + + 1715763 + + 1491445 + + 37.88% + + + + 512 + + 2349638 + + 10000 + + 1422338 - 4528845 + 6052 + + 1555351 + + 1422330 + + 66.20% + + + + 1024 + + 1197323 + + 10000 + + 1197325 9999 - 4528738 + 1197320 - 4528738 + 1197320 - 100% + 100.00% @@ -465,32 +346,52 @@ start
64 - 14877775 + 14676922 - 8060974 + 1984693 - 54,1% + 13.52% 128 - 8441403 + 8445991 - 8023555 + 1895099 - 95,0% + 22.44% 256 - 4528864 + 4528379 - 4528840 + 1722004 - 99,9% + 38.03% + + + + 512 + + 2349639 + + 1560988 + + 66.44% + + + + 1024 + + 1197325 + + 1197325 + + 100.00% @@ -503,56 +404,56 @@ start
hostBenchmark a combo test using testpmd running in two Docker - instances, one which Forwards traffic to the second one, which - Terminates it. + instances, one which forwards traffic to the second one, which + terminates it. - Packets are generated with pktgen and TX-d to the first testpmd, - which will RX and Forward them to the second testpmd, which will RX - and terminate them. + Packets are generated with pktgen and transmitted to the first + testpmd instance, which will forward them to the second testpmd + instance, which terminates them. - Measurements are made in: + This test measures: - pktgen TX in pps and Mbits/s + pktgen TX, RX in packets per second (pps) and MBps - testpmd TX and RX pps in Docker1 + testpmd TX, RX in packets per second in the first Docker + container - testpmd RX pps in Docker2 + testpmd TX, RX in packets per second in the second Docker + container - - Throughput found as a percent, by dividing Docker2 testpmd RX pps by pktgen - TX pps. + + divide testpmd RX pps for the second Docker container to + pktgen TX pps to obtain throughput in percentages (%) + +
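
As a worked example of the throughput column in the tables below: for the 64-byte row, the second (terminating) container receives roughly 1366690 pps while pktgen transmits roughly 14683140 pps, giving 1366690 / 14683140 * 100 = 9.31%. A minimal shell sketch of the same arithmetic, using those two sample values:

# Throughput (%) = terminating testpmd RX pps / pktgen TX pps * 100
awk 'BEGIN { printf "%.2f%%\n", 1366690 / 14683140 * 100 }'
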
Test Setup for Target 1 Start by following the steps below: - SSD boot using the following grub.cfg - entry: + Boot the board using the following U-Boot commands: - linux (hd0,gpt3)/boot/bzImage root=/dev/sda3 ip=dhcp nohz_full=1-7 / -isolcpus=1-7 rcu-nocbs=1-7 rcu_nocb_poll intel_pstate=disable / -clocksource=tsc tsc=reliable nohpet nosoftlockup intel_idle.max_cstate=0 / -processor.max_cstate=0 mce=ignore_ce audit=0 nmi_watchdog=0 iommu=pt / -intel_iommu=on hugepagesz=1GB hugepages=8 default_hugepagesz=1GB / -hugepagesz=2M hugepages=2048 vfio_iommu_type1.allow_unsafe_interrupts=1 + setenv boot_board 'setenv userbootparams nohz_full=1-23 isolcpus=1-23 \ +rcu-nocbs=1-23 rcu_nocb_poll clocksource=tsc tsc=reliable nohpet \ +nosoftlockup audit=0 nmi_watchdog=0; setenv satapart 2; run bootsata' +run boot_board - Configure DPDK:mkdir -p /mnt/huge -mount -t hugetlbfs nodev /mnt/huge -modprobe igb_uio -dpdk-devbind --bind=igb_uio 0000:03:00.0Run + Configure hugepages and set up DPDK:echo 4 > /proc/sys/vm/nr_hugepages +modprobe vfio-pci +ifconfig enP1p1s0f1 down +dpdk-devbind -b vfio-pci 0001:01:00.1Run pktgen:cd /usr/share/apps/pktgen/ -./pktgen -c 0xF -n 1 -- -P -m "[3:2].0"Choose one of the - values from [64, 128, 256, 512] to change the packet - size:set 0 size <number> +./pktgen -v -c 0x7 -n 4 --proc-type auto -d /usr/lib/librte_pmd_thunderx_nicvf.so.1.1 \ +-w 0001:01:00.1 -- -P -m "[1:2].0"Choose one of the values + from [64, 128, 256, 512] to change the packet size:set 0 size <number>
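
A typical measurement pass in the pktgen console then uses only the commands already referenced in this chapter; the sequence below is a usage sketch, not an additional requirement:

# Select the frame size, generate traffic on port 0, read the TX/RX rates, then stop
set 0 size 64
start 0
stop 0
# Repeat with the next frame size
set 0 size 128
start 0
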
@@ -563,73 +464,79 @@ dpdk-devbind --bind=igb_uio 0000:03:00.0Run SSD boot using the following grub.cfg entry: - linux (hd0,gpt3)/boot/bzImage root=/dev/sda3 ip=dhcp nohz_full=1-7 / -isolcpus=1-7 rcu-nocbs=1-7 rcu_nocb_poll intel_pstate=disable / -clocksource=tsc tsc=reliable nohpet nosoftlockup intel_idle.max_cstate=0 / -processor.max_cstate=0 mce=ignore_ce audit=0 nmi_watchdog=0 / -iommu=pt intel_iommu=on hugepagesz=1GB hugepages=8 default_hugepagesz=1GB / -hugepagesz=2M hugepages=2048 vfio_iommu_type1.allow_unsafe_interrupts=1 + setenv boot_board 'setenv userbootparams nohz_full=1-23 isolcpus=1-23 \ +rcu-nocbs=1-23 rcu_nocb_poll clocksource=tsc tsc=reliable nohpet \ +nosoftlockup audit=0 nmi_watchdog=0; setenv satapart 2; run bootsata' +run boot_board - killall ovsdb-server ovs-vswitchd + Set up DPDK and configure the OVS bridge:# Clean up old OVS old config +killall ovsdb-server ovs-vswitchd rm -rf /etc/openvswitch/* -mkdir -p /var/run/openvswitchConfigure DPDK:mkdir -p /mnt/huge -mount -t hugetlbfs nodev /mnt/huge -modprobe igb_uio -dpdk-devbind --bind=igb_uio 0000:03:00.0Configure the OVS - bridge:export DB_SOCK=/var/run/openvswitch/db.sock -ovsdb-tool create /etc/openvswitch/conf.db / -/usr/share/openvswitch/vswitch.ovsschema -ovsdb-server --remote=punix:$DB_SOCK / ---remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach +rm -rf /var/run/openvswitch/* +rm -rf /var/log/openvswitch/* +mkdir -p /var/run/openvswitch + +# Configure hugepages and bind interfaces to dpdk +echo 20 > /proc/sys/vm/nr_hugepages +modprobe vfio-pci +ifconfig enP1p1s0f1 down +dpdk-devbind --b vfio-pci 0001:01:00.1 + +# configure openvswitch with DPDK +export DB_SOCK=/var/run/openvswitch/db.sock +ovsdb-tool create /etc/openvswitch/conf.db /usr/share/openvswitch/vswitch.ovsschema +ovsdb-server --remote=punix:$DB_SOCK \ + --remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach ovs-vsctl --no-wait init ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x10 -ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0xcc +ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0xc ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem=2048 ovs-vsctl --no-wait set Open_vSwitch . 
other_config:dpdk-init=true -ovs-vswitchd unix:$DB_SOCK --pidfile --detach / ---log-file=/var/log/openvswitch/ovs-vswitchd.log +ovs-vswitchd unix:$DB_SOCK --pidfile --detach \ + --log-file=/var/log/openvswitch/ovs-vswitchd.log + ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev -ovs-vsctl add-port ovsbr0 vhost-user1 -- set Interface / -vhost-user1 type=dpdkvhostuser ofport_request=1 -ovs-vsctl add-port ovsbr0 vhost-user2 -- set Interface / -vhost-user2 type=dpdkvhostuser ofport_request=2 -ovs-vsctl add-port ovsbr0 dpdk0 -- set Interface dpdk0 / -type=dpdk options:dpdk-devargs=0000:03:00.0 ofport_request=3 +ovs-vsctl add-port ovsbr0 vhost-user1 -- set Interface vhost-user1 \ + type=dpdkvhostuser -- set Interface vhost-user1 ofport_request=1 +ovs-vsctl add-port ovsbr0 vhost-user2 -- set Interface vhost-user2 \ + type=dpdkvhostuser -- set Interface vhost-user2 ofport_request=2 +ovs-vsctl add-port ovsbr0 dpdk0 -- set Interface dpdk0 type=dpdk \ + options:dpdk-devargs=0001:01:00.1 -- set Interface dpdk0 ofport_request=3 + +# configure static flows ovs-ofctl del-flows ovsbr0 ovs-ofctl add-flow ovsbr0 in_port=3,action=output:2 ovs-ofctl add-flow ovsbr0 in_port=2,action=output:1Import a - Docker container:docker import enea-nfv-access-guest-qemux86-64.tar.gz el7_guestStart - the first Docker:docker run -it --rm --cpuset-cpus=4,5 / --v /var/run/openvswitch/:/var/run/openvswitch/ / --v /mnt/huge:/mnt/huge el7_guest /bin/bashStart the testpmd - application in Docker1:testpmd -c 0x30 -n 2 --file-prefix prog1 --socket-mem 512 --no-pci / ---vdev=virtio_user0,path=/var/run/openvswitch/vhost-user1 / --d /usr/lib/librte_pmd_virtio.so.1.1 -- --burst 64 --disable-hw-vlan / ---disable-rss -i --portmask=0x1 --coremask=0x20 --nb-cores=1 / ---rxq=1 --txq=1 --txd=512 --rxd=512 --txqflags=0xf00 --port-topology=chainedConfigure + Docker container:docker import enea-nfv-access-guest-qemuarm64.tar.gz nfv_containerStart + the first Docker container:docker run -v /var/run/openvswitch/:/var/run/openvswitch/ \ + -v /dev/hugepages/:/dev/hugepages/ nfv_container /bin/bashStart + testpmd in the first Docker container:testpmd -c 0x300 -n 4 --file-prefix prog2 --socket-mem 512 \ + --no-pci --vdev=virtio_user0,path=/var/run/openvswitch/vhost-user1 \ + -d /usr/lib/librte_pmd_virtio.so.1.1 -- --burst 64 --disable-hw-vlan \ + --disable-rss -i --portmask=0x1 --coremask=0x200 --nb-cores=1 --rxq=1 \ + --txq=1 --txd=512 --rxd=512 --txqflags=0xf00 --port-topology=chainedConfigure it in termination mode:set fwd rxonlyRun the testpmd application:startOpen a new console to the host and start the second Docker - instance:docker run -it --rm --cpuset-cpus=0,1 -v /var/run/openvswitch/:/var/run/openvswitch/ / --v /mnt/huge:/mnt/huge el7_guest /bin/bashIn the second - container start testpmd:testpmd -c 0x0F --file-prefix prog2 --socket-mem 512 --no-pci / + instance:docker run -v /var/run/openvswitch/:/var/run/openvswitch/ \ + -v /dev/hugepages/:/dev/hugepages/ nfv_container /bin/bashIn + the second container start testpmd:testpmd -c 0x0F --file-prefix prog2 --socket-mem 512 --no-pci / --vdev=virtio_user0,path=/var/run/openvswitch/vhost-user2 / --d /usr/lib/librte_pmd_virtio.so.1.1 -- -i --disable-hw-vlanRun - the TestPmd application in the second Docker:testpmd -c 0x3 -n 2 --file-prefix prog2 --socket-mem 512 --no-pci / ---vdev=virtio_user0,path=/var/run/openvswitch/vhost-user2 / --d /usr/lib/librte_pmd_virtio.so.1.1 -- --burst 64 --disable-hw-vlan / ---disable-rss -i --portmask=0x1 --coremask=0x2 --nb-cores=1 --rxq=1 / 
---txq=1 --txd=512 --rxd=512 --txqflags=0xf00 --port-topology=chainedIn +-d /usr/lib/librte_pmd_virtio.so.1.1 -- -i --disable-hw-vlanStart + testpmd in the second Docker container:testpmd -c 0x30 -n 4 --file-prefix prog1 --socket-mem 512 \ + --no-pci --vdev=virtio_user0,path=/var/run/openvswitch/vhost-user2 \ + -d /usr/lib/librte_pmd_virtio.so.1.1 -- --burst 64 --disable-hw-vlan \ + --disable-rss -i --portmask=0x1 --coremask=0x20 --nb-cores=1 --rxq=1 \ + --txq=1 --txd=512 --rxd=512 --txqflags=0xf00 --port-topology=chainedIn the testpmd shell, run:startStart pktgen traffic by running the following command in pktgen CLI:start 0To record traffic - results:show port stats 0This - should be used in testpmd applications. + results, run:show port stats 0
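
Before recording results, the host-side OVS configuration built above can optionally be inspected with standard Open vSwitch commands; ovsbr0 is the bridge created in this section:

# List the bridge and its ports (dpdk0, vhost-user1, vhost-user2)
ovs-vsctl show

# Dump the static flows and their packet counters
ovs-ofctl dump-flows ovsbr0

# Per-port packet and byte statistics
ovs-ofctl dump-ports ovsbr0
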
Results - + Import a Target 2 - (termination) testpmd pps RX + + Throughput + (%) 64 - 14877713 + 14683140 + + 1979807 - 5031270 + 1366712 - 5031214 + 1366690 - 5031346 + 9.31% 128 - 8441271 + 8446005 - 4670165 + 1893514 - 4670165 + 1286628 - 4670261 + 1286621 + + 15.23% 256 - 4528844 + 4529011 + + 1716427 - 4490268 + 1140234 - 4490268 + 1140232 - 4490234 + 25.18% 512 - 2349458 - - 2349553 + 2349638 - 2349553 + 1556898 - 2349545 - - - -
-
- - -
- SR-IOV in in Docker - - PCI passthrough tests using pktgen and testpmd in Docker. - - pktgen[DPDK]Docker - PHY - Docker[DPDK] testpmd - - Measurements: - - - - RX packets per second in testpmd (with testpmd configured in - rxonly mode). - - - -
- Test Setup - - Boot Enea NFV Access from SSD:linux (hd0,gpt3)/boot/bzImage root=/dev/sda3 ip=dhcp nohz_full=1-7 / -isolcpus=1-7 rcu-nocbs=1-7 rcu_nocb_poll intel_pstate=disable clocksource=tsc / -tsc=reliable nohpet nosoftlockup intel_idle.max_cstate=0 processor.max_cstate=0 / -mce=ignore_ce audit=0 nmi_watchdog=0 iommu=pt intel_iommu=on hugepagesz=1GB / -hugepages=8 default_hugepagesz=1GB hugepagesz=2M hugepages=2048 / -vfio_iommu_type1.allow_unsafe_interrupts=1lAllow unsafe - interrupts:echo 1 > /sys/module/vfio_iommu_type1/parameters/allow_unsafe_interruptsConfigure - DPDK:mkdir -p /mnt/huge -mount -t hugetlbfs nodev /mnt/huge -dpdk-devbind.py --bind=ixgbe 0000:03:00.0 -ifconfig eno3 192.168.1.2 -echo 2 > /sys/class/net/eno3/device/sriov_numvfs -modprobe vfio-pci -dpdk-devbind.py --bind=vfio-pci 0000:03:10.0 -dpdk-devbind.py --bind=vfio-pci 0000:03:10.2Start two docker - containers:docker run --privileged -it --rm -v /mnt/huge:/mnt/huge/ / ---device /dev/vfio/vfio el7_guest /bin/bash -docker run --privileged -it --rm -v /mnt/huge:/mnt/huge/ / ---device /dev/vfio/vfio el7_guest /bin/bashIn the first - container start pktgen:cd /usr/share/apps/pktgen/ -./pktgen -c 0x1f -w 0000:03:10.0 -n 1 --file-prefix pg1 / ---socket-mem 1024 -- -P -m "[3:4].0"In the pktgen prompt set - the destination MAC address:set mac 0 XX:XX:XX:XX:XX:XX -strIn the second container start testpmd:testpmd -c 0x7 -n 1 -w 0000:03:10.2 -- -i --portmask=0x1 / ---txd=256 --rxd=256 --port-topology=chainedIn the testpmd - prompt set forwarding - rxonly:set fwd rxonly -start - Results - - - - - Bytes - - pktgen pps - TX - - testpmd pps - RX - - pktgen MBits/s - TX + 1016661 - throughput - (%) - + 1016659 - - 64 - - 14204211 - - 14204561 - - 9545 - - 100 - - - - 128 + 43.27% + - 8440340 + + 1024 - 8440201 + 1197326 - 9993 + 1197319 - 99.9 - + 869654 - - 256 + 869652 - 4533828 + 72.63% + - 4533891 + + 1500 - 10010 + 822373 - 100 - + 822369 - - 512 + 760826 - 2349886 + 760821 - 2349715 - - 10000 - - 99.9 - - - -
+ 92.52% + + + +
@@ -836,16 +659,16 @@ startBenchmarking traffic (UDP) forwarding and termination using testpmd in a virtual machine. - The Pktgen application is used to generate traffic that will - reach testpmd running on a virtual machine, and be forwarded back to - source on the return trip. With the same setup a second measurement + The pktgen application is used to generate traffic that will + reach testpmd running in a virtual machine, from where it will be + forwarded back to source. Within the same setup, a second measurement will be done with traffic termination in the virtual machine. This test case measures: - pktgen TX, RX in packets per second (pps) and Mbps + pktgen TX, RX in packets per second (pps) and MBps @@ -864,24 +687,20 @@ start
Start with the steps below: - SSD boot using the following grub.cfg - entry: linux (hd0,gpt3)/boot/bzImage root=/dev/sda3 ip=dhcp nohz_full=1-7 / -isolcpus=1-7 rcu-nocbs=1-7 rcu_nocb_poll intel_pstate=disable / -clocksource=tsc tsc=reliable nohpet nosoftlockup intel_idle.max_cstate=0 / -processor.max_cstate=0 mce=ignore_ce audit=0 nmi_watchdog=0 iommu=pt / -intel_iommu=on hugepagesz=1GB hugepages=8 default_hugepagesz=1GB / -hugepagesz=2M hugepages=2048 vfio_iommu_type1.allow_unsafe_interrupts=1 + Boot the board using the following U-Boot commands: + setenv boot_board 'setenv userbootparams nohz_full=1-23 isolcpus=1-23 \ +rcu-nocbs=1-23 rcu_nocb_poll clocksource=tsc tsc=reliable nohpet \ +nosoftlockup audit=0 nmi_watchdog=0; setenv satapart 2; run bootsata' +run boot_board - Kill unnecessary services: killall ovsdb-server ovs-vswitchd -rm -rf /etc/openvswitch/* -mkdir -p /var/run/openvswitchConfigure DPDK:mkdir -p /mnt/huge -mount -t hugetlbfs nodev /mnt/huge -modprobe igb_uio -dpdk-devbind --bind=igb_uio 0000:03:00.0Run + Configure hugepages and set up DPDK:echo 4 > /proc/sys/vm/nr_hugepages +modprobe vfio-pci +ifconfig enP1p1s0f1 down +dpdk-devbind -b vfio-pci 0001:01:00.1Run pktgen:cd /usr/share/apps/pktgen/ -./pktgen -c 0x7 -n 4 --proc-type auto --socket-mem 256 / --w 0000:03:00.0 -- -P -m "[1:2].0"Set pktgen frame size to - use from [64, 128, 256, 512]:set 0 size 64 +./pktgen -v -c 0x7 -n 4 --proc-type auto -d /usr/lib/librte_pmd_thunderx_nicvf.so.1.1 \ +-w 0001:01:00.1 -- -P -m "[1:2].0"Choose one of the values + from [64, 128, 256, 512] to change the packet size:set 0 size <number>
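
When reading the result tables, note that the pktgen TX rates correspond to 10 Gbit/s line rate: each frame occupies roughly an extra 20 bytes on the wire (preamble plus inter-frame gap), so the theoretical maximum is line_rate / ((frame_size + 20) * 8) packets per second. A small sketch of that arithmetic for 64-byte frames, included only as a cross-check of the tables:

# 10^10 / ((64 + 20) * 8) is roughly 14.88 Mpps, matching the 64-byte pktgen TX column
awk 'BEGIN { printf "%.0f pps\n", 10000000000 / ((64 + 20) * 8) }'
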
@@ -889,76 +708,137 @@ dpdk-devbind --bind=igb_uio 0000:03:00.0Run Start by following the steps below: - SSD boot using the following grub.cfg - entry: linux (hd0,gpt3)/boot/bzImage root=/dev/sda3 ip=dhcp nohz_full=1-7 / -isolcpus=1-7 rcu-nocbs=1-7 rcu_nocb_poll intel_pstate=disable / -clocksource=tsc tsc=reliable nohpet nosoftlockup intel_idle.max_cstate=0 / -processor.max_cstate=0 mce=ignore_ce audit=0 nmi_watchdog=0 iommu=pt / -intel_iommu=on hugepagesz=1GB hugepages=8 default_hugepagesz=1GB / -hugepagesz=2M hugepages=2048 vfio_iommu_type1.allow_unsafe_interrupts=1Kill - unnecessary services: killall ovsdb-server ovs-vswitchd + Boot the board using the following U-Boot commands: + setenv boot_board 'setenv userbootparams nohz_full=1-23 isolcpus=1-23 \ +rcu-nocbs=1-23 rcu_nocb_poll clocksource=tsc tsc=reliable nohpet \ +nosoftlockup audit=0 nmi_watchdog=0; setenv satapart 2; run bootsata' +run boot_boardKill unnecessary services: killall ovsdb-server ovs-vswitchd rm -rf /etc/openvswitch/* -mkdir -p /var/run/openvswitchConfigure DPDK:mkdir -p /mnt/huge -mount -t hugetlbfs nodev /mnt/huge -modprobe igb_uio -dpdk-devbind --bind=igb_uio 0000:03:00.0Configure +rm -rf /var/run/openvswitch/* +mkdir -p /var/run/openvswitchConfigure hugepages, set up + DPDK:echo 20 > /proc/sys/vm/nr_hugepages +modprobe vfio-pci +dpdk-devbind --bind=vfio-pci 0001:01:00.1Configure OVS:export DB_SOCK=/var/run/openvswitch/db.sock -ovsdb-tool create /etc/openvswitch/conf.db / -/usr/share/openvswitch/vswitch.ovsschema -ovsdb-server --remote=punix:$DB_SOCK / ---remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach +ovsdb-tool create /etc/openvswitch/conf.db /usr/share/openvswitch/vswitch.ovsschema +ovsdb-server --remote=punix:$DB_SOCK \ + --remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach ovs-vsctl --no-wait init ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x10 ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0xc ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem=2048 ovs-vsctl --no-wait set Open_vSwitch . 
other_config:dpdk-init=true -ovs-vswitchd unix:$DB_SOCK --pidfile --detach / ---log-file=/var/log/openvswitch/ovs-vswitchd.log +ovs-vswitchd unix:$DB_SOCK --pidfile --detach \ + --log-file=/var/log/openvswitch/ovs-vswitchd.log -ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev -ovs-vsctl add-port ovsbr0 vhost-user1 / --- set Interface vhost-user1 type=dpdkvhostuser -- set Interface / -vhost-user1 ofport_request=2 -ovs-vsctl add-port ovsbr0 dpdk0 -- set Interface dpdk0 / -type=dpdk options:dpdk-devargs=0000:03:00.0 / --- set Interface dpdk0 ofport_request=1 -chmod 777 /var/run/openvswitch/vhost-user1 +ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 \ + datapath_type=netdev +ovs-vsctl add-port ovsbr0 vhost-user1 -- set Interface \ + vhost-user1 type=dpdkvhostuser -- set Interface vhost-user1 ofport_request=2 +ovs-vsctl add-port ovsbr0 dpdk0 -- set Interface dpdk0 \ + type=dpdk options:dpdk-devargs=0001:01:00.1 -- set Interface dpdk0 ofport_request=1 ovs-ofctl del-flows ovsbr0 ovs-ofctl add-flow ovsbr0 in_port=1,action=output:2 -ovs-ofctl add-flow ovsbr0 in_port=2,action=output:1Launch - QEMU:taskset -c 0,1 qemu-system-x86_64 -cpu host,+invtsc,migratable=no / --M q35 -smp cores=2,sockets=1 -vcpu 0,affinity=0 -vcpu 1,affinity=1 / --enable-kvm -nographic -realtime mlock=on -kernel /mnt/qemu/bzImage / --drive file=/mnt/qemu/enea-nfv-access-guest-qemux86-64.ext4,/ -if=virtio,format=raw -m 4096 -object memory-backend-file,id=mem,/ -size=4096M,mem-path=/mnt/huge,share=on -numa node,memdev=mem / --mem-prealloc -chardev socket,id=char0,path=/var/run/openvswitch/vhost-user1 / --netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce / --device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,/ -mrg_rxbuf=on,rx_queue_size=1024,csum=off,gso=off,guest_tso4=off,/ -guest_tso6=off,guest_ecn=off -append 'root=/dev/vda console=ttyS0 / -hugepagesz=2M hugepages=1024 isolcpus=1 nohz_full=1 rcu_nocbs=1 / -irqaffinity=0 rcu_nocb_poll intel_pstate=disable intel_idle.max_cstate=0 / -processor.max_cstate=0 mce=ignore_ce audit=0'Inside QEMU, - configure DPDK: mkdir -p /mnt/huge -mount -t hugetlbfs nodev /mnt/huge -modprobe igb_uio -dpdk-devbind --bind=igb_uio 0000:00:02.0Inside QEMU, run - testpmd: testpmd -c 0x3 -n 2 -d librte_pmd_virtio.so.1.1 / --- --burst 64 --disable-hw-vlan --disable-rss -i --portmask=0x1 / ---coremask=0x2 --nb-cores=1 --rxq=1 --txq=1 --txd=512 --rxd=512 / ---txqflags=0xf00 --port-topology=chainedFor the Forwarding test, start testpmd - directly:startFor the Termination test, set testpmd to only - receive, then start it:set fwd rxonly +ovs-ofctl add-flow ovsbr0 in_port=2,action=output:1Create an + XML file with the content below (e.g. 
+ /home/root/guest.xml):<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'> + <name>nfv-ovs-vm</name> + <uuid>ed204646-1ad5-11e7-93ae-92361f002671</uuid> + <memory unit='KiB'>4194304</memory> + <currentMemory unit='KiB'>4194304</currentMemory> + + <memoryBacking> + <hugepages> + <page size='512' unit='M' nodeset='0'/> + </hugepages> + </memoryBacking> + + <os> + <type arch='aarch64' machine='virt,gic_version=3'>hvm</type> + <kernel>Image</kernel> + <cmdline>root=/dev/vda console=ttyAMA0,115200n8 maxcpus=24 coherent_pool=16M debug hugepagesz=512M hugepages=3 audit=0 isolcpus=1 nohz_full=1 rcu_nocbs=1 irqaffinity=0</cmdline> + <boot dev='hd'/> + </os> + + <features> + <acpi/> + <apic/> + </features> + + <vcpu placement='static'>2</vcpu> + + <cpu mode='host-model'> + <model fallback='allow'/> + <topology sockets='1' cores='2' threads='1'/> + <numa> + <cell id='0' cpus='0' memory='4194304' unit='KiB' memAccess='shared'/> + </numa> + </cpu> + + <cputune> + <vcpupin vcpu="0" cpuset="4"/> + <vcpupin vcpu="1" cpuset="5"/> + </cputune> + + <on_poweroff>destroy</on_poweroff> + <on_reboot>restart</on_reboot> + <on_crash>destroy</on_crash> + + <devices> + <emulator>/usr/bin/qemu-system-aarch64</emulator> + <disk type='file' device='disk'> + <driver name='qemu' type='raw' cache='none'/> + <source file='enea-nfv-access-guest-qemuarm64.ext4'/> + <target dev='vda' bus='virtio'/> + </disk> + + <serial type='pty'> + <target port='0'/> + </serial> + + <console type='pty'> + <target type='serial' port='0'/> + </console> + </devices> + + <qemu:commandline> + <qemu:arg value='-chardev'/> + <qemu:arg value='socket,id=charnet0,path=/var/run/openvswitch/vhost-user1'/> + + <qemu:arg value='-netdev'/> + <qemu:arg value='type=vhost-user,id=hostnet0,chardev=charnet0'/> + + <qemu:arg value='-device'/> + <qemu:arg value='virtio-net-pci,netdev=hostnet0,id=net0,mac=00:00:00:00:00:01,bus=pcie.0,addr=0x2'/> + </qemu:commandline> +</domain> + + Start the virtual machine, by running: + + virsh create /home/root/guest.xml + + Connect to the virtual machines console: + + virsh console nfv-ovs-vm + + Inside the VM, configure DPDK: ifconfig enp0s2 down +echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode +modprobe vfio-pci +dpdk-devbind -b vfio-pci 0000:00:02.0Inside the VM, start + testpmd: testpmd -v -c 0x3 -n 4 -d /usr/lib/librte_pmd_virtio.so.1.1 \ +-w 0000:00:02.0 -- -i --disable-hw-vlan-filter --no-flush-rx \ +--port-topology=chainedFor the Forwarding test, run:set fwd io +startFor the Termination + test, set testpmd to only receive, then start + it:set fwd rxonly startOn target 1, you may start pktgen traffic now:start 0On target 2, use this - command to refresh the testpmd display and note the highest - values:show port stats 0To stop - traffic from pktgen, in order to choose a different frame - size:stop 0To clear numbers in + command to refresh the testpmd display traffic + statistics:show port stats 0To stop + generating traffic in order to choose a different frame size, + run:stop 0To clear numbers in testpmd:clear port stats show port stats 0
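
Once virsh create has returned, the guest and the vhost-user port can optionally be checked from the host before starting traffic; the domain name nfv-ovs-vm below is the one defined in the XML above:

# Confirm the domain is up
virsh list --all
virsh domstate nfv-ovs-vm

# Confirm that vhost-user1 is attached to the OVS bridge
ovs-vsctl show
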
Results in forwarding mode @@ -995,56 +875,92 @@ show port stats 0
64 - 7926325 + 1555163 - 14877576 + 14686542 - 7926515 + 1978791 - 7926515 + 1554707 - 5326 + 1044 - 9997 + 9867 - 53.2 + 13.47% 128 - 7502802 + 1504275 - 8441253 + 8447999 - 7785983 + 1901468 - 7494959 + 1504266 - 8883 + 1781 - 9994 + 10000 - 88.8 + 22.51% 256 - 4528631 + 1423564 - 4528782 + 4529012 - 4529515 + 1718299 - 4529515 + 1423553 - 9999 + 3142 + + 10000 + + 37.94% + + + + 512 + + 1360379 + + 2349636 + + 1554844 + + 1360456 + + 5789 + + 10000 + + 66.17% + + + + 1024 + + 1197327 + + 1197329 + + 1197319 + + 1197329 9999 - 99.9 + 10000 + + 100.00% @@ -1074,38 +990,62 @@ show port stats 0
64 - 14877764 + 14695621 - 8090855 + 1983227 - 9997 + 9875 - 54.3 + 13.50% 128 - 8441309 + 8446022 + + 1897546 + + 10000 + + 22.47% + + + + 256 + + 4529011 - 8082971 + 1724323 - 9994 + 10000 - 95.7 + 38.07% + + + + 512 + + 2349638 + + 1562212 + + 10000 + + 66.49% 256 + role="bold">1024 - 4528867 + 1197323 - 4528780 + 1197324 - 9999 + 10000 - 99.9 + 100.00% @@ -1123,7 +1063,7 @@ show port stats 0
- pktgen TX in pps and Mbits/s + pktgen TX in pps and MBps @@ -1135,9 +1075,9 @@ show port stats 0
- throughput in percents, by dividing - VM2 testpmd RX pps by pktgen TX - pps + divide VM2 testpmd RX pps + by pktgen TX pps to obtain the + throughput in percentages (%) @@ -1146,23 +1086,19 @@ show port stats 0
Start by doing the following: - SSD boot using the following grub.cfg - entry: linux (hd0,gpt3)/boot/bzImage root=/dev/sda3 ip=dhcp nohz_full=1-7 / -isolcpus=1-7 rcu-nocbs=1-7 rcu_nocb_poll intel_pstate=disable / -clocksource=tsc tsc=reliable nohpet nosoftlockup intel_idle.max_cstate=0 / -processor.max_cstate=0 mce=ignore_ce audit=0 nmi_watchdog=0 iommu=pt / -intel_iommu=on hugepagesz=1GB hugepages=8 default_hugepagesz=1GB / -hugepagesz=2M hugepages=2048 vfio_iommu_type1.allow_unsafe_interrupts=1Kill - Services:killall ovsdb-server ovs-vswitchd -rm -rf /etc/openvswitch/* -mkdir -p /var/run/openvswitchConfigure DPDK:mkdir -p /mnt/huge -mount -t hugetlbfs nodev /mnt/huge -modprobe igb_uio -dpdk-devbind --bind=igb_uio 0000:03:00.0Run + Boot the board using the following U-Boot commands: + setenv boot_board 'setenv userbootparams nohz_full=1-23 isolcpus=1-23 \ +rcu-nocbs=1-23 rcu_nocb_poll clocksource=tsc tsc=reliable nohpet \ +nosoftlockup audit=0 nmi_watchdog=0; setenv satapart 2; run bootsata' +run boot_boardConfigure hugepages and set up + DPDK:echo 4 > /proc/sys/vm/nr_hugepages +modprobe vfio-pci +ifconfig enP1p1s0f1 down +dpdk-devbind -b vfio-pci 0001:01:00.1Run pktgen:cd /usr/share/apps/pktgen/ -./pktgen -c 0x7 -n 4 --proc-type auto --socket-mem 256 / --w 0000:03:00.0 -- -P -m "[1:2].0"Set pktgen frame size to - use from [64, 128, 256, 512]:set 0 size 64 +./pktgen -v -c 0x7 -n 4 --proc-type auto -d /usr/lib/librte_pmd_thunderx_nicvf.so.1.1 \ +-w 0001:01:00.1 -- -P -m "[1:2].0"Choose one of the values + from [64, 128, 256, 512] to change the packet size:set 0 size <number>
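
The CPU isolation requested through the U-Boot parameters above can optionally be verified after boot; this is only a read-back of kernel state, not part of the measurement:

# Kernel command line in effect (should contain isolcpus=1-23 and nohz_full=1-23)
cat /proc/cmdline

# Affinity of PID 1; the isolated CPUs should not appear in this list
taskset -pc 1
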
@@ -1179,83 +1115,210 @@ intel_iommu=on hugepagesz=1GB hugepages=8 default_hugepagesz=1GB / hugepagesz=2M hugepages=2048 vfio_iommu_type1.allow_unsafe_interrupts=1Kill Services:killall ovsdb-server ovs-vswitchd rm -rf /etc/openvswitch/* -mkdir -p /var/run/openvswitchConfigure DPDK:mkdir -p /mnt/huge -mount -t hugetlbfs nodev /mnt/huge -modprobe igb_uio -dpdk-devbind --bind=igb_uio 0000:03:00.0Configure +mkdir -p /var/run/openvswitchConfigure hugepages, set up + DPDK:echo 20 > /proc/sys/vm/nr_hugepages +modprobe vfio-pci +dpdk-devbind --bind=vfio-pci 0001:01:00.1Configure OVS:export DB_SOCK=/var/run/openvswitch/db.sock -ovsdb-tool create /etc/openvswitch/conf.db / -/usr/share/openvswitch/vswitch.ovsschema -ovsdb-server --remote=punix:$DB_SOCK / ---remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach +ovsdb-tool create /etc/openvswitch/conf.db /usr/share/openvswitch/vswitch.ovsschema +ovsdb-server --remote=punix:$DB_SOCK --remote=db:Open_vSwitch,Open_vSwitch,manager_options \ + --pidfile --detach ovs-vsctl --no-wait init ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x10 ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0xc ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem=2048 ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true -ovs-vswitchd unix:$DB_SOCK --pidfile / ---detach --log-file=/var/log/openvswitch/ovs-vswitchd.log - +ovs-vswitchd unix:$DB_SOCK --pidfile --detach --log-file=/var/log/openvswitch/ovs-vswitchd.log ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev -ovs-vsctl add-port ovsbr0 dpdk0 / --- set Interface dpdk0 type=dpdk options:dpdk-devargs=0000:03:00.0 ofport_request=1 -ovs-vsctl add-port ovsbr0 vhost-user1 / --- set Interface vhost-user1 type=dpdkvhostuser ofport_request=2 -ovs-vsctl add-port ovsbr0 vhost-user2 / --- set Interface vhost-user2 type=dpdkvhostuser ofport_request=3 - +ovs-vsctl add-port ovsbr0 dpdk0 -- set Interface dpdk0 type=dpdk \ + options:dpdk-devargs=0001:01:00.1 -- set Interface dpdk0 ofport_request=1 +ovs-vsctl add-port ovsbr0 vhost-user1 -- set Interface vhost-user1 type=dpdkvhostuser \ + -- set Interface vhost-user1 ofport_request=2 +ovs-vsctl add-port ovsbr0 vhost-user2 -- set Interface vhost-user2 type=dpdkvhostuser \ + -- set Interface vhost-user2 ofport_request=3 ovs-ofctl del-flows ovsbr0 ovs-ofctl add-flow ovsbr0 in_port=1,action=output:2 -ovs-ofctl add-flow ovsbr0 in_port=2,action=output:3Launch - first QEMU instance, VM1:taskset -c 0,1 qemu-system-x86_64 -cpu host,+invtsc,migratable=no -M q35 / --smp cores=2,sockets=1 -vcpu 0,affinity=0 -vcpu 1,affinity=1 -enable-kvm / --nographic -realtime mlock=on -kernel /home/root/qemu/bzImage / --drive file=/home/root/qemu/enea-nfv-access-guest-qemux86-64.ext4,/ -if=virtio,format=raw -m 2048 -object memory-backend-file,id=mem,/ -size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem / --mem-prealloc -chardev socket,id=char0,path=/var/run/openvswitch/vhost-user1 / --netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce / --device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,/ -mrg_rxbuf=on,rx_queue_size=1024,csum=off,gso=off,guest_tso4=off,/ -guest_tso6=off,guest_ecn=off -append 'root=/dev/vda console=ttyS0 / -hugepagesz=2M hugepages=512 isolcpus=1 nohz_full=1 rcu_nocbs=1 / -irqaffinity=0 rcu_nocb_poll intel_pstate=disable intel_idle.max_cstate=0 / -processor.max_cstate=0 mce=ignore_ce audit=0'Connect to - Target 2 through a new SSH session and run a second QEMU instance - (to get its 
own console, separate from instance VM1). We shall call - this VM2:taskset -c 4,5 qemu-system-x86_64 -cpu host,+invtsc,migratable=no / --M q35 -smp cores=2,sockets=1 -vcpu 0,affinity=4 -vcpu 1,affinity=5 / --enable-kvm -nographic -realtime mlock=on -kernel /home/root/qemu2/bzImage / --drive file=/home/root/qemu2/enea-nfv-access-guest-qemux86-64.ext4,/ -if=virtio,format=raw -m 2048 -object memory-backend-file,id=mem,size=2048M,/ -mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc / --chardev socket,id=char1,path=/var/run/openvswitch/vhost-user2 / --netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce / --device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet1,/ -mrg_rxbuf=on,rx_queue_size=1024,csum=off,gso=off,guest_tso4=off,/ -guest_tso6=off,guest_ecn=off -append 'root=/dev/vda console=ttyS0 / -hugepagesz=2M hugepages=512 isolcpus=1 nohz_full=1 rcu_nocbs=1 / -irqaffinity=0 rcu_nocb_poll intel_pstate=disable intel_idle.max_cstate=0 / -processor.max_cstate=0 mce=ignore_ce audit=0'Configure DPDK - inside VM1:mkdir -p /mnt/huge -mount -t hugetlbfs nodev /mnt/huge -modprobe igb_uio -dpdk-devbind --bind=igb_uio 0000:00:02.0Run testpmd inside - VM1:testpmd -c 0x3 -n 2 -d librte_pmd_virtio.so.1.1 / --- --burst 64 --disable-hw-vlan --disable-rss -i / ---portmask=0x1 --coremask=0x2 --nb-cores=1 --rxq=1 / ---txq=1 --txd=512 --rxd=512 --txqflags=0xf00 --port-topology=chainedStart - testpmd inside VM1:startConfigure - DPDK inside VM2:mkdir -p /mnt/huge -mount -t hugetlbfs nodev /mnt/huge -modprobe igb_uio -dpdk-devbind --bind=igb_uio 0000:00:02.0Run testpmd inside - VM2:testpmd -c 0x3 -n 2 -d librte_pmd_virtio.so.1.1 / --- --burst 64 --disable-hw-vlan --disable-rss -i --portmask=0x1 / ---coremask=0x2 --nb-cores=1 --rxq=1 --txq=1 --txd=512 / ---rxd=512 --txqflags=0xf00 --port-topology=chainedSet VM2 for +ovs-ofctl add-flow ovsbr0 in_port=2,action=output:3Create an + XML with the content below and then run virsh create + <XML_FILE><domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'> + <name>nfv-ovs-vm1</name> + <uuid>ed204646-1ad5-11e7-93ae-92361f002671</uuid> + <memory unit='KiB'>4194304</memory> + <currentMemory unit='KiB'>4194304</currentMemory> + + <memoryBacking> + <hugepages> + <page size='512' unit='M' nodeset='0'/> + </hugepages> + </memoryBacking> + + <os> + <type arch='aarch64' machine='virt,gic_version=3'>hvm</type> + <kernel>Image</kernel> + <cmdline>root=/dev/vda console=ttyAMA0,115200n8 maxcpus=24 coherent_pool=16M debug hugepagesz=512M hugepages=3 audit=0 isolcpus=1 nohz_full=1 rcu_nocbs=1 irqaffinity=0</cmdline> + <boot dev='hd'/> + </os> + + <features> + <acpi/> + <apic/> + </features> + + <vcpu placement='static'>2</vcpu> + + <cpu mode='host-model'> + <model fallback='allow'/> + <topology sockets='1' cores='2' threads='1'/> + <numa> + <cell id='0' cpus='0' memory='4194304' unit='KiB' memAccess='shared'/> + </numa> + </cpu> + + <cputune> + <vcpupin vcpu="0" cpuset="4"/> + <vcpupin vcpu="1" cpuset="5"/> + </cputune> + + <on_poweroff>destroy</on_poweroff> + <on_reboot>restart</on_reboot> + <on_crash>destroy</on_crash> + + <devices> + <emulator>/usr/bin/qemu-system-aarch64</emulator> + <disk type='file' device='disk'> + <driver name='qemu' type='raw' cache='none'/> + <source file='enea-nfv-access-guest-qemuarm64.ext4'/> + <target dev='vda' bus='virtio'/> + </disk> + + <serial type='pty'> + <target port='0'/> + </serial> + + <console type='pty'> + <target type='serial' port='0'/> + </console> + </devices> + + <qemu:commandline> + <qemu:arg 
value='-chardev'/> + <qemu:arg value='socket,id=charnet0,path=/var/run/openvswitch/vhost-user1'/> + + <qemu:arg value='-netdev'/> + <qemu:arg value='type=vhost-user,id=hostnet0,chardev=charnet0'/> + + <qemu:arg value='-device'/> + <qemu:arg value='virtio-net-pci,netdev=hostnet0,id=net0,mac=00:00:00:00:00:01,bus=pcie.0,addr=0x2'/> + </qemu:commandline> +</domain> + + + Connect to the first virtual machines console, by + running: + + virsh console nfv-ovs-vm1 + + The first virtual machine shall be called VM1. + + Connect to Target 2 through a new SSH session and run launch a + second VM by creating another XML file and running virsh + create <XML_FILE2><domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'> + <name>nfv-ovs-vm2</name> + <uuid>ed204646-1ad5-11e7-93ae-92361f002623</uuid> + <memory unit='KiB'>4194304</memory> + <currentMemory unit='KiB'>4194304</currentMemory> + + <memoryBacking> + <hugepages> + <page size='512' unit='M' nodeset='0'/> + </hugepages> + </memoryBacking> + + <os> + <type arch='aarch64' machine='virt,gic_version=3'>hvm</type> + <kernel>Image</kernel> + <cmdline>root=/dev/vda console=ttyAMA0,115200n8 maxcpus=24 coherent_pool=16M debug hugepagesz=512M hugepages=3 audit=0 isolcpus=1 nohz_full=1 rcu_nocbs=1 irqaffinity=0</cmdline> + <boot dev='hd'/> + </os> + + <features> + <acpi/> + <apic/> + </features> + + <vcpu placement='static'>2</vcpu> + + <cpu mode='host-model'> + <model fallback='allow'/> + <topology sockets='1' cores='2' threads='1'/> + <numa> + <cell id='0' cpus='0' memory='4194304' unit='KiB' memAccess='shared'/> + </numa> + </cpu> + + <cputune> + <vcpupin vcpu="0" cpuset="6"/> + <vcpupin vcpu="1" cpuset="7"/> + </cputune> + + <on_poweroff>destroy</on_poweroff> + <on_reboot>restart</on_reboot> + <on_crash>destroy</on_crash> + + <devices> + <emulator>/usr/bin/qemu-system-aarch64</emulator> + <disk type='file' device='disk'> + <driver name='qemu' type='raw' cache='none'/> + <source file='enea-nfv-access-guest-qemuarm64.ext4'/> + <target dev='vda' bus='virtio'/> + </disk> + + <serial type='pty'> + <target port='0'/> + </serial> + + <console type='pty'> + <target type='serial' port='0'/> + </console> + </devices> + + <qemu:commandline> + <qemu:arg value='-chardev'/> + <qemu:arg value='socket,id=charnet1,path=/var/run/openvswitch/vhost-user2'/> + + <qemu:arg value='-netdev'/> + <qemu:arg value='type=vhost-user,id=hostnet0,chardev=charnet1'/> + + <qemu:arg value='-device'/> + <qemu:arg value='virtio-net-pci,netdev=hostnet0,id=net0,mac=00:00:00:00:00:02,bus=pcie.0,addr=0x2'/> + </qemu:commandline> +</domain> + + + Connect to the second virtual machines console, by + running: + + virsh console nfv-ovs-vm2 + + The second virtual machine shall be called VM2. 
+ + Configure DPDK inside VM1:ifconfig enp0s2 down +echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode +modprobe vfio-pci +dpdk-devbind -b vfio-pci 0000:00:02.0Run testpmd inside + VM1:testpmd -v -c 0x3 -n 4 -d /usr/lib/librte_pmd_virtio.so.1.1 \ + -w 0000:00:02.0 -- -i --disable-hw-vlan-filter \ + --no-flush-rx --port-topology=chainedStart testpmd inside + VM1:startConfigure DPDK inside + VM2:ifconfig enp0s2 down +echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode +modprobe vfio-pci +dpdk-devbind -b vfio-pci 0000:00:02.0Run testpmd inside + VM2:testpmd -v -c 0x3 -n 4 -d /usr/lib/librte_pmd_virtio.so.1.1 \ + -w 0000:00:02.0 -- -i --disable-hw-vlan-filter \ + --no-flush-rx --port-topology=chainedSet VM2 for termination and start testpmd:set fwd rxonly startOn target 1, start pktgen traffic:start 0Use this command to refresh testpmd display in VM1 and VM2 and note the @@ -1326,312 +1389,88 @@ show port stats 0For VM1, we record the stats relevant for 64 - 14877757 + 14692306 - 7712835 + 1986888 - 6024320 + 1278884 - 6015525 + 1278792 - 9997 + 9870 - 40.0 + 8.70% 128 - 8441333 + 8445997 - 7257432 + 1910675 - 5717540 + 1205371 - 5716752 + 1205371 - 9994 + 10000 - 67.7 + 14.27% 256 - 4528865 + 4529126 - 4528717 + 1723468 - 4528717 + 1080976 - 4528621 + 1080977 - 9999 + 10000 - 99.9 + 23.87% - - -
- - - -
- SR-IOV in Virtual Machines - - PCI passthrough tests using pktgen and testpmd in virtual - machines. - pktgen[DPDK]VM - PHY - VM[DPDK] testpmd. - - Measurements: - - - - pktgen to testpmd in forwarding mode. - - - - pktgen to testpmd in termination mode. - - - -
- Test Setup - - SSD boot using the following grub.cfg - entry: linux (hd0,gpt3)/boot/bzImage root=/dev/sda3 ip=dhcp nohz_full=1-7 / -isolcpus=1-7 rcu-nocbs=1-7 rcu_nocb_poll intel_pstate=disable / -clocksource=tsc tsc=reliable nohpet nosoftlockup intel_idle.max_cstate=0 / -processor.max_cstate=0 mce=ignore_ce audit=0 nmi_watchdog=0 iommu=pt / -intel_iommu=on hugepagesz=1GB hugepages=8 default_hugepagesz=1GB / -hugepagesz=2M hugepages=2048 vfio_iommu_type1.allow_unsafe_interrupts=1Stop - other services and mount hugepages: systemctl stop openvswitch -mkdir -p /mnt/huge -mount -t hugetlbfs hugetlbfs /mnt/hugeConfigure SR-IOV - interfaces:/usr/share/usertools/dpdk-devbind.py --bind=ixgbe 0000:03:00.0 -echo 2 > /sys/class/net/eno3/device/sriov_numvfs -ifconfig eno3 10.0.0.1 -modprobe vfio_pci -/usr/share/usertools/dpdk-devbind.py --bind=vfio-pci 0000:03:10.0 -/usr/share/usertools/dpdk-devbind.py --bind=vfio-pci 0000:03:10.2 -ip link set eno3 vf 0 mac 0c:c4:7a:E5:0F:48 -ip link set eno3 vf 1 mac 0c:c4:7a:BF:52:E7Launch two QEMU - instances: taskset -c 4,5 qemu-system-x86_64 -cpu host,+invtsc,migratable=no -M / -q35 -smp cores=2,sockets=1 -vcpu 0,affinity=4 -vcpu 1,affinity=5 -enable-kvm / --nographic -kernel /mnt/qemu/bzImage / --drive file=/mnt/qemu/enea-nfv-access-guest-qemux86-64.ext4,if=virtio,/ -format=raw -m 4096 -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,/ -share=on -numa node,memdev=mem -mem-prealloc -device vfio-pci,host=03:10.0 / --append 'root=/dev/vda console=ttyS0 hugepagesz=2M hugepages=1024 / -isolcpus=1 nohz_full=1 rcu_nocbs=1 irqaffinity=0 rcu_nocb_poll / -intel_pstate=disable intel_idle.max_cstate=0 / -processor.max_cstate=0 mce=ignore_ce audit=0' - - -taskset -c 2,3 qemu-system-x86_64 -cpu host,+invtsc,migratable=no -M / -q35 -smp cores=2,sockets=1 -vcpu 0,affinity=2 -vcpu 1,affinity=3 -enable-kvm / --nographic -kernel /mnt/qemu/bzImage / --drive file=/mnt/qemu/enea-nfv-access-guest-qemux86-64.ext4,if=virtio,/ -format=raw -m 4096 -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,/ -share=on -numa node,memdev=mem -mem-prealloc -device vfio-pci,host=03:10.2 / --append 'root=/dev/vda console=ttyS0 hugepagesz=2M hugepages=1024 / -isolcpus=1 nohz_full=1 rcu_nocbs=1 irqaffinity=0 rcu_nocb_poll / -intel_pstate=disable intel_idle.max_cstate=0 processor.max_cstate=0 / -mce=ignore_ce audit=0'In the first VM, mount hugepages and - start pktgen:mkdir -p /mnt/huge && \ -mount -t hugetlbfs hugetlbfs /mnt/huge -modprobe igb_uio -/usr/share/usertools/dpdk-devbind.py --bind=igb_uio 0000:00:03.0 -cd /usr/share/apps/pktgen -./pktgen -c 0x3 -- -P -m "1.0"In the pktgen console set the - MAC of the destination and start generating - packages:set mac 0 0C:C4:7A:BF:52:E7 -strIn the second VM, mount hugepages and start - testpmd:mkdir -p /mnt/huge && \ -mount -t hugetlbfs hugetlbfs /mnt/huge -modprobe igb_uio -/usr/share/usertools/dpdk-devbind.py --bind=igb_uio 0000:00:03.0 -testpmd -c 0x3 -n 2 -- -i --txd=512 --rxd=512 --port-topology=chained / ---eth-peer=0,0c:c4:7a:e5:0f:48In order to enable forwarding mode, in the testpmd console, - run:set fwd mac -startIn order to enable termination mode, in the testpmd console, - run:set fwd rxonly -start - Results in forwarding mode - - - - - Bytes - - VM1 pktgen pps - TX - - VM1 pktgen pps - RX - - VM2 testpmd - pps RX - - VM2 testpmd - pps TX - - - - 64 - - 7102096 - - 7101897 - - 7103853 - - 7103793 - - - - 128 - - 5720016 - - 5720256 - - 5722081 - - 5722083 - - - - 256 - - 3456619 - - 3456164 - - 3456319 - - 3456321 - 
- - - 512 - - 1846671 - - 1846628 - - 1846652 - - 1846657 - - - - 1024 - - 940799 - - 940748 - - 940788 - - 940788 - - - - 1500 - - 649594 - - 649526 - - 649563 - - 649563 - - - -
- Results in termination mode - - - - - Bytes - - VM1 pktgen pps - TX - - VM2 testpmd - RX - - - - 64 - - 14202904 - - 14203944 - - - - 128 - - 8434766 + + 512 - 8437525 - + 2349638 - - 256 + 1559367 - 4532131 + 972923 - 4532348 - + 972921 - - 512 + 10000 - 2349344 + 41.41% + - 2349032 - + + 1024 - - 1024 + 1197322 - 1197293 + 1197318 - 1196699 - + 839508 - - 1500 + 839508 - 822321 + 10000 - 822276 - - - -
+ 70.12% + + + +
 
-  
\ No newline at end of file
+  