Benchmarks
Hardware Setup
The following table describes the hardware prerequisites for the benchmark setup:
Item | Description
Server Platform | Supermicro X10SDV-4C-TLN2F (http://www.supermicro.com/products/motherboard/xeon/d/X10SDV-4C-TLN2F.cfm)
ARCH | x86-64
Processor | 1 x Intel Xeon D-1521 (Broadwell), 4 cores (8 hyper-threads) per processor
CPU freq | 2.40 GHz
RAM | 16 GB
Network | Dual integrated 10G ports
Storage | Samsung 850 Pro 128 GB SSD
Generic tests configuration:
All tests use one port, one core and one Rx/Tx queue for fast path traffic.
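As a concrete illustration of this single-port, single-core, single-queue setup, here is how it maps onto the pktgen and testpmd options used in the test setups below (the core numbers are taken from the Docker forwarding test; the [rx:tx].port reading of the -m option is our interpretation of pktgen's port-mapping syntax):
   # pktgen: "[3:2].0" maps port 0 to one RX core (3) and one TX core (2)
   ./pktgen -c 0xF -n 1 -- -P -m "[3:2].0"
   # testpmd: one forwarding core and a single RX/TX queue pair on port 0
   # (flags as used in the full command lines later in this chapter)
   #   --portmask=0x1 --nb-cores=1 --rxq=1 --txq=1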
BIOS Settings
The table below details the BIOS settings whose default values were changed for the performance measurements.
Menu Path | Setting Name | Enea NFV Access value | BIOS Default value
CPU Configuration | Direct Cache Access (DCA) | Enable | Auto
CPU Configuration / Advanced Power Management Configuration | EIST (P-States) | Disable | Enable
CPU Configuration / Advanced Power Management Configuration / CPU C State Control | CPU C State | Disable | Enable
CPU Configuration / Advanced Power Management Configuration / CPU Advanced PM Tuning / Energy Perf BIAS | Energy Performance Tuning | Disable | Enable
CPU Configuration / Advanced Power Management Configuration / CPU Advanced PM Tuning / Energy Perf BIAS | Energy Performance BIAS Setting | Performance | Balanced Performance
CPU Configuration / Advanced Power Management Configuration / CPU Advanced PM Tuning / Energy Perf BIAS | Power/Performance Switch | Disable | Enable
CPU Configuration / Advanced Power Management Configuration / CPU Advanced PM Tuning / Program PowerCTL_MSR | Energy Efficient Turbo | Disable | Enable
Chipset Configuration / North Bridge / IIO Configuration | EV DFX Features | Enable | Disable
Chipset Configuration / North Bridge / Memory Configuration | Enforce POR | Disable | Enable
Chipset Configuration / North Bridge / Memory Configuration | Memory Frequency | 2400 | Auto
Chipset Configuration / North Bridge / Memory Configuration | DRAM RAPL Baseline | Disable | DRAM RAPL Mode 1
Use Cases
Docker related benchmarks
Forward traffic in Docker
Benchmarking traffic forwarding using testpmd in a Docker container.
Pktgen is used to generate UDP traffic that reaches testpmd, running in a Docker container, and is then forwarded back to the source (forwarding).
This test measures:
- pktgen TX and RX, in packets per second (pps) and Mbps
- testpmd TX and RX, in packets per second (pps)
- throughput (%), obtained by dividing testpmd RX pps by pktgen TX pps
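As a worked example of this calculation, take the 64-byte row of the forwarding results below (bc is assumed to be available on the host):
   # throughput (%) = testpmd RX pps / pktgen TX pps * 100
   echo "scale=2; 7692807 / 14890993 * 100" | bc
   # prints 51.66; the table reports 51.74%, the small difference coming from
   # the counters being sampled at slightly different moments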
Test Setup for Target 1
Start by following the steps below:
1. SSD boot using the following grub.cfg entry:
   linux (hd0,gpt3)/boot/bzImage root=/dev/sda3 ip=dhcp nohz_full=1-7 /
   isolcpus=1-7 rcu-nocbs=1-7 rcu_nocb_poll intel_pstate=disable /
   clocksource=tsc tsc=reliable nohpet nosoftlockup intel_idle.max_cstate=0 /
   processor.max_cstate=0 mce=ignore_ce audit=0 nmi_watchdog=0 iommu=pt /
   intel_iommu=on hugepagesz=1GB hugepages=8 default_hugepagesz=1GB /
   hugepagesz=2M hugepages=2048 vfio_iommu_type1.allow_unsafe_interrupts=1
2. Kill unnecessary services:
   killall ovsdb-server ovs-vswitchd
   rm -rf /etc/openvswitch/*
   mkdir -p /var/run/openvswitch
3. Mount hugepages and configure DPDK:
   mkdir -p /mnt/huge
   mount -t hugetlbfs nodev /mnt/huge
   modprobe igb_uio
   dpdk-devbind --bind=igb_uio 0000:03:00.0
4. Run pktgen:
   cd /usr/share/apps/pktgen/
   ./pktgen -c 0xF -n 1 -- -P -m "[3:2].0"
5. In the pktgen console, start traffic generation:
   str
6. To change the frame size for pktgen, choose one of [64, 128, 256, 512]:
   set 0 size <number>
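Before launching pktgen, it can be worth confirming that the port was really bound to igb_uio; dpdk-devbind provides a status listing:
   # 0000:03:00.0 should appear under the devices using a DPDK-compatible driver
   dpdk-devbind --status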
Test Setup for Target 2
Start by following the steps below:
1. SSD boot using the following grub.cfg entry:
   linux (hd0,gpt3)/boot/bzImage root=/dev/sda3 ip=dhcp nohz_full=1-7 /
   isolcpus=1-7 rcu-nocbs=1-7 rcu_nocb_poll intel_pstate=disable /
   clocksource=tsc tsc=reliable nohpet nosoftlockup intel_idle.max_cstate=0 /
   processor.max_cstate=0 mce=ignore_ce audit=0 nmi_watchdog=0 iommu=pt /
   intel_iommu=on hugepagesz=1GB hugepages=8 default_hugepagesz=1GB /
   hugepagesz=2M hugepages=2048 vfio_iommu_type1.allow_unsafe_interrupts=1
2. A Docker guest image is expected to be present on the target. Configure the OVS bridge:
   # OVS old config clean-up
   killall ovsdb-server ovs-vswitchd
   rm -rf /etc/openvswitch/*
   mkdir -p /var/run/openvswitch
   # Mount hugepages and bind interfaces to DPDK
   mkdir -p /mnt/huge
   mount -t hugetlbfs nodev /mnt/huge
   modprobe igb_uio
   dpdk-devbind --bind=igb_uio 0000:03:00.0
   # configure openvswitch with DPDK
   export DB_SOCK=/var/run/openvswitch/db.sock
   ovsdb-tool create /etc/openvswitch/conf.db /
   /usr/share/openvswitch/vswitch.ovsschema
   ovsdb-server --remote=punix:$DB_SOCK /
   --remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach
   ovs-vsctl --no-wait init
   ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x10
   ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0xc
   ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem=2048
   ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
   ovs-vswitchd unix:$DB_SOCK --pidfile --detach /
   --log-file=/var/log/openvswitch/ovs-vswitchd.log
   ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
   ovs-vsctl add-port ovsbr0 vhost-user1 /
   -- set Interface vhost-user1 type=dpdkvhostuser ofport_request=1
   ovs-vsctl add-port ovsbr0 dpdk0 -- set Interface /
   dpdk0 type=dpdk options:dpdk-devargs=0000:03:00.0 ofport_request=2
   # configure static flows
   ovs-ofctl del-flows ovsbr0
   ovs-ofctl add-flow ovsbr0 in_port=1,action=output:2
   ovs-ofctl add-flow ovsbr0 in_port=2,action=output:1
3. Import a Docker container:
   docker import enea-image-virtualization-guest-qemux86-64.tar.gz el7_guest
4. Start the Docker container:
   docker run -it --rm -v /var/run/openvswitch/:/var/run/openvswitch/ /
   -v /mnt/huge:/mnt/huge el7_guest /bin/bash
5. Start the testpmd application in the Docker container:
   testpmd -c 0x30 -n 2 --file-prefix prog1 --socket-mem 512 --no-pci /
   --vdev=virtio_user0,path=/var/run/openvswitch/vhost-user1 /
   -d /usr/lib/librte_pmd_virtio.so.1.1 -- --burst 64 --disable-hw-vlan /
   --disable-rss -i --portmask=0x1 --coremask=0x20 --nb-cores=1 /
   --rxq=1 --txq=1 --txd=512 --rxd=512 --txqflags=0xf00 --port-topology=chained
6. To start traffic forwarding, run the following command in the testpmd CLI:
   start
7. To start traffic in termination mode instead (no traffic sent on TX), run the following commands in the testpmd CLI:
   set fwd rxonly
   start
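Before starting traffic, the bridge, ports and flows created above can be double-checked from the host with the standard OVS inspection commands:
   ovs-vsctl show                # ovsbr0 should list ports dpdk0 and vhost-user1
   ovs-ofctl dump-flows ovsbr0   # should show the two output flows added above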
Results in forwarding mode
Bytes | pktgen pps TX | pktgen MBits/s TX | pktgen pps RX | pktgen MBits/s RX | testpmd pps RX | testpmd pps TX | throughput (%)
64   | 14890993 | 10006 | 7706039 | 5178 | 7692807 | 7692864 | 51.74
128  | 8435104  | 9999  | 7689458 | 9060 | 7684962 | 7684904 | 90.6
256  | 4532384  | 9999  | 4532386 | 9998 | 4532403 | 4532403 | 99.9
Results in termination mode
Bytes | pktgen pps TX | testpmd pps RX | throughput (%)
64   | 14890993 | 7330403 | 49.2
128  | 8435104  | 7330379 | 86.9
256  | 4532484  | 4532407 | 99.9
Forward traffic from Docker to another Docker on the same host
Benchmark a combined test using testpmd running in two Docker instances: one forwards the traffic, the second terminates it.
Packets are generated with pktgen and transmitted to the first testpmd, which receives and forwards them to the second testpmd, which receives and terminates them.
Measurements are made of:
- pktgen TX, in pps and Mbits/s
- testpmd TX and RX pps in Docker1
- testpmd RX pps in Docker2
- throughput (%), obtained by dividing Docker2 testpmd RX pps by pktgen TX pps
Test Setup for Target 1
Start by following the steps below:
1. SSD boot using the following grub.cfg entry:
   linux (hd0,gpt3)/boot/bzImage root=/dev/sda3 ip=dhcp nohz_full=1-7 /
   isolcpus=1-7 rcu-nocbs=1-7 rcu_nocb_poll intel_pstate=disable /
   clocksource=tsc tsc=reliable nohpet nosoftlockup intel_idle.max_cstate=0 /
   processor.max_cstate=0 mce=ignore_ce audit=0 nmi_watchdog=0 iommu=pt /
   intel_iommu=on hugepagesz=1GB hugepages=8 default_hugepagesz=1GB /
   hugepagesz=2M hugepages=2048 vfio_iommu_type1.allow_unsafe_interrupts=1
2. Configure DPDK:
   mkdir -p /mnt/huge
   mount -t hugetlbfs nodev /mnt/huge
   modprobe igb_uio
   dpdk-devbind --bind=igb_uio 0000:03:00.0
3. Run pktgen:
   cd /usr/share/apps/pktgen/
   ./pktgen -c 0xF -n 1 -- -P -m "[3:2].0"
4. Choose one of the values from [64, 128, 256, 512] to change the packet size:
   set 0 size <number>
Test Setup for Target 2
Start by following the steps below:
1. SSD boot using the following grub.cfg entry:
   linux (hd0,gpt3)/boot/bzImage root=/dev/sda3 ip=dhcp nohz_full=1-7 /
   isolcpus=1-7 rcu-nocbs=1-7 rcu_nocb_poll intel_pstate=disable /
   clocksource=tsc tsc=reliable nohpet nosoftlockup intel_idle.max_cstate=0 /
   processor.max_cstate=0 mce=ignore_ce audit=0 nmi_watchdog=0 /
   iommu=pt intel_iommu=on hugepagesz=1GB hugepages=8 default_hugepagesz=1GB /
   hugepagesz=2M hugepages=2048 vfio_iommu_type1.allow_unsafe_interrupts=1
2. Kill unnecessary services:
   killall ovsdb-server ovs-vswitchd
   rm -rf /etc/openvswitch/*
   mkdir -p /var/run/openvswitch
3. Configure DPDK:
   mkdir -p /mnt/huge
   mount -t hugetlbfs nodev /mnt/huge
   modprobe igb_uio
   dpdk-devbind --bind=igb_uio 0000:03:00.0
4. Configure the OVS bridge:
   export DB_SOCK=/var/run/openvswitch/db.sock
   ovsdb-tool create /etc/openvswitch/conf.db /
   /usr/share/openvswitch/vswitch.ovsschema
   ovsdb-server --remote=punix:$DB_SOCK /
   --remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach
   ovs-vsctl --no-wait init
   ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x10
   ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0xcc
   ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem=2048
   ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
   ovs-vswitchd unix:$DB_SOCK --pidfile --detach /
   --log-file=/var/log/openvswitch/ovs-vswitchd.log
   ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
   ovs-vsctl add-port ovsbr0 vhost-user1 -- set Interface /
   vhost-user1 type=dpdkvhostuser ofport_request=1
   ovs-vsctl add-port ovsbr0 vhost-user2 -- set Interface /
   vhost-user2 type=dpdkvhostuser ofport_request=2
   ovs-vsctl add-port ovsbr0 dpdk0 -- set Interface dpdk0 /
   type=dpdk options:dpdk-devargs=0000:03:00.0 ofport_request=3
   ovs-ofctl del-flows ovsbr0
   ovs-ofctl add-flow ovsbr0 in_port=3,action=output:2
   ovs-ofctl add-flow ovsbr0 in_port=2,action=output:1
5. Import a Docker container:
   docker import enea-image-virtualization-guest-qemux86-64.tar.gz el7_guest
6. Start the first Docker container:
   docker run -it --rm --cpuset-cpus=4,5 /
   -v /var/run/openvswitch/:/var/run/openvswitch/ /
   -v /mnt/huge:/mnt/huge el7_guest /bin/bash
7. Start the testpmd application in Docker1:
   testpmd -c 0x30 -n 2 --file-prefix prog1 --socket-mem 512 --no-pci /
   --vdev=virtio_user0,path=/var/run/openvswitch/vhost-user1 /
   -d /usr/lib/librte_pmd_virtio.so.1.1 -- --burst 64 --disable-hw-vlan /
   --disable-rss -i --portmask=0x1 --coremask=0x20 --nb-cores=1 /
   --rxq=1 --txq=1 --txd=512 --rxd=512 --txqflags=0xf00 --port-topology=chained
8. Configure it in termination mode:
   set fwd rxonly
9. Run the testpmd application:
   start
10. Open a new console to the host and start the second Docker instance:
    docker run -it --rm --cpuset-cpus=0,1 -v /var/run/openvswitch/:/var/run/openvswitch/ /
    -v /mnt/huge:/mnt/huge el7_guest /bin/bash
11. In the second container, start testpmd:
    testpmd -c 0x0F --file-prefix prog2 --socket-mem 512 --no-pci /
    --vdev=virtio_user0,path=/var/run/openvswitch/vhost-user2 /
    -d /usr/lib/librte_pmd_virtio.so.1.1 -- -i --disable-hw-vlan
12. Run the testpmd application in the second Docker container:
    testpmd -c 0x3 -n 2 --file-prefix prog2 --socket-mem 512 --no-pci /
    --vdev=virtio_user0,path=/var/run/openvswitch/vhost-user2 /
    -d /usr/lib/librte_pmd_virtio.so.1.1 -- --burst 64 --disable-hw-vlan /
    --disable-rss -i --portmask=0x1 --coremask=0x2 --nb-cores=1 --rxq=1 /
    --txq=1 --txd=512 --rxd=512 --txqflags=0xf00 --port-topology=chained
13. In the testpmd shell, run:
    start
14. Start pktgen traffic by running the following command in the pktgen CLI:
    start 0
15. To record traffic results, run the following in the testpmd applications:
    show port stats 0
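If testpmd inside either container fails to allocate memory, a common cause is that the hugepages requested on the kernel command line were not actually reserved; this can be checked on the host before re-running the test:
   grep Huge /proc/meminfo        # default (1 GB) pool counters
   ls /sys/kernel/mm/hugepages/   # hugepages-2048kB and hugepages-1048576kB pools expected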
Results
Bytes | Target 1 pktgen pps TX | Target 2 (forwarding) testpmd pps RX | Target 2 (forwarding) testpmd pps TX | Target 2 (termination) testpmd pps RX
64   | 14844628 | 5643565 | 3459922 | 3457326
128  | 8496962  | 5667860 | 3436811 | 3438918
256  | 4532372  | 4532362 | 3456623 | 3457115
512  | 2367641  | 2349450 | 2349450 | 2349446
SR-IOV in Docker
PCI passthrough tests using pktgen and testpmd in Docker.
pktgen [DPDK] Docker - PHY - Docker [DPDK] testpmd
Measurements:
RX packets per second in testpmd (with testpmd configured in
rxonly mode).
Test Setup
1. Boot Enea NFV Access from SSD using the following grub.cfg entry:
   linux (hd0,gpt3)/boot/bzImage root=/dev/sda3 ip=dhcp nohz_full=1-7 /
   isolcpus=1-7 rcu-nocbs=1-7 rcu_nocb_poll intel_pstate=disable clocksource=tsc /
   tsc=reliable nohpet nosoftlockup intel_idle.max_cstate=0 processor.max_cstate=0 /
   mce=ignore_ce audit=0 nmi_watchdog=0 iommu=pt intel_iommu=on hugepagesz=1GB /
   hugepages=8 default_hugepagesz=1GB hugepagesz=2M hugepages=2048 /
   vfio_iommu_type1.allow_unsafe_interrupts=1
2. Allow unsafe interrupts:
   echo 1 > /sys/module/vfio_iommu_type1/parameters/allow_unsafe_interrupts
3. Configure DPDK:
   mkdir -p /mnt/huge
   mount -t hugetlbfs nodev /mnt/huge
   dpdk-devbind.py --bind=ixgbe 0000:03:00.0
   ifconfig eno3 192.168.1.2
   echo 2 > /sys/class/net/eno3/device/sriov_numvfs
   modprobe vfio-pci
   dpdk-devbind.py --bind=vfio-pci 0000:03:10.0
   dpdk-devbind.py --bind=vfio-pci 0000:03:10.2
4. Start two Docker containers:
   docker run --privileged -it --rm -v /mnt/huge:/mnt/huge/ /
   --device /dev/vfio/vfio el7_guest /bin/bash
   docker run --privileged -it --rm -v /mnt/huge:/mnt/huge/ /
   --device /dev/vfio/vfio el7_guest /bin/bash
5. In the first container, start pktgen:
   cd /usr/share/apps/pktgen/
   ./pktgen -c 0x1f -w 0000:03:10.0 -n 1 --file-prefix pg1 /
   --socket-mem 1024 -- -P -m "[3:4].0"
6. In the pktgen prompt, set the destination MAC address and start traffic generation:
   set mac 0 XX:XX:XX:XX:XX:XX
   str
7. In the second container, start testpmd:
   testpmd -c 0x7 -n 1 -w 0000:03:10.2 -- -i --portmask=0x1 /
   --txd=256 --rxd=256 --port-topology=chained
8. In the testpmd prompt, set forwarding to rxonly and start:
   set fwd rxonly
   start
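If pktgen or testpmd cannot find the device passed with -w, verify that the two virtual functions were actually created by the sriov_numvfs write above:
   ip link show eno3        # should list vf 0 and vf 1 under the physical port
   lspci | grep -i virtual  # the VFs should appear as 03:10.0 and 03:10.2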
Results
Bytes | pktgen pps TX | testpmd pps RX | pktgen MBits/s TX | throughput (%)
64   | 14525286 | 14190869 | 9739  | 97.7
128  | 8456960  | 8412172  | 10013 | 99.4
256  | 4566624  | 4526587  | 10083 | 99.1
512  | 2363744  | 2348015  | 10060 | 99.3
VM related benchmarks
Forward/termination traffic in one VM
Benchmarking UDP traffic forwarding and termination using testpmd in a virtual machine.
The pktgen application is used to generate traffic that reaches testpmd running in a virtual machine and is forwarded back to the source. With the same setup, a second measurement is taken with the traffic terminated in the virtual machine.
This test case measures:
- pktgen TX and RX, in packets per second (pps) and Mbps
- testpmd TX and RX, in packets per second (pps)
- throughput (%), obtained by dividing testpmd RX pps by pktgen TX pps
Test Setup for Target 1
Start with the steps below:
1. SSD boot using the following grub.cfg entry:
   linux (hd0,gpt3)/boot/bzImage root=/dev/sda3 ip=dhcp nohz_full=1-7 /
   isolcpus=1-7 rcu-nocbs=1-7 rcu_nocb_poll intel_pstate=disable /
   clocksource=tsc tsc=reliable nohpet nosoftlockup intel_idle.max_cstate=0 /
   processor.max_cstate=0 mce=ignore_ce audit=0 nmi_watchdog=0 iommu=pt /
   intel_iommu=on hugepagesz=1GB hugepages=8 default_hugepagesz=1GB /
   hugepagesz=2M hugepages=2048 vfio_iommu_type1.allow_unsafe_interrupts=1
2. Kill unnecessary services:
   killall ovsdb-server ovs-vswitchd
   rm -rf /etc/openvswitch/*
   mkdir -p /var/run/openvswitch
3. Configure DPDK:
   mkdir -p /mnt/huge
   mount -t hugetlbfs nodev /mnt/huge
   modprobe igb_uio
   dpdk-devbind --bind=igb_uio 0000:03:00.0
4. Run pktgen:
   cd /usr/share/apps/pktgen/
   ./pktgen -c 0x7 -n 4 --proc-type auto --socket-mem 256 /
   -w 0000:03:00.0 -- -P -m "[1:2].0"
5. Set the pktgen frame size to use, choosing from [64, 128, 256, 512]:
   set 0 size 64
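The isolation parameters in the grub.cfg entry above are what keep the pktgen cores free of housekeeping work; after booting, it is worth confirming that they took effect:
   cat /proc/cmdline                      # should contain isolcpus=1-7, nohz_full=1-7, etc.
   cat /sys/devices/system/cpu/isolated   # expected to print 1-7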
Test Setup for Target 2
Start by following the steps below:
1. SSD boot using the following grub.cfg entry:
   linux (hd0,gpt3)/boot/bzImage root=/dev/sda3 ip=dhcp nohz_full=1-7 /
   isolcpus=1-7 rcu-nocbs=1-7 rcu_nocb_poll intel_pstate=disable /
   clocksource=tsc tsc=reliable nohpet nosoftlockup intel_idle.max_cstate=0 /
   processor.max_cstate=0 mce=ignore_ce audit=0 nmi_watchdog=0 iommu=pt /
   intel_iommu=on hugepagesz=1GB hugepages=8 default_hugepagesz=1GB /
   hugepagesz=2M hugepages=2048 vfio_iommu_type1.allow_unsafe_interrupts=1
2. Kill unnecessary services:
   killall ovsdb-server ovs-vswitchd
   rm -rf /etc/openvswitch/*
   mkdir -p /var/run/openvswitch
3. Configure DPDK:
   mkdir -p /mnt/huge
   mount -t hugetlbfs nodev /mnt/huge
   modprobe igb_uio
   dpdk-devbind --bind=igb_uio 0000:03:00.0
4. Configure OVS:
   export DB_SOCK=/var/run/openvswitch/db.sock
   ovsdb-tool create /etc/openvswitch/conf.db /
   /usr/share/openvswitch/vswitch.ovsschema
   ovsdb-server --remote=punix:$DB_SOCK /
   --remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach
   ovs-vsctl --no-wait init
   ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x10
   ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0xc
   ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem=2048
   ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
   ovs-vswitchd unix:$DB_SOCK --pidfile --detach /
   --log-file=/var/log/openvswitch/ovs-vswitchd.log
   ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
   ovs-vsctl add-port ovsbr0 vhost-user1 /
   -- set Interface vhost-user1 type=dpdkvhostuser -- set Interface /
   vhost-user1 ofport_request=2
   ovs-vsctl add-port ovsbr0 dpdk0 -- set Interface dpdk0 /
   type=dpdk options:dpdk-devargs=0000:03:00.0 /
   -- set Interface dpdk0 ofport_request=1
   chmod 777 /var/run/openvswitch/vhost-user1
   ovs-ofctl del-flows ovsbr0
   ovs-ofctl add-flow ovsbr0 in_port=1,action=output:2
   ovs-ofctl add-flow ovsbr0 in_port=2,action=output:1
5. Launch QEMU:
   taskset -c 0,1 qemu-system-x86_64 -cpu host,+invtsc,migratable=no /
   -M q35 -smp cores=2,sockets=1 -vcpu 0,affinity=0 -vcpu 1,affinity=1 /
   -enable-kvm -nographic -realtime mlock=on -kernel /mnt/qemu/bzImage /
   -drive file=/mnt/qemu/enea-image-virtualization-guest-qemux86-64.ext4,/
   if=virtio,format=raw -m 4096 -object memory-backend-file,id=mem,/
   size=4096M,mem-path=/mnt/huge,share=on -numa node,memdev=mem /
   -mem-prealloc -chardev socket,id=char0,path=/var/run/openvswitch/vhost-user1 /
   -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce /
   -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,/
   mrg_rxbuf=on,rx_queue_size=1024,csum=off,gso=off,guest_tso4=off,/
   guest_tso6=off,guest_ecn=off -append 'root=/dev/vda console=ttyS0 /
   hugepagesz=2M hugepages=1024 isolcpus=1 nohz_full=1 rcu_nocbs=1 /
   irqaffinity=0 rcu_nocb_poll intel_pstate=disable intel_idle.max_cstate=0 /
   processor.max_cstate=0 mce=ignore_ce audit=0'
6. Inside QEMU, configure DPDK:
   mkdir -p /mnt/huge
   mount -t hugetlbfs nodev /mnt/huge
   modprobe igb_uio
   dpdk-devbind --bind=igb_uio 0000:00:02.0
7. Inside QEMU, run testpmd:
   testpmd -c 0x3 -n 2 -d librte_pmd_virtio.so.1.1 /
   -- --burst 64 --disable-hw-vlan --disable-rss -i --portmask=0x1 /
   --coremask=0x2 --nb-cores=1 --rxq=1 --txq=1 --txd=512 --rxd=512 /
   --txqflags=0xf00 --port-topology=chained
8. For the forwarding test, start testpmd directly:
   start
9. For the termination test, set testpmd to receive only, then start it:
   set fwd rxonly
   start
10. On Target 1, start pktgen traffic:
    start 0
11. On Target 2, use this command to refresh the testpmd display and note the highest values:
    show port stats 0
12. To stop traffic from pktgen, in order to choose a different frame size:
    stop 0
13. To clear the counters in testpmd:
    clear port stats
    show port stats 0
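A note on the QEMU step above: if QEMU exits complaining that it cannot connect to the vhost-user chardev, check that OVS actually created the socket and that it is accessible (which is what the chmod 777 in the OVS step takes care of):
   ls -l /var/run/openvswitch/vhost-user1   # socket created by the ovs-vsctl add-port command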
Results in forwarding mode
Bytes | pktgen pps RX | pktgen pps TX | testpmd pps RX | testpmd pps TX | pktgen MBits/s RX | pktgen MBits/s TX | throughput (%)
64   | 7755769 | 14858714 | 7755447 | 7755447 | 5207 | 9984 | 52.2
128  | 7714626 | 8435184  | 7520349 | 6932520 | 8204 | 9986 | 82.1
256  | 4528847 | 4528854  | 4529030 | 4529034 | 9999 | 9999 | 99.9
Results in termination mode
Bytes | pktgen pps TX | testpmd pps RX | pktgen MBits/s TX | throughput (%)
64   | 15138992 | 7290663 | 10063 | 48.2
128  | 8426825  | 6902646 | 9977  | 81.9
256  | 4528957  | 4528912 | 9999  | 100
Forward traffic between two VMs
Benchmark a combined test using two virtual machines: the first forwards traffic to the second, which terminates it.
Measurements are made of:
- pktgen TX, in pps and Mbits/s
- testpmd TX and RX pps in VM1
- testpmd RX pps in VM2
- throughput (%), obtained by dividing VM2 testpmd RX pps by pktgen TX pps
Test Setup for Target 1
Start by doing the following:
1. SSD boot using the following grub.cfg entry:
   linux (hd0,gpt3)/boot/bzImage root=/dev/sda3 ip=dhcp nohz_full=1-7 /
   isolcpus=1-7 rcu-nocbs=1-7 rcu_nocb_poll intel_pstate=disable /
   clocksource=tsc tsc=reliable nohpet nosoftlockup intel_idle.max_cstate=0 /
   processor.max_cstate=0 mce=ignore_ce audit=0 nmi_watchdog=0 iommu=pt /
   intel_iommu=on hugepagesz=1GB hugepages=8 default_hugepagesz=1GB /
   hugepagesz=2M hugepages=2048 vfio_iommu_type1.allow_unsafe_interrupts=1
2. Kill services:
   killall ovsdb-server ovs-vswitchd
   rm -rf /etc/openvswitch/*
   mkdir -p /var/run/openvswitch
3. Configure DPDK:
   mkdir -p /mnt/huge
   mount -t hugetlbfs nodev /mnt/huge
   modprobe igb_uio
   dpdk-devbind --bind=igb_uio 0000:03:00.0
4. Run pktgen:
   cd /usr/share/apps/pktgen/
   ./pktgen -c 0x7 -n 4 --proc-type auto --socket-mem 256 /
   -w 0000:03:00.0 -- -P -m "[1:2].0"
5. Set the pktgen frame size to use, choosing from [64, 128, 256, 512]:
   set 0 size 64
Test Setup for Target 2
Start by doing the following:
1. SSD boot using the following grub.cfg entry:
   linux (hd0,gpt3)/boot/bzImage root=/dev/sda3 ip=dhcp nohz_full=1-7 /
   isolcpus=1-7 rcu-nocbs=1-7 rcu_nocb_poll intel_pstate=disable /
   clocksource=tsc tsc=reliable nohpet nosoftlockup intel_idle.max_cstate=0 /
   processor.max_cstate=0 mce=ignore_ce audit=0 nmi_watchdog=0 iommu=pt /
   intel_iommu=on hugepagesz=1GB hugepages=8 default_hugepagesz=1GB /
   hugepagesz=2M hugepages=2048 vfio_iommu_type1.allow_unsafe_interrupts=1
2. Kill services:
   killall ovsdb-server ovs-vswitchd
   rm -rf /etc/openvswitch/*
   mkdir -p /var/run/openvswitch
3. Configure DPDK:
   mkdir -p /mnt/huge
   mount -t hugetlbfs nodev /mnt/huge
   modprobe igb_uio
   dpdk-devbind --bind=igb_uio 0000:03:00.0
4. Configure OVS:
   export DB_SOCK=/var/run/openvswitch/db.sock
   ovsdb-tool create /etc/openvswitch/conf.db /
   /usr/share/openvswitch/vswitch.ovsschema
   ovsdb-server --remote=punix:$DB_SOCK /
   --remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach
   ovs-vsctl --no-wait init
   ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x10
   ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0xc
   ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem=2048
   ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
   ovs-vswitchd unix:$DB_SOCK --pidfile /
   --detach --log-file=/var/log/openvswitch/ovs-vswitchd.log
   ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
   ovs-vsctl add-port ovsbr0 dpdk0 /
   -- set Interface dpdk0 type=dpdk options:dpdk-devargs=0000:03:00.0 ofport_request=1
   ovs-vsctl add-port ovsbr0 vhost-user1 /
   -- set Interface vhost-user1 type=dpdkvhostuser ofport_request=2
   ovs-vsctl add-port ovsbr0 vhost-user2 /
   -- set Interface vhost-user2 type=dpdkvhostuser ofport_request=3
   ovs-ofctl del-flows ovsbr0
   ovs-ofctl add-flow ovsbr0 in_port=1,action=output:2
   ovs-ofctl add-flow ovsbr0 in_port=2,action=output:3
5. Launch the first QEMU instance, VM1:
   taskset -c 0,1 qemu-system-x86_64 -cpu host,+invtsc,migratable=no -M q35 /
   -smp cores=2,sockets=1 -vcpu 0,affinity=0 -vcpu 1,affinity=1 -enable-kvm /
   -nographic -realtime mlock=on -kernel /home/root/qemu/bzImage /
   -drive file=/home/root/qemu/enea-image-virtualization-guest-qemux86-64.ext4,/
   if=virtio,format=raw -m 2048 -object memory-backend-file,id=mem,/
   size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem /
   -mem-prealloc -chardev socket,id=char0,path=/var/run/openvswitch/vhost-user1 /
   -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce /
   -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,/
   mrg_rxbuf=on,rx_queue_size=1024,csum=off,gso=off,guest_tso4=off,/
   guest_tso6=off,guest_ecn=off -append 'root=/dev/vda console=ttyS0 /
   hugepagesz=2M hugepages=512 isolcpus=1 nohz_full=1 rcu_nocbs=1 /
   irqaffinity=0 rcu_nocb_poll intel_pstate=disable intel_idle.max_cstate=0 /
   processor.max_cstate=0 mce=ignore_ce audit=0'
6. Connect to Target 2 through a new SSH session and run a second QEMU instance, VM2, so that it gets its own console, separate from VM1:
   taskset -c 4,5 qemu-system-x86_64 -cpu host,+invtsc,migratable=no /
   -M q35 -smp cores=2,sockets=1 -vcpu 0,affinity=4 -vcpu 1,affinity=5 /
   -enable-kvm -nographic -realtime mlock=on -kernel /home/root/qemu2/bzImage /
   -drive file=/home/root/qemu2/enea-image-virtualization-guest-qemux86-64.ext4,/
   if=virtio,format=raw -m 2048 -object memory-backend-file,id=mem,size=2048M,/
   mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc /
   -chardev socket,id=char1,path=/var/run/openvswitch/vhost-user2 /
   -netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce /
   -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet1,/
   mrg_rxbuf=on,rx_queue_size=1024,csum=off,gso=off,guest_tso4=off,/
   guest_tso6=off,guest_ecn=off -append 'root=/dev/vda console=ttyS0 /
   hugepagesz=2M hugepages=512 isolcpus=1 nohz_full=1 rcu_nocbs=1 /
   irqaffinity=0 rcu_nocb_poll intel_pstate=disable intel_idle.max_cstate=0 /
   processor.max_cstate=0 mce=ignore_ce audit=0'
7. Configure DPDK inside VM1:
   mkdir -p /mnt/huge
   mount -t hugetlbfs nodev /mnt/huge
   modprobe igb_uio
   dpdk-devbind --bind=igb_uio 0000:00:02.0
8. Run testpmd inside VM1:
   testpmd -c 0x3 -n 2 -d librte_pmd_virtio.so.1.1 /
   -- --burst 64 --disable-hw-vlan --disable-rss -i /
   --portmask=0x1 --coremask=0x2 --nb-cores=1 --rxq=1 /
   --txq=1 --txd=512 --rxd=512 --txqflags=0xf00 --port-topology=chained
9. Start testpmd inside VM1:
   start
10. Configure DPDK inside VM2:
    mkdir -p /mnt/huge
    mount -t hugetlbfs nodev /mnt/huge
    modprobe igb_uio
    dpdk-devbind --bind=igb_uio 0000:00:02.0
11. Run testpmd inside VM2:
    testpmd -c 0x3 -n 2 -d librte_pmd_virtio.so.1.1 /
    -- --burst 64 --disable-hw-vlan --disable-rss -i --portmask=0x1 /
    --coremask=0x2 --nb-cores=1 --rxq=1 --txq=1 --txd=512 /
    --rxd=512 --txqflags=0xf00 --port-topology=chained
12. Set VM2 for termination and start testpmd:
    set fwd rxonly
    start
13. On Target 1, start pktgen traffic:
    start 0
14. Use this command to refresh the testpmd display in VM1 and VM2 and note the highest values:
    show port stats 0
15. To stop traffic from pktgen, in order to choose a different frame size:
    stop 0
16. To clear the counters in testpmd:
    clear port stats
    show port stats 0
For VM1, record the stats relevant for forwarding: RX and TX in pps. Only the Rx-pps and Tx-pps numbers are important here; they change every time the stats are displayed, as long as there is traffic. Run the command a few times and pick the best (maximum) values seen.
For VM2, record the stats relevant for termination: RX in pps (TX will be 0).
For pktgen, record only the TX side, because the flow is terminated and no RX traffic reaches pktgen: TX in pps and Mbit/s.
Results in forwarding mode
Bytes | pktgen pps TX | VM1 testpmd pps RX | VM1 testpmd pps TX | VM2 testpmd pps RX | pktgen MBits/s TX | throughput (%)
64   | 14845113 | 6826540 | 5389680 | 5383577 | 9975 | 36.2
128  | 8426683  | 6825857 | 5386971 | 5384530 | 9976 | 63.9
256  | 4528894  | 4507975 | 4507958 | 4532457 | 9999 | 100
SR-IOV in Virtual Machines
PCI passthrough tests using pktgen and testpmd in virtual
machines.
pktgen [DPDK] VM - PHY - VM [DPDK] testpmd.
Measurements:
pktgen to testpmd in forwarding mode.
pktgen to testpmd in termination mode.
Test Setup
1. SSD boot using the following grub.cfg entry:
   linux (hd0,gpt3)/boot/bzImage root=/dev/sda3 ip=dhcp nohz_full=1-7 /
   isolcpus=1-7 rcu-nocbs=1-7 rcu_nocb_poll intel_pstate=disable /
   clocksource=tsc tsc=reliable nohpet nosoftlockup intel_idle.max_cstate=0 /
   processor.max_cstate=0 mce=ignore_ce audit=0 nmi_watchdog=0 iommu=pt /
   intel_iommu=on hugepagesz=1GB hugepages=8 default_hugepagesz=1GB /
   hugepagesz=2M hugepages=2048 vfio_iommu_type1.allow_unsafe_interrupts=1
2. Stop other services and mount hugepages:
   systemctl stop openvswitch
   mkdir -p /mnt/huge
   mount -t hugetlbfs hugetlbfs /mnt/huge
3. Configure SR-IOV interfaces:
   /usr/share/usertools/dpdk-devbind.py --bind=ixgbe 0000:03:00.0
   echo 2 > /sys/class/net/eno3/device/sriov_numvfs
   ifconfig eno3 10.0.0.1
   modprobe vfio_pci
   /usr/share/usertools/dpdk-devbind.py --bind=vfio-pci 0000:03:10.0
   /usr/share/usertools/dpdk-devbind.py --bind=vfio-pci 0000:03:10.2
   ip link set eno3 vf 0 mac 0c:c4:7a:E5:0F:48
   ip link set eno3 vf 1 mac 0c:c4:7a:BF:52:E7
4. Launch two QEMU instances:
   taskset -c 4,5 qemu-system-x86_64 -cpu host,+invtsc,migratable=no -M /
   q35 -smp cores=2,sockets=1 -vcpu 0,affinity=4 -vcpu 1,affinity=5 -enable-kvm /
   -nographic -kernel /mnt/qemu/bzImage /
   -drive file=/mnt/qemu/enea-image-virtualization-guest-qemux86-64.ext4,if=virtio,/
   format=raw -m 4096 -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,/
   share=on -numa node,memdev=mem -mem-prealloc -device vfio-pci,host=03:10.0 /
   -append 'root=/dev/vda console=ttyS0 hugepagesz=2M hugepages=1024 /
   isolcpus=1 nohz_full=1 rcu_nocbs=1 irqaffinity=0 rcu_nocb_poll /
   intel_pstate=disable intel_idle.max_cstate=0 /
   processor.max_cstate=0 mce=ignore_ce audit=0'

   taskset -c 2,3 qemu-system-x86_64 -cpu host,+invtsc,migratable=no -M /
   q35 -smp cores=2,sockets=1 -vcpu 0,affinity=2 -vcpu 1,affinity=3 -enable-kvm /
   -nographic -kernel /mnt/qemu/bzImage /
   -drive file=/mnt/qemu/enea-image2-virtualization-guest-qemux86-64.ext4,if=virtio,/
   format=raw -m 4096 -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,/
   share=on -numa node,memdev=mem -mem-prealloc -device vfio-pci,host=03:10.2 /
   -append 'root=/dev/vda console=ttyS0 hugepagesz=2M hugepages=1024 /
   isolcpus=1 nohz_full=1 rcu_nocbs=1 irqaffinity=0 rcu_nocb_poll /
   intel_pstate=disable intel_idle.max_cstate=0 processor.max_cstate=0 /
   mce=ignore_ce audit=0'
5. In the first VM, mount hugepages and start pktgen:
   mkdir -p /mnt/huge && \
   mount -t hugetlbfs hugetlbfs /mnt/huge
   modprobe igb_uio
   /usr/share/usertools/dpdk-devbind.py --bind=igb_uio 0000:00:03.0
   cd /usr/share/apps/pktgen
   ./pktgen -c 0x3 -- -P -m "1.0"
6. In the pktgen console, set the MAC address of the destination and start generating packets:
   set mac 0 0C:C4:7A:BF:52:E7
   str
7. In the second VM, mount hugepages and start testpmd:
   mkdir -p /mnt/huge && \
   mount -t hugetlbfs hugetlbfs /mnt/huge
   modprobe igb_uio
   /usr/share/usertools/dpdk-devbind.py --bind=igb_uio 0000:00:03.0
   testpmd -c 0x3 -n 2 -- -i --txd=512 --rxd=512 --port-topology=chained /
   --eth-peer=0,0c:c4:7a:e5:0f:48
8. To enable forwarding mode, run the following in the testpmd console:
   set fwd mac
   start
9. To enable termination mode, run the following in the testpmd console:
   set fwd rxonly
   start
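Inside each VM, the passed-through VF is the PCI device bound at 0000:00:03.0 in the steps above; if the igb_uio bind fails, first confirm that the device is visible in the guest:
   lspci -nn | grep -i ethernet                    # the VF should show up at 00:03.0
   /usr/share/usertools/dpdk-devbind.py --status   # after binding, it should be listed under igb_uio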
Results in forwarding mode
Bytes | VM1 pktgen pps TX | VM1 pktgen pps RX | VM2 testpmd pps RX | VM2 testpmd pps TX
64   | 7105645 | 7103976 | 7101487 | 7101487
128  | 5722795 | 5722252 | 5704219 | 5704219
256  | 3454075 | 3455144 | 3452020 | 3452020
512  | 1847751 | 1847751 | 1847751 | 1847751
1024 | 956214  | 956214  | 956214  | 956214
1500 | 797174  | 797174  | 797174  | 797174
Results in termination mode
Bytes | VM1 pktgen pps TX | VM2 testpmd pps RX
64   | 14204580 | 14205063
128  | 8424611  | 8424611
256  | 4529024  | 4529024
512  | 2348640  | 2348640
1024 | 1197101  | 1197101
1500 | 822244   | 822244