Diffstat (limited to 'doc/book-enea-nfv-access-reference-guide-intel/doc/benchmarks.xml')
-rw-r--r--doc/book-enea-nfv-access-reference-guide-intel/doc/benchmarks.xml1488
1 files changed, 0 insertions, 1488 deletions
diff --git a/doc/book-enea-nfv-access-reference-guide-intel/doc/benchmarks.xml b/doc/book-enea-nfv-access-reference-guide-intel/doc/benchmarks.xml
deleted file mode 100644
index 4063515..0000000
--- a/doc/book-enea-nfv-access-reference-guide-intel/doc/benchmarks.xml
+++ /dev/null
@@ -1,1488 +0,0 @@
1<?xml version="1.0" encoding="ISO-8859-1"?>
2<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
3"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
4<chapter condition="hidden" id="benchmarks">
5 <title>Benchmarks</title>
6
7 <section id="hw-setup">
8 <title>Hardware Setup</title>
9
10 <para>The following table describes the prerequisites for a suitable
11 hardware setup:</para>
12
13 <table>
14 <title>Hardware Setup</title>
15
16 <tgroup cols="2">
17 <colspec align="left" />
18
19 <thead>
20 <row>
21 <entry align="center">Item</entry>
22
23 <entry align="center">Description</entry>
24 </row>
25 </thead>
26
27 <tbody>
28 <row>
29 <entry align="left">Server Platform</entry>
30
31 <entry align="left">Supermicro X10SDV-4C-TLN2F
32 http://www.supermicro.com/products/motherboard/xeon/d/X10SDV-4C-TLN2F.cfm</entry>
33 </row>
34
35 <row>
36 <entry align="left">ARCH</entry>
37
38 <entry>x86-64</entry>
39 </row>
40
41 <row>
42 <entry align="left">Processor</entry>
43
44 <entry>1 x Intel Xeon D-1521 (Broadwell), 4 physical cores, 8
45 hyper-threads per processor</entry>
46 </row>
47
48 <row>
49 <entry align="left">CPU freq</entry>
50
51 <entry>2.40 GHz</entry>
52 </row>
53
54 <row>
55 <entry align="left">RAM</entry>
56
57 <entry>16GB</entry>
58 </row>
59
60 <row>
61 <entry align="left">Network</entry>
62
63 <entry>Dual integrated 10G ports</entry>
64 </row>
65
66 <row>
67 <entry align="left">Storage</entry>
68
69 <entry>Samsung 850 Pro 128GB SSD</entry>
70 </row>
71 </tbody>
72 </tgroup>
73 </table>
74
75 <para>Generic tests configuration:</para>
76
77 <itemizedlist>
78 <listitem>
79 <para>All tests use one port, one core, and one RX/TX queue for
80 fast-path traffic.</para>
81 </listitem>
82 </itemizedlist>
83 </section>
84
85 <section condition="hidden" id="use-cases">
86 <title>Use Cases</title>
87
88 <section id="docker-benchmarks">
89 <title>Docker related benchmarks</title>
90
91 <section id="fwd_traffic_dock">
92 <title>Forward traffic in Docker</title>
93
94 <para>Benchmarking traffic forwarding using testpmd in a Docker
95 container.</para>
96
97 <para>Pktgen is used to generate UDP traffic that reaches testpmd
98 running in a Docker container; it is then forwarded back to the source
99 on the return trip (<emphasis role="bold">Forwarding</emphasis>).</para>
100
101 <para>This test measures:</para>
102
103 <itemizedlist>
104 <listitem>
105 <para>pktgen TX, RX in packets per second (pps) and Mbps</para>
106 </listitem>
107
108 <listitem>
109 <para>testpmd TX, RX in packets per second (pps)</para>
110 </listitem>
111
112 <listitem>
113 <para>throughput as a percentage (%), obtained by dividing testpmd
114 RX pps by pktgen TX pps</para>
115 </listitem>
116 </itemizedlist>
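The throughput figures below can be sanity-checked against the 10 Gbit/s line rate. As an illustrative sketch (not part of the original test procedure), the theoretical maximum pps for a given frame size, counting the 20 bytes of per-frame wire overhead:

```python
def line_rate_pps(frame_bytes, link_bps=10_000_000_000):
    # On the wire each frame carries 20 extra bytes:
    # 7B preamble + 1B start-of-frame delimiter + 12B inter-frame gap.
    return link_bps // ((frame_bytes + 20) * 8)

def throughput_pct(rx_pps, tx_pps):
    # Throughput (%) = receiver pps divided by generator pps.
    return 100.0 * rx_pps / tx_pps

# 64B frames: theoretical 10G line rate is ~14.88 Mpps, which matches
# the pktgen TX column in the result tables.
print(line_rate_pps(64))  # 14880952
```

The pktgen TX values measured at 64 bytes (about 14.87 Mpps) sit right at this limit, confirming the generator saturates the link.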
117
118 <section id="usecase-one">
119 <title>Test Setup for Target 1</title>
120
121 <para>Start by following the steps below:</para>
122
123 <para>SSD boot using the following <literal>grub.cfg</literal>
124 entry: <programlisting>linux (hd0,gpt3)/boot/bzImage root=/dev/sda3 ip=dhcp nohz_full=1-7 \
125isolcpus=1-7 rcu-nocbs=1-7 rcu_nocb_poll intel_pstate=disable \
126clocksource=tsc tsc=reliable nohpet nosoftlockup intel_idle.max_cstate=0 \
127processor.max_cstate=0 mce=ignore_ce audit=0 nmi_watchdog=0 iommu=pt \
128intel_iommu=on hugepagesz=1GB hugepages=8 default_hugepagesz=1GB \
129hugepagesz=2M hugepages=2048 vfio_iommu_type1.allow_unsafe_interrupts=1</programlisting></para>
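As a quick check of the memory this entry reserves (assuming the 16 GB target described in the hardware table), the two hugepage pools add up as follows; a sketch for illustration:

```python
# hugepagesz=1GB hugepages=8 plus hugepagesz=2M hugepages=2048:
pools_mb = {1024: 8, 2: 2048}  # page size in MB -> page count
total_gb = sum(size * count for size, count in pools_mb.items()) / 1024
print(total_gb)  # 12.0 -> leaves ~4 GB of the 16 GB RAM for the OS
```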
130
131 <para>Kill unnecessary services:<programlisting>killall ovsdb-server ovs-vswitchd
132rm -rf /etc/openvswitch/*
133mkdir -p /var/run/openvswitch</programlisting>Mount hugepages and configure
134 DPDK:<programlisting>mkdir -p /mnt/huge
135mount -t hugetlbfs nodev /mnt/huge
136modprobe igb_uio
137dpdk-devbind --bind=igb_uio 0000:03:00.0</programlisting>Run
138 pktgen:<programlisting>cd /usr/share/apps/pktgen/
139./pktgen -c 0xF -n 1 -- -P -m "[3:2].0"</programlisting>In the pktgen console,
140 start traffic by running:<programlisting>str</programlisting>To change the frame
141 size for pktgen, choose one of [64, 128, 256, 512]:<programlisting>set 0 size &lt;number&gt;</programlisting></para>
142 </section>
143
144 <section id="usecase-two">
145 <title>Test Setup for Target 2</title>
146
147 <para>Start by following the steps below:</para>
148
149 <para>SSD boot using the following <literal>grub.cfg</literal>
150 entry:</para>
151
152 <programlisting>linux (hd0,gpt3)/boot/bzImage root=/dev/sda3 ip=dhcp nohz_full=1-7 \
153isolcpus=1-7 rcu-nocbs=1-7 rcu_nocb_poll intel_pstate=disable \
154clocksource=tsc tsc=reliable nohpet nosoftlockup intel_idle.max_cstate=0 \
155processor.max_cstate=0 mce=ignore_ce audit=0 nmi_watchdog=0 iommu=pt \
156intel_iommu=on hugepagesz=1GB hugepages=8 default_hugepagesz=1GB \
157hugepagesz=2M hugepages=2048 vfio_iommu_type1.allow_unsafe_interrupts=1</programlisting>
158
159 <para>The Docker guest image is expected to be present on the target.
160 Configure the OVS bridge:<programlisting># OVS old config clean-up
161killall ovsdb-server ovs-vswitchd
162rm -rf /etc/openvswitch/*
163mkdir -p /var/run/openvswitch
164
165# Mount hugepages and bind interfaces to dpdk
166mkdir -p /mnt/huge
167mount -t hugetlbfs nodev /mnt/huge
168modprobe igb_uio
169dpdk-devbind --bind=igb_uio 0000:03:00.0
170
171# configure openvswitch with DPDK
172export DB_SOCK=/var/run/openvswitch/db.sock
173ovsdb-tool create /etc/openvswitch/conf.db \
174/usr/share/openvswitch/vswitch.ovsschema
175ovsdb-server --remote=punix:$DB_SOCK \
176--remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach
177ovs-vsctl --no-wait init
178ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x10
179ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0xc
180ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem=2048
181ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
182ovs-vswitchd unix:$DB_SOCK --pidfile --detach \
183--log-file=/var/log/openvswitch/ovs-vswitchd.log
184
185ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
186ovs-vsctl add-port ovsbr0 vhost-user1 \
187-- set Interface vhost-user1 type=dpdkvhostuser ofport_request=1
188ovs-vsctl add-port ovsbr0 dpdk0 -- set Interface \
189dpdk0 type=dpdk options:dpdk-devargs=0000:03:00.0 ofport_request=2
190
191# configure static flows
192ovs-ofctl del-flows ovsbr0
193ovs-ofctl add-flow ovsbr0 in_port=1,action=output:2
194ovs-ofctl add-flow ovsbr0 in_port=2,action=output:1</programlisting>Import a
195 Docker container:<programlisting>docker import enea-nfv-access-guest-qemux86-64.tar.gz el7_guest</programlisting>Start
196 the Docker container:<programlisting>docker run -it --rm -v /var/run/openvswitch/:/var/run/openvswitch/ \
197-v /mnt/huge:/mnt/huge el7_guest /bin/bash</programlisting>Start the testpmd
198 application in Docker:<programlisting>testpmd -c 0x30 -n 2 --file-prefix prog1 --socket-mem 512 --no-pci \
199--vdev=virtio_user0,path=/var/run/openvswitch/vhost-user1 \
200/usr/lib/librte_pmd_virtio.so.1.1 -- --burst 64 \
201--disable-rss -i --portmask=0x1 --coremask=0x20 --nb-cores=1 \
202--rxq=1 --txq=1 --txd=512 --rxd=512 --port-topology=chained</programlisting>To
203 start traffic <emphasis role="bold">forwarding</emphasis>, run the
204 following command in testpmd CLI:<programlisting>start</programlisting>To
205 start traffic but in <emphasis role="bold">termination</emphasis>
206 mode (no traffic sent on TX), run following command in testpmd
207 CLI:<programlisting>set fwd rxonly
208start</programlisting><table>
209 <title>Results in forwarding mode</title>
210
211 <tgroup cols="8">
212 <tbody>
213 <row>
214 <entry align="center"><emphasis
215 role="bold">Bytes</emphasis></entry>
216
217 <entry align="center"><emphasis role="bold">pktgen pps
218 TX</emphasis></entry>
219
220 <entry align="center"><emphasis role="bold">pktgen MBits/s
221 TX</emphasis></entry>
222
223 <entry align="center"><emphasis role="bold">pktgen pps
224 RX</emphasis></entry>
225
226 <entry align="center"><emphasis role="bold">pktgen MBits/s
227 RX</emphasis></entry>
228
229 <entry align="center"><emphasis role="bold">testpmd pps
230 RX</emphasis></entry>
231
232 <entry align="center"><emphasis role="bold">testpmd pps
233 TX</emphasis></entry>
234
235 <entry align="center"><emphasis role="bold">throughput
236 (%)</emphasis></entry>
237 </row>
238
239 <row>
240 <entry role="bold"><emphasis
241 role="bold">64</emphasis></entry>
242
243 <entry>14877658</entry>
244
245 <entry>9997</entry>
246
247 <entry>7832352</entry>
248
249 <entry>5264</entry>
250
251 <entry>7831250</entry>
252
253 <entry>7831250</entry>
254
255 <entry>52,65%</entry>
256 </row>
257
258 <row>
259 <entry><emphasis role="bold">128</emphasis></entry>
260
261 <entry>8441305</entry>
262
263 <entry>9994</entry>
264
265 <entry>7533893</entry>
266
267 <entry>8922</entry>
268
269 <entry>7535127</entry>
270
271 <entry>7682007</entry>
272
273 <entry>89,27%</entry>
274 </row>
275
276 <row>
277 <entry role="bold"><emphasis
278 role="bold">256</emphasis></entry>
279
280 <entry>4528831</entry>
281
282 <entry>9999</entry>
283
284 <entry>4528845</entry>
285
286 <entry>9999</entry>
287
288 <entry>4528738</entry>
289
290 <entry>4528738</entry>
291
292 <entry>100%</entry>
293 </row>
294 </tbody>
295 </tgroup>
296 </table><table>
297 <title>Results in termination mode</title>
298
299 <tgroup cols="4">
300 <tbody>
301 <row>
302 <entry align="center"><emphasis
303 role="bold">Bytes</emphasis></entry>
304
305 <entry align="center"><emphasis role="bold">pktgen pps
306 TX</emphasis></entry>
307
308 <entry align="center"><emphasis role="bold">testpmd pps
309 RX</emphasis></entry>
310
311 <entry align="center"><emphasis role="bold">throughput
312 (%)</emphasis></entry>
313 </row>
314
315 <row>
316 <entry role="bold"><emphasis
317 role="bold">64</emphasis></entry>
318
319 <entry>14877775</entry>
320
321 <entry>8060974</entry>
322
323 <entry>54,1%</entry>
324 </row>
325
326 <row>
327 <entry><emphasis role="bold">128</emphasis></entry>
328
329 <entry>8441403</entry>
330
331 <entry>8023555</entry>
332
333 <entry>95,0%</entry>
334 </row>
335
336 <row>
337 <entry role="bold"><emphasis
338 role="bold">256</emphasis></entry>
339
340 <entry>4528864</entry>
341
342 <entry>4528840</entry>
343
344 <entry>99,9%</entry>
345 </row>
346 </tbody>
347 </tgroup>
348 </table></para>
349 </section>
350 </section>
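The hexadecimal masks used in the setups above (dpdk-lcore-mask=0x10, pmd-cpu-mask=0xc, testpmd --coremask=0x20) are bitmaps of CPU core IDs. As an illustrative helper, not part of the test procedure, converting between core lists and masks:

```python
def cores_to_mask(cores):
    """Build a hex CPU mask with one bit set per core ID."""
    return hex(sum(1 << core for core in cores))

def mask_to_cores(mask):
    """List the core IDs whose bits are set in a hex CPU mask."""
    value = int(mask, 16)
    return [c for c in range(value.bit_length()) if value >> c & 1]

print(cores_to_mask([4]))    # 0x10 -> the OVS lcore mask
print(mask_to_cores("0xc"))  # [2, 3] -> the PMD cores
```

This makes it easy to verify that the masks chosen here keep OVS, the PMD threads, and testpmd on disjoint cores within the isolcpus=1-7 range.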
351
352 <section id="usecase-three-four">
353 <title>Forward traffic from Docker to another Docker on the same
354 host</title>
355
356 <para>Benchmark a combo test using testpmd running in two Docker
357 instances: the first forwards traffic to the second, which
358 terminates it.</para>
359
360 <para>Packets are generated with pktgen and transmitted to the first
361 testpmd, which receives and forwards them to the second testpmd, which
362 receives and terminates them.</para>
363
364 <para>Measurements are made in:</para>
365
366 <itemizedlist>
367 <listitem>
368 <para>pktgen TX in pps and Mbits/s</para>
369 </listitem>
370
371 <listitem>
372 <para>testpmd TX and RX pps in Docker1</para>
373 </listitem>
374
375 <listitem>
376 <para>testpmd RX pps in Docker2</para>
377 </listitem>
378 </itemizedlist>
379
380 <para>Throughput is obtained as a percentage by dividing Docker2
381 <emphasis role="bold">testpmd RX pps</emphasis> by <emphasis
382 role="bold">pktgen TX pps</emphasis>.</para>
383
384 <section id="target-one-usecase-three">
385 <title>Test Setup for Target 1</title>
386
387 <para>Start by following the steps below:</para>
388
389 <para>SSD boot using the following <literal>grub.cfg</literal>
390 entry:</para>
391
392 <programlisting>linux (hd0,gpt3)/boot/bzImage root=/dev/sda3 ip=dhcp nohz_full=1-7 \
393isolcpus=1-7 rcu-nocbs=1-7 rcu_nocb_poll intel_pstate=disable \
394clocksource=tsc tsc=reliable nohpet nosoftlockup intel_idle.max_cstate=0 \
395processor.max_cstate=0 mce=ignore_ce audit=0 nmi_watchdog=0 iommu=pt \
396intel_iommu=on hugepagesz=1GB hugepages=8 default_hugepagesz=1GB \
397hugepagesz=2M hugepages=2048 vfio_iommu_type1.allow_unsafe_interrupts=1</programlisting>
398
399 <para>Configure DPDK:<programlisting>mkdir -p /mnt/huge
400mount -t hugetlbfs nodev /mnt/huge
401modprobe igb_uio
402dpdk-devbind --bind=igb_uio 0000:03:00.0</programlisting>Run
403 pktgen:<programlisting>cd /usr/share/apps/pktgen/
404./pktgen -c 0xF -n 1 -- -P -m "[3:2].0"</programlisting>Choose one of the
405 values from [64, 128, 256, 512] to change the packet
406 size:<programlisting>set 0 size &lt;number&gt;</programlisting></para>
407 </section>
408
409 <section id="target-two-usecase-four">
410 <title>Test Setup for Target 2</title>
411
412 <para>Start by following the steps below:</para>
413
414 <para>SSD boot using the following <literal>grub.cfg</literal>
415 entry:</para>
416
417 <programlisting>linux (hd0,gpt3)/boot/bzImage root=/dev/sda3 ip=dhcp nohz_full=1-7 \
418isolcpus=1-7 rcu-nocbs=1-7 rcu_nocb_poll intel_pstate=disable \
419clocksource=tsc tsc=reliable nohpet nosoftlockup intel_idle.max_cstate=0 \
420processor.max_cstate=0 mce=ignore_ce audit=0 nmi_watchdog=0 \
421iommu=pt intel_iommu=on hugepagesz=1GB hugepages=8 default_hugepagesz=1GB \
422hugepagesz=2M hugepages=2048 vfio_iommu_type1.allow_unsafe_interrupts=1</programlisting>
423
424 <para>Kill unnecessary services:<programlisting>killall ovsdb-server ovs-vswitchd
425rm -rf /etc/openvswitch/*
426mkdir -p /var/run/openvswitch</programlisting>Configure DPDK:<programlisting>mkdir -p /mnt/huge
427mount -t hugetlbfs nodev /mnt/huge
428modprobe igb_uio
429dpdk-devbind --bind=igb_uio 0000:03:00.0</programlisting>Configure the OVS
430 bridge:<programlisting>export DB_SOCK=/var/run/openvswitch/db.sock
431ovsdb-tool create /etc/openvswitch/conf.db \
432/usr/share/openvswitch/vswitch.ovsschema
433ovsdb-server --remote=punix:$DB_SOCK \
434--remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach
435ovs-vsctl --no-wait init
436ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x10
437ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0xcc
438ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem=2048
439ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
440ovs-vswitchd unix:$DB_SOCK --pidfile --detach \
441--log-file=/var/log/openvswitch/ovs-vswitchd.log
442ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
443ovs-vsctl add-port ovsbr0 vhost-user1 -- set Interface \
444vhost-user1 type=dpdkvhostuser ofport_request=1
445ovs-vsctl add-port ovsbr0 vhost-user2 -- set Interface \
446vhost-user2 type=dpdkvhostuser ofport_request=2
447ovs-vsctl add-port ovsbr0 dpdk0 -- set Interface dpdk0 \
448type=dpdk options:dpdk-devargs=0000:03:00.0 ofport_request=3
449ovs-ofctl del-flows ovsbr0
450ovs-ofctl add-flow ovsbr0 in_port=3,action=output:2
451ovs-ofctl add-flow ovsbr0 in_port=2,action=output:1</programlisting>Import a
452 Docker container:<programlisting>docker import enea-nfv-access-guest-qemux86-64.tar.gz el7_guest</programlisting>Start
453 the first Docker:<programlisting>docker run -it --rm --cpuset-cpus=4,5 \
454-v /var/run/openvswitch/:/var/run/openvswitch/ \
455-v /mnt/huge:/mnt/huge el7_guest /bin/bash</programlisting>Start the testpmd
456 application in Docker1:<programlisting>testpmd -c 0x30 -n 2 --file-prefix prog1 --socket-mem 512 --no-pci \
457--vdev=virtio_user0,path=/var/run/openvswitch/vhost-user1 \
458/usr/lib/librte_pmd_virtio.so.1.1 -- --burst 64 \
459--disable-rss -i --portmask=0x1 --coremask=0x20 --nb-cores=1 \
460--rxq=1 --txq=1 --txd=512 --rxd=512 --port-topology=chained</programlisting>Configure
461 it in termination mode:<programlisting>set fwd rxonly</programlisting>Run
462 the testpmd application:<programlisting>start</programlisting>Open a
463 new console to the host and start the second Docker
464 instance:<programlisting>docker run -it --rm --cpuset-cpus=0,1 -v /var/run/openvswitch/:/var/run/openvswitch/ \
465-v /mnt/huge:/mnt/huge el7_guest /bin/bash</programlisting>In the second
466 container start the testpmd application:<programlisting>testpmd -c 0x3 -n 2 --file-prefix prog2 --socket-mem 512 --no-pci \
467--vdev=virtio_user0,path=/var/run/openvswitch/vhost-user2 \
468/usr/lib/librte_pmd_virtio.so.1.1 -- --burst 64 \
469--disable-rss -i --portmask=0x1 --coremask=0x2 --nb-cores=1 --rxq=1 \
470--txq=1 --txd=512 --rxd=512 --port-topology=chained</programlisting>In the
474 testpmd shell, run:<programlisting>start</programlisting>Start
475 pktgen traffic by running the following command in pktgen
476 CLI:<programlisting>start 0</programlisting>To record traffic
477 results, run the following command in each of the two testpmd
478 applications:<programlisting>show port stats 0</programlisting></para>
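The counters reported by show port stats 0 are cumulative; a packet rate can be derived manually from two successive readings. A sketch for illustration (the sample numbers below are hypothetical, not taken from the result tables):

```python
def pps(count_prev, count_now, interval_s):
    # Packets per second between two cumulative counter readings
    # taken interval_s seconds apart.
    return (count_now - count_prev) / interval_s

# Hypothetical readings taken 10 s apart:
print(pps(0, 45_302_680, 10))  # 4530268.0
```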
479
480 <table>
481 <title>Results</title>
482
483 <tgroup cols="5">
484 <tbody>
485 <row>
486 <entry align="center"><emphasis
487 role="bold">Bytes</emphasis></entry>
488
489 <entry align="center"><emphasis role="bold">Target 1 -
490 pktgen pps TX</emphasis></entry>
491
492 <entry align="center"><emphasis role="bold">Target 2 -
493 (forwarding) testpmd pps RX</emphasis></entry>
494
495 <entry align="center"><emphasis role="bold">Target 2 -
496 (forwarding) testpmd pps TX</emphasis></entry>
497
498 <entry align="center"><emphasis role="bold">Target 2 -
499 (termination) testpmd pps RX</emphasis></entry>
500 </row>
501
502 <row>
503 <entry role="bold"><emphasis
504 role="bold">64</emphasis></entry>
505
506 <entry>14877713</entry>
507
508 <entry>5031270</entry>
509
510 <entry>5031214</entry>
511
512 <entry>5031346</entry>
513 </row>
514
515 <row>
516 <entry><emphasis role="bold">128</emphasis></entry>
517
518 <entry>8441271</entry>
519
520 <entry>4670165</entry>
521
522 <entry>4670165</entry>
523
524 <entry>4670261</entry>
525 </row>
526
527 <row>
528 <entry role="bold"><emphasis
529 role="bold">256</emphasis></entry>
530
531 <entry>4528844</entry>
532
533 <entry>4490268</entry>
534
535 <entry>4490268</entry>
536
537 <entry>4490234</entry>
538 </row>
539
540 <row>
541 <entry><emphasis role="bold">512</emphasis></entry>
542
543 <entry>2349458</entry>
544
545 <entry>2349553</entry>
546
547 <entry>2349553</entry>
548
549 <entry>2349545</entry>
550 </row>
551 </tbody>
552 </tgroup>
553 </table>
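This table omits the throughput percentage, but it follows from the same formula used in the single-container case: Docker2 termination testpmd RX divided by pktgen TX. For example, the 64-byte row:

```python
def throughput_pct(rx_pps, tx_pps):
    # Throughput (%) = receiver pps divided by generator pps.
    return 100.0 * rx_pps / tx_pps

# 64B row: Docker2 (termination) testpmd RX / Target 1 pktgen TX
print(round(throughput_pct(5031346, 14877713), 1))  # 33.8
```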
554 </section>
555 </section>
556
557 <section id="pxe-config-docker">
558 <title>SR-IOV in Docker</title>
559
560 <para>SR-IOV (VF) passthrough tests using pktgen and testpmd in Docker.</para>
561
562 <para>pktgen[DPDK]Docker - PHY - Docker[DPDK] testpmd</para>
563
564 <para>Measurements:</para>
565
566 <itemizedlist>
567 <listitem>
568 <para>RX packets per second in testpmd (with testpmd configured in
569 rxonly mode).</para>
570 </listitem>
571 </itemizedlist>
572
573 <section id="target-setup">
574 <title>Test Setup</title>
575
576 <para>Boot Enea NFV Access from SSD:<programlisting>linux (hd0,gpt3)/boot/bzImage root=/dev/sda3 ip=dhcp nohz_full=1-7 \
577isolcpus=1-7 rcu-nocbs=1-7 rcu_nocb_poll intel_pstate=disable clocksource=tsc \
578tsc=reliable nohpet nosoftlockup intel_idle.max_cstate=0 processor.max_cstate=0 \
579mce=ignore_ce audit=0 nmi_watchdog=0 iommu=pt intel_iommu=on hugepagesz=1GB \
580hugepages=8 default_hugepagesz=1GB hugepagesz=2M hugepages=2048 \
581vfio_iommu_type1.allow_unsafe_interrupts=1</programlisting>Allow unsafe
582 interrupts:<programlisting>echo 1 &gt; /sys/module/vfio_iommu_type1/parameters/allow_unsafe_interrupts</programlisting>Configure
583 DPDK:<programlisting>mkdir -p /mnt/huge
584mount -t hugetlbfs nodev /mnt/huge
585dpdk-devbind.py --bind=ixgbe 0000:03:00.0
586ifconfig eno3 192.168.1.2
587echo 2 &gt; /sys/class/net/eno3/device/sriov_numvfs
588modprobe vfio-pci
589dpdk-devbind.py --bind=vfio-pci 0000:03:10.0
590dpdk-devbind.py --bind=vfio-pci 0000:03:10.2</programlisting>Start two docker
591 containers:<programlisting>docker run --privileged -it --rm -v /mnt/huge:/mnt/huge/ \
592--device /dev/vfio/vfio el7_guest /bin/bash
593docker run --privileged -it --rm -v /mnt/huge:/mnt/huge/ \
594--device /dev/vfio/vfio el7_guest /bin/bash</programlisting>In the first
595 container start pktgen:<programlisting>cd /usr/share/apps/pktgen/
596./pktgen -c 0x1f -w 0000:03:10.0 -n 1 --file-prefix pg1 \
597--socket-mem 1024 -- -P -m "[3:4].0"</programlisting>In the pktgen prompt set
598 the destination MAC address:<programlisting>set mac 0 XX:XX:XX:XX:XX:XX
599str</programlisting>In the second container start testpmd:<programlisting>testpmd -c 0x7 -n 1 -w 0000:03:10.2 -- -i --portmask=0x1 \
600--txd=256 --rxd=256 --port-topology=chained</programlisting>In the testpmd
601 prompt, set the <emphasis role="bold">forwarding</emphasis> mode to
602 rxonly and start:<programlisting>set fwd rxonly
603start</programlisting><table>
604 <title>Results</title>
605
606 <tgroup cols="5">
607 <tbody>
608 <row>
609 <entry align="center"><emphasis
610 role="bold">Bytes</emphasis></entry>
611
612 <entry align="center"><emphasis role="bold">pktgen pps
613 TX</emphasis></entry>
614
615 <entry align="center"><emphasis role="bold">testpmd pps
616 RX</emphasis></entry>
617
618 <entry align="center"><emphasis role="bold">pktgen MBits/s
619 TX</emphasis></entry>
620
621 <entry align="center"><emphasis role="bold">throughput
622 (%)</emphasis></entry>
623 </row>
624
625 <row>
626 <entry role="bold"><emphasis
627 role="bold">64</emphasis></entry>
628
629 <entry>14204211</entry>
630
631 <entry>14204561</entry>
632
633 <entry>9545</entry>
634
635 <entry>100</entry>
636 </row>
637
638 <row>
639 <entry><emphasis role="bold">128</emphasis></entry>
640
641 <entry>8440340</entry>
642
643 <entry>8440201</entry>
644
645 <entry>9993</entry>
646
647 <entry>99.9</entry>
648 </row>
649
650 <row>
651 <entry role="bold"><emphasis
652 role="bold">256</emphasis></entry>
653
654 <entry>4533828</entry>
655
656 <entry>4533891</entry>
657
658 <entry>10010</entry>
659
660 <entry>100</entry>
661 </row>
662
663 <row>
664 <entry><emphasis role="bold">512</emphasis></entry>
665
666 <entry>2349886</entry>
667
668 <entry>2349715</entry>
669
670 <entry>10000</entry>
671
672 <entry>99.9</entry>
673 </row>
674 </tbody>
675 </tgroup>
676 </table></para>
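The MBits/s column is consistent with the pps column once per-frame wire overhead is counted. As an illustrative sketch, the 64-byte row:

```python
def wire_mbits_per_s(pps, frame_bytes):
    # 20B per-frame wire overhead: preamble, SFD, inter-frame gap.
    return pps * (frame_bytes + 20) * 8 / 1e6

print(round(wire_mbits_per_s(14204211, 64)))  # 9545, matching the table
```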
677 </section>
678 </section>
679 </section>
680
681 <section id="vm-benchmarks">
682 <title>VM related benchmarks</title>
683
684 <section id="usecase-four">
685 <title>Forward/terminate traffic in one VM</title>
686
687 <para>Benchmarking traffic (UDP) forwarding and termination using
688 testpmd in a virtual machine.</para>
689
690 <para>The Pktgen application is used to generate traffic that will
691 reach testpmd running in a virtual machine, and be forwarded back to
692 the source on the return trip. With the same setup, a second
693 measurement is done with traffic termination in the virtual machine.</para>
694
695 <para>This test case measures:</para>
696
697 <itemizedlist>
698 <listitem>
699 <para>pktgen TX, RX in packets per second (pps) and Mbps</para>
700 </listitem>
701
702 <listitem>
703 <para>testpmd TX, RX in packets per second (pps)</para>
704 </listitem>
705
706 <listitem>
707 <para>throughput as a percentage (%), obtained by dividing <emphasis
708 role="bold">testpmd RX</emphasis> pps by <emphasis role="bold">pktgen
709 TX</emphasis> pps</para>
710 </listitem>
711 </itemizedlist>
712
713 <section id="targetone-usecasefour">
714 <title>Test Setup for Target 1</title>
715
716 <para>Start with the steps below:</para>
717
718 <para>SSD boot using the following <literal>grub.cfg
719 </literal>entry: <programlisting>linux (hd0,gpt3)/boot/bzImage root=/dev/sda3 ip=dhcp nohz_full=1-7 \
720isolcpus=1-7 rcu-nocbs=1-7 rcu_nocb_poll intel_pstate=disable \
721clocksource=tsc tsc=reliable nohpet nosoftlockup intel_idle.max_cstate=0 \
722processor.max_cstate=0 mce=ignore_ce audit=0 nmi_watchdog=0 iommu=pt \
723intel_iommu=on hugepagesz=1GB hugepages=8 default_hugepagesz=1GB \
724hugepagesz=2M hugepages=2048 vfio_iommu_type1.allow_unsafe_interrupts=1</programlisting></para>
725
726 <para>Kill unnecessary services: <programlisting>killall ovsdb-server ovs-vswitchd
727rm -rf /etc/openvswitch/*
728mkdir -p /var/run/openvswitch</programlisting>Configure DPDK:<programlisting>mkdir -p /mnt/huge
729mount -t hugetlbfs nodev /mnt/huge
730modprobe igb_uio
731dpdk-devbind --bind=igb_uio 0000:03:00.0</programlisting>Run
732 pktgen:<programlisting>cd /usr/share/apps/pktgen/
733./pktgen -c 0x7 -n 4 --proc-type auto --socket-mem 256 \
734-w 0000:03:00.0 -- -P -m "[1:2].0"</programlisting>Set pktgen frame size to
735 use from [64, 128, 256, 512]:<programlisting>set 0 size 64</programlisting></para>
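The -m "[1:2].0" argument passed to pktgen above is a core-to-port map; assuming pktgen's "[rx:tx].port" convention, core 1 handles RX and core 2 handles TX for port 0. A minimal parser, as a sketch of that reading:

```python
import re

def parse_core_map(spec):
    """Parse a pktgen -m spec such as "[1:2].0".

    Assumes pktgen's "[rx:tx].port" convention: the core before the
    colon handles RX, the core after it handles TX, for the given port.
    """
    m = re.fullmatch(r"\[(\d+):(\d+)\]\.(\d+)", spec)
    rx, tx, port = (int(g) for g in m.groups())
    return {"port": port, "rx_core": rx, "tx_core": tx}

print(parse_core_map("[1:2].0"))  # {'port': 0, 'rx_core': 1, 'tx_core': 2}
```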
736 </section>
737
738 <section id="targettwo-usecasefive">
739 <title>Test Setup for Target 2</title>
740
741 <para>Start by following the steps below:</para>
742
743 <para>SSD boot using the following <literal>grub.cfg</literal>
744 entry: <programlisting>linux (hd0,gpt3)/boot/bzImage root=/dev/sda3 ip=dhcp nohz_full=1-7 \
745isolcpus=1-7 rcu-nocbs=1-7 rcu_nocb_poll intel_pstate=disable \
746clocksource=tsc tsc=reliable nohpet nosoftlockup intel_idle.max_cstate=0 \
747processor.max_cstate=0 mce=ignore_ce audit=0 nmi_watchdog=0 iommu=pt \
748intel_iommu=on hugepagesz=1GB hugepages=8 default_hugepagesz=1GB \
749hugepagesz=2M hugepages=2048 vfio_iommu_type1.allow_unsafe_interrupts=1</programlisting>Kill
750 unnecessary services: <programlisting>killall ovsdb-server ovs-vswitchd
751rm -rf /etc/openvswitch/*
752mkdir -p /var/run/openvswitch</programlisting>Configure DPDK:<programlisting>mkdir -p /mnt/huge
753mount -t hugetlbfs nodev /mnt/huge
754modprobe igb_uio
755dpdk-devbind --bind=igb_uio 0000:03:00.0</programlisting>Configure
756 OVS:<programlisting>export DB_SOCK=/var/run/openvswitch/db.sock
757ovsdb-tool create /etc/openvswitch/conf.db \
758/usr/share/openvswitch/vswitch.ovsschema
759ovsdb-server --remote=punix:$DB_SOCK \
760--remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach
761ovs-vsctl --no-wait init
762ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x10
763ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0xc
764ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem=2048
765ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
766ovs-vswitchd unix:$DB_SOCK --pidfile --detach \
767--log-file=/var/log/openvswitch/ovs-vswitchd.log
768
769ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
770ovs-vsctl add-port ovsbr0 vhost-user1 \
771-- set Interface vhost-user1 type=dpdkvhostuser -- set Interface \
772vhost-user1 ofport_request=2
773ovs-vsctl add-port ovsbr0 dpdk0 -- set Interface dpdk0 \
774type=dpdk options:dpdk-devargs=0000:03:00.0 \
775-- set Interface dpdk0 ofport_request=1
776chmod 777 /var/run/openvswitch/vhost-user1
777
778ovs-ofctl del-flows ovsbr0
779ovs-ofctl add-flow ovsbr0 in_port=1,action=output:2
780ovs-ofctl add-flow ovsbr0 in_port=2,action=output:1</programlisting>Launch
781 QEMU:<programlisting>taskset -c 0,1 qemu-system-x86_64 -cpu host,+invtsc,migratable=no \
782-M q35 -smp cores=2,sockets=1 -vcpu 0,affinity=0 -vcpu 1,affinity=1 \
783-enable-kvm -nographic -realtime mlock=on -kernel /mnt/qemu/bzImage \
784-drive file=/mnt/qemu/enea-nfv-access-guest-qemux86-64.ext4,\
785if=virtio,format=raw -m 4096 -object memory-backend-file,id=mem,\
786size=4096M,mem-path=/mnt/huge,share=on -numa node,memdev=mem \
787-mem-prealloc -chardev socket,id=char0,path=/var/run/openvswitch/vhost-user1 \
788-netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
789-device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,\
790mrg_rxbuf=on,rx_queue_size=1024,csum=off,gso=off,guest_tso4=off,\
791guest_tso6=off,guest_ecn=off -append 'root=/dev/vda console=ttyS0 \
792hugepagesz=2M hugepages=1024 isolcpus=1 nohz_full=1 rcu_nocbs=1 \
793irqaffinity=0 rcu_nocb_poll intel_pstate=disable intel_idle.max_cstate=0 \
794processor.max_cstate=0 mce=ignore_ce audit=0'</programlisting>Inside QEMU,
795 configure DPDK: <programlisting>mkdir -p /mnt/huge
796mount -t hugetlbfs nodev /mnt/huge
797modprobe igb_uio
798dpdk-devbind --bind=igb_uio 0000:00:02.0</programlisting>Inside QEMU, run
799 testpmd: <programlisting>testpmd -c 0x3 -n 2 librte_pmd_virtio.so.1.1 \
800-- --burst 64 --disable-rss -i --portmask=0x1 \
801--coremask=0x2 --nb-cores=1 --rxq=1 --txq=1 --txd=512 --rxd=512 \
802--port-topology=chained</programlisting>For the <emphasis
803 role="bold">Forwarding test</emphasis>, start testpmd
804 directly:<programlisting>start</programlisting>For the <emphasis
805 role="bold">Termination test</emphasis>, set testpmd to only
806 receive, then start it:<programlisting>set fwd rxonly
807start</programlisting>On target 1, you may start pktgen traffic
808 now:<programlisting>start 0</programlisting>On target 2, use this
809 command to refresh the testpmd display and note the highest
810 values:<programlisting>show port stats 0</programlisting>To stop
811 traffic from pktgen, in order to choose a different frame
812 size:<programlisting>stop 0</programlisting>To clear numbers in
813 testpmd:<programlisting>clear port stats
814show port stats 0</programlisting><table>
815 <title>Results in forwarding mode</title>
816
817 <tgroup cols="8">
818 <tbody>
819 <row>
820 <entry align="center"><emphasis
821 role="bold">Bytes</emphasis></entry>
822
823 <entry align="center"><emphasis role="bold">pktgen pps
824 RX</emphasis></entry>
825
826 <entry align="center"><emphasis role="bold">pktgen pps
827 TX</emphasis></entry>
828
829 <entry align="center"><emphasis role="bold">testpmd pps
830 RX</emphasis></entry>
831
832 <entry align="center"><emphasis role="bold">testpmd pps
833 TX</emphasis></entry>
834
835 <entry align="center"><emphasis role="bold">pktgen MBits/s
836 RX</emphasis></entry>
837
838 <entry align="center"><emphasis role="bold">pktgen MBits/s
839 TX</emphasis></entry>
840
841 <entry align="center"><emphasis role="bold">throughput
842 (%)</emphasis></entry>
843 </row>
844
845 <row>
846 <entry role="bold"><emphasis
847 role="bold">64</emphasis></entry>
848
849 <entry>7926325</entry>
850
851 <entry>14877576</entry>
852
853 <entry>7926515</entry>
854
855 <entry>7926515</entry>
856
857 <entry>5326</entry>
858
859 <entry>9997</entry>
860
861 <entry>53.2</entry>
862 </row>
863
864 <row>
865 <entry><emphasis role="bold">128</emphasis></entry>
866
867 <entry>7502802</entry>
868
869 <entry>8441253</entry>
870
871 <entry>7785983</entry>
872
873 <entry>7494959</entry>
874
875 <entry>8883</entry>
876
877 <entry>9994</entry>
878
879 <entry>88.8</entry>
880 </row>
881
882 <row>
883 <entry role="bold"><emphasis
884 role="bold">256</emphasis></entry>
885
886 <entry>4528631</entry>
887
888 <entry>4528782</entry>
889
890 <entry>4529515</entry>
891
892 <entry>4529515</entry>
893
894 <entry>9999</entry>
895
896 <entry>9999</entry>
897
898 <entry>99.9</entry>
899 </row>
900 </tbody>
901 </tgroup>
902 </table><table>
903 <title>Results in termination mode</title>
904
905 <tgroup cols="5">
906 <tbody>
907 <row>
908 <entry align="center"><emphasis
909 role="bold">Bytes</emphasis></entry>
910
911 <entry align="center"><emphasis role="bold">pktgen pps
912 TX</emphasis></entry>
913
914 <entry align="center"><emphasis role="bold">testpmd pps
915 RX</emphasis></entry>
916
917 <entry align="center"><emphasis role="bold">pktgen MBits/s
918 TX</emphasis></entry>
919
920 <entry align="center"><emphasis role="bold">throughput
921 (%)</emphasis></entry>
922 </row>
923
924 <row>
925 <entry role="bold"><emphasis
926 role="bold">64</emphasis></entry>
927
928 <entry>14877764</entry>
929
930 <entry>8090855</entry>
931
932 <entry>9997</entry>
933
934 <entry>54.3</entry>
935 </row>
936
937 <row>
938 <entry><emphasis role="bold">128</emphasis></entry>
939
940 <entry>8441309</entry>
941
942 <entry>8082971</entry>
943
944 <entry>9994</entry>
945
946 <entry>95.7</entry>
947 </row>
948
949 <row>
950 <entry role="bold"><emphasis
951 role="bold">256</emphasis></entry>
952
953 <entry>4528867</entry>
954
955 <entry>4528780</entry>
956
957 <entry>9999</entry>
958
959 <entry>99.9</entry>
960 </row>
961 </tbody>
962 </tgroup>
963 </table></para>
964 </section>
965 </section>
966
967 <section id="usecase-six">
968 <title>Forward traffic between two VMs</title>
969
970 <para>This benchmark runs a combined test using two virtual machines:
971 the first forwards traffic to the second, which terminates it.</para>
972
973 <para>Measurements are made in:</para>
974
975 <itemizedlist>
976 <listitem>
977 <para>pktgen TX in pps and Mbits/s</para>
978 </listitem>
979
980 <listitem>
981 <para>testpmd TX and RX pps in VM1</para>
982 </listitem>
983
984 <listitem>
985 <para>testpmd RX pps in VM2</para>
986 </listitem>
987
988 <listitem>
989 <para>throughput as a percentage, computed by dividing <emphasis role="bold">
990 VM2 testpmd RX pps</emphasis> by <emphasis role="bold">pktgen TX
991 pps</emphasis></para>
992 </listitem>
993 </itemizedlist>
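<para>The throughput percentage defined above can be cross-checked
directly. A minimal shell sketch, using illustrative values taken from
the 128-byte row of the forwarding results table:</para>

```shell
# Compute throughput % = VM2 testpmd RX pps / pktgen TX pps.
# The values below are from the 128-byte row of the forwarding results.
pktgen_tx_pps=8441333
vm2_rx_pps=5716752

awk -v rx="$vm2_rx_pps" -v tx="$pktgen_tx_pps" \
    'BEGIN { printf "throughput: %.1f%%\n", 100 * rx / tx }'
```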
994
995 <section id="targetone-usecase-five">
996 <title>Test Setup for Target 1</title>
997
998 <para>Start by doing the following:</para>
999
1000 <para>SSD boot using the following <literal>grub.cfg</literal>
1001 entry: <programlisting>linux (hd0,gpt3)/boot/bzImage root=/dev/sda3 ip=dhcp nohz_full=1-7 \
1002isolcpus=1-7 rcu-nocbs=1-7 rcu_nocb_poll intel_pstate=disable \
1003clocksource=tsc tsc=reliable nohpet nosoftlockup intel_idle.max_cstate=0 \
1004processor.max_cstate=0 mce=ignore_ce audit=0 nmi_watchdog=0 iommu=pt \
1005intel_iommu=on hugepagesz=1GB hugepages=8 default_hugepagesz=1GB \
1006hugepagesz=2M hugepages=2048 vfio_iommu_type1.allow_unsafe_interrupts=1</programlisting>Kill
1007 Services:<programlisting>killall ovsdb-server ovs-vswitchd
1008rm -rf /etc/openvswitch/*
1009mkdir -p /var/run/openvswitch</programlisting>Configure DPDK:<programlisting>mkdir -p /mnt/huge
1010mount -t hugetlbfs nodev /mnt/huge
1011modprobe igb_uio
1012dpdk-devbind --bind=igb_uio 0000:03:00.0</programlisting>Run
1013 pktgen:<programlisting>cd /usr/share/apps/pktgen/
1014./pktgen -c 0x7 -n 4 --proc-type auto --socket-mem 256 \
1015-w 0000:03:00.0 -- -P -m "[1:2].0"</programlisting>Set pktgen frame size to
1016 use from [64, 128, 256, 512]:<programlisting>set 0 size 64</programlisting></para>
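<para>For reference when reading the pktgen TX numbers, the theoretical
line-rate packet rate on a 10 Gbit/s link follows from the frame size
plus the 20 bytes of per-frame wire overhead (8-byte preamble and
12-byte inter-frame gap). A small shell sketch:</para>

```shell
# Theoretical line-rate pps on 10GbE for the frame sizes tested.
# Each frame occupies (size + 20) bytes on the wire.
for size in 64 128 256 512; do
    awk -v s="$size" \
        'BEGIN { printf "%4d bytes: %d pps\n", s, 10e9 / ((s + 20) * 8) }'
done
```

<para>At 64 bytes this gives 14,880,952 pps, which matches the TX rates
pktgen reports when it runs at full line rate.</para>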
1017 </section>
1018
1019 <section id="targettwo-usecase-six">
1020 <title>Test Setup for Target 2</title>
1021
1022 <para>Start by doing the following:</para>
1023
1024 <para>SSD boot using the following <literal>grub.cfg</literal>
1025 entry: <programlisting>linux (hd0,gpt3)/boot/bzImage root=/dev/sda3 ip=dhcp nohz_full=1-7 \
1026isolcpus=1-7 rcu-nocbs=1-7 rcu_nocb_poll intel_pstate=disable \
1027clocksource=tsc tsc=reliable nohpet nosoftlockup intel_idle.max_cstate=0 \
1028processor.max_cstate=0 mce=ignore_ce audit=0 nmi_watchdog=0 iommu=pt \
1029intel_iommu=on hugepagesz=1GB hugepages=8 default_hugepagesz=1GB \
1030hugepagesz=2M hugepages=2048 vfio_iommu_type1.allow_unsafe_interrupts=1</programlisting>Kill
1031 Services:<programlisting>killall ovsdb-server ovs-vswitchd
1032rm -rf /etc/openvswitch/*
1033mkdir -p /var/run/openvswitch</programlisting>Configure DPDK:<programlisting>mkdir -p /mnt/huge
1034mount -t hugetlbfs nodev /mnt/huge
1035modprobe igb_uio
1036dpdk-devbind --bind=igb_uio 0000:03:00.0</programlisting>Configure
1037 OVS:<programlisting>export DB_SOCK=/var/run/openvswitch/db.sock
1038ovsdb-tool create /etc/openvswitch/conf.db \
1039/usr/share/openvswitch/vswitch.ovsschema
1040ovsdb-server --remote=punix:$DB_SOCK \
1041--remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach
1042ovs-vsctl --no-wait init
1043ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x10
1044ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0xc
1045ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem=2048
1046ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
1047ovs-vswitchd unix:$DB_SOCK --pidfile \
1048--detach --log-file=/var/log/openvswitch/ovs-vswitchd.log
1049
1050
1051ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
1052ovs-vsctl add-port ovsbr0 dpdk0 \
1053-- set Interface dpdk0 type=dpdk options:dpdk-devargs=0000:03:00.0 ofport_request=1
1054ovs-vsctl add-port ovsbr0 vhost-user1 \
1055-- set Interface vhost-user1 type=dpdkvhostuser ofport_request=2
1056ovs-vsctl add-port ovsbr0 vhost-user2 \
1057-- set Interface vhost-user2 type=dpdkvhostuser ofport_request=3
1058
1059
1060ovs-ofctl del-flows ovsbr0
1061ovs-ofctl add-flow ovsbr0 in_port=1,action=output:2
1062ovs-ofctl add-flow ovsbr0 in_port=2,action=output:3</programlisting>Launch
1063 first QEMU instance, VM1:<programlisting>taskset -c 0,1 qemu-system-x86_64 -cpu host,+invtsc,migratable=no -M q35 \
1064-smp cores=2,sockets=1 -vcpu 0,affinity=0 -vcpu 1,affinity=1 -enable-kvm \
1065-nographic -realtime mlock=on -kernel /home/root/qemu/bzImage \
1066-drive file=/home/root/qemu/enea-nfv-access-guest-qemux86-64.ext4,\
1067if=virtio,format=raw -m 2048 -object memory-backend-file,id=mem,\
1068size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem \
1069-mem-prealloc -chardev socket,id=char0,path=/var/run/openvswitch/vhost-user1 \
1070-netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
1071-device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,\
1072mrg_rxbuf=on,rx_queue_size=1024,csum=off,gso=off,guest_tso4=off,\
1073guest_tso6=off,guest_ecn=off -append 'root=/dev/vda console=ttyS0 \
1074hugepagesz=2M hugepages=512 isolcpus=1 nohz_full=1 rcu_nocbs=1 \
1075irqaffinity=0 rcu_nocb_poll intel_pstate=disable intel_idle.max_cstate=0 \
1076processor.max_cstate=0 mce=ignore_ce audit=0'</programlisting>Connect to
1077 Target 2 through a new SSH session and run a second QEMU instance
1078 (to get its own console, separate from instance VM1). We shall call
1079 this VM2:<programlisting>taskset -c 4,5 qemu-system-x86_64 -cpu host,+invtsc,migratable=no \
1080-M q35 -smp cores=2,sockets=1 -vcpu 0,affinity=4 -vcpu 1,affinity=5 \
1081-enable-kvm -nographic -realtime mlock=on -kernel /home/root/qemu2/bzImage \
1082-drive file=/home/root/qemu2/enea-nfv-access-guest-qemux86-64.ext4,\
1083if=virtio,format=raw -m 2048 -object memory-backend-file,id=mem,size=2048M,\
1084mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
1085-chardev socket,id=char1,path=/var/run/openvswitch/vhost-user2 \
1086-netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce \
1087-device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet1,\
1088mrg_rxbuf=on,rx_queue_size=1024,csum=off,gso=off,guest_tso4=off,\
1089guest_tso6=off,guest_ecn=off -append 'root=/dev/vda console=ttyS0 \
1090hugepagesz=2M hugepages=512 isolcpus=1 nohz_full=1 rcu_nocbs=1 \
1091irqaffinity=0 rcu_nocb_poll intel_pstate=disable intel_idle.max_cstate=0 \
1092processor.max_cstate=0 mce=ignore_ce audit=0'</programlisting>Configure DPDK
1093 inside VM1:<programlisting>mkdir -p /mnt/huge
1094mount -t hugetlbfs nodev /mnt/huge
1095modprobe igb_uio
1096dpdk-devbind --bind=igb_uio 0000:00:02.0</programlisting>Run testpmd inside
1097 VM1:<programlisting>testpmd -c 0x3 -n 2 -d librte_pmd_virtio.so.1.1 \
1098-- --burst 64 --disable-rss -i \
1099--portmask=0x1 --coremask=0x2 --nb-cores=1 --rxq=1 \
1100--txq=1 --txd=512 --rxd=512 --port-topology=chained</programlisting>Start
1101 testpmd inside VM1:<programlisting>start</programlisting>Configure
1102 DPDK inside VM2:<programlisting>mkdir -p /mnt/huge
1103mount -t hugetlbfs nodev /mnt/huge
1104modprobe igb_uio
1105dpdk-devbind --bind=igb_uio 0000:00:02.0</programlisting>Run testpmd inside
1106 VM2:<programlisting>testpmd -c 0x3 -n 2 -d librte_pmd_virtio.so.1.1 \
1107-- --burst 64 --disable-rss -i --portmask=0x1 \
1108--coremask=0x2 --nb-cores=1 --rxq=1 --txq=1 --txd=512 \
1109--rxd=512 --port-topology=chained</programlisting>Set VM2 for termination and
1110 start testpmd:<programlisting>set fwd rxonly
1111start</programlisting>On target 1, start pktgen traffic:<programlisting>start 0</programlisting>Use
1112 this command to refresh testpmd display in VM1 and VM2 and note the
1113 highest values:<programlisting>show port stats 0</programlisting>To
1114 stop traffic from pktgen, in order to choose a different frame
1115 size:<programlisting>stop 0</programlisting>To clear numbers in
1116 testpmd:<programlisting>clear port stats
1117show port stats 0</programlisting>For VM1, we record the stats relevant for
1118 <emphasis role="bold">forwarding</emphasis>:</para>
1119
1120 <itemizedlist>
1121 <listitem>
1122 <para>RX, TX in pps</para>
1123 </listitem>
1124 </itemizedlist>
1125
1126 <para>Only the Rx-pps and Tx-pps numbers are important here; they change
1127 every time the stats are displayed, as long as traffic is flowing. Run the
1128 command a few times and record the best (maximum) values seen.</para>
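<para>Collecting the maximum can also be scripted. A sketch that extracts
the highest Rx-pps value from repeated outputs; the sample lines below
only mimic the relevant part of testpmd's statistics display (format
assumed):</para>

```shell
# Keep the maximum Rx-pps seen across repeated "show port stats 0" runs.
# The printf stands in for captured testpmd output (format assumed).
printf '%s\n' \
  '  Rx-pps:      7689210' \
  '  Rx-pps:      7712835' \
  '  Rx-pps:      7705462' |
awk '/Rx-pps/ { if ($2 + 0 > max) max = $2 + 0 } END { print "max Rx-pps: " max }'
```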
1129
1130 <para>For VM2, we record the stats relevant for <emphasis
1131 role="bold">termination</emphasis>:</para>
1132
1133 <itemizedlist>
1134 <listitem>
1135 <para>RX in pps (TX will be 0)</para>
1136 </listitem>
1137 </itemizedlist>
1138
1139 <para>For pktgen, we record only the TX side, because the flow is
1140 terminated and no RX traffic reaches pktgen:</para>
1141
1142 <itemizedlist>
1143 <listitem>
1144 <para>TX in pps and Mbit/s</para>
1145 </listitem>
1146 </itemizedlist>
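<para>The pktgen pps and Mbit/s figures are related through the on-wire
frame size: Mbit/s = pps * (size + 20) * 8 / 1e6, where the extra 20
bytes are the preamble and inter-frame gap. A sketch cross-checking the
64-byte row of the table below:</para>

```shell
# Cross-check pktgen's Mbit/s figure against its pps figure for
# 64-byte frames: each frame occupies (size + 20) bytes on the wire.
awk -v pps=14877757 -v size=64 \
    'BEGIN { printf "%d Mbit/s\n", pps * (size + 20) * 8 / 1e6 }'
```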
1147
1148 <table>
1149 <title>Results in forwarding mode</title>
1150
1151 <tgroup cols="7">
1152 <tbody>
1153 <row>
1154 <entry align="center"><emphasis
1155 role="bold">Bytes</emphasis></entry>
1156
1157 <entry align="center"><emphasis role="bold">pktgen pps
1158 TX</emphasis></entry>
1159
1160 <entry align="center"><emphasis role="bold">VM1 testpmd pps
1161 RX</emphasis></entry>
1162
1163 <entry align="center"><emphasis role="bold">VM1 testpmd pps
1164 TX</emphasis></entry>
1165
1166 <entry align="center"><emphasis role="bold">VM2 testpmd pps
1167 RX</emphasis></entry>
1168
1169 <entry align="center"><emphasis role="bold">pktgen MBits/s
1170 TX</emphasis></entry>
1171
1172 <entry align="center"><emphasis role="bold">throughput
1173 (%)</emphasis></entry>
1174 </row>
1175
1176 <row>
1177 <entry role="bold"><emphasis
1178 role="bold">64</emphasis></entry>
1179
1180 <entry>14877757</entry>
1181
1182 <entry>7712835</entry>
1183
1184 <entry>6024320</entry>
1185
1186 <entry>6015525</entry>
1187
1188 <entry>9997</entry>
1189
1190 <entry>40.0</entry>
1191 </row>
1192
1193 <row>
1194 <entry><emphasis role="bold">128</emphasis></entry>
1195
1196 <entry>8441333</entry>
1197
1198 <entry>7257432</entry>
1199
1200 <entry>5717540</entry>
1201
1202 <entry>5716752</entry>
1203
1204 <entry>9994</entry>
1205
1206 <entry>67.7</entry>
1207 </row>
1208
1209 <row>
1210 <entry role="bold"><emphasis
1211 role="bold">256</emphasis></entry>
1212
1213 <entry>4528865</entry>
1214
1215 <entry>4528717</entry>
1216
1217 <entry>4528717</entry>
1218
1219 <entry>4528621</entry>
1220
1221 <entry>9999</entry>
1222
1223 <entry>99.9</entry>
1224 </row>
1225 </tbody>
1226 </tgroup>
1227 </table>
1228 </section>
1229 </section>
1230
1231 <section id="pxe-config-vm">
1232 <title>SR-IOV in Virtual Machines</title>
1233
1234 <para>These are PCI passthrough tests run with pktgen and testpmd in
1235 virtual machines.</para>
1236
1237 <para>The traffic path is: pktgen[DPDK]VM - PHY - VM[DPDK] testpmd.</para>
1238
1239 <para>Measurements:</para>
1240
1241 <itemizedlist>
1242 <listitem>
1243 <para>pktgen to testpmd in <emphasis
1244 role="bold">forwarding</emphasis> mode.</para>
1245 </listitem>
1246
1247 <listitem>
1248 <para>pktgen to testpmd in <emphasis
1249 role="bold">termination</emphasis> mode.</para>
1250 </listitem>
1251 </itemizedlist>
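<para>As a rough sanity check on the SR-IOV numbers, the 64-byte pktgen
TX rate from the termination-mode results can be compared against the
theoretical 10GbE line rate. A shell sketch (the 14202904 pps value is
taken from that table):</para>

```shell
# Fraction of 64-byte line rate achieved by pktgen TX in termination mode.
awk -v pps=14202904 'BEGIN {
    line = 10e9 / ((64 + 20) * 8)    # theoretical 64-byte line-rate pps
    printf "%.1f%% of line rate\n", 100 * pps / line
}'
```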
1252
1253 <section id="test-setup-target-four">
1254 <title>Test Setup</title>
1255
1256 <para>SSD boot using the following <literal>grub.cfg</literal>
1257 entry: <programlisting>linux (hd0,gpt3)/boot/bzImage root=/dev/sda3 ip=dhcp nohz_full=1-7 \
1258isolcpus=1-7 rcu-nocbs=1-7 rcu_nocb_poll intel_pstate=disable \
1259clocksource=tsc tsc=reliable nohpet nosoftlockup intel_idle.max_cstate=0 \
1260processor.max_cstate=0 mce=ignore_ce audit=0 nmi_watchdog=0 iommu=pt \
1261intel_iommu=on hugepagesz=1GB hugepages=8 default_hugepagesz=1GB \
1262hugepagesz=2M hugepages=2048 vfio_iommu_type1.allow_unsafe_interrupts=1</programlisting>Stop
1263 other services and mount hugepages: <programlisting>systemctl stop openvswitch
1264mkdir -p /mnt/huge
1265mount -t hugetlbfs hugetlbfs /mnt/huge</programlisting>Configure SR-IOV
1266 interfaces:<programlisting>/usr/share/usertools/dpdk-devbind.py --bind=ixgbe 0000:03:00.0
1267echo 2 &gt; /sys/class/net/eno3/device/sriov_numvfs
1268ifconfig eno3 10.0.0.1
1269modprobe vfio_pci
1270/usr/share/usertools/dpdk-devbind.py --bind=vfio-pci 0000:03:10.0
1271/usr/share/usertools/dpdk-devbind.py --bind=vfio-pci 0000:03:10.2
1272ip link set eno3 vf 0 mac 0c:c4:7a:e5:0f:48
1273ip link set eno3 vf 1 mac 0c:c4:7a:bf:52:e7
1274 instances: <programlisting>taskset -c 4,5 qemu-system-x86_64 -cpu host,+invtsc,migratable=no -M \
1275q35 -smp cores=2,sockets=1 -vcpu 0,affinity=4 -vcpu 1,affinity=5 -enable-kvm \
1276-nographic -kernel /mnt/qemu/bzImage \
1277-drive file=/mnt/qemu/enea-nfv-access-guest-qemux86-64.ext4,if=virtio,\
1278format=raw -m 4096 -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,\
1279share=on -numa node,memdev=mem -mem-prealloc -device vfio-pci,host=03:10.0 \
1280-append 'root=/dev/vda console=ttyS0 hugepagesz=2M hugepages=1024 \
1281isolcpus=1 nohz_full=1 rcu_nocbs=1 irqaffinity=0 rcu_nocb_poll \
1282intel_pstate=disable intel_idle.max_cstate=0 \
1283processor.max_cstate=0 mce=ignore_ce audit=0'
1284
1285
1286taskset -c 2,3 qemu-system-x86_64 -cpu host,+invtsc,migratable=no -M \
1287q35 -smp cores=2,sockets=1 -vcpu 0,affinity=2 -vcpu 1,affinity=3 -enable-kvm \
1288-nographic -kernel /mnt/qemu/bzImage \
1289-drive file=/mnt/qemu/enea-nfv-access-guest-qemux86-64.ext4,if=virtio,\
1290format=raw -m 4096 -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,\
1291share=on -numa node,memdev=mem -mem-prealloc -device vfio-pci,host=03:10.2 \
1292-append 'root=/dev/vda console=ttyS0 hugepagesz=2M hugepages=1024 \
1293isolcpus=1 nohz_full=1 rcu_nocbs=1 irqaffinity=0 rcu_nocb_poll \
1294intel_pstate=disable intel_idle.max_cstate=0 processor.max_cstate=0 \
1295mce=ignore_ce audit=0'</programlisting>In the first VM, mount hugepages and
1296 start pktgen:<programlisting>mkdir -p /mnt/huge &amp;&amp; \
1297mount -t hugetlbfs hugetlbfs /mnt/huge
1298modprobe igb_uio
1299/usr/share/usertools/dpdk-devbind.py --bind=igb_uio 0000:00:03.0
1300cd /usr/share/apps/pktgen
1301./pktgen -c 0x3 -- -P -m "1.0"</programlisting>In the pktgen console set the
1303 MAC address of the destination and start generating
1304 packets:<programlisting>set mac 0 0C:C4:7A:BF:52:E7
1304str</programlisting>In the second VM, mount hugepages and start
1305 testpmd:<programlisting>mkdir -p /mnt/huge &amp;&amp; \
1306mount -t hugetlbfs hugetlbfs /mnt/huge
1307modprobe igb_uio
1308/usr/share/usertools/dpdk-devbind.py --bind=igb_uio 0000:00:03.0
1309testpmd -c 0x3 -n 2 -- -i --txd=512 --rxd=512 --port-topology=chained \
1310--eth-peer=0,0c:c4:7a:e5:0f:48</programlisting>In order to enable <emphasis
1311 role="bold">forwarding</emphasis> mode, in the testpmd console,
1312 run:<programlisting>set fwd mac
1313start</programlisting>In order to enable <emphasis
1314 role="bold">termination</emphasis> mode, in the testpmd console,
1315 run:<programlisting>set fwd rxonly
1316start</programlisting><table>
1317 <title>Results in forwarding mode</title>
1318
1319 <tgroup cols="5">
1320 <tbody>
1321 <row>
1322 <entry align="center"><emphasis
1323 role="bold">Bytes</emphasis></entry>
1324
1325 <entry align="center"><emphasis role="bold">VM1 pktgen pps
1326 TX</emphasis></entry>
1327
1328 <entry align="center"><emphasis role="bold">VM1 pktgen pps
1329 RX</emphasis></entry>
1330
1331 <entry align="center"><emphasis role="bold">VM2 testpmd
1332 pps RX</emphasis></entry>
1333
1334 <entry align="center"><emphasis role="bold">VM2 testpmd
1335 pps TX</emphasis></entry>
1336 </row>
1337
1338 <row>
1339 <entry role="bold"><emphasis
1340 role="bold">64</emphasis></entry>
1341
1342 <entry>7102096</entry>
1343
1344 <entry>7101897</entry>
1345
1346 <entry>7103853</entry>
1347
1348 <entry>7103793</entry>
1349 </row>
1350
1351 <row>
1352 <entry><emphasis role="bold">128</emphasis></entry>
1353
1354 <entry>5720016</entry>
1355
1356 <entry>5720256</entry>
1357
1358 <entry>5722081</entry>
1359
1360 <entry>5722083</entry>
1361 </row>
1362
1363 <row>
1364 <entry role="bold"><emphasis
1365 role="bold">256</emphasis></entry>
1366
1367 <entry>3456619</entry>
1368
1369 <entry>3456164</entry>
1370
1371 <entry>3456319</entry>
1372
1373 <entry>3456321</entry>
1374 </row>
1375
1376 <row>
1377 <entry role="bold"><emphasis
1378 role="bold">512</emphasis></entry>
1379
1380 <entry>1846671</entry>
1381
1382 <entry>1846628</entry>
1383
1384 <entry>1846652</entry>
1385
1386 <entry>1846657</entry>
1387 </row>
1388
1389 <row>
1390 <entry role="bold"><emphasis
1391 role="bold">1024</emphasis></entry>
1392
1393 <entry>940799</entry>
1394
1395 <entry>940748</entry>
1396
1397 <entry>940788</entry>
1398
1399 <entry>940788</entry>
1400 </row>
1401
1402 <row>
1403 <entry role="bold"><emphasis
1404 role="bold">1500</emphasis></entry>
1405
1406 <entry>649594</entry>
1407
1408 <entry>649526</entry>
1409
1410 <entry>649563</entry>
1411
1412 <entry>649563</entry>
1413 </row>
1414 </tbody>
1415 </tgroup>
1416 </table><table>
1417 <title>Results in termination mode</title>
1418
1419 <tgroup cols="3">
1420 <tbody>
1421 <row>
1422 <entry align="center"><emphasis
1423 role="bold">Bytes</emphasis></entry>
1424
1425 <entry align="center"><emphasis role="bold">VM1 pktgen pps
1426 TX</emphasis></entry>
1427
1428 <entry align="center"><emphasis role="bold">VM2 testpmd
1429 pps RX</emphasis></entry>
1430 </row>
1431
1432 <row>
1433 <entry role="bold"><emphasis
1434 role="bold">64</emphasis></entry>
1435
1436 <entry>14202904</entry>
1437
1438 <entry>14203944</entry>
1439 </row>
1440
1441 <row>
1442 <entry><emphasis role="bold">128</emphasis></entry>
1443
1444 <entry>8434766</entry>
1445
1446 <entry>8437525</entry>
1447 </row>
1448
1449 <row>
1450 <entry role="bold"><emphasis
1451 role="bold">256</emphasis></entry>
1452
1453 <entry>4532131</entry>
1454
1455 <entry>4532348</entry>
1456 </row>
1457
1458 <row>
1459 <entry><emphasis role="bold">512</emphasis></entry>
1460
1461 <entry>2349344</entry>
1462
1463 <entry>2349032</entry>
1464 </row>
1465
1466 <row>
1467 <entry><emphasis role="bold">1024</emphasis></entry>
1468
1469 <entry>1197293</entry>
1470
1471 <entry>1196699</entry>
1472 </row>
1473
1474 <row>
1475 <entry><emphasis role="bold">1500</emphasis></entry>
1476
1477 <entry>822321</entry>
1478
1479 <entry>822276</entry>
1480 </row>
1481 </tbody>
1482 </tgroup>
1483 </table></para>
1484 </section>
1485 </section>
1486 </section>
1487 </section>
1488</chapter> \ No newline at end of file