authorMiruna Paun <Miruna.Paun@enea.com>2017-07-03 19:45:58 +0200
committerMiruna Paun <Miruna.Paun@enea.com>2017-07-03 19:45:58 +0200
commitac3303db957814b6202530af6ce2e3da47525b5d (patch)
tree8db47dd1f8db216bac1e3c1a0891856396df68a7
parenteeb28fcfbb693e00df2f5d1fd100dcbb548179fc (diff)
downloadel_releases-virtualization-ac3303db957814b6202530af6ce2e3da47525b5d.tar.gz
Updating the benchmarks chapter with all the needed info
LXCR-7844 Signed-off-by: Miruna Paun <Miruna.Paun@enea.com>
-rw-r--r--doc/book-enea-nfv-access-platform-guide/doc/benchmarks.xml1037
1 file changed, 1034 insertions(+), 3 deletions(-)
diff --git a/doc/book-enea-nfv-access-platform-guide/doc/benchmarks.xml b/doc/book-enea-nfv-access-platform-guide/doc/benchmarks.xml
index 5d6e268..7155e44 100644
--- a/doc/book-enea-nfv-access-platform-guide/doc/benchmarks.xml
+++ b/doc/book-enea-nfv-access-platform-guide/doc/benchmarks.xml
@@ -1,14 +1,1045 @@
1<?xml version="1.0" encoding="ISO-8859-1"?>
2<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
3"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
4<chapter id="benchmarks">
5  <title>Benchmarks</title>
6
7  <para></para>
8
9  <section id="hw-setup">
10    <title>Hardware Setup</title>
11
12    <para></para>
13  </section>
14
15 <section id="bios">
16 <title>BIOS</title>
17
18 <para></para>
19 </section>
20
21 <section id="use-cases">
22 <title>Use Cases</title>
23
24 <section id="docker-benchmarks">
25 <title>Docker-related benchmarks</title>
26
27 <section>
28 <title>Use Case - Forward traffic in docker</title>
29
30 <para>Benchmarking traffic forwarding using testpmd in a Docker
31 container.</para>
32
33 <para>Pktgen is used to generate UDP traffic that will reach testpmd,
34 running in a Docker container. The traffic is then forwarded back to
35 the source on the return trip (forwarding).</para>
36
37 <para>This test measures:</para>
38
39 <itemizedlist>
40 <listitem>
41 <para>pktgen TX, RX in packets per second (pps) and Mbps</para>
42 </listitem>
43
44 <listitem>
45 <para>testpmd TX, RX in packets per second (pps)</para>
46 </listitem>
47
48 <listitem>
49 <para>testpmd RX divided by pktgen TX (in pps), to obtain the
50 throughput as a percentage (%)</para>
51 </listitem>
52 </itemizedlist>
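The throughput figure in the last bullet can be sketched with a small helper. This is an illustrative calculation only, not part of the Enea test scripts; the function name `throughput_pct` is invented here:

```python
# Illustrative helper (not part of the test setup): throughput as the
# percentage of pktgen's offered load that testpmd actually received.
def throughput_pct(testpmd_rx_pps: int, pktgen_tx_pps: int) -> float:
    return 100.0 * testpmd_rx_pps / pktgen_tx_pps

# Round-number example (values not taken from the result tables):
print(round(throughput_pct(7_500_000, 15_000_000), 2))  # 50.0
```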
53
54 <section id="usecase-one">
55 <title>Test Setup for Target 1</title>
56
57 <para>Start by following the steps below:</para>
58
59 <para>SSD boot using the following <literal>grub.cfg</literal>
60 entry: <programlisting>linux (hd0,gpt3)/boot/bzImage root=/dev/sda3 ip=dhcp nohz_full=1-7 \
61isolcpus=1-7 rcu-nocbs=1-7 rcu_nocb_poll intel_pstate=disable \
62clocksource=tsc tsc=reliable nohpet nosoftlockup intel_idle.max_cstate=0 \
63processor.max_cstate=0 mce=ignore_ce audit=0 nmi_watchdog=0 iommu=pt \
64intel_iommu=on hugepagesz=1GB hugepages=8 default_hugepagesz=1GB \
65hugepagesz=2M hugepages=2048 vfio_iommu_type1.allow_unsafe_interrupts=1</programlisting></para>
66
67 <para>Kill unnecessary services:<programlisting>killall ovsdb-server ovs-vswitchd
68rm -rf /etc/openvswitch/*
69mkdir -p /var/run/openvswitch</programlisting>Mount hugepages and configure
70 DPDK:<programlisting>mkdir -p /mnt/huge
71mount -t hugetlbfs nodev /mnt/huge
72modprobe igb_uio
73dpdk-devbind --bind=igb_uio 0000:03:00.0</programlisting>Run
74 pktgen:<programlisting>cd /usr/share/apps/pktgen/
75./pktgen -c 0xF -n 1 -- -P -m "[3:2].0"</programlisting>In the pktgen console
76 run:<programlisting>str</programlisting>To change the frame size for
77 pktgen, choose from [64, 128, 256, 512]:<programlisting>set 0 size &lt;number&gt;</programlisting></para>
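As a point of reference for the frame sizes above, the theoretical maximum packet rate can be computed from the frame size plus the fixed per-frame wire overhead. The sketch below assumes a 10 Gbit/s link; it is a back-of-the-envelope check, not part of the test procedure:

```python
# Theoretical line rate for a given Ethernet frame size, assuming a
# 10 Gbit/s link. Every frame carries 20 extra bytes on the wire:
# 7 B preamble + 1 B start-of-frame delimiter + 12 B inter-frame gap.
LINK_BPS = 10_000_000_000  # assumed 10GbE link
WIRE_OVERHEAD_BYTES = 20

def max_pps(frame_bytes: int, link_bps: int = LINK_BPS) -> int:
    return link_bps // ((frame_bytes + WIRE_OVERHEAD_BYTES) * 8)

for size in (64, 128, 256, 512):
    print(size, max_pps(size))  # 64 B -> 14880952 pps (~14.88 Mpps)
```

The 64-byte result (~14.88 Mpps) matches the pktgen TX rates reported in the tables below.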
78 </section>
79
80 <section id="usecase-two">
81 <title>Test Setup for Target 2</title>
82
83 <para>Start by following the steps below:</para>
84
85 <para>SSD boot using the following <literal>grub.cfg</literal>
86 entry:</para>
87
88 <programlisting>linux (hd0,gpt3)/boot/bzImage root=/dev/sda3 ip=dhcp nohz_full=1-7 \
89isolcpus=1-7 rcu-nocbs=1-7 rcu_nocb_poll intel_pstate=disable \
90clocksource=tsc tsc=reliable nohpet nosoftlockup intel_idle.max_cstate=0 \
91processor.max_cstate=0 mce=ignore_ce audit=0 nmi_watchdog=0 iommu=pt \
92intel_iommu=on hugepagesz=1GB hugepages=8 default_hugepagesz=1GB \
93hugepagesz=2M hugepages=2048 vfio_iommu_type1.allow_unsafe_interrupts=1</programlisting>
94
95 <para>A Docker guest image is expected to be present on the target.
96 Configure the OVS bridge:<programlisting># OVS old config clean-up
97killall ovsdb-server ovs-vswitchd
98rm -rf /etc/openvswitch/*
99mkdir -p /var/run/openvswitch
100
101# Mount hugepages and bind interfaces to dpdk
102mkdir -p /mnt/huge
103mount -t hugetlbfs nodev /mnt/huge
104modprobe igb_uio
105dpdk-devbind --bind=igb_uio 0000:03:00.0
106
107# configure openvswitch with DPDK
108export DB_SOCK=/var/run/openvswitch/db.sock
109ovsdb-tool create /etc/openvswitch/conf.db \
110/usr/share/openvswitch/vswitch.ovsschema
111ovsdb-server --remote=punix:$DB_SOCK \
112--remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach
113ovs-vsctl --no-wait init
114ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x10
115ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0xc
116ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem=2048
117ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
118ovs-vswitchd unix:$DB_SOCK --pidfile --detach \
119--log-file=/var/log/openvswitch/ovs-vswitchd.log
120
121ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
122ovs-vsctl add-port ovsbr0 vhost-user1 \
123-- set Interface vhost-user1 type=dpdkvhostuser ofport_request=1
124ovs-vsctl add-port ovsbr0 dpdk0 -- set Interface \
125dpdk0 type=dpdk options:dpdk-devargs=0000:03:00.0 ofport_request=2
126
127# configure static flows
128ovs-ofctl del-flows ovsbr0
129ovs-ofctl add-flow ovsbr0 in_port=1,action=output:2
130ovs-ofctl add-flow ovsbr0 in_port=2,action=output:1</programlisting>Import a
131 Docker container:<programlisting>docker import enea-image-virtualization-guest-qemux86-64.tar.gz el7_guest</programlisting>Start
132 the Docker container:<programlisting>docker run -it --rm -v /var/run/openvswitch/:/var/run/openvswitch/ \
133-v /mnt/huge:/mnt/huge el7_guest /bin/bash</programlisting>Start the testpmd
134 application in Docker:<programlisting>testpmd -c 0x30 -n 2 --file-prefix prog1 --socket-mem 512 --no-pci \
135--vdev=virtio_user0,path=/var/run/openvswitch/vhost-user1 \
136-d /usr/lib/librte_pmd_virtio.so.1.1 -- --burst 64 --disable-hw-vlan \
137--disable-rss -i --portmask=0x1 --coremask=0x20 --nb-cores=1 \
138--rxq=1 --txq=1 --txd=512 --rxd=512 --txqflags=0xf00 --port-topology=chained</programlisting>To
139 start traffic <emphasis role="bold">forwarding</emphasis>, run the
140 following command in testpmd CLI:<programlisting>start</programlisting>To
141 start traffic but in <emphasis role="bold">termination</emphasis>
142 mode (no traffic sent on TX), run following command in testpmd
143 CLI:<programlisting>set fwd rxonly
144start</programlisting><table>
145 <title>Results in forwarding mode</title>
146
147 <tgroup cols="8">
148 <tbody>
149 <row>
150 <entry align="center"><emphasis
151 role="bold">Bytes</emphasis></entry>
152
153 <entry align="center"><emphasis role="bold">pktgen pps
154 TX</emphasis></entry>
155
156 <entry align="center"><emphasis role="bold">pktgen MBits/s
157 TX</emphasis></entry>
158
159 <entry align="center"><emphasis role="bold">pktgen pps
160 RX</emphasis></entry>
161
162 <entry align="center"><emphasis role="bold">pktgen MBits/s
163 RX</emphasis></entry>
164
165 <entry align="center"><emphasis role="bold">testpmd pps
166 RX</emphasis></entry>
167
168 <entry align="center"><emphasis role="bold">testpmd pps
169 TX</emphasis></entry>
170
171 <entry align="center"><emphasis role="bold">throughput
172 (%)</emphasis></entry>
173 </row>
174
175 <row>
176 <entry role="bold"><emphasis
177 role="bold">64</emphasis></entry>
178
179 <entry>14890993</entry>
180
181 <entry>10006</entry>
182
183 <entry>7706039</entry>
184
185 <entry>5178</entry>
186
187 <entry>7692807</entry>
188
189 <entry>7692864</entry>
190
191 <entry>51.74%</entry>
192 </row>
193
194 <row>
195 <entry><emphasis role="bold">128</emphasis></entry>
196
197 <entry>8435104</entry>
198
199 <entry>9999</entry>
200
201 <entry>7689458</entry>
202
203 <entry>9060</entry>
204
205 <entry>7684962</entry>
206
207 <entry>7684904</entry>
208
209 <entry>90.6%</entry>
210 </row>
211
212 <row>
213 <entry role="bold"><emphasis
214 role="bold">256</emphasis></entry>
215
216 <entry>4532384</entry>
217
218 <entry>9999</entry>
219
220 <entry>4532386</entry>
221
222 <entry>9998</entry>
223
224 <entry>4532403</entry>
225
226 <entry>4532403</entry>
227
228 <entry>99.9%</entry>
229 </row>
230 </tbody>
231 </tgroup>
232 </table><table>
233 <title>Results in termination mode</title>
234
235 <tgroup cols="4">
236 <tbody>
237 <row>
238 <entry align="center"><emphasis
239 role="bold">Bytes</emphasis></entry>
240
241 <entry align="center"><emphasis role="bold">pktgen pps
242 TX</emphasis></entry>
243
244 <entry align="center"><emphasis role="bold">testpmd pps
245 RX</emphasis></entry>
246
247 <entry align="center"><emphasis role="bold">throughput
248 (%)</emphasis></entry>
249 </row>
250
251 <row>
252 <entry role="bold"><emphasis
253 role="bold">64</emphasis></entry>
254
255 <entry>14890993</entry>
256
257 <entry>7330403</entry>
258
259 <entry>49.2%</entry>
260 </row>
261
262 <row>
263 <entry><emphasis role="bold">128</emphasis></entry>
264
265 <entry>8435104</entry>
266
267 <entry>7330379</entry>
268
269 <entry>86.9%</entry>
270 </row>
271
272 <row>
273 <entry role="bold"><emphasis
274 role="bold">256</emphasis></entry>
275
276 <entry>4532484</entry>
277
278 <entry>4532407</entry>
279
280 <entry>99.9%</entry>
281 </row>
282 </tbody>
283 </tgroup>
284 </table></para>
285 </section>
286 </section>
287
288 <section id="usecase-three-four">
289 <title>Use Case - Forward traffic from Docker to another Docker on the
290 same host</title>
291
292 <para>Benchmark a combo test using testpmd running in two Docker
293 containers, where the first forwards traffic to the second, which
294 terminates it.</para>
295
296 <para>Packets are generated with pktgen and transmitted to the first
297 testpmd instance, which receives and forwards them to the second,
298 which receives and terminates them.</para>
299
300 <para>Measurements are made in:</para>
301
302 <itemizedlist>
303 <listitem>
304 <para>pktgen TX in pps and Mbits/s</para>
305 </listitem>
306
307 <listitem>
308 <para>testpmd TX and RX pps in Docker1</para>
309 </listitem>
310
311 <listitem>
312 <para>testpmd RX pps in Docker2</para>
313 </listitem>
314 </itemizedlist>
315
316 <para>Throughput is obtained as a percentage, by dividing Docker2 <emphasis
317 role="bold">testpmd RX pps</emphasis> by <emphasis role="bold">pktgen
318 TX pps</emphasis>.</para>
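The setups below pin work to cores with hexadecimal masks (testpmd `-c`/`--coremask`, OVS `pmd-cpu-mask`, Docker `--cpuset-cpus`). A small decoder, included here purely as an illustration, shows which CPU cores a given mask selects:

```python
# Illustrative decoder: each set bit in a hex core mask selects one CPU core.
def mask_to_cores(mask: int) -> list[int]:
    return [bit for bit in range(mask.bit_length()) if (mask >> bit) & 1]

print(mask_to_cores(0x30))  # [4, 5] -> matches --cpuset-cpus=4,5 below
print(mask_to_cores(0xCC))  # [2, 3, 6, 7]
```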
319
320 <section id="target-one-usecase-three">
321 <title>Test Setup for Target 1</title>
322
323 <para>Start by following the steps below:</para>
324
325 <para>SSD boot using the following <literal>grub.cfg</literal>
326 entry:</para>
327
328 <programlisting>linux (hd0,gpt3)/boot/bzImage root=/dev/sda3 ip=dhcp nohz_full=1-7 \
329isolcpus=1-7 rcu-nocbs=1-7 rcu_nocb_poll intel_pstate=disable \
330clocksource=tsc tsc=reliable nohpet nosoftlockup intel_idle.max_cstate=0 \
331processor.max_cstate=0 mce=ignore_ce audit=0 nmi_watchdog=0 iommu=pt \
332intel_iommu=on hugepagesz=1GB hugepages=8 default_hugepagesz=1GB \
333hugepagesz=2M hugepages=2048 vfio_iommu_type1.allow_unsafe_interrupts=1</programlisting>
334
335 <para>Configure DPDK:<programlisting>mkdir -p /mnt/huge
336mount -t hugetlbfs nodev /mnt/huge
337modprobe igb_uio
338dpdk-devbind --bind=igb_uio 0000:03:00.0</programlisting>Run
339 pktgen:<programlisting>cd /usr/share/apps/pktgen/
340./pktgen -c 0xF -n 1 -- -P -m "[3:2].0"</programlisting>Choose one of the
341 values from [64, 128, 256, 512] to change the packet
342 size:<programlisting>set 0 size &lt;number&gt;</programlisting></para>
343 </section>
344
345 <section id="target-two-usecase-four">
346 <title>Test Setup for Target 2</title>
347
348 <para>Start by following the steps below:</para>
349
350 <para>SSD boot using the following <literal>grub.cfg</literal>
351 entry:</para>
352
353 <programlisting>linux (hd0,gpt3)/boot/bzImage root=/dev/sda3 ip=dhcp nohz_full=1-7 \
354isolcpus=1-7 rcu-nocbs=1-7 rcu_nocb_poll intel_pstate=disable \
355clocksource=tsc tsc=reliable nohpet nosoftlockup intel_idle.max_cstate=0 \
356processor.max_cstate=0 mce=ignore_ce audit=0 nmi_watchdog=0 \
357iommu=pt intel_iommu=on hugepagesz=1GB hugepages=8 default_hugepagesz=1GB \
358hugepagesz=2M hugepages=2048 vfio_iommu_type1.allow_unsafe_interrupts=1</programlisting>
359
360 <para><programlisting>killall ovsdb-server ovs-vswitchd
361rm -rf /etc/openvswitch/*
362mkdir -p /var/run/openvswitch</programlisting>Configure DPDK:<programlisting>mkdir -p /mnt/huge
363mount -t hugetlbfs nodev /mnt/huge
364modprobe igb_uio
365dpdk-devbind --bind=igb_uio 0000:03:00.0</programlisting>Configure the OVS
366 bridge:<programlisting>export DB_SOCK=/var/run/openvswitch/db.sock
367ovsdb-tool create /etc/openvswitch/conf.db \
368/usr/share/openvswitch/vswitch.ovsschema
369ovsdb-server --remote=punix:$DB_SOCK \
370--remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach
371ovs-vsctl --no-wait init
372ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x10
373ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0xcc
374ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem=2048
375ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
376ovs-vswitchd unix:$DB_SOCK --pidfile --detach \
377--log-file=/var/log/openvswitch/ovs-vswitchd.log
378ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
379ovs-vsctl add-port ovsbr0 vhost-user1 -- set Interface \
380vhost-user1 type=dpdkvhostuser ofport_request=1
381ovs-vsctl add-port ovsbr0 vhost-user2 -- set Interface \
382vhost-user2 type=dpdkvhostuser ofport_request=2
383ovs-vsctl add-port ovsbr0 dpdk0 -- set Interface dpdk0 \
384type=dpdk options:dpdk-devargs=0000:03:00.0 ofport_request=3
385ovs-ofctl del-flows ovsbr0
386ovs-ofctl add-flow ovsbr0 in_port=3,action=output:2
387ovs-ofctl add-flow ovsbr0 in_port=2,action=output:1</programlisting>Import a
388 Docker container:<programlisting>docker import enea-image-virtualization-guest-qemux86-64.tar.gz el7_guest</programlisting>Start
389 the first Docker:<programlisting>docker run -it --rm --cpuset-cpus=4,5 \
390-v /var/run/openvswitch/:/var/run/openvswitch/ \
391-v /mnt/huge:/mnt/huge el7_guest /bin/bash</programlisting>Start the testpmd
392 application in Docker1:<programlisting>testpmd -c 0x30 -n 2 --file-prefix prog1 --socket-mem 512 --no-pci \
393--vdev=virtio_user0,path=/var/run/openvswitch/vhost-user1 \
394-d /usr/lib/librte_pmd_virtio.so.1.1 -- --burst 64 --disable-hw-vlan \
395--disable-rss -i --portmask=0x1 --coremask=0x20 --nb-cores=1 \
396--rxq=1 --txq=1 --txd=512 --rxd=512 --txqflags=0xf00 --port-topology=chained</programlisting>Configure
397 it in termination mode:<programlisting>set fwd rxonly</programlisting>Run
398 the testpmd application:<programlisting>start</programlisting>Open a
399 new console to the host and start the second docker
400 instance:<programlisting>docker run -it --rm --cpuset-cpus=0,1 -v /var/run/openvswitch/:/var/run/openvswitch/ \
401-v /mnt/huge:/mnt/huge el7_guest /bin/bash</programlisting>In the second
402 container start testpmd:<programlisting>testpmd -c 0x0F --file-prefix prog2 --socket-mem 512 --no-pci \
403--vdev=virtio_user0,path=/var/run/openvswitch/vhost-user2 \
404-d /usr/lib/librte_pmd_virtio.so.1.1 -- -i --disable-hw-vlan</programlisting>Run
405 the testpmd application in the second Docker container:<programlisting>testpmd -c 0x3 -n 2 --file-prefix prog2 --socket-mem 512 --no-pci \
406--vdev=virtio_user0,path=/var/run/openvswitch/vhost-user2 \
407-d /usr/lib/librte_pmd_virtio.so.1.1 -- --burst 64 --disable-hw-vlan \
408--disable-rss -i --portmask=0x1 --coremask=0x2 --nb-cores=1 --rxq=1 \
409--txq=1 --txd=512 --rxd=512 --txqflags=0xf00 --port-topology=chained</programlisting>In
410 the testpmd shell, run:<programlisting>start</programlisting>Start
411 pktgen traffic by running the following command in pktgen
412 CLI:<programlisting>start 0</programlisting>To record traffic
413 results, run the following in the testpmd CLI of each
414 container:<programlisting>show port stats 0</programlisting></para>
415
416 <table>
417 <title>Results</title>
418
419 <tgroup cols="5">
420 <tbody>
421 <row>
422 <entry align="center"><emphasis
423 role="bold">Bytes</emphasis></entry>
424
425 <entry align="center"><emphasis role="bold">Target 1 -
426 pktgen pps TX</emphasis></entry>
427
428 <entry align="center"><emphasis role="bold">Target 2 -
429 (forwarding) testpmd pps RX</emphasis></entry>
430
431 <entry align="center"><emphasis role="bold">Target 2 -
432 (forwarding) testpmd pps TX</emphasis></entry>
433
434 <entry align="center"><emphasis role="bold">Target 2 -
435 (termination) testpmd pps RX</emphasis></entry>
436 </row>
437
438 <row>
439 <entry role="bold"><emphasis
440 role="bold">64</emphasis></entry>
441
442 <entry>14844628</entry>
443
444 <entry>5643565</entry>
445
446 <entry>3459922</entry>
447
448 <entry>3457326</entry>
449 </row>
450
451 <row>
452 <entry><emphasis role="bold">128</emphasis></entry>
453
454 <entry>8496962</entry>
455
456 <entry>5667860</entry>
457
458 <entry>3436811</entry>
459
460 <entry>3438918</entry>
461 </row>
462
463 <row>
464 <entry role="bold"><emphasis
465 role="bold">256</emphasis></entry>
466
467 <entry>4532372</entry>
468
469 <entry>4532362</entry>
470
471 <entry>3456623</entry>
472
473 <entry>3457115</entry>
474 </row>
475
476 <row>
477 <entry><emphasis role="bold">512</emphasis></entry>
478
479 <entry>2367641</entry>
480
481 <entry>2349450</entry>
482
483 <entry>2349450</entry>
484
485 <entry>2349446</entry>
486 </row>
487 </tbody>
488 </tgroup>
489 </table>
490 </section>
491 </section>
492 </section>
493
494 <section id="vm-benchmarks">
495 <title>VM-related benchmarks</title>
496
497 <section id="usecase-four">
498 <title>Use Case - Forward/terminate traffic in one VM</title>
499
500 <para>Benchmarking traffic (UDP) forwarding and termination using
501 testpmd in a virtual machine. </para>
502
503 <para>The Pktgen application is used to generate traffic that will
504 reach testpmd running in a virtual machine, and be forwarded back to
505 the source on the return trip. With the same setup, a second
506 measurement is made with traffic terminated in the virtual machine.</para>
507
508 <para>This test case measures:</para>
509
510 <itemizedlist>
511 <listitem>
512 <para>pktgen TX, RX in packets per second (pps) and Mbps</para>
513 </listitem>
514
515 <listitem>
516 <para>testpmd TX, RX in packets per second (pps)</para>
517 </listitem>
518
519 <listitem>
520 <para><emphasis role="bold">testpmd RX</emphasis> divided by
521 <emphasis role="bold">pktgen TX</emphasis> (in pps), to obtain the
522 throughput as a percentage (%)</para>
523 </listitem>
524 </itemizedlist>
525
526 <section id="targetone-usecasefour">
527 <title>Test Setup for Target 1</title>
528
529 <para>Start with the steps below:</para>
530
531 <para>SSD boot using the following <literal>grub.cfg
532 </literal>entry: <programlisting>linux (hd0,gpt3)/boot/bzImage root=/dev/sda3 ip=dhcp nohz_full=1-7 \
533isolcpus=1-7 rcu-nocbs=1-7 rcu_nocb_poll intel_pstate=disable \
534clocksource=tsc tsc=reliable nohpet nosoftlockup intel_idle.max_cstate=0 \
535processor.max_cstate=0 mce=ignore_ce audit=0 nmi_watchdog=0 iommu=pt \
536intel_iommu=on hugepagesz=1GB hugepages=8 default_hugepagesz=1GB \
537hugepagesz=2M hugepages=2048 vfio_iommu_type1.allow_unsafe_interrupts=1</programlisting></para>
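The grub entry above reserves two hugepage pools. A quick sanity check (illustrative only) of the total memory set aside, which must cover both the OVS `dpdk-socket-mem` allocation and the QEMU `memory-backend-file` size used later in this setup:

```python
# Hugepage memory reserved by the kernel command line above:
# 8 x 1 GB pages plus 2048 x 2 MB pages.
GB = 1024 ** 3
MB = 1024 ** 2
total_bytes = 8 * GB + 2048 * 2 * MB
print(total_bytes // GB)  # 12 -> 12 GB of hugepages in total
```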
538
539 <para>Kill unnecessary services: <programlisting>killall ovsdb-server ovs-vswitchd
540rm -rf /etc/openvswitch/*
541mkdir -p /var/run/openvswitch</programlisting>Configure DPDK:<programlisting>mkdir -p /mnt/huge
542mount -t hugetlbfs nodev /mnt/huge
543modprobe igb_uio
544dpdk-devbind --bind=igb_uio 0000:03:00.0</programlisting>Run
545 pktgen:<programlisting>cd /usr/share/apps/pktgen/
546./pktgen -c 0x7 -n 4 --proc-type auto --socket-mem 256 \
547-w 0000:03:00.0 -- -P -m "[1:2].0"</programlisting>Set pktgen frame size to
548 use from [64, 128, 256, 512]:<programlisting>set 0 size 64</programlisting></para>
549 </section>
550
551 <section id="targettwo-usecasefive">
552 <title>Test Setup for Target 2</title>
553
554 <para>Start by following the steps below:</para>
555
556 <para>SSD boot using the following <literal>grub.cfg</literal>
557 entry: <programlisting>linux (hd0,gpt3)/boot/bzImage root=/dev/sda3 ip=dhcp nohz_full=1-7 \
558isolcpus=1-7 rcu-nocbs=1-7 rcu_nocb_poll intel_pstate=disable \
559clocksource=tsc tsc=reliable nohpet nosoftlockup intel_idle.max_cstate=0 \
560processor.max_cstate=0 mce=ignore_ce audit=0 nmi_watchdog=0 iommu=pt \
561intel_iommu=on hugepagesz=1GB hugepages=8 default_hugepagesz=1GB \
562hugepagesz=2M hugepages=2048 vfio_iommu_type1.allow_unsafe_interrupts=1</programlisting>Kill
563 unnecessary services: <programlisting>killall ovsdb-server ovs-vswitchd
564rm -rf /etc/openvswitch/*
565mkdir -p /var/run/openvswitch</programlisting>Configure DPDK:<programlisting>mkdir -p /mnt/huge
566mount -t hugetlbfs nodev /mnt/huge
567modprobe igb_uio
568dpdk-devbind --bind=igb_uio 0000:03:00.0</programlisting>Configure
569 OVS:<programlisting>export DB_SOCK=/var/run/openvswitch/db.sock
570ovsdb-tool create /etc/openvswitch/conf.db \
571/usr/share/openvswitch/vswitch.ovsschema
572ovsdb-server --remote=punix:$DB_SOCK \
573--remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach
574ovs-vsctl --no-wait init
575ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x10
576ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0xc
577ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem=2048
578ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
579ovs-vswitchd unix:$DB_SOCK --pidfile --detach \
580--log-file=/var/log/openvswitch/ovs-vswitchd.log
581
582ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
583ovs-vsctl add-port ovsbr0 vhost-user1 \
584-- set Interface vhost-user1 type=dpdkvhostuser -- set Interface \
585vhost-user1 ofport_request=2
586ovs-vsctl add-port ovsbr0 dpdk0 -- set Interface dpdk0 \
587type=dpdk options:dpdk-devargs=0000:03:00.0 \
588-- set Interface dpdk0 ofport_request=1
589chmod 777 /var/run/openvswitch/vhost-user1
590
591ovs-ofctl del-flows ovsbr0
592ovs-ofctl add-flow ovsbr0 in_port=1,action=output:2
593ovs-ofctl add-flow ovsbr0 in_port=2,action=output:1</programlisting>Launch
594 QEMU:<programlisting>taskset -c 0,1 qemu-system-x86_64 -cpu host,+invtsc,migratable=no \
595-M q35 -smp cores=2,sockets=1 -vcpu 0,affinity=0 -vcpu 1,affinity=1 \
596-enable-kvm -nographic -realtime mlock=on -kernel /mnt/qemu/bzImage \
597-drive file=/mnt/qemu/enea-image-virtualization-guest-qemux86-64.ext4,\
598if=virtio,format=raw -m 4096 -object memory-backend-file,id=mem,\
599size=4096M,mem-path=/mnt/huge,share=on -numa node,memdev=mem \
600-mem-prealloc -chardev socket,id=char0,path=/var/run/openvswitch/vhost-user1 \
601-netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
602-device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,\
603mrg_rxbuf=on,rx_queue_size=1024,csum=off,gso=off,guest_tso4=off,\
604guest_tso6=off,guest_ecn=off -append 'root=/dev/vda console=ttyS0 \
605hugepagesz=2M hugepages=1024 isolcpus=1 nohz_full=1 rcu_nocbs=1 \
606irqaffinity=0 rcu_nocb_poll intel_pstate=disable intel_idle.max_cstate=0 \
607processor.max_cstate=0 mce=ignore_ce audit=0'</programlisting>Inside QEMU,
608 configure DPDK: <programlisting>mkdir -p /mnt/huge
609mount -t hugetlbfs nodev /mnt/huge
610modprobe igb_uio
611dpdk-devbind --bind=igb_uio 0000:00:02.0</programlisting>Inside QEMU, run
612 testpmd: <programlisting>testpmd -c 0x3 -n 2 -d librte_pmd_virtio.so.1.1 \
613-- --burst 64 --disable-hw-vlan --disable-rss -i --portmask=0x1 \
614--coremask=0x2 --nb-cores=1 --rxq=1 --txq=1 --txd=512 --rxd=512 \
615--txqflags=0xf00 --port-topology=chained</programlisting>For the <emphasis
616 role="bold">Forwarding test</emphasis>, start testpmd
617 directly:<programlisting>start</programlisting>For the <emphasis
618 role="bold">Termination test</emphasis>, set testpmd to only
619 receive, then start it:<programlisting>set fwd rxonly
620start</programlisting>On target 1, you may start pktgen traffic
621 now:<programlisting>start 0</programlisting>On target 2, use this
622 command to refresh the testpmd display and note the highest
623 values:<programlisting>show port stats 0</programlisting>To stop
624 traffic from pktgen, in order to choose a different frame
625 size:<programlisting>stop 0</programlisting>To clear numbers in
626 testpmd:<programlisting>clear port stats
627show port stats 0</programlisting><table>
628 <title>Results in forwarding mode</title>
629
630 <tgroup cols="8">
631 <tbody>
632 <row>
633 <entry align="center"><emphasis
634 role="bold">Bytes</emphasis></entry>
635
636 <entry align="center"><emphasis role="bold">pktgen pps
637 RX</emphasis></entry>
638
639 <entry align="center"><emphasis role="bold">pktgen pps
640 TX</emphasis></entry>
641
642 <entry align="center"><emphasis role="bold">testpmd pps
643 RX</emphasis></entry>
644
645 <entry align="center"><emphasis role="bold">testpmd pps
646 TX</emphasis></entry>
647
648 <entry align="center"><emphasis role="bold">pktgen MBits/s
649 RX</emphasis></entry>
650
651 <entry align="center"><emphasis role="bold">pktgen MBits/s
652 TX</emphasis></entry>
653
654 <entry align="center"><emphasis role="bold">throughput
655 (%)</emphasis></entry>
656 </row>
657
658 <row>
659 <entry role="bold"><emphasis
660 role="bold">64</emphasis></entry>
661
662 <entry>7755769</entry>
663
664 <entry>14858714</entry>
665
666 <entry>7755447</entry>
667
668 <entry>7755447</entry>
669
670 <entry>5207</entry>
671
672 <entry>9984</entry>
673
674 <entry>52.2</entry>
675 </row>
676
677 <row>
678 <entry><emphasis role="bold">128</emphasis></entry>
679
680 <entry>7714626</entry>
681
682 <entry>8435184</entry>
683
684 <entry>7520349</entry>
685
686 <entry>6932520</entry>
687
688 <entry>8204</entry>
689
690 <entry>9986</entry>
691
692 <entry>82.1</entry>
693 </row>
694
695 <row>
696 <entry role="bold"><emphasis
697 role="bold">256</emphasis></entry>
698
699 <entry>4528847</entry>
700
701 <entry>4528854</entry>
702
703 <entry>4529030</entry>
704
705 <entry>4529034</entry>
706
707 <entry>9999</entry>
708
709 <entry>9999</entry>
710
711 <entry>99.9</entry>
712 </row>
713 </tbody>
714 </tgroup>
715 </table><table>
716 <title>Results in termination mode</title>
717
718 <tgroup cols="5">
719 <tbody>
720 <row>
721 <entry align="center"><emphasis
722 role="bold">Bytes</emphasis></entry>
723
724 <entry align="center"><emphasis role="bold">pktgen pps
725 TX</emphasis></entry>
726
727 <entry align="center"><emphasis role="bold">testpmd pps
728 RX</emphasis></entry>
729
730 <entry align="center"><emphasis role="bold">pktgen MBits/s
731 TX</emphasis></entry>
732
733 <entry align="center"><emphasis role="bold">throughput
734 (%)</emphasis></entry>
735 </row>
736
737 <row>
738 <entry role="bold"><emphasis
739 role="bold">64</emphasis></entry>
740
741 <entry>15138992</entry>
742
743 <entry>7290663</entry>
744
745 <entry>10063</entry>
746
747 <entry>48.2</entry>
748 </row>
749
750 <row>
751 <entry><emphasis role="bold">128</emphasis></entry>
752
753 <entry>8426825</entry>
754
755 <entry>6902646</entry>
756
757 <entry>9977</entry>
758
759 <entry>81.9</entry>
760 </row>
761
762 <row>
763 <entry role="bold"><emphasis
764 role="bold">256</emphasis></entry>
765
766 <entry>4528957</entry>
767
768 <entry>4528912</entry>
769
770 <entry>9999</entry>
771
772 <entry>100</entry>
773 </row>
774 </tbody>
775 </tgroup>
776 </table></para>
777 </section>
778 </section>
779
780 <section id="usecase-six">
781 <title>Use Case - Forward traffic between two VMs</title>
782
783 <para>Benchmark a combo test using two virtual machines, the first
784 forwarding traffic to the second, which terminates it.</para>
785
786 <para>Measurements are made in:</para>
787
788 <itemizedlist>
789 <listitem>
790 <para>pktgen TX in pps and Mbits/s</para>
791 </listitem>
792
793 <listitem>
794 <para>testpmd TX and RX pps in VM1</para>
795 </listitem>
796
797 <listitem>
798 <para>testpmd RX pps in VM2</para>
799 </listitem>
800
801 <listitem>
802 <para>throughput as a percentage, by dividing <emphasis role="bold">
803 VM2 testpmd RX pps</emphasis> by <emphasis role="bold">pktgen TX
804 pps</emphasis></para>
805 </listitem>
806 </itemizedlist>
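The pps and Mbit/s figures reported by pktgen are related through the frame size plus the per-frame wire overhead. The sketch below is an illustrative cross-check (assuming 20 bytes of overhead per frame), converting a packet rate into an approximate wire rate:

```python
# Approximate wire rate in Mbit/s for a given packet rate and frame size,
# assuming 20 B of per-frame overhead (preamble + SFD + inter-frame gap).
def mbits_per_s(pps: int, frame_bytes: int) -> float:
    return pps * (frame_bytes + 20) * 8 / 1e6

print(round(mbits_per_s(14_880_952, 64)))  # ~10000, i.e. 10 Gbit/s line rate
```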
807
808 <section id="targetone-usecase-five">
809 <title>Test Setup for Target 1</title>
810
811 <para>Start by doing the following:</para>
812
813 <para>SSD boot using the following <literal>grub.cfg</literal>
814 entry: <programlisting>linux (hd0,gpt3)/boot/bzImage root=/dev/sda3 ip=dhcp nohz_full=1-7 \
815isolcpus=1-7 rcu-nocbs=1-7 rcu_nocb_poll intel_pstate=disable \
816clocksource=tsc tsc=reliable nohpet nosoftlockup intel_idle.max_cstate=0 \
817processor.max_cstate=0 mce=ignore_ce audit=0 nmi_watchdog=0 iommu=pt \
818intel_iommu=on hugepagesz=1GB hugepages=8 default_hugepagesz=1GB \
819hugepagesz=2M hugepages=2048 vfio_iommu_type1.allow_unsafe_interrupts=1</programlisting>Kill
820 Services:<programlisting>killall ovsdb-server ovs-vswitchd
821rm -rf /etc/openvswitch/*
822mkdir -p /var/run/openvswitch</programlisting>Configure DPDK:<programlisting>mkdir -p /mnt/huge
823mount -t hugetlbfs nodev /mnt/huge
824modprobe igb_uio
825dpdk-devbind --bind=igb_uio 0000:03:00.0</programlisting>Run
826 pktgen:<programlisting>cd /usr/share/apps/pktgen/
827./pktgen -c 0x7 -n 4 --proc-type auto --socket-mem 256 \
828-w 0000:03:00.0 -- -P -m "[1:2].0"</programlisting>Set pktgen frame size to
829 use from [64, 128, 256, 512]:<programlisting>set 0 size 64</programlisting></para>
830 </section>
831
832 <section id="targettwo-usecase-six">
833 <title>Test Setup for Target 2</title>
834
835 <para>Start by doing the following:</para>
836
837 <para>SSD boot using the following <literal>grub.cfg</literal>
838 entry: <programlisting>linux (hd0,gpt3)/boot/bzImage root=/dev/sda3 ip=dhcp nohz_full=1-7 \
839isolcpus=1-7 rcu-nocbs=1-7 rcu_nocb_poll intel_pstate=disable \
840clocksource=tsc tsc=reliable nohpet nosoftlockup intel_idle.max_cstate=0 \
841processor.max_cstate=0 mce=ignore_ce audit=0 nmi_watchdog=0 iommu=pt \
842intel_iommu=on hugepagesz=1GB hugepages=8 default_hugepagesz=1GB \
843hugepagesz=2M hugepages=2048 vfio_iommu_type1.allow_unsafe_interrupts=1</programlisting>Kill
844 Services:<programlisting>killall ovsdb-server ovs-vswitchd
845rm -rf /etc/openvswitch/*
846mkdir -p /var/run/openvswitch</programlisting>Configure DPDK:<programlisting>mkdir -p /mnt/huge
847mount -t hugetlbfs nodev /mnt/huge
848modprobe igb_uio
849dpdk-devbind --bind=igb_uio 0000:03:00.0</programlisting>Configure
850 OVS:<programlisting>export DB_SOCK=/var/run/openvswitch/db.sock
851ovsdb-tool create /etc/openvswitch/conf.db \
852/usr/share/openvswitch/vswitch.ovsschema
853ovsdb-server --remote=punix:$DB_SOCK \
854--remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach
855ovs-vsctl --no-wait init
856ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x10
857ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0xc
858ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem=2048
859ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
860ovs-vswitchd unix:$DB_SOCK --pidfile /
861--detach --log-file=/var/log/openvswitch/ovs-vswitchd.log
862
863
864ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
865ovs-vsctl add-port ovsbr0 dpdk0 /
866-- set Interface dpdk0 type=dpdk options:dpdk-devargs=0000:03:00.0 ofport_request=1
867ovs-vsctl add-port ovsbr0 vhost-user1 /
868-- set Interface vhost-user1 type=dpdkvhostuser ofport_request=2
869ovs-vsctl add-port ovsbr0 vhost-user2 /
870-- set Interface vhost-user2 type=dpdkvhostuser ofport_request=3
871
872
873ovs-ofctl del-flows ovsbr0
874ovs-ofctl add-flow ovsbr0 in_port=1,action=output:2
875ovs-ofctl add-flow ovsbr0 in_port=2,action=output:3</programlisting>Launch
          the first QEMU instance, VM1:<programlisting>taskset -c 0,1 qemu-system-x86_64 -cpu host,+invtsc,migratable=no -M q35 \
-smp cores=2,sockets=1 -vcpu 0,affinity=0 -vcpu 1,affinity=1 -enable-kvm \
-nographic -realtime mlock=on -kernel /home/root/qemu/bzImage \
-drive file=/home/root/qemu/enea-image-virtualization-guest-qemux86-64.ext4,\
if=virtio,format=raw -m 2048 -object memory-backend-file,id=mem,\
size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem \
-mem-prealloc -chardev socket,id=char0,path=/var/run/openvswitch/vhost-user1 \
-netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
-device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,\
mrg_rxbuf=on,rx_queue_size=1024,csum=off,gso=off,guest_tso4=off,\
guest_tso6=off,guest_ecn=off -append 'root=/dev/vda console=ttyS0 \
hugepagesz=2M hugepages=512 isolcpus=1 nohz_full=1 rcu_nocbs=1 \
irqaffinity=0 rcu_nocb_poll intel_pstate=disable intel_idle.max_cstate=0 \
processor.max_cstate=0 mce=ignore_ce audit=0'</programlisting>Connect to
          Target 2 through a new SSH session and run a second QEMU instance
          (so that it gets its own console, separate from VM1). We shall call
          this VM2:<programlisting>taskset -c 4,5 qemu-system-x86_64 -cpu host,+invtsc,migratable=no \
-M q35 -smp cores=2,sockets=1 -vcpu 0,affinity=4 -vcpu 1,affinity=5 \
-enable-kvm -nographic -realtime mlock=on -kernel /home/root/qemu2/bzImage \
-drive file=/home/root/qemu2/enea-image-virtualization-guest-qemux86-64.ext4,\
if=virtio,format=raw -m 2048 -object memory-backend-file,id=mem,size=2048M,\
mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
-chardev socket,id=char1,path=/var/run/openvswitch/vhost-user2 \
-netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce \
-device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet1,\
mrg_rxbuf=on,rx_queue_size=1024,csum=off,gso=off,guest_tso4=off,\
guest_tso6=off,guest_ecn=off -append 'root=/dev/vda console=ttyS0 \
hugepagesz=2M hugepages=512 isolcpus=1 nohz_full=1 rcu_nocbs=1 \
irqaffinity=0 rcu_nocb_poll intel_pstate=disable intel_idle.max_cstate=0 \
processor.max_cstate=0 mce=ignore_ce audit=0'</programlisting>Configure DPDK
          inside VM1:<programlisting>mkdir -p /mnt/huge
mount -t hugetlbfs nodev /mnt/huge
modprobe igb_uio
dpdk-devbind --bind=igb_uio 0000:00:02.0</programlisting>Run testpmd inside
          VM1:<programlisting>testpmd -c 0x3 -n 2 -d librte_pmd_virtio.so.1.1 \
-- --burst 64 --disable-hw-vlan --disable-rss -i \
--portmask=0x1 --coremask=0x2 --nb-cores=1 --rxq=1 \
--txq=1 --txd=512 --rxd=512 --txqflags=0xf00 --port-topology=chained</programlisting>Start
          testpmd inside VM1:<programlisting>start</programlisting>Configure
          DPDK inside VM2:<programlisting>mkdir -p /mnt/huge
mount -t hugetlbfs nodev /mnt/huge
modprobe igb_uio
dpdk-devbind --bind=igb_uio 0000:00:02.0</programlisting>Run testpmd inside
          VM2:<programlisting>testpmd -c 0x3 -n 2 -d librte_pmd_virtio.so.1.1 \
-- --burst 64 --disable-hw-vlan --disable-rss -i --portmask=0x1 \
--coremask=0x2 --nb-cores=1 --rxq=1 --txq=1 --txd=512 \
--rxd=512 --txqflags=0xf00 --port-topology=chained</programlisting>Set VM2 to
          terminate traffic and start testpmd:<programlisting>set fwd rxonly
start</programlisting>On Target 1, start the pktgen traffic:<programlisting>start 0</programlisting>Use
          this command to refresh the testpmd display in VM1 and VM2, and note
          the highest values seen:<programlisting>show port stats 0</programlisting>To
          stop traffic from pktgen, in order to choose a different frame
          size:<programlisting>stop 0</programlisting>To clear the counters in
          testpmd:<programlisting>clear port stats
show port stats 0</programlisting>For VM1, we record the stats relevant for
          <emphasis role="bold">forwarding</emphasis>:</para>

          <itemizedlist>
            <listitem>
              <para>RX, TX in pps</para>
            </listitem>
          </itemizedlist>

          <para>Only the Rx-pps and Tx-pps numbers are important here; they
          change every time the stats are displayed, as long as there is
          traffic. Run the command a few times and pick the best (maximum)
          values seen.</para>
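The difference between VM1's best RX and TX rates is the share of packets the guest receives but its single forwarding core cannot send on. A small sketch for quantifying it from the recorded values (the `drop_pct` helper is ours, not a testpmd command):

```shell
# Percentage of received packets that the forwarding VM fails to
# retransmit (hypothetical helper; feed it the recorded pps values).
drop_pct() {  # args: rx_pps tx_pps
  awk -v rx="$1" -v tx="$2" 'BEGIN { printf "%.1f\n", (rx - tx) * 100 / rx }'
}

drop_pct 6826540 5389680   # 64-byte case: about 21% of packets dropped
```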

          <para>For VM2, we record the stats relevant for <emphasis
          role="bold">termination</emphasis>:</para>

          <itemizedlist>
            <listitem>
              <para>RX in pps (TX will be 0)</para>
            </listitem>
          </itemizedlist>

          <para>For pktgen, we record only the TX side, because the flow is
          terminated in VM2, with no RX traffic reaching pktgen:</para>

          <itemizedlist>
            <listitem>
              <para>TX in pps and Mbit/s</para>
            </listitem>
          </itemizedlist>
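The two pktgen TX figures are linked: the bit rate counts 20 bytes of preamble and inter-frame gap per frame on top of the frame itself. A sketch of the conversion (the `pps_to_mbps` helper is ours, not part of pktgen):

```shell
# Convert a packet rate at a given frame size into the on-wire bit
# rate in Mbit/s (20B = preamble + SFD + inter-frame gap per frame).
pps_to_mbps() {  # args: pps frame_size
  awk -v pps="$1" -v size="$2" \
    'BEGIN { printf "%.0f\n", pps * (size + 20) * 8 / 1e6 }'
}

pps_to_mbps 14845113 64   # ~9976, within rounding of pktgen's 9975
```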

          <table>
            <title>Results in forwarding mode</title>

            <tgroup cols="7">
              <tbody>
                <row>
                  <entry align="center"><emphasis
                  role="bold">Bytes</emphasis></entry>

                  <entry align="center"><emphasis role="bold">pktgen pps
                  TX</emphasis></entry>

                  <entry align="center"><emphasis role="bold">VM1 testpmd pps
                  RX</emphasis></entry>

                  <entry align="center"><emphasis role="bold">VM1 testpmd pps
                  TX</emphasis></entry>

                  <entry align="center"><emphasis role="bold">VM2 testpmd pps
                  RX</emphasis></entry>

                  <entry align="center"><emphasis role="bold">pktgen Mbit/s
                  TX</emphasis></entry>

                  <entry align="center"><emphasis role="bold">throughput
                  (%)</emphasis></entry>
                </row>

                <row>
                  <entry><emphasis role="bold">64</emphasis></entry>

                  <entry>14845113</entry>

                  <entry>6826540</entry>

                  <entry>5389680</entry>

                  <entry>5383577</entry>

                  <entry>9975</entry>

                  <entry>36.2</entry>
                </row>

                <row>
                  <entry><emphasis role="bold">128</emphasis></entry>

                  <entry>8426683</entry>

                  <entry>6825857</entry>

                  <entry>5386971</entry>

                  <entry>5384530</entry>

                  <entry>9976</entry>

                  <entry>63.9</entry>
                </row>

                <row>
                  <entry><emphasis role="bold">256</emphasis></entry>

                  <entry>4528894</entry>

                  <entry>4507975</entry>

                  <entry>4507958</entry>

                  <entry>4532457</entry>

                  <entry>9999</entry>

                  <entry>100</entry>
                </row>
              </tbody>
            </tgroup>
          </table>
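The throughput column is, to within rounding and counter-sampling differences, the forwarded packet rate expressed as a percentage of the 10GbE line rate for that frame size. A sketch of the calculation (the `throughput_pct` helper is ours, assuming the usual 20-byte per-frame wire overhead):

```shell
# Measured pps as a percentage of the 10GbE theoretical maximum for
# the given frame size (hypothetical helper).
throughput_pct() {  # args: measured_pps frame_size
  awk -v pps="$1" -v size="$2" \
    'BEGIN { max = 1e10 / ((size + 20) * 8); printf "%.1f\n", pps * 100 / max }'
}

throughput_pct 5389680 64   # matches the 36.2 in the 64-byte row
```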
        </section>
      </section>
    </section>
  </section>
</chapter>