Diffstat (limited to 'doc/book-enea-nfv-access-guide/doc/benchmarks.xml')
-rw-r--r--  doc/book-enea-nfv-access-guide/doc/benchmarks.xml  1474
1 file changed, 0 insertions(+), 1474 deletions(-)
diff --git a/doc/book-enea-nfv-access-guide/doc/benchmarks.xml b/doc/book-enea-nfv-access-guide/doc/benchmarks.xml
deleted file mode 100644
index 34748b8..0000000
--- a/doc/book-enea-nfv-access-guide/doc/benchmarks.xml
+++ /dev/null
@@ -1,1474 +0,0 @@
1<?xml version="1.0" encoding="UTF-8"?>
2<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
3"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
4<chapter id="benchmarks">
5 <title>Benchmarks</title>
6
7 <section id="hw-setup">
8 <title>Hardware Setup</title>
9
10    <para>The following table describes the prerequisites for a suitable
11    hardware setup:</para>
12
13 <table>
14 <title>Hardware Setup</title>
15
16 <tgroup cols="2">
17 <colspec align="left" />
18
19 <thead>
20 <row>
21 <entry align="center">Item</entry>
22
23 <entry align="center">Description</entry>
24 </row>
25 </thead>
26
27 <tbody>
28 <row>
29 <entry align="left">Server Platform</entry>
30
31 <entry align="left">Cavium CN8304</entry>
32 </row>
33
34 <row>
35 <entry align="left">ARCH</entry>
36
37 <entry>aarch64</entry>
38 </row>
39
40 <row>
41 <entry align="left">Processor</entry>
42
43 <entry>Cavium OcteonTX CN83XX</entry>
44 </row>
45
46 <row>
47 <entry align="left">CPU freq</entry>
48
49 <entry>1.8 GHz</entry>
50 </row>
51
52 <row>
53 <entry align="left">RAM</entry>
54
55            <entry>16 GB</entry>
56 </row>
57
58 <row>
59 <entry align="left">Network</entry>
60
61 <entry>2x10G ports</entry>
62 </row>
63
64 <row>
65 <entry align="left">Storage</entry>
66
67 <entry>Seagate 500GB HDD</entry>
68 </row>
69 </tbody>
70 </tgroup>
71 </table>
72
73 <para>Generic tests configuration:</para>
74
75 <itemizedlist>
76 <listitem>
77        <para>All tests use one port, one core and one RX/TX queue for fast
78 path traffic.</para>
79 </listitem>
80 </itemizedlist>
81 </section>
82
83 <section id="use-cases">
84 <title>Use Cases</title>
85
86 <section id="docker-benchmarks">
87 <title>Docker related benchmarks</title>
88
89 <section>
90 <title>Forward traffic in Docker</title>
91
92 <para>Benchmarking traffic forwarding using the testpmd application in
93 a Docker container.</para>
94
95      <para>Pktgen is used to generate UDP traffic that will reach testpmd,
96      running in a Docker container. The traffic is then forwarded back to its
97      source (<emphasis role="bold">Forwarding</emphasis>).</para>
98
99 <para>This test measures:</para>
100
101 <itemizedlist>
102 <listitem>
103          <para>pktgen TX, RX in packets per second (pps) and MBits/s</para>
104 </listitem>
105
106 <listitem>
107 <para>testpmd TX, RX in packets per second (pps)</para>
108 </listitem>
109
110 <listitem>
111          <para>divide testpmd RX by pktgen TX (in pps) to obtain the
112          throughput as a percentage (%); see the worked example after this list</para>
113 </listitem>
114 </itemizedlist>
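      <para>As a worked example, using the 64-byte row of the forwarding
      results below, the throughput is simply the ratio of the two pps
      counters. A quick computation, assuming any POSIX awk is
      available:<programlisting># throughput (%) = testpmd RX pps / pktgen TX pps * 100
awk 'BEGIN { printf "%.2f%%\n", 1976488 / 14682363 * 100 }'
# prints 13.46%</programlisting></para>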
115
116 <section id="usecase-one">
117 <title>Test Setup for Target 1</title>
118
119 <para>Start by following the steps below:</para>
120
121 <para>Boot the NFV Access Linux using the following kernel
122 parameters in U-Boot: <programlisting>&gt; setenv bootargs 'nohz_full=1-23 isolcpus=1-23 \
123rcu_nocbs=1-23 rcu_nocb_poll clocksource=tsc tsc=reliable nohpet \
124nosoftlockup audit=0'</programlisting></para>
125
126 <para>Configure hugepages and set up the DPDK:<programlisting>echo 4 &gt; /proc/sys/vm/nr_hugepages
127modprobe vfio-pci
128ifconfig enP1p1s0f1 down
129dpdk-devbind -b vfio-pci 0001:01:00.1</programlisting>Run
130 pktgen:<programlisting>cd /usr/share/apps/pktgen/
131./pktgen -v -c 0x7 -n 4 --proc-type auto -d /usr/lib/librte_pmd_thunderx_nicvf.so.1.1 \
132-w 0001:01:00.1 -- -P -m "[1:2].0"</programlisting>In the pktgen console
133        run:<programlisting>str</programlisting>Choose one of the values
134        from [64, 128, 256, 512, 1024] to change the packet size:<programlisting>set 0 size &lt;number&gt;</programlisting></para>
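        <para>To confirm that the hugepage and binding steps took effect, a
        minimal check, assuming the standard dpdk-devbind script and procfs
        interfaces:<programlisting># hugepages reserved by the kernel
grep -i huge /proc/meminfo
# the port should be listed under "Network devices using DPDK-compatible driver"
dpdk-devbind --status</programlisting></para>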
135 </section>
136
137 <section id="usecase-two">
138 <title>Test Setup for Target 2</title>
139
140 <para>Start by following the steps below:</para>
141
142 <para>Boot the NFV Access Linux using the following kernel
143 parameters in U-Boot:</para>
144
145 <programlisting>&gt; setenv bootargs 'nohz_full=1-23 isolcpus=1-23 \
146rcu_nocbs=1-23 rcu_nocb_poll clocksource=tsc tsc=reliable nohpet \
147nosoftlockup audit=0'</programlisting>
148
149        <para>It is expected that an NFV Access guest image is present on the
150        target.</para>
151
152        <para>Set up the DPDK and configure the OVS bridge:<programlisting># Clean up old OVS configuration
153killall ovsdb-server ovs-vswitchd
154rm -rf /etc/openvswitch/*
155rm -rf /var/run/openvswitch/*
156rm -rf /var/log/openvswitch/*
157mkdir -p /var/run/openvswitch
158
159# Configure hugepages and bind interfaces to dpdk
160echo 20 &gt; /proc/sys/vm/nr_hugepages
161modprobe vfio-pci
162ifconfig enP1p1s0f1 down
163dpdk-devbind -b vfio-pci 0001:01:00.1
164
165# configure openvswitch with DPDK
166export DB_SOCK=/var/run/openvswitch/db.sock
167ovsdb-tool create /etc/openvswitch/conf.db /usr/share/openvswitch/vswitch.ovsschema
168ovsdb-server --remote=punix:$DB_SOCK \
169 --remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach
170ovs-vsctl --no-wait init
171ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x10
172ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0xc
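# CPU bitmasks above: 0x10 pins the OVS lcore thread to core 4, 0xc pins the PMD threads to cores 2-3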
173ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem=2048
174ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
175ovs-vswitchd unix:$DB_SOCK --pidfile --detach \
176 --log-file=/var/log/openvswitch/ovs-vswitchd.log
177
178ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
179ovs-vsctl add-port ovsbr0 vhost-user1 -- set Interface vhost-user1 \
180 type=dpdkvhostuser -- set Interface vhost-user1 ofport_request=2
181ovs-vsctl add-port ovsbr0 dpdk0 -- set Interface dpdk0 type=dpdk \
182 options:dpdk-devargs=0001:01:00.1 -- set Interface dpdk0 ofport_request=1
183
184# configure static flows
185ovs-ofctl del-flows ovsbr0
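# The two flows below connect dpdk0 (port 1) and vhost-user1 (port 2, the testpmd container) in both directions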
186ovs-ofctl add-flow ovsbr0 in_port=1,action=output:2
187ovs-ofctl add-flow ovsbr0 in_port=2,action=output:1</programlisting>Import a
188 Docker container:<programlisting>docker import enea-nfv-access-guest-qemuarm64.tar.gz nfv_container</programlisting>Start
189 the Docker container:<programlisting>docker run -v /var/run/openvswitch/:/var/run/openvswitch/ \
190   -v /dev/hugepages/:/dev/hugepages/ -it nfv_container /bin/bash</programlisting>Start
191 the testpmd application in Docker:<programlisting>testpmd -c 0x30 -n 2 --file-prefix prog1 --socket-mem 512 \
192 --no-pci --vdev=virtio_user0,path=/var/run/openvswitch/vhost-user1 \
193 -d /usr/lib/librte_pmd_virtio.so.1.1 -- --burst 64 --disable-hw-vlan \
194 --disable-rss -i --portmask=0x1 --coremask=0x20 --nb-cores=1 --rxq=1 \
195 --txq=1 --txd=512 --rxd=512 --txqflags=0xf00 --port-topology=chained</programlisting>To
196 start traffic <emphasis role="bold">forwarding</emphasis>, run the
197 following command in testpmd CLI:<programlisting>start</programlisting>To
198        receive traffic in <emphasis role="bold">termination</emphasis>
199        mode instead (no traffic is sent back on TX), run the following commands in testpmd
200 CLI:<programlisting>set fwd rxonly
201start</programlisting><table>
202 <title>Results in forwarding mode</title>
203
204 <tgroup cols="8">
205 <tbody>
206 <row>
207 <entry align="center"><emphasis
208 role="bold">Bytes</emphasis></entry>
209
210 <entry align="center"><emphasis role="bold">pktgen pps
211 TX</emphasis></entry>
212
213 <entry align="center"><emphasis role="bold">pktgen MBits/s
214 TX</emphasis></entry>
215
216 <entry align="center"><emphasis role="bold">pktgen pps
217 RX</emphasis></entry>
218
219 <entry align="center"><emphasis role="bold">pktgen MBits/s
220 RX</emphasis></entry>
221
222 <entry align="center"><emphasis role="bold">testpmd pps
223 RX</emphasis></entry>
224
225 <entry align="center"><emphasis role="bold">testpmd pps
226 TX</emphasis></entry>
227
228 <entry align="center"><emphasis role="bold">throughput
229 (%)</emphasis></entry>
230 </row>
231
232 <row>
233 <entry role="bold"><emphasis
234 role="bold">64</emphasis></entry>
235
236 <entry>14682363</entry>
237
238 <entry>9867</entry>
239
240 <entry>1666666</entry>
241
242 <entry>1119</entry>
243
244 <entry>1976488</entry>
245
246 <entry>1666694</entry>
247
248 <entry>13.46%</entry>
249 </row>
250
251 <row>
252 <entry><emphasis role="bold">128</emphasis></entry>
253
254 <entry>8445993</entry>
255
256 <entry>10000</entry>
257
258 <entry>1600567</entry>
259
260 <entry>1894</entry>
261
262 <entry>1886851</entry>
263
264 <entry>1600573</entry>
265
266 <entry>22.34%</entry>
267 </row>
268
269 <row>
270 <entry role="bold"><emphasis
271 role="bold">256</emphasis></entry>
272
273 <entry>4529011</entry>
274
275 <entry>10000</entry>
276
277 <entry>1491449</entry>
278
279 <entry>3292</entry>
280
281 <entry>1715763</entry>
282
283 <entry>1491445</entry>
284
285 <entry>37.88%</entry>
286 </row>
287
288 <row>
289 <entry><emphasis role="bold">512</emphasis></entry>
290
291 <entry>2349638</entry>
292
293 <entry>10000</entry>
294
295 <entry>1422338</entry>
296
297 <entry>6052</entry>
298
299 <entry>1555351</entry>
300
301 <entry>1422330</entry>
302
303 <entry>66.20%</entry>
304 </row>
305
306 <row>
307 <entry><emphasis role="bold">1024</emphasis></entry>
308
309 <entry>1197323</entry>
310
311 <entry>10000</entry>
312
313 <entry>1197325</entry>
314
315 <entry>9999</entry>
316
317 <entry>1197320</entry>
318
319 <entry>1197320</entry>
320
321 <entry>100.00%</entry>
322 </row>
323 </tbody>
324 </tgroup>
325 </table><table>
326 <title>Results in termination mode</title>
327
328 <tgroup cols="4">
329 <tbody>
330 <row>
331 <entry align="center"><emphasis
332 role="bold">Bytes</emphasis></entry>
333
334 <entry align="center"><emphasis role="bold">pktgen pps
335 TX</emphasis></entry>
336
337 <entry align="center"><emphasis role="bold">testpmd pps
338 RX</emphasis></entry>
339
340 <entry align="center"><emphasis role="bold">throughput
341 (%)</emphasis></entry>
342 </row>
343
344 <row>
345 <entry role="bold"><emphasis
346 role="bold">64</emphasis></entry>
347
348 <entry>14676922</entry>
349
350 <entry>1984693</entry>
351
352 <entry>13.52%</entry>
353 </row>
354
355 <row>
356 <entry><emphasis role="bold">128</emphasis></entry>
357
358 <entry>8445991</entry>
359
360 <entry>1895099</entry>
361
362 <entry>22.44%</entry>
363 </row>
364
365 <row>
366 <entry role="bold"><emphasis
367 role="bold">256</emphasis></entry>
368
369 <entry>4528379</entry>
370
371 <entry>1722004</entry>
372
373 <entry>38.03%</entry>
374 </row>
375
376 <row>
377 <entry><emphasis role="bold">512</emphasis></entry>
378
379 <entry>2349639</entry>
380
381 <entry>1560988</entry>
382
383 <entry>66.44%</entry>
384 </row>
385
386 <row>
387 <entry><emphasis role="bold">1024</emphasis></entry>
388
389 <entry>1197325</entry>
390
391 <entry>1197325</entry>
392
393 <entry>100.00%</entry>
394 </row>
395 </tbody>
396 </tgroup>
397 </table></para>
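        <para>Note that the pktgen MBits/s figures are wire rates, which
        include 20 bytes of preamble and inter-frame gap per frame. A quick
        cross-check against the 64-byte row, assuming any POSIX
        awk:<programlisting># Mbit/s = pps * (frame size + 20) * 8 / 10^6
awk 'BEGIN { printf "%.0f Mbit/s\n", 14682363 * (64 + 20) * 8 / 1e6 }'
# prints 9867 Mbit/s, matching the table</programlisting></para>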
398 </section>
399 </section>
400
401 <section id="usecase-three-four">
402 <title>Forward traffic from Docker to another Docker on the same
403 host</title>
404
405      <para>Benchmark a combined test using testpmd running in two Docker
406      instances: one forwards traffic to the second, which
407      terminates it.</para>
408
409      <para>Packets are generated with pktgen and transmitted to the first
410      testpmd instance, which forwards them to the second testpmd
411      instance, where they are terminated.</para>
412
413 <para>This test measures:</para>
414
415 <itemizedlist>
416 <listitem>
417          <para>pktgen TX, RX in packets per second (pps) and MBits/s</para>
418 </listitem>
419
420 <listitem>
421 <para>testpmd TX, RX in packets per second in the first Docker
422 container</para>
423 </listitem>
424
425 <listitem>
426 <para>testpmd TX, RX in packets per second in the second Docker
427 container</para>
428 </listitem>
429
430 <listitem>
431          <para>divide the testpmd RX pps of the second Docker container by
432          the pktgen TX pps to obtain the throughput as a percentage (%)</para>
433 </listitem>
434 </itemizedlist>
435
436 <section id="target-one-usecase-three">
437 <title>Test Setup for Target 1</title>
438
439 <para>Start by following the steps below:</para>
440
441 <para>Boot the NFV Access Linux using the following kernel
442 parameters in U-Boot:</para>
443
444 <programlisting>setenv bootargs 'nohz_full=1-23 isolcpus=1-23 \
445rcu_nocbs=1-23 rcu_nocb_poll clocksource=tsc tsc=reliable nohpet \
446nosoftlockup audit=0'</programlisting>
447
448 <para>Configure hugepages and set up the DPDK:<programlisting>echo 4 &gt; /proc/sys/vm/nr_hugepages
449modprobe vfio-pci
450ifconfig enP1p1s0f1 down
451dpdk-devbind -b vfio-pci 0001:01:00.1</programlisting>Run
452 pktgen:<programlisting>cd /usr/share/apps/pktgen/
453./pktgen -v -c 0x7 -n 4 --proc-type auto -d /usr/lib/librte_pmd_thunderx_nicvf.so.1.1 \
454-w 0001:01:00.1 -- -P -m "[1:2].0"</programlisting>Choose one of the values
455        from [64, 128, 256, 512, 1024, 1500] to change the packet size:<programlisting>set 0 size &lt;number&gt;</programlisting></para>
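        <para>The pktgen TX rates reported in the results table below can be
        sanity-checked against the theoretical 10 Gbit/s line rate; for
        64-byte frames the on-wire size is 84 bytes. A quick computation,
        assuming any POSIX awk:<programlisting># maximum pps at 10 Gbit/s for 64-byte frames
awk 'BEGIN { printf "%.0f pps\n", 10e9 / ((64 + 20) * 8) }'
# prints 14880952 pps; the observed ~14.68 Mpps is close to line rate</programlisting></para>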
456 </section>
457
458 <section id="target-two-usecase-four">
459 <title>Test Setup for Target 2</title>
460
461 <para>Start by following the steps below:</para>
462
463 <para>Boot the NFV Access Linux using the following kernel
464 parameters in U-Boot:</para>
465
466 <programlisting>setenv bootargs 'nohz_full=1-23 isolcpus=1-23 \
467rcu-nocbs=1-23 rcu_nocb_poll clocksource=tsc tsc=reliable nohpet \
468nosoftlockup audit=0'</programlisting>
469
470        <para>Set up the DPDK and configure the OVS bridge:<programlisting># Clean up old OVS configuration
471killall ovsdb-server ovs-vswitchd
472rm -rf /etc/openvswitch/*
473rm -rf /var/run/openvswitch/*
474rm -rf /var/log/openvswitch/*
475mkdir -p /var/run/openvswitch
476
477# Configure hugepages and bind interfaces to dpdk
478echo 20 &gt; /proc/sys/vm/nr_hugepages
479modprobe vfio-pci
480ifconfig enP1p1s0f1 down
481dpdk-devbind -b vfio-pci 0001:01:00.1
482
483# configure openvswitch with DPDK
484export DB_SOCK=/var/run/openvswitch/db.sock
485ovsdb-tool create /etc/openvswitch/conf.db /usr/share/openvswitch/vswitch.ovsschema
486ovsdb-server --remote=punix:$DB_SOCK \
487 --remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach
488ovs-vsctl --no-wait init
489ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x10
490ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0xc
491ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem=2048
492ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
493ovs-vswitchd unix:$DB_SOCK --pidfile --detach \
494 --log-file=/var/log/openvswitch/ovs-vswitchd.log
495
496ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
497ovs-vsctl add-port ovsbr0 vhost-user1 -- set Interface vhost-user1 \
498 type=dpdkvhostuser -- set Interface vhost-user1 ofport_request=1
499ovs-vsctl add-port ovsbr0 vhost-user2 -- set Interface vhost-user2 \
500 type=dpdkvhostuser -- set Interface vhost-user2 ofport_request=2
501ovs-vsctl add-port ovsbr0 dpdk0 -- set Interface dpdk0 type=dpdk \
502 options:dpdk-devargs=0001:01:00.1 -- set Interface dpdk0 ofport_request=3
503
504# configure static flows
505ovs-ofctl del-flows ovsbr0
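# Flows below: dpdk0 (port 3) to vhost-user2 (port 2, the forwarding container), then to vhost-user1 (port 1, the terminating container)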
506ovs-ofctl add-flow ovsbr0 in_port=3,action=output:2
507ovs-ofctl add-flow ovsbr0 in_port=2,action=output:1</programlisting>Import a
508 Docker container:<programlisting>docker import enea-nfv-access-guest-qemuarm64.tar.gz nfv_container</programlisting>Start
509 the first Docker container:<programlisting>docker run -v /var/run/openvswitch/:/var/run/openvswitch/ \
510   -v /dev/hugepages/:/dev/hugepages/ -it nfv_container /bin/bash</programlisting>Start
511 testpmd in the first Docker container:<programlisting>testpmd -c 0x300 -n 4 --file-prefix prog2 --socket-mem 512 \
512 --no-pci --vdev=virtio_user0,path=/var/run/openvswitch/vhost-user1 \
513 -d /usr/lib/librte_pmd_virtio.so.1.1 -- --burst 64 --disable-hw-vlan \
514 --disable-rss -i --portmask=0x1 --coremask=0x200 --nb-cores=1 --rxq=1 \
515 --txq=1 --txd=512 --rxd=512 --txqflags=0xf00 --port-topology=chained</programlisting>Configure
516 it in termination mode:<programlisting>set fwd rxonly</programlisting>Run
517 the testpmd application:<programlisting>start</programlisting>Open a
518 new console to the host and start the second Docker
519 instance:<programlisting>docker run -v /var/run/openvswitch/:/var/run/openvswitch/ \
520   -v /dev/hugepages/:/dev/hugepages/ -it nfv_container /bin/bash</programlisting>Start
524        testpmd in the second Docker container:<programlisting>testpmd -c 0x30 -n 4 --file-prefix prog1 --socket-mem 512 \
525 --no-pci --vdev=virtio_user0,path=/var/run/openvswitch/vhost-user2 \
526 -d /usr/lib/librte_pmd_virtio.so.1.1 -- --burst 64 --disable-hw-vlan \
527 --disable-rss -i --portmask=0x1 --coremask=0x20 --nb-cores=1 --rxq=1 \
528 --txq=1 --txd=512 --rxd=512 --txqflags=0xf00 --port-topology=chained</programlisting>In
529 the testpmd shell, run:<programlisting>start</programlisting>Start
530 pktgen traffic by running the following command in pktgen
531 CLI:<programlisting>start 0</programlisting>To record traffic
532 results, run:<programlisting>show port stats 0</programlisting></para>
533
534 <table>
535 <title>Results</title>
536
537 <tgroup cols="6">
538 <tbody>
539 <row>
540 <entry align="center"><emphasis
541 role="bold">Bytes</emphasis></entry>
542
543 <entry align="center"><emphasis role="bold">Target 1 -
544 pktgen pps TX</emphasis></entry>
545
546 <entry align="center"><emphasis role="bold">Target 2 -
547 (forwarding) testpmd pps RX</emphasis></entry>
548
549 <entry align="center"><emphasis role="bold">Target 2 -
550 (forwarding) testpmd pps TX</emphasis></entry>
551
552 <entry align="center"><emphasis role="bold">Target 2 -
553 (termination) testpmd pps RX</emphasis></entry>
554
555 <entry align="center"><emphasis role="bold">Throughput
556 (%)</emphasis></entry>
557 </row>
558
559 <row>
560 <entry role="bold"><emphasis
561 role="bold">64</emphasis></entry>
562
563 <entry>14683140</entry>
564
565 <entry>1979807</entry>
566
567 <entry>1366712</entry>
568
569 <entry>1366690</entry>
570
571 <entry>9.31%</entry>
572 </row>
573
574 <row>
575 <entry><emphasis role="bold">128</emphasis></entry>
576
577 <entry>8446005</entry>
578
579 <entry>1893514</entry>
580
581 <entry>1286628</entry>
582
583 <entry>1286621</entry>
584
585 <entry>15.23%</entry>
586 </row>
587
588 <row>
589 <entry role="bold"><emphasis
590 role="bold">256</emphasis></entry>
591
592 <entry>4529011</entry>
593
594 <entry>1716427</entry>
595
596 <entry>1140234</entry>
597
598 <entry>1140232</entry>
599
600 <entry>25.18%</entry>
601 </row>
602
603 <row>
604 <entry><emphasis role="bold">512</emphasis></entry>
605
606 <entry>2349638</entry>
607
608 <entry>1556898</entry>
609
610 <entry>1016661</entry>
611
612 <entry>1016659</entry>
613
614 <entry>43.27%</entry>
615 </row>
616
617 <row>
618 <entry><emphasis role="bold">1024</emphasis></entry>
619
620 <entry>1197326</entry>
621
622 <entry>1197319</entry>
623
624 <entry>869654</entry>
625
626 <entry>869652</entry>
627
628 <entry>72.63%</entry>
629 </row>
630
631 <row>
632 <entry><emphasis role="bold">1500</emphasis></entry>
633
634 <entry>822373</entry>
635
636 <entry>822369</entry>
637
638 <entry>760826</entry>
639
640 <entry>760821</entry>
641
642 <entry>92.52%</entry>
643 </row>
644 </tbody>
645 </tgroup>
646 </table>
647 </section>
648 </section>
649 </section>
650
651 <section id="vm-benchmarks">
652 <title>VM related benchmarks</title>
653
654 <section id="usecase-four">
655 <title>Forward/termination traffic in one VM</title>
656
657 <para>Benchmarking traffic (UDP) forwarding and termination using
658 testpmd in a virtual machine.</para>
659
660 <para>The pktgen application is used to generate traffic that will
661 reach testpmd running in a virtual machine, from where it will be
662      forwarded back to the source. Within the same setup, a second measurement
663 will be done with traffic termination in the virtual machine.</para>
664
665 <para>This test case measures:</para>
666
667 <itemizedlist>
668 <listitem>
669          <para>pktgen TX, RX in packets per second (pps) and MBits/s</para>
670 </listitem>
671
672 <listitem>
673 <para>testpmd TX, RX in packets per second (pps)</para>
674 </listitem>
675
676 <listitem>
677 <para>divide <emphasis role="bold">testpmd RX</emphasis> by
678 <emphasis role="bold">pktgen TX</emphasis> in pps to obtain the
679          throughput as a percentage (%)</para>
680 </listitem>
681 </itemizedlist>
682
683 <section id="targetone-usecasefour">
684 <title>Test Setup for Target 1</title>
685
686 <para>Start with the steps below:</para>
687
688 <para>Boot the NFV Access Linux using the following kernel
689 parameters in U-Boot: <programlisting>setenv bootargs 'nohz_full=1-23 isolcpus=1-23 \
690rcu_nocbs=1-23 rcu_nocb_poll clocksource=tsc tsc=reliable nohpet \
691nosoftlockup audit=0'</programlisting></para>
692
693 <para>Configure hugepages and set up the DPDK:<programlisting>echo 4 &gt; /proc/sys/vm/nr_hugepages
694modprobe vfio-pci
695ifconfig enP1p1s0f1 down
696dpdk-devbind -b vfio-pci 0001:01:00.1</programlisting>Run
697 pktgen:<programlisting>cd /usr/share/apps/pktgen/
698./pktgen -v -c 0x7 -n 4 --proc-type auto -d /usr/lib/librte_pmd_thunderx_nicvf.so.1.1 \
699-w 0001:01:00.1 -- -P -m "[1:2].0"</programlisting>Choose one of the values
700        from [64, 128, 256, 512, 1024] to change the packet size:<programlisting>set 0 size &lt;number&gt;</programlisting></para>
701 </section>
702
703 <section id="targettwo-usecasefive">
704 <title>Test Setup for Target 2</title>
705
706 <para>Start by following the steps below:</para>
707
708 <para>Boot the NFV Access Linux using the following kernel
709 parameters in U-Boot: <programlisting>setenv bootargs 'nohz_full=1-23 isolcpus=1-23 \
710rcu_nocbs=1-23 rcu_nocb_poll clocksource=tsc tsc=reliable nohpet \
711nosoftlockup audit=0'</programlisting>Stop any running OVS services and clean up the old state:
712 <programlisting>killall ovsdb-server ovs-vswitchd
713rm -rf /etc/openvswitch/*
714rm -rf /var/run/openvswitch/*
715mkdir -p /var/run/openvswitch</programlisting>Configure hugepages and set up the
716 DPDK:<programlisting>echo 20 &gt; /proc/sys/vm/nr_hugepages
717modprobe vfio-pci
718dpdk-devbind --bind=vfio-pci 0001:01:00.1</programlisting>Configure
719 OVS:<programlisting>export DB_SOCK=/var/run/openvswitch/db.sock
720ovsdb-tool create /etc/openvswitch/conf.db /usr/share/openvswitch/vswitch.ovsschema
721ovsdb-server --remote=punix:$DB_SOCK \
722 --remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach
723ovs-vsctl --no-wait init
724ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x10
725ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0xc
726ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem=2048
727ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
728ovs-vswitchd unix:$DB_SOCK --pidfile --detach \
729 --log-file=/var/log/openvswitch/ovs-vswitchd.log
730
731ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 \
732 datapath_type=netdev
733ovs-vsctl add-port ovsbr0 vhost-user1 -- set Interface \
734 vhost-user1 type=dpdkvhostuser -- set Interface vhost-user1 ofport_request=2
735ovs-vsctl add-port ovsbr0 dpdk0 -- set Interface dpdk0 \
736 type=dpdk options:dpdk-devargs=0001:01:00.1 -- set Interface dpdk0 ofport_request=1
737
738ovs-ofctl del-flows ovsbr0
739ovs-ofctl add-flow ovsbr0 in_port=1,action=output:2
740ovs-ofctl add-flow ovsbr0 in_port=2,action=output:1</programlisting>Create an
741 XML file with the content below (e.g.
742 <filename>/home/root/guest.xml</filename>):<programlisting>&lt;domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'&gt;
743 &lt;name&gt;nfv-ovs-vm&lt;/name&gt;
744 &lt;uuid&gt;ed204646-1ad5-11e7-93ae-92361f002671&lt;/uuid&gt;
745 &lt;memory unit='KiB'&gt;4194304&lt;/memory&gt;
746 &lt;currentMemory unit='KiB'&gt;4194304&lt;/currentMemory&gt;
747
748 &lt;memoryBacking&gt;
749 &lt;hugepages&gt;
750 &lt;page size='512' unit='M' nodeset='0'/&gt;
751 &lt;/hugepages&gt;
752 &lt;/memoryBacking&gt;
753
754 &lt;os&gt;
755 &lt;type arch='aarch64' machine='virt,gic_version=3'&gt;hvm&lt;/type&gt;
756 &lt;kernel&gt;Image&lt;/kernel&gt;
757 &lt;cmdline&gt;root=/dev/vda console=ttyAMA0,115200n8 maxcpus=24 coherent_pool=16M \
758 debug hugepagesz=512M hugepages=3 audit=0 isolcpus=1 nohz_full=1 rcu_nocbs=1 \
759 irqaffinity=0&lt;/cmdline&gt;
760 &lt;boot dev='hd'/&gt;
761 &lt;/os&gt;
762
763 &lt;features&gt;
764 &lt;acpi/&gt;
765 &lt;apic/&gt;
766 &lt;/features&gt;
767
768 &lt;vcpu placement='static'&gt;2&lt;/vcpu&gt;
769
770 &lt;cpu mode='host-model'&gt;
771 &lt;model fallback='allow'/&gt;
772 &lt;topology sockets='1' cores='2' threads='1'/&gt;
773 &lt;numa&gt;
774 &lt;cell id='0' cpus='0' memory='4194304' unit='KiB' memAccess='shared'/&gt;
775 &lt;/numa&gt;
776 &lt;/cpu&gt;
777
778 &lt;cputune&gt;
779 &lt;vcpupin vcpu="0" cpuset="4"/&gt;
780 &lt;vcpupin vcpu="1" cpuset="5"/&gt;
781 &lt;/cputune&gt;
782
783 &lt;on_poweroff&gt;destroy&lt;/on_poweroff&gt;
784 &lt;on_reboot&gt;restart&lt;/on_reboot&gt;
785 &lt;on_crash&gt;destroy&lt;/on_crash&gt;
786
787 &lt;devices&gt;
788 &lt;emulator&gt;/usr/bin/qemu-system-aarch64&lt;/emulator&gt;
789 &lt;disk type='file' device='disk'&gt;
790 &lt;driver name='qemu' type='raw' cache='none'/&gt;
791 &lt;source file='enea-nfv-access-guest-qemuarm64.ext4'/&gt;
792 &lt;target dev='vda' bus='virtio'/&gt;
793 &lt;/disk&gt;
794
795 &lt;serial type='pty'&gt;
796 &lt;target port='0'/&gt;
797 &lt;/serial&gt;
798
799 &lt;console type='pty'&gt;
800 &lt;target type='serial' port='0'/&gt;
801 &lt;/console&gt;
802 &lt;/devices&gt;
803
804 &lt;qemu:commandline&gt;
805 &lt;qemu:arg value='-chardev'/&gt;
806 &lt;qemu:arg value='socket,id=charnet0,path=/var/run/openvswitch/vhost-user1'/&gt;
807
808 &lt;qemu:arg value='-netdev'/&gt;
809 &lt;qemu:arg value='type=vhost-user,id=hostnet0,chardev=charnet0'/&gt;
810
811 &lt;qemu:arg value='-device'/&gt;
812 &lt;qemu:arg value='virtio-net-pci,netdev=hostnet0,id=net0,mac=00:00:00:00:00:01,
813 bus=pcie.0,addr=0x2'/&gt;
814 &lt;/qemu:commandline&gt;
815&lt;/domain&gt;</programlisting></para>
816
817 <para>Start the virtual machine by running:</para>
818
819 <para><programlisting>virsh create /home/root/guest.xml</programlisting></para>
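        <para>If the domain started successfully it will be listed as
        running; a quick check with standard virsh:<programlisting>virsh list --all</programlisting></para>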
820
821 <para>Connect to the virtual machine console:</para>
822
823 <para><programlisting>virsh console nfv-ovs-vm</programlisting></para>
824
825 <para>Inside the VM, configure the DPDK: <programlisting>ifconfig enp0s2 down
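# The guest exposes no IOMMU, so vfio-pci must be allowed to run in no-IOMMU mode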
826echo 1 &gt; /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
827modprobe vfio-pci
828dpdk-devbind -b vfio-pci 0000:00:02.0</programlisting>Inside the VM, start
829 testpmd: <programlisting>testpmd -v -c 0x3 -n 4 -d /usr/lib/librte_pmd_virtio.so.1.1 \
830-w 0000:00:02.0 -- -i --disable-hw-vlan-filter --no-flush-rx \
831--port-topology=chained</programlisting>For the <emphasis
832 role="bold">Forwarding test</emphasis>, run:<programlisting>set fwd io
833start</programlisting>For the <emphasis role="bold">Termination
834 test</emphasis>, set testpmd to only receive, then start
835 it:<programlisting>set fwd rxonly
836start</programlisting>On target 1, you may start pktgen traffic
837 now:<programlisting>start 0</programlisting>On target 2, use this
837        command to refresh the testpmd traffic statistics
838        display:<programlisting>show port stats 0</programlisting>To stop
840 generating traffic in order to choose a different frame size,
841 run:<programlisting>stop 0</programlisting>To clear numbers in
842 testpmd:<programlisting>clear port stats
843show port stats 0</programlisting><table>
844 <title>Results in forwarding mode</title>
845
846 <tgroup cols="8">
847 <tbody>
848 <row>
849 <entry align="center"><emphasis
850 role="bold">Bytes</emphasis></entry>
851
852 <entry align="center"><emphasis role="bold">pktgen pps
853 RX</emphasis></entry>
854
855 <entry align="center"><emphasis role="bold">pktgen pps
856 TX</emphasis></entry>
857
858 <entry align="center"><emphasis role="bold">testpmd pps
859 RX</emphasis></entry>
860
861 <entry align="center"><emphasis role="bold">testpmd pps
862 TX</emphasis></entry>
863
864 <entry align="center"><emphasis role="bold">pktgen MBits/s
865 RX</emphasis></entry>
866
867 <entry align="center"><emphasis role="bold">pktgen MBits/s
868 TX</emphasis></entry>
869
870 <entry align="center"><emphasis role="bold">throughput
871 (%)</emphasis></entry>
872 </row>
873
874 <row>
875 <entry role="bold"><emphasis
876 role="bold">64</emphasis></entry>
877
878 <entry>1555163</entry>
879
880 <entry>14686542</entry>
881
882 <entry>1978791</entry>
883
884 <entry>1554707</entry>
885
886 <entry>1044</entry>
887
888 <entry>9867</entry>
889
890 <entry>13.47%</entry>
891 </row>
892
893 <row>
894 <entry><emphasis role="bold">128</emphasis></entry>
895
896 <entry>1504275</entry>
897
898 <entry>8447999</entry>
899
900 <entry>1901468</entry>
901
902 <entry>1504266</entry>
903
904 <entry>1781</entry>
905
906 <entry>10000</entry>
907
908 <entry>22.51%</entry>
909 </row>
910
911 <row>
912 <entry role="bold"><emphasis
913 role="bold">256</emphasis></entry>
914
915 <entry>1423564</entry>
916
917 <entry>4529012</entry>
918
919 <entry>1718299</entry>
920
921 <entry>1423553</entry>
922
923 <entry>3142</entry>
924
925 <entry>10000</entry>
926
927 <entry>37.94%</entry>
928 </row>
929
930 <row>
931 <entry><emphasis role="bold">512</emphasis></entry>
932
933 <entry>1360379</entry>
934
935 <entry>2349636</entry>
936
937 <entry>1554844</entry>
938
939 <entry>1360456</entry>
940
941 <entry>5789</entry>
942
943 <entry>10000</entry>
944
945 <entry>66.17%</entry>
946 </row>
947
948 <row>
949 <entry><emphasis role="bold">1024</emphasis></entry>
950
951 <entry>1197327</entry>
952
953 <entry>1197329</entry>
954
955 <entry>1197319</entry>
956
957 <entry>1197329</entry>
958
959 <entry>9999</entry>
960
961 <entry>10000</entry>
962
963 <entry>100.00%</entry>
964 </row>
965 </tbody>
966 </tgroup>
967 </table><table>
968 <title>Results in termination mode</title>
969
970 <tgroup cols="5">
971 <tbody>
972 <row>
973 <entry align="center"><emphasis
974 role="bold">Bytes</emphasis></entry>
975
976 <entry align="center"><emphasis role="bold">pktgen pps
977 TX</emphasis></entry>
978
979 <entry align="center"><emphasis role="bold">testpmd pps
980 RX</emphasis></entry>
981
982 <entry align="center"><emphasis role="bold">pktgen MBits/s
983 TX</emphasis></entry>
984
985 <entry align="center"><emphasis role="bold">throughput
986 (%)</emphasis></entry>
987 </row>
988
989 <row>
990 <entry role="bold"><emphasis
991 role="bold">64</emphasis></entry>
992
993 <entry>14695621</entry>
994
995 <entry>1983227</entry>
996
997 <entry>9875</entry>
998
999 <entry>13.50%</entry>
1000 </row>
1001
1002 <row>
1003 <entry><emphasis role="bold">128</emphasis></entry>
1004
1005 <entry>8446022</entry>
1006
1007 <entry>1897546</entry>
1008
1009 <entry>10000</entry>
1010
1011 <entry>22.47%</entry>
1012 </row>
1013
1014 <row>
1015 <entry><emphasis role="bold">256</emphasis></entry>
1016
1017 <entry>4529011</entry>
1018
1019 <entry>1724323</entry>
1020
1021 <entry>10000</entry>
1022
1023 <entry>38.07%</entry>
1024 </row>
1025
1026 <row>
1027 <entry><emphasis role="bold">512</emphasis></entry>
1028
1029 <entry>2349638</entry>
1030
1031 <entry>1562212</entry>
1032
1033 <entry>10000</entry>
1034
1035 <entry>66.49%</entry>
1036 </row>
1037
1038 <row>
1039 <entry role="bold"><emphasis
1040 role="bold">1024</emphasis></entry>
1041
1042 <entry>1197323</entry>
1043
1044 <entry>1197324</entry>
1045
1046 <entry>10000</entry>
1047
1048 <entry>100.00%</entry>
1049 </row>
1050 </tbody>
1051 </tgroup>
1052 </table></para>
1053 </section>
1054 </section>
1055
1056 <section id="usecase-six">
1057 <title>Forward traffic between two VMs</title>
1058
1059      <para>Benchmark a combined test using two virtual machines: the first
1060      forwards traffic to the second, which terminates it.</para>
1061
1062      <para>This test measures:</para>
1063
1064 <itemizedlist>
1065 <listitem>
1066          <para>pktgen TX in pps and MBits/s</para>
1067 </listitem>
1068
1069 <listitem>
1070 <para>testpmd TX and RX pps in VM1</para>
1071 </listitem>
1072
1073 <listitem>
1074 <para>testpmd RX pps in VM2</para>
1075 </listitem>
1076
1077 <listitem>
1078 <para>divide<emphasis role="bold"> VM2 testpmd RX pps</emphasis>
1079 by <emphasis role="bold">pktgen TX pps </emphasis>to obtain the
1080          throughput as a percentage (%)</para>
1081 </listitem>
1082 </itemizedlist>
1083
1084 <section id="targetone-usecase-five">
1085 <title>Test Setup for Target 1</title>
1086
1087 <para>Start by doing the following:</para>
1088
1089 <para>Boot the NFV Access Linux using the following kernel
1090 parameters in U-Boot: <programlisting>setenv bootargs 'nohz_full=1-23 isolcpus=1-23 \
1091rcu_nocbs=1-23 rcu_nocb_poll clocksource=tsc tsc=reliable nohpet \
1092nosoftlockup audit=0'</programlisting>Configure hugepages and set up the
1093 DPDK:<programlisting>echo 4 &gt; /proc/sys/vm/nr_hugepages
1094modprobe vfio-pci
1095ifconfig enP1p1s0f1 down
1096dpdk-devbind -b vfio-pci 0001:01:00.1</programlisting>Run
1097 pktgen:<programlisting>cd /usr/share/apps/pktgen/
1098./pktgen -v -c 0x7 -n 4 --proc-type auto -d /usr/lib/librte_pmd_thunderx_nicvf.so.1.1 \
1099-w 0001:01:00.1 -- -P -m "[1:2].0"</programlisting>Choose one of the values
1100        from [64, 128, 256, 512, 1024] to change the packet size:<programlisting>set 0 size &lt;number&gt;</programlisting></para>
1101 </section>
1102
1103 <section id="targettwo-usecase-six">
1104 <title>Test Setup for Target 2</title>
1105
1106 <para>Start by doing the following:</para>
1107
1108 <para>Boot the NFV Access Linux using the following kernel
1109 parameters in U-Boot: <programlisting>setenv bootargs 'nohz_full=1-23 isolcpus=1-23 \
1110rcu_nocbs=1-23 rcu_nocb_poll clocksource=tsc tsc=reliable nohpet \
1111nosoftlockup audit=0'</programlisting>Stop any running OVS services and clean up the old state:<programlisting>killall ovsdb-server ovs-vswitchd
1112rm -rf /etc/openvswitch/*
1113mkdir -p /var/run/openvswitch</programlisting>Configure hugepages and set up the
1114 DPDK:<programlisting>echo 20 &gt; /proc/sys/vm/nr_hugepages
1115modprobe vfio-pci
1116dpdk-devbind --bind=vfio-pci 0001:01:00.1</programlisting>Configure the
1117 OVS:<programlisting>export DB_SOCK=/var/run/openvswitch/db.sock
1118ovsdb-tool create /etc/openvswitch/conf.db /usr/share/openvswitch/vswitch.ovsschema
1119ovsdb-server --remote=punix:$DB_SOCK \
1120 --remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach
1121ovs-vsctl --no-wait init
1122ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x10
1123ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0xc
1124ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem=2048
1125ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
1126ovs-vswitchd unix:$DB_SOCK --pidfile --detach \
1127 --log-file=/var/log/openvswitch/ovs-vswitchd.log
1128
1129ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
1130ovs-vsctl add-port ovsbr0 dpdk0 -- set Interface dpdk0 type=dpdk \
1131 options:dpdk-devargs=0001:01:00.1 -- set Interface dpdk0 ofport_request=1
1132ovs-vsctl add-port ovsbr0 vhost-user1 -- set Interface vhost-user1 type=dpdkvhostuser \
1133 -- set Interface vhost-user1 ofport_request=2
1134ovs-vsctl add-port ovsbr0 vhost-user2 -- set Interface vhost-user2 type=dpdkvhostuser \
1135 -- set Interface vhost-user2 ofport_request=3
1136
1137ovs-ofctl del-flows ovsbr0
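# Flows below: dpdk0 (port 1) to vhost-user1 (port 2, VM1 forwards), then to vhost-user2 (port 3, VM2 terminates)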
1138ovs-ofctl add-flow ovsbr0 in_port=1,action=output:2
1139ovs-ofctl add-flow ovsbr0 in_port=2,action=output:3</programlisting>Create an
1140        XML file with the content below and then run <command>virsh create
1141        &lt;XML_FILE&gt;</command>:<programlisting>&lt;domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'&gt;
1142 &lt;name&gt;nfv-ovs-vm1&lt;/name&gt;
1143 &lt;uuid&gt;ed204646-1ad5-11e7-93ae-92361f002671&lt;/uuid&gt;
1144 &lt;memory unit='KiB'&gt;4194304&lt;/memory&gt;
1145 &lt;currentMemory unit='KiB'&gt;4194304&lt;/currentMemory&gt;
1146
1147 &lt;memoryBacking&gt;
1148 &lt;hugepages&gt;
1149 &lt;page size='512' unit='M' nodeset='0'/&gt;
1150 &lt;/hugepages&gt;
1151 &lt;/memoryBacking&gt;
1152
1153 &lt;os&gt;
1154 &lt;type arch='aarch64' machine='virt,gic_version=3'&gt;hvm&lt;/type&gt;
1155 &lt;kernel&gt;Image&lt;/kernel&gt;
1156 &lt;cmdline&gt;root=/dev/vda console=ttyAMA0,115200n8 maxcpus=24 coherent_pool=16M \
1157 debug hugepagesz=512M hugepages=3 audit=0 isolcpus=1 nohz_full=1 rcu_nocbs=1 \
1158 irqaffinity=0&lt;/cmdline&gt;
1159 &lt;boot dev='hd'/&gt;
1160 &lt;/os&gt;
1161
1162 &lt;features&gt;
1163 &lt;acpi/&gt;
1164 &lt;apic/&gt;
1165 &lt;/features&gt;
1166
1167 &lt;vcpu placement='static'&gt;2&lt;/vcpu&gt;
1168
1169 &lt;cpu mode='host-model'&gt;
1170 &lt;model fallback='allow'/&gt;
1171 &lt;topology sockets='1' cores='2' threads='1'/&gt;
1172 &lt;numa&gt;
1173 &lt;cell id='0' cpus='0' memory='4194304' unit='KiB' memAccess='shared'/&gt;
1174 &lt;/numa&gt;
1175 &lt;/cpu&gt;
1176
1177 &lt;cputune&gt;
1178 &lt;vcpupin vcpu="0" cpuset="4"/&gt;
1179 &lt;vcpupin vcpu="1" cpuset="5"/&gt;
1180 &lt;/cputune&gt;
1181
1182 &lt;on_poweroff&gt;destroy&lt;/on_poweroff&gt;
1183 &lt;on_reboot&gt;restart&lt;/on_reboot&gt;
1184 &lt;on_crash&gt;destroy&lt;/on_crash&gt;
1185
1186 &lt;devices&gt;
1187 &lt;emulator&gt;/usr/bin/qemu-system-aarch64&lt;/emulator&gt;
1188 &lt;disk type='file' device='disk'&gt;
1189 &lt;driver name='qemu' type='raw' cache='none'/&gt;
1190 &lt;source file='enea-nfv-access-guest-qemuarm64.ext4'/&gt;
1191 &lt;target dev='vda' bus='virtio'/&gt;
1192 &lt;/disk&gt;
1193
1194 &lt;serial type='pty'&gt;
1195 &lt;target port='0'/&gt;
1196 &lt;/serial&gt;
1197
1198 &lt;console type='pty'&gt;
1199 &lt;target type='serial' port='0'/&gt;
1200 &lt;/console&gt;
1201 &lt;/devices&gt;
1202
1203 &lt;qemu:commandline&gt;
1204 &lt;qemu:arg value='-chardev'/&gt;
1205 &lt;qemu:arg value='socket,id=charnet0,path=/var/run/openvswitch/vhost-user1'/&gt;
1206
1207 &lt;qemu:arg value='-netdev'/&gt;
1208 &lt;qemu:arg value='type=vhost-user,id=hostnet0,chardev=charnet0'/&gt;
1209
1210 &lt;qemu:arg value='-device'/&gt;
1211    &lt;qemu:arg value='virtio-net-pci,netdev=hostnet0,id=net0,mac=00:00:00:00:00:01,
1212 bus=pcie.0,addr=0x2'/&gt;
1213 &lt;/qemu:commandline&gt;
1214&lt;/domain&gt;</programlisting></para>
1215
1216 <para>The first virtual machine shall be called VM1. Connect to the
1217 first virtual machine console, by running:</para>
1218
1219 <para><programlisting>virsh console nfv-ovs-vm1</programlisting></para>
1220
1221 <para>Connect to Target 2 through a new <literal>SSH</literal>
1222 session, and launch a second VM by creating another XML file and
1223 running <command>virsh create
1224 &lt;XML_FILE2&gt;:</command><programlisting>&lt;domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'&gt;
1225 &lt;name&gt;nfv-ovs-vm2&lt;/name&gt;
1226 &lt;uuid&gt;ed204646-1ad5-11e7-93ae-92361f002623&lt;/uuid&gt;
1227 &lt;memory unit='KiB'&gt;4194304&lt;/memory&gt;
1228 &lt;currentMemory unit='KiB'&gt;4194304&lt;/currentMemory&gt;
1229
1230 &lt;memoryBacking&gt;
1231 &lt;hugepages&gt;
1232 &lt;page size='512' unit='M' nodeset='0'/&gt;
1233 &lt;/hugepages&gt;
1234 &lt;/memoryBacking&gt;
1235
1236 &lt;os&gt;
1237 &lt;type arch='aarch64' machine='virt,gic_version=3'&gt;hvm&lt;/type&gt;
1238 &lt;kernel&gt;Image&lt;/kernel&gt;
1239 &lt;cmdline&gt;root=/dev/vda console=ttyAMA0,115200n8 maxcpus=24 coherent_pool=16M \
1240 debug hugepagesz=512M hugepages=3 audit=0 isolcpus=1 nohz_full=1 rcu_nocbs=1 \
1241 irqaffinity=0&lt;/cmdline&gt;
1242 &lt;boot dev='hd'/&gt;
1243 &lt;/os&gt;
1244
1245 &lt;features&gt;
1246 &lt;acpi/&gt;
1247 &lt;apic/&gt;
1248 &lt;/features&gt;
1249
1250 &lt;vcpu placement='static'&gt;2&lt;/vcpu&gt;
1251
1252 &lt;cpu mode='host-model'&gt;
1253 &lt;model fallback='allow'/&gt;
1254 &lt;topology sockets='1' cores='2' threads='1'/&gt;
1255 &lt;numa&gt;
1256 &lt;cell id='0' cpus='0' memory='4194304' unit='KiB' memAccess='shared'/&gt;
1257 &lt;/numa&gt;
1258 &lt;/cpu&gt;
1259
1260 &lt;cputune&gt;
1261 &lt;vcpupin vcpu="0" cpuset="6"/&gt;
1262 &lt;vcpupin vcpu="1" cpuset="7"/&gt;
1263 &lt;/cputune&gt;
1264
1265 &lt;on_poweroff&gt;destroy&lt;/on_poweroff&gt;
1266 &lt;on_reboot&gt;restart&lt;/on_reboot&gt;
1267 &lt;on_crash&gt;destroy&lt;/on_crash&gt;
1268
1269 &lt;devices&gt;
1270 &lt;emulator&gt;/usr/bin/qemu-system-aarch64&lt;/emulator&gt;
1271 &lt;disk type='file' device='disk'&gt;
1272 &lt;driver name='qemu' type='raw' cache='none'/&gt;
1273 &lt;source file='enea-nfv-access-guest-qemuarm64.ext4'/&gt;
1274 &lt;target dev='vda' bus='virtio'/&gt;
1275 &lt;/disk&gt;
1276
1277 &lt;serial type='pty'&gt;
1278 &lt;target port='0'/&gt;
1279 &lt;/serial&gt;
1280
1281 &lt;console type='pty'&gt;
1282 &lt;target type='serial' port='0'/&gt;
1283 &lt;/console&gt;
1284 &lt;/devices&gt;
1285
1286 &lt;qemu:commandline&gt;
1287 &lt;qemu:arg value='-chardev'/&gt;
1288 &lt;qemu:arg value='socket,id=charnet1,path=/var/run/openvswitch/vhost-user2'/&gt;
1289
1290 &lt;qemu:arg value='-netdev'/&gt;
1291 &lt;qemu:arg value='type=vhost-user,id=hostnet0,chardev=charnet1'/&gt;
1292
1293 &lt;qemu:arg value='-device'/&gt;
1294    &lt;qemu:arg value='virtio-net-pci,netdev=hostnet0,id=net0,mac=00:00:00:00:00:02,
1295 bus=pcie.0,addr=0x2'/&gt;
1296 &lt;/qemu:commandline&gt;
1297&lt;/domain&gt;</programlisting></para>
1298
1299 <para>The second virtual machine shall be called VM2. Connect to the
1300 second virtual machine console, by running:</para>
1301
1302 <para><programlisting>virsh console nfv-ovs-vm2</programlisting></para>
1303
1304 <para>Configure the DPDK inside VM1:<programlisting>ifconfig enp0s2 down
1305echo 1 &gt; /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
1306modprobe vfio-pci
1307dpdk-devbind -b vfio-pci 0000:00:02.0</programlisting>Run testpmd inside
1308 VM1:<programlisting>testpmd -v -c 0x3 -n 4 -d /usr/lib/librte_pmd_virtio.so.1.1 \
1309 -w 0000:00:02.0 -- -i --disable-hw-vlan-filter \
1310 --no-flush-rx --port-topology=chained</programlisting>Start testpmd inside
1311 VM1:<programlisting>start</programlisting>Configure the DPDK inside
1312 VM2:<programlisting>ifconfig enp0s2 down
1313echo 1 &gt; /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
1314modprobe vfio-pci
1315dpdk-devbind -b vfio-pci 0000:00:02.0</programlisting>Run testpmd inside
1316 VM2:<programlisting>testpmd -v -c 0x3 -n 4 -d /usr/lib/librte_pmd_virtio.so.1.1 \
1317 -w 0000:00:02.0 -- -i --disable-hw-vlan-filter \
1318 --no-flush-rx --port-topology=chained</programlisting>Set VM2 for
1319 termination and start testpmd:<programlisting>set fwd rxonly
1320start</programlisting>On target 1, start pktgen traffic:<programlisting>start 0</programlisting>Use
1321 this command to refresh the testpmd display in VM1 and VM2 and note
1322 the highest values:<programlisting>show port stats 0</programlisting>To
1323 stop traffic from pktgen, in order to choose a different frame
1324 size:<programlisting>stop 0</programlisting>To clear numbers in
1325 testpmd:<programlisting>clear port stats
1326show port stats 0</programlisting>For VM1, we record the stats relevant for
1327 <emphasis role="bold">forwarding</emphasis>:</para>
1328
1329 <itemizedlist>
1330 <listitem>
1331 <para>RX, TX in pps</para>
1332 </listitem>
1333 </itemizedlist>
1334
1335        <para>Only the Rx-pps and Tx-pps numbers are important here; they change
1336 every time stats are displayed as long as there is traffic. Run the
1337 command a few times and pick the best (maximum) values
1338 observed.</para>
1339
1340 <para>For VM2, we record the stats relevant for <emphasis
1341 role="bold">termination</emphasis>:</para>
1342
1343 <itemizedlist>
1344 <listitem>
1345 <para>RX in pps (TX will be 0)</para>
1346 </listitem>
1347 </itemizedlist>
1348
1349        <para>For pktgen, we record only the TX side, because the flow is
1350        terminated and no RX traffic reaches pktgen:</para>
1351
1352 <itemizedlist>
1353 <listitem>
1354 <para>TX in pps and Mbit/s</para>
1355 </listitem>
1356 </itemizedlist>
1357
1358 <table>
1359 <title>Results in forwarding mode</title>
1360
1361 <tgroup cols="7">
1362 <tbody>
1363 <row>
1364 <entry align="center"><emphasis
1365 role="bold">Bytes</emphasis></entry>
1366
1367 <entry align="center"><emphasis role="bold">pktgen pps
1368 TX</emphasis></entry>
1369
1370 <entry align="center"><emphasis role="bold">VM1 testpmd pps
1371 RX</emphasis></entry>
1372
1373 <entry align="center"><emphasis role="bold">VM1 testpmd pps
1374 TX</emphasis></entry>
1375
1376 <entry align="center"><emphasis role="bold">VM2 testpmd pps
1377 RX</emphasis></entry>
1378
1379 <entry align="center"><emphasis role="bold">pktgen MBits/s
1380 TX</emphasis></entry>
1381
1382 <entry align="center"><emphasis role="bold">throughput
1383 (%)</emphasis></entry>
1384 </row>
1385
1386 <row>
1387 <entry role="bold"><emphasis
1388 role="bold">64</emphasis></entry>
1389
1390 <entry>14692306</entry>
1391
1392 <entry>1986888</entry>
1393
1394 <entry>1278884</entry>
1395
1396 <entry>1278792</entry>
1397
1398 <entry>9870</entry>
1399
1400 <entry>8.70%</entry>
1401 </row>
1402
1403 <row>
1404 <entry><emphasis role="bold">128</emphasis></entry>
1405
1406 <entry>8445997</entry>
1407
1408 <entry>1910675</entry>
1409
1410 <entry>1205371</entry>
1411
1412 <entry>1205371</entry>
1413
1414 <entry>10000</entry>
1415
1416 <entry>14.27%</entry>
1417 </row>
1418
1419 <row>
1420 <entry role="bold"><emphasis
1421 role="bold">256</emphasis></entry>
1422
1423 <entry>4529126</entry>
1424
1425 <entry>1723468</entry>
1426
1427 <entry>1080976</entry>
1428
1429 <entry>1080977</entry>
1430
1431 <entry>10000</entry>
1432
1433 <entry>23.87%</entry>
1434 </row>
1435
1436 <row>
1437 <entry><emphasis role="bold">512</emphasis></entry>
1438
1439 <entry>2349638</entry>
1440
1441 <entry>1559367</entry>
1442
1443 <entry>972923</entry>
1444
1445 <entry>972921</entry>
1446
1447 <entry>10000</entry>
1448
1449 <entry>41.41%</entry>
1450 </row>
1451
1452 <row>
1453 <entry><emphasis role="bold">1024</emphasis></entry>
1454
1455 <entry>1197322</entry>
1456
1457 <entry>1197318</entry>
1458
1459 <entry>839508</entry>
1460
1461 <entry>839508</entry>
1462
1463 <entry>10000</entry>
1464
1465 <entry>70.12%</entry>
1466 </row>
1467 </tbody>
1468 </tgroup>
1469 </table>
1470 </section>
1471 </section>
1472 </section>
1473 </section>
1474</chapter> \ No newline at end of file