path: root/doc/book-enea-nfv-access-guide/doc
authorMiruna Paun <Miruna.Paun@enea.com>2017-07-13 18:20:00 +0200
committerMiruna Paun <Miruna.Paun@enea.com>2017-07-13 18:20:00 +0200
commitc35c9680404dd523f5077505e461def00ae1688b (patch)
tree267b4ab1e8d184750efc4439803d852885e5f2cf /doc/book-enea-nfv-access-guide/doc
parente4dba5e0e5bbacb26034009964a15a4dd0c160f2 (diff)
downloadel_releases-virtualization-c35c9680404dd523f5077505e461def00ae1688b.tar.gz
Removed all mentions of the word "Platform"
LXCR-7891 Where it didn't do more harm than good to do so. Signed-off-by: Miruna Paun <Miruna.Paun@enea.com>
Diffstat (limited to 'doc/book-enea-nfv-access-guide/doc')
-rw-r--r--doc/book-enea-nfv-access-guide/doc/benchmarks.xml1637
-rw-r--r--doc/book-enea-nfv-access-guide/doc/book.xml30
-rw-r--r--doc/book-enea-nfv-access-guide/doc/container_virtualization.xml136
-rw-r--r--doc/book-enea-nfv-access-guide/doc/dpdk.xml119
-rw-r--r--doc/book-enea-nfv-access-guide/doc/eltf_params_template.xml151
-rw-r--r--doc/book-enea-nfv-access-guide/doc/eltf_params_updated.xml165
-rw-r--r--doc/book-enea-nfv-access-guide/doc/eltf_params_updated_template_how_to_use.txt320
-rw-r--r--doc/book-enea-nfv-access-guide/doc/getting_started.xml340
-rw-r--r--doc/book-enea-nfv-access-guide/doc/hypervisor_virtualization.xml741
-rw-r--r--doc/book-enea-nfv-access-guide/doc/images/virtual_network_functions.pngbin0 -> 24106 bytes
-rw-r--r--doc/book-enea-nfv-access-guide/doc/overview.xml164
-rw-r--r--doc/book-enea-nfv-access-guide/doc/ovs.xml161
-rw-r--r--doc/book-enea-nfv-access-guide/doc/using_nfv_access_sdks.xml203
13 files changed, 4167 insertions, 0 deletions
diff --git a/doc/book-enea-nfv-access-guide/doc/benchmarks.xml b/doc/book-enea-nfv-access-guide/doc/benchmarks.xml
new file mode 100644
index 0000000..cba6150
--- /dev/null
+++ b/doc/book-enea-nfv-access-guide/doc/benchmarks.xml
@@ -0,0 +1,1637 @@
1<?xml version="1.0" encoding="ISO-8859-1"?>
2<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
3"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
4<chapter id="benchmarks">
5 <title>Benchmarks</title>
6
7 <section id="hw-setup">
8 <title>Hardware Setup</title>
9
10    <para>The following table describes the prerequisites for a suitable
11    hardware setup:</para>
12
13 <table>
14 <title>Hardware Setup</title>
15
16 <tgroup cols="2">
17 <colspec align="left" />
18
19 <thead>
20 <row>
21 <entry align="center">Item</entry>
22
23 <entry align="center">Description</entry>
24 </row>
25 </thead>
26
27 <tbody>
28 <row>
29 <entry align="left">Server Platform</entry>
30
31 <entry align="left">Supermicro X10SDV-4C-TLN2F
32 http://www.supermicro.com/products/motherboard/xeon/d/X10SDV-4C-TLN2F.cfm</entry>
33 </row>
34
35 <row>
36 <entry align="left">ARCH</entry>
37
38 <entry>x86-64</entry>
39 </row>
40
41 <row>
42 <entry align="left">Processor</entry>
43
44 <entry>1 x Intel Xeon D-1521 (Broadwell), 4 cores, 8
45 hyper-threaded cores per processor</entry>
46 </row>
47
48 <row>
49 <entry align="left">CPU freq</entry>
50
51 <entry>2.40 GHz</entry>
52 </row>
53
54 <row>
55 <entry align="left">RAM</entry>
56
57            <entry>16 GB</entry>
58 </row>
59
60 <row>
61 <entry align="left">Network</entry>
62
63 <entry>Dual integrated 10G ports</entry>
64 </row>
65
66 <row>
67 <entry align="left">Storage</entry>
68
69 <entry>Samsung 850 Pro 128GB SSD</entry>
70 </row>
71 </tbody>
72 </tgroup>
73 </table>
74
75 <para>Generic tests configuration:</para>
76
77 <itemizedlist>
78 <listitem>
79        <para>All tests use one port, one core and one RX/TX queue for fast
80        path traffic, as reflected in the testpmd flags summarized below.</para>
81 </listitem>
82 </itemizedlist>
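    <para>For reference, this constraint maps directly onto the testpmd
    options used throughout this chapter. The relevant flags (a summary of
    the commands shown in the test setups below, not an extra step)
    are:</para>

    <programlisting># one port, one forwarding core, one RX and one TX queue
--portmask=0x1 --nb-cores=1 --rxq=1 --txq=1</programlisting>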
83 </section>
84
85 <section id="bios">
86 <title>BIOS Settings</title>
87
88 <para>The table below details the BIOS settings for which the default
89 values were changed when doing performance measurements.</para>
90
91 <table>
92 <title>BIOS Settings</title>
93
94 <tgroup cols="4">
95 <colspec align="left" />
96
97 <thead>
98 <row>
99 <entry align="center">Menu Path</entry>
100
101 <entry align="center">Setting Name</entry>
102
103 <entry align="center">Enea NFV Access value</entry>
104
105 <entry align="center">BIOS Default value</entry>
106 </row>
107 </thead>
108
109 <tbody>
110 <row>
111 <entry align="left">CPU Configuration</entry>
112
113 <entry align="left">Direct Cache Access (DCA)</entry>
114
115 <entry>Enable</entry>
116
117 <entry>Auto</entry>
118 </row>
119
120 <row>
121 <entry>CPU Configuration / Advanced Power Management
122 Configuration</entry>
123
124 <entry align="left">EIST (P-States)</entry>
125
126 <entry>Disable</entry>
127
128 <entry>Enable</entry>
129 </row>
130
131 <row>
132 <entry>CPU Configuration / Advanced Power Management Configuration
133 / CPU C State Control</entry>
134
135 <entry align="left">CPU C State</entry>
136
137 <entry>Disable</entry>
138
139 <entry>Enable</entry>
140 </row>
141
142 <row>
143 <entry>CPU Configuration / Advanced Power Management Configuration
144            / CPU Advanced PM Tuning / Energy Perf BIAS</entry>
145
146 <entry align="left">Energy Performance Tuning</entry>
147
148 <entry>Disable</entry>
149
150 <entry>Enable</entry>
151 </row>
152
153 <row>
154 <entry>CPU Configuration / Advanced Power Management Configuration
155            / CPU Advanced PM Tuning / Energy Perf BIAS</entry>
156
157 <entry align="left">Energy Performance BIAS Setting</entry>
158
159 <entry>Performance</entry>
160
161 <entry>Balanced Performance</entry>
162 </row>
163
164 <row>
165 <entry>CPU Configuration / Advanced Power Management Configuration
166            / CPU Advanced PM Tuning / Energy Perf BIAS</entry>
167
168 <entry align="left">Power/Performance Switch</entry>
169
170 <entry>Disable</entry>
171
172 <entry>Enable</entry>
173 </row>
174
175 <row>
176 <entry>CPU Configuration / Advanced Power Management Configuration
177            / CPU Advanced PM Tuning / Program PowerCTL_MSR</entry>
178
179 <entry align="left">Energy Efficient Turbo</entry>
180
181 <entry>Disable</entry>
182
183 <entry>Enable</entry>
184 </row>
185
186 <row>
187 <entry>Chipset Configuration / North Bridge / IIO
188 Configuration</entry>
189
190 <entry align="left">EV DFX Features</entry>
191
192 <entry>Enable</entry>
193
194 <entry>Disable</entry>
195 </row>
196
197 <row>
198 <entry>Chipset Configuration / North Bridge / Memory
199 Configuration</entry>
200
201 <entry align="left">Enforce POR</entry>
202
203 <entry>Disable</entry>
204
205 <entry>Enable</entry>
206 </row>
207
208 <row>
209 <entry>Chipset Configuration / North Bridge / Memory
210 Configuration</entry>
211
212 <entry align="left">Memory Frequency</entry>
213
214 <entry>2400</entry>
215
216 <entry>Auto</entry>
217 </row>
218
219 <row>
220 <entry>Chipset Configuration / North Bridge / Memory
221 Configuration</entry>
222
223 <entry align="left">DRAM RAPL Baseline</entry>
224
225 <entry>Disable</entry>
226
227 <entry>DRAM RAPL Mode 1</entry>
228 </row>
229 </tbody>
230 </tgroup>
231 </table>
232 </section>
233
234 <section id="use-cases">
235 <title>Use Cases</title>
236
237 <section id="docker-benchmarks">
238 <title>Docker related benchmarks</title>
239
240 <section>
241 <title>Forward traffic in Docker</title>
242
243 <para>Benchmarking traffic forwarding using testpmd in a Docker
244 container.</para>
245
246        <para>Pktgen is used to generate UDP traffic that will reach testpmd,
247        running in a Docker container. The traffic is then forwarded back to the
248        source on the return trip (<emphasis role="bold">Forwarding</emphasis>).</para>
249
250 <para>This test measures:</para>
251
252 <itemizedlist>
253 <listitem>
254 <para>pktgen TX, RX in packets per second (pps) and Mbps</para>
255 </listitem>
256
257 <listitem>
258 <para>testpmd TX, RX in packets per second (pps)</para>
259 </listitem>
260
261 <listitem>
262            <para>divide testpmd RX by pktgen TX in pps to obtain the throughput
263            as a percentage (%); a worked example follows this list</para>
264 </listitem>
265 </itemizedlist>
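        <para>For example, taking the 64-byte row of the forwarding results
        below, the throughput can be verified with a one-liner (assuming
        <literal>bc</literal> is available on the test controller):</para>

        <programlisting># testpmd RX pps / pktgen TX pps, as a percentage
echo "scale=2; 7692807 * 100 / 14890993" | bc
# 51.66, in line with the 51.74% reported in the results table</programlisting>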
266
267 <section id="usecase-one">
268 <title>Test Setup for Target 1</title>
269
270 <para>Start by following the steps below:</para>
271
272 <para>SSD boot using the following <literal>grub.cfg</literal>
273 entry: <programlisting>linux (hd0,gpt3)/boot/bzImage root=/dev/sda3 ip=dhcp nohz_full=1-7 /
274isolcpus=1-7 rcu-nocbs=1-7 rcu_nocb_poll intel_pstate=disable /
275clocksource=tsc tsc=reliable nohpet nosoftlockup intel_idle.max_cstate=0 /
276processor.max_cstate=0 mce=ignore_ce audit=0 nmi_watchdog=0 iommu=pt /
277intel_iommu=on hugepagesz=1GB hugepages=8 default_hugepagesz=1GB /
278hugepagesz=2M hugepages=2048 vfio_iommu_type1.allow_unsafe_interrupts=1</programlisting></para>
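          <para>Before starting the test, it is worth confirming that the
          boot parameters took effect (a minimal sanity check using standard
          procfs/sysfs paths):</para>

          <programlisting># kernel command line actually in use
cat /proc/cmdline
# hugepages reserved as requested
grep Huge /proc/meminfo
# CPUs 1-7 isolated for the fast path
cat /sys/devices/system/cpu/isolated</programlisting>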
279
280 <para>Kill unnecessary services:<programlisting>killall ovsdb-server ovs-vswitchd
281rm -rf /etc/openvswitch/*
282mkdir -p /var/run/openvswitch</programlisting>Mount hugepages and configure
283 DPDK:<programlisting>mkdir -p /mnt/huge
284mount -t hugetlbfs nodev /mnt/huge
285modprobe igb_uio
286dpdk-devbind --bind=igb_uio 0000:03:00.0</programlisting>Run
287 pktgen:<programlisting>cd /usr/share/apps/pktgen/
288./pktgen -c 0xF -n 1 -- -P -m "[3:2].0"</programlisting>In the pktgen console
289          run:<programlisting>str</programlisting>To change the frame size for
290          pktgen, choose from [64, 128, 256, 512]:<programlisting>set 0 size &lt;number&gt;</programlisting></para>
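          <para>For instance, a complete 128-byte run can be driven from the
          pktgen console as follows (<literal>str</literal> and
          <literal>stp</literal> start and stop traffic on all ports in
          Pktgen-DPDK):</para>

          <programlisting>set 0 size 128
str
# ... record the TX/RX rates on both targets ...
stp</programlisting>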
291 </section>
292
293 <section id="usecase-two">
294 <title>Test Setup for Target 2</title>
295
296 <para>Start by following the steps below:</para>
297
298 <para>SSD boot using the following <literal>grub.cfg</literal>
299 entry:</para>
300
301 <programlisting>linux (hd0,gpt3)/boot/bzImage root=/dev/sda3 ip=dhcp nohz_full=1-7 /
302isolcpus=1-7 rcu-nocbs=1-7 rcu_nocb_poll intel_pstate=disable /
303clocksource=tsc tsc=reliable nohpet nosoftlockup intel_idle.max_cstate=0 /
304processor.max_cstate=0 mce=ignore_ce audit=0 nmi_watchdog=0 iommu=pt /
305intel_iommu=on hugepagesz=1GB hugepages=8 default_hugepagesz=1GB /
306hugepagesz=2M hugepages=2048 vfio_iommu_type1.allow_unsafe_interrupts=1</programlisting>
307
308          <para>The Docker guest image is expected to already be on the target.
309          Configure the OVS bridge:<programlisting># OVS old config clean-up
310killall ovsdb-server ovs-vswitchd
311rm -rf /etc/openvswitch/*
312mkdir -p /var/run/openvswitch
313
314# Mount hugepages and bind interfaces to dpdk
315mkdir -p /mnt/huge
316mount -t hugetlbfs nodev /mnt/huge
317modprobe igb_uio
318dpdk-devbind --bind=igb_uio 0000:03:00.0
319
320# configure openvswitch with DPDK
321export DB_SOCK=/var/run/openvswitch/db.sock
322ovsdb-tool create /etc/openvswitch/conf.db /
323/usr/share/openvswitch/vswitch.ovsschema
324ovsdb-server --remote=punix:$DB_SOCK /
325--remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach
326ovs-vsctl --no-wait init
327ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x10
328ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0xc
329ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem=2048
330ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
331ovs-vswitchd unix:$DB_SOCK --pidfile --detach /
332--log-file=/var/log/openvswitch/ovs-vswitchd.log
333
334ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
335ovs-vsctl add-port ovsbr0 vhost-user1 /
336-- set Interface vhost-user1 type=dpdkvhostuser ofport_request=1
337ovs-vsctl add-port ovsbr0 dpdk0 -- set Interface /
338dpdk0 type=dpdk options:dpdk-devargs=0000:03:00.0 ofport_request=2
339
340# configure static flows
341ovs-ofctl del-flows ovsbr0
342ovs-ofctl add-flow ovsbr0 in_port=1,action=output:2
343ovs-ofctl add-flow ovsbr0 in_port=2,action=output:1</programlisting>Import a
344 Docker container:<programlisting>docker import enea-image-virtualization-guest-qemux86-64.tar.gz el7_guest</programlisting>Start
345 the Docker container:<programlisting>docker run -it --rm -v /var/run/openvswitch/:/var/run/openvswitch/ /
346-v /mnt/huge:/mnt/huge el7_guest /bin/bash</programlisting>Start the testpmd
347 application in Docker:<programlisting>testpmd -c 0x30 -n 2 --file-prefix prog1 --socket-mem 512 --no-pci /
348--vdev=virtio_user0,path=/var/run/openvswitch/vhost-user1 /
349-d /usr/lib/librte_pmd_virtio.so.1.1 -- --burst 64 --disable-hw-vlan /
350--disable-rss -i --portmask=0x1 --coremask=0x20 --nb-cores=1 /
351--rxq=1 --txq=1 --txd=512 --rxd=512 --txqflags=0xf00 --port-topology=chained</programlisting>To
352 start traffic <emphasis role="bold">forwarding</emphasis>, run the
353 following command in testpmd CLI:<programlisting>start</programlisting>To
354 start traffic but in <emphasis role="bold">termination</emphasis>
355 mode (no traffic sent on TX), run following command in testpmd
356 CLI:<programlisting>set fwd rxonly
357start</programlisting><table>
358 <title>Results in forwarding mode</title>
359
360 <tgroup cols="8">
361 <tbody>
362 <row>
363 <entry align="center"><emphasis
364 role="bold">Bytes</emphasis></entry>
365
366 <entry align="center"><emphasis role="bold">pktgen pps
367 TX</emphasis></entry>
368
369 <entry align="center"><emphasis role="bold">pktgen MBits/s
370 TX</emphasis></entry>
371
372 <entry align="center"><emphasis role="bold">pktgen pps
373 RX</emphasis></entry>
374
375 <entry align="center"><emphasis role="bold">pktgen MBits/s
376 RX</emphasis></entry>
377
378 <entry align="center"><emphasis role="bold">testpmd pps
379 RX</emphasis></entry>
380
381 <entry align="center"><emphasis role="bold">testpmd pps
382 TX</emphasis></entry>
383
384 <entry align="center"><emphasis role="bold">throughput
385 (%)</emphasis></entry>
386 </row>
387
388 <row>
389 <entry role="bold"><emphasis
390 role="bold">64</emphasis></entry>
391
392 <entry>14890993</entry>
393
394 <entry>10006</entry>
395
396 <entry>7706039</entry>
397
398 <entry>5178</entry>
399
400 <entry>7692807</entry>
401
402 <entry>7692864</entry>
403
404 <entry>51.74%</entry>
405 </row>
406
407 <row>
408 <entry><emphasis role="bold">128</emphasis></entry>
409
410 <entry>8435104</entry>
411
412 <entry>9999</entry>
413
414 <entry>7689458</entry>
415
416 <entry>9060</entry>
417
418 <entry>7684962</entry>
419
420 <entry>7684904</entry>
421
422 <entry>90.6%</entry>
423 </row>
424
425 <row>
426 <entry role="bold"><emphasis
427 role="bold">256</emphasis></entry>
428
429 <entry>4532384</entry>
430
431 <entry>9999</entry>
432
433 <entry>4532386</entry>
434
435 <entry>9998</entry>
436
437 <entry>4532403</entry>
438
439 <entry>4532403</entry>
440
441 <entry>99.9%</entry>
442 </row>
443 </tbody>
444 </tgroup>
445 </table><table>
446 <title>Results in termination mode</title>
447
448 <tgroup cols="4">
449 <tbody>
450 <row>
451 <entry align="center"><emphasis
452 role="bold">Bytes</emphasis></entry>
453
454 <entry align="center"><emphasis role="bold">pktgen pps
455 TX</emphasis></entry>
456
457 <entry align="center"><emphasis role="bold">testpmd pps
458 RX</emphasis></entry>
459
460 <entry align="center"><emphasis role="bold">throughput
461 (%)</emphasis></entry>
462 </row>
463
464 <row>
465 <entry role="bold"><emphasis
466 role="bold">64</emphasis></entry>
467
468 <entry>14890993</entry>
469
470 <entry>7330403</entry>
471
472                    <entry>49.2%</entry>
473 </row>
474
475 <row>
476 <entry><emphasis role="bold">128</emphasis></entry>
477
478 <entry>8435104</entry>
479
480 <entry>7330379</entry>
481
482                    <entry>86.9%</entry>
483 </row>
484
485 <row>
486 <entry role="bold"><emphasis
487 role="bold">256</emphasis></entry>
488
489 <entry>4532484</entry>
490
491 <entry>4532407</entry>
492
493                    <entry>99.9%</entry>
494 </row>
495 </tbody>
496 </tgroup>
497 </table></para>
498 </section>
499 </section>
500
501 <section id="usecase-three-four">
502 <title>Forward traffic from Docker to another Docker on the same
503 host</title>
504
505        <para>Benchmark a combo test using testpmd running in two Docker
506        instances: the first forwards traffic to the second, which
507        terminates it.</para>
508
509        <para>Packets are generated with pktgen and transmitted to the first
510        testpmd, which receives and forwards them to the second testpmd, which
511        receives and terminates them.</para>
512
513 <para>Measurements are made in:</para>
514
515 <itemizedlist>
516 <listitem>
517 <para>pktgen TX in pps and Mbits/s</para>
518 </listitem>
519
520 <listitem>
521 <para>testpmd TX and RX pps in Docker1</para>
522 </listitem>
523
524 <listitem>
525 <para>testpmd RX pps in Docker2</para>
526 </listitem>
527 </itemizedlist>
528
529        <para>Throughput is computed as a percentage by dividing Docker2 <emphasis
530        role="bold">testpmd RX pps</emphasis> by <emphasis role="bold">pktgen
531        TX pps</emphasis>, as in the example below.</para>
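        <para>Since the results table for this use case does not include a
        throughput column, it can be derived from the recorded rates; the
        64-byte row of the results below gives, for example:</para>

        <programlisting># Docker2 testpmd RX pps / pktgen TX pps, as a percentage
echo "scale=2; 3457326 * 100 / 14844628" | bc
# 23.29</programlisting>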
532
533 <section id="target-one-usecase-three">
534 <title>Test Setup for Target 1</title>
535
536 <para>Start by following the steps below:</para>
537
538 <para>SSD boot using the following <literal>grub.cfg</literal>
539 entry:</para>
540
541 <programlisting>linux (hd0,gpt3)/boot/bzImage root=/dev/sda3 ip=dhcp nohz_full=1-7 /
542isolcpus=1-7 rcu-nocbs=1-7 rcu_nocb_poll intel_pstate=disable /
543clocksource=tsc tsc=reliable nohpet nosoftlockup intel_idle.max_cstate=0 /
544processor.max_cstate=0 mce=ignore_ce audit=0 nmi_watchdog=0 iommu=pt /
545intel_iommu=on hugepagesz=1GB hugepages=8 default_hugepagesz=1GB /
546hugepagesz=2M hugepages=2048 vfio_iommu_type1.allow_unsafe_interrupts=1</programlisting>
547
548 <para>Configure DPDK:<programlisting>mkdir -p /mnt/huge
549mount -t hugetlbfs nodev /mnt/huge
550modprobe igb_uio
551dpdk-devbind --bind=igb_uio 0000:03:00.0</programlisting>Run
552 pktgen:<programlisting>cd /usr/share/apps/pktgen/
553./pktgen -c 0xF -n 1 -- -P -m "[3:2].0"</programlisting>Choose one of the
554 values from [64, 128, 256, 512] to change the packet
555 size:<programlisting>set 0 size &lt;number&gt;</programlisting></para>
556 </section>
557
558 <section id="target-two-usecase-four">
559 <title>Test Setup for Target 2</title>
560
561 <para>Start by following the steps below:</para>
562
563 <para>SSD boot using the following <literal>grub.cfg</literal>
564 entry:</para>
565
566 <programlisting>linux (hd0,gpt3)/boot/bzImage root=/dev/sda3 ip=dhcp nohz_full=1-7 /
567isolcpus=1-7 rcu-nocbs=1-7 rcu_nocb_poll intel_pstate=disable /
568clocksource=tsc tsc=reliable nohpet nosoftlockup intel_idle.max_cstate=0 /
569processor.max_cstate=0 mce=ignore_ce audit=0 nmi_watchdog=0 /
570iommu=pt intel_iommu=on hugepagesz=1GB hugepages=8 default_hugepagesz=1GB /
571hugepagesz=2M hugepages=2048 vfio_iommu_type1.allow_unsafe_interrupts=1</programlisting>
572
573 <para><programlisting>killall ovsdb-server ovs-vswitchd
574rm -rf /etc/openvswitch/*
575mkdir -p /var/run/openvswitch</programlisting>Configure DPDK:<programlisting>mkdir -p /mnt/huge
576mount -t hugetlbfs nodev /mnt/huge
577modprobe igb_uio
578dpdk-devbind --bind=igb_uio 0000:03:00.0</programlisting>Configure the OVS
579 bridge:<programlisting>export DB_SOCK=/var/run/openvswitch/db.sock
580ovsdb-tool create /etc/openvswitch/conf.db /
581/usr/share/openvswitch/vswitch.ovsschema
582ovsdb-server --remote=punix:$DB_SOCK /
583--remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach
584ovs-vsctl --no-wait init
585ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x10
586ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0xcc
587ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem=2048
588ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
589ovs-vswitchd unix:$DB_SOCK --pidfile --detach /
590--log-file=/var/log/openvswitch/ovs-vswitchd.log
591ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
592ovs-vsctl add-port ovsbr0 vhost-user1 -- set Interface /
593vhost-user1 type=dpdkvhostuser ofport_request=1
594ovs-vsctl add-port ovsbr0 vhost-user2 -- set Interface /
595vhost-user2 type=dpdkvhostuser ofport_request=2
596ovs-vsctl add-port ovsbr0 dpdk0 -- set Interface dpdk0 /
597type=dpdk options:dpdk-devargs=0000:03:00.0 ofport_request=3
598ovs-ofctl del-flows ovsbr0
599ovs-ofctl add-flow ovsbr0 in_port=3,action=output:2
600ovs-ofctl add-flow ovsbr0 in_port=2,action=output:1</programlisting>Import a
601 Docker container:<programlisting>docker import enea-image-virtualization-guest-qemux86-64.tar.gz el7_guest</programlisting>Start
602 the first Docker:<programlisting>docker run -it --rm --cpuset-cpus=4,5 /
603-v /var/run/openvswitch/:/var/run/openvswitch/ /
604-v /mnt/huge:/mnt/huge el7_guest /bin/bash</programlisting>Start the testpmd
605 application in Docker1:<programlisting>testpmd -c 0x30 -n 2 --file-prefix prog1 --socket-mem 512 --no-pci /
606--vdev=virtio_user0,path=/var/run/openvswitch/vhost-user1 /
607-d /usr/lib/librte_pmd_virtio.so.1.1 -- --burst 64 --disable-hw-vlan /
608--disable-rss -i --portmask=0x1 --coremask=0x20 --nb-cores=1 /
609--rxq=1 --txq=1 --txd=512 --rxd=512 --txqflags=0xf00 --port-topology=chained</programlisting>Configure
610 it in termination mode:<programlisting>set fwd rxonly</programlisting>Run
611 the testpmd application:<programlisting>start</programlisting>Open a
612 new console to the host and start the second Docker
613 instance:<programlisting>docker run -it --rm --cpuset-cpus=0,1 -v /var/run/openvswitch/:/var/run/openvswitch/ /
614-v /mnt/huge:/mnt/huge el7_guest /bin/bash</programlisting>In the second
615 container start testpmd:<programlisting>testpmd -c 0x0F --file-prefix prog2 --socket-mem 512 --no-pci /
616--vdev=virtio_user0,path=/var/run/openvswitch/vhost-user2 /
617-d /usr/lib/librte_pmd_virtio.so.1.1 -- -i --disable-hw-vlan</programlisting>Alternatively,
618          run testpmd in the second Docker with the full set of options:<programlisting>testpmd -c 0x3 -n 2 --file-prefix prog2 --socket-mem 512 --no-pci /
619--vdev=virtio_user0,path=/var/run/openvswitch/vhost-user2 /
620-d /usr/lib/librte_pmd_virtio.so.1.1 -- --burst 64 --disable-hw-vlan /
621--disable-rss -i --portmask=0x1 --coremask=0x2 --nb-cores=1 --rxq=1 /
622--txq=1 --txd=512 --rxd=512 --txqflags=0xf00 --port-topology=chained</programlisting>In
623 the testpmd shell, run:<programlisting>start</programlisting>Start
624 pktgen traffic by running the following command in pktgen
625          CLI:<programlisting>start 0</programlisting>To record traffic
626          results, run the following in each testpmd
627          instance:<programlisting>show port stats 0</programlisting></para>
628
629 <table>
630 <title>Results</title>
631
632 <tgroup cols="5">
633 <tbody>
634 <row>
635 <entry align="center"><emphasis
636 role="bold">Bytes</emphasis></entry>
637
638 <entry align="center"><emphasis role="bold">Target 1 -
639 pktgen pps TX</emphasis></entry>
640
641 <entry align="center"><emphasis role="bold">Target 2 -
642 (forwarding) testpmd pps RX</emphasis></entry>
643
644 <entry align="center"><emphasis role="bold">Target 2 -
645 (forwarding) testpmd pps TX</emphasis></entry>
646
647 <entry align="center"><emphasis role="bold">Target 2 -
648 (termination) testpmd pps RX</emphasis></entry>
649 </row>
650
651 <row>
652 <entry role="bold"><emphasis
653 role="bold">64</emphasis></entry>
654
655 <entry>14844628</entry>
656
657 <entry>5643565</entry>
658
659 <entry>3459922</entry>
660
661 <entry>3457326</entry>
662 </row>
663
664 <row>
665 <entry><emphasis role="bold">128</emphasis></entry>
666
667 <entry>8496962</entry>
668
669 <entry>5667860</entry>
670
671 <entry>3436811</entry>
672
673 <entry>3438918</entry>
674 </row>
675
676 <row>
677 <entry role="bold"><emphasis
678 role="bold">256</emphasis></entry>
679
680 <entry>4532372</entry>
681
682 <entry>4532362</entry>
683
684 <entry>3456623</entry>
685
686 <entry>3457115</entry>
687 </row>
688
689 <row>
690 <entry><emphasis role="bold">512</emphasis></entry>
691
692 <entry>2367641</entry>
693
694 <entry>2349450</entry>
695
696 <entry>2349450</entry>
697
698 <entry>2349446</entry>
699 </row>
700 </tbody>
701 </tgroup>
702 </table>
703 </section>
704 </section>
705
706 <section id="pxe-config-docker">
707      <title>SR-IOV in Docker</title>
708
709 <para>PCI passthrough tests using pktgen and testpmd in Docker.</para>
710
711 <para>pktgen[DPDK]Docker - PHY - Docker[DPDK] testpmd</para>
712
713 <para>Measurements:</para>
714
715 <itemizedlist>
716 <listitem>
717 <para>RX packets per second in testpmd (with testpmd configured in
718 rxonly mode).</para>
719 </listitem>
720 </itemizedlist>
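      <para>The test relies on two virtual functions (VFs) created from the
      physical port. Once the test setup below has created and bound them,
      the VFs can be inspected with standard tools (an optional sanity
      check):</para>

      <programlisting># the VFs appear as additional PCI devices
lspci | grep -i "virtual function"
# and should be listed as bound to vfio-pci
dpdk-devbind.py --status</programlisting>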
721
722 <section id="target-setup">
723 <title>Test Setup</title>
724
725 <para>Boot Enea NFV Access from SSD:<programlisting>linux (hd0,gpt3)/boot/bzImage root=/dev/sda3 ip=dhcp nohz_full=1-7 /
726isolcpus=1-7 rcu-nocbs=1-7 rcu_nocb_poll intel_pstate=disable clocksource=tsc /
727tsc=reliable nohpet nosoftlockup intel_idle.max_cstate=0 processor.max_cstate=0 /
728mce=ignore_ce audit=0 nmi_watchdog=0 iommu=pt intel_iommu=on hugepagesz=1GB /
729hugepages=8 default_hugepagesz=1GB hugepagesz=2M hugepages=2048 /
730vfio_iommu_type1.allow_unsafe_interrupts=1</programlisting>Allow unsafe
731 interrupts:<programlisting>echo 1 &gt; /sys/module/vfio_iommu_type1/parameters/allow_unsafe_interrupts</programlisting>Configure
732 DPDK:<programlisting>mkdir -p /mnt/huge
733mount -t hugetlbfs nodev /mnt/huge
734dpdk-devbind.py --bind=ixgbe 0000:03:00.0
735ifconfig eno3 192.168.1.2
736echo 2 &gt; /sys/class/net/eno3/device/sriov_numvfs
737modprobe vfio-pci
738dpdk-devbind.py --bind=vfio-pci 0000:03:10.0
739dpdk-devbind.py --bind=vfio-pci 0000:03:10.2</programlisting>Start two docker
740 containers:<programlisting>docker run --privileged -it --rm -v /mnt/huge:/mnt/huge/ /
741--device /dev/vfio/vfio el7_guest /bin/bash
742docker run --privileged -it --rm -v /mnt/huge:/mnt/huge/ /
743--device /dev/vfio/vfio el7_guest /bin/bash</programlisting>In the first
744 container start pktgen:<programlisting>cd /usr/share/apps/pktgen/
745./pktgen -c 0x1f -w 0000:03:10.0 -n 1 --file-prefix pg1 /
746--socket-mem 1024 -- -P -m "[3:4].0"</programlisting>In the pktgen prompt set
747 the destination MAC address:<programlisting>set mac 0 XX:XX:XX:XX:XX:XX
748str</programlisting>In the second container start testpmd:<programlisting>testpmd -c 0x7 -n 1 -w 0000:03:10.2 -- -i --portmask=0x1 /
749--txd=256 --rxd=256 --port-topology=chained</programlisting>In the testpmd
750          prompt, set the forwarding mode to <emphasis role="bold">rxonly</emphasis>
751          and start:<programlisting>set fwd rxonly
752start</programlisting><table>
753 <title>Results</title>
754
755 <tgroup cols="5">
756 <tbody>
757 <row>
758 <entry align="center"><emphasis
759 role="bold">Bytes</emphasis></entry>
760
761 <entry align="center"><emphasis role="bold">pktgen pps
762 TX</emphasis></entry>
763
764 <entry align="center"><emphasis role="bold">testpmd pps
765 RX</emphasis></entry>
766
767 <entry align="center"><emphasis role="bold">pktgen MBits/s
768 TX</emphasis></entry>
769
770 <entry align="center"><emphasis role="bold">throughput
771 (%)</emphasis></entry>
772 </row>
773
774 <row>
775 <entry role="bold"><emphasis
776 role="bold">64</emphasis></entry>
777
778 <entry>14525286</entry>
779
780 <entry>14190869</entry>
781
782 <entry>9739</entry>
783
784 <entry>97.7</entry>
785 </row>
786
787 <row>
788 <entry><emphasis role="bold">128</emphasis></entry>
789
790 <entry>8456960</entry>
791
792 <entry>8412172</entry>
793
794 <entry>10013</entry>
795
796 <entry>99.4</entry>
797 </row>
798
799 <row>
800 <entry role="bold"><emphasis
801 role="bold">256</emphasis></entry>
802
803 <entry>4566624</entry>
804
805 <entry>4526587</entry>
806
807 <entry>10083</entry>
808
809 <entry>99.1</entry>
810 </row>
811
812 <row>
813 <entry><emphasis role="bold">512</emphasis></entry>
814
815 <entry>2363744</entry>
816
817 <entry>2348015</entry>
818
819 <entry>10060</entry>
820
821 <entry>99.3</entry>
822 </row>
823 </tbody>
824 </tgroup>
825 </table></para>
826 </section>
827 </section>
828 </section>
829
830 <section id="vm-benchmarks">
831 <title>VM related benchmarks</title>
832
833 <section id="usecase-four">
834 <title>Forward/termination traffic in one VM</title>
835
836 <para>Benchmarking traffic (UDP) forwarding and termination using
837 testpmd in a virtual machine.</para>
838
839 <para>The Pktgen application is used to generate traffic that will
840        reach testpmd running on a virtual machine, and be forwarded back to
841        the source on the return trip. With the same setup, a second measurement
842 will be done with traffic termination in the virtual machine.</para>
843
844 <para>This test case measures:</para>
845
846 <itemizedlist>
847 <listitem>
848 <para>pktgen TX, RX in packets per second (pps) and Mbps</para>
849 </listitem>
850
851 <listitem>
852 <para>testpmd TX, RX in packets per second (pps)</para>
853 </listitem>
854
855 <listitem>
856 <para>divide <emphasis role="bold">testpmd RX</emphasis> by
857            <emphasis role="bold">pktgen TX</emphasis> in pps to obtain the
858            throughput as a percentage (%)</para>
859 </listitem>
860 </itemizedlist>
861
862 <section id="targetone-usecasefour">
863 <title>Test Setup for Target 1</title>
864
865 <para>Start with the steps below:</para>
866
867 <para>SSD boot using the following <literal>grub.cfg
868 </literal>entry: <programlisting>linux (hd0,gpt3)/boot/bzImage root=/dev/sda3 ip=dhcp nohz_full=1-7 /
869isolcpus=1-7 rcu-nocbs=1-7 rcu_nocb_poll intel_pstate=disable /
870clocksource=tsc tsc=reliable nohpet nosoftlockup intel_idle.max_cstate=0 /
871processor.max_cstate=0 mce=ignore_ce audit=0 nmi_watchdog=0 iommu=pt /
872intel_iommu=on hugepagesz=1GB hugepages=8 default_hugepagesz=1GB /
873hugepagesz=2M hugepages=2048 vfio_iommu_type1.allow_unsafe_interrupts=1</programlisting></para>
874
875 <para>Kill unnecessary services: <programlisting>killall ovsdb-server ovs-vswitchd
876rm -rf /etc/openvswitch/*
877mkdir -p /var/run/openvswitch</programlisting>Configure DPDK:<programlisting>mkdir -p /mnt/huge
878mount -t hugetlbfs nodev /mnt/huge
879modprobe igb_uio
880dpdk-devbind --bind=igb_uio 0000:03:00.0</programlisting>Run
881 pktgen:<programlisting>cd /usr/share/apps/pktgen/
882./pktgen -c 0x7 -n 4 --proc-type auto --socket-mem 256 /
883-w 0000:03:00.0 -- -P -m "[1:2].0"</programlisting>Set the pktgen frame size,
884          choosing from [64, 128, 256, 512]:<programlisting>set 0 size 64</programlisting></para>
885 </section>
886
887 <section id="targettwo-usecasefive">
888 <title>Test Setup for Target 2</title>
889
890 <para>Start by following the steps below:</para>
891
892 <para>SSD boot using the following <literal>grub.cfg</literal>
893 entry: <programlisting>linux (hd0,gpt3)/boot/bzImage root=/dev/sda3 ip=dhcp nohz_full=1-7 /
894isolcpus=1-7 rcu-nocbs=1-7 rcu_nocb_poll intel_pstate=disable /
895clocksource=tsc tsc=reliable nohpet nosoftlockup intel_idle.max_cstate=0 /
896processor.max_cstate=0 mce=ignore_ce audit=0 nmi_watchdog=0 iommu=pt /
897intel_iommu=on hugepagesz=1GB hugepages=8 default_hugepagesz=1GB /
898hugepagesz=2M hugepages=2048 vfio_iommu_type1.allow_unsafe_interrupts=1</programlisting>Kill
899 unnecessary services: <programlisting>killall ovsdb-server ovs-vswitchd
900rm -rf /etc/openvswitch/*
901mkdir -p /var/run/openvswitch</programlisting>Configure DPDK:<programlisting>mkdir -p /mnt/huge
902mount -t hugetlbfs nodev /mnt/huge
903modprobe igb_uio
904dpdk-devbind --bind=igb_uio 0000:03:00.0</programlisting>Configure
905 OVS:<programlisting>export DB_SOCK=/var/run/openvswitch/db.sock
906ovsdb-tool create /etc/openvswitch/conf.db /
907/usr/share/openvswitch/vswitch.ovsschema
908ovsdb-server --remote=punix:$DB_SOCK /
909--remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach
910ovs-vsctl --no-wait init
911ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x10
912ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0xc
913ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem=2048
914ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
915ovs-vswitchd unix:$DB_SOCK --pidfile --detach /
916--log-file=/var/log/openvswitch/ovs-vswitchd.log
917
918ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
919ovs-vsctl add-port ovsbr0 vhost-user1 /
920-- set Interface vhost-user1 type=dpdkvhostuser -- set Interface /
921vhost-user1 ofport_request=2
922ovs-vsctl add-port ovsbr0 dpdk0 -- set Interface dpdk0 /
923type=dpdk options:dpdk-devargs=0000:03:00.0 /
924-- set Interface dpdk0 ofport_request=1
925chmod 777 /var/run/openvswitch/vhost-user1
926
927ovs-ofctl del-flows ovsbr0
928ovs-ofctl add-flow ovsbr0 in_port=1,action=output:2
929ovs-ofctl add-flow ovsbr0 in_port=2,action=output:1</programlisting>Launch
930 QEMU:<programlisting>taskset -c 0,1 qemu-system-x86_64 -cpu host,+invtsc,migratable=no /
931-M q35 -smp cores=2,sockets=1 -vcpu 0,affinity=0 -vcpu 1,affinity=1 /
932-enable-kvm -nographic -realtime mlock=on -kernel /mnt/qemu/bzImage /
933-drive file=/mnt/qemu/enea-image-virtualization-guest-qemux86-64.ext4,/
934if=virtio,format=raw -m 4096 -object memory-backend-file,id=mem,/
935size=4096M,mem-path=/mnt/huge,share=on -numa node,memdev=mem /
936-mem-prealloc -chardev socket,id=char0,path=/var/run/openvswitch/vhost-user1 /
937-netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce /
938-device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,/
939mrg_rxbuf=on,rx_queue_size=1024,csum=off,gso=off,guest_tso4=off,/
940guest_tso6=off,guest_ecn=off -append 'root=/dev/vda console=ttyS0 /
941hugepagesz=2M hugepages=1024 isolcpus=1 nohz_full=1 rcu_nocbs=1 /
942irqaffinity=0 rcu_nocb_poll intel_pstate=disable intel_idle.max_cstate=0 /
943processor.max_cstate=0 mce=ignore_ce audit=0'</programlisting>Inside QEMU,
944 configure DPDK: <programlisting>mkdir -p /mnt/huge
945mount -t hugetlbfs nodev /mnt/huge
946modprobe igb_uio
947dpdk-devbind --bind=igb_uio 0000:00:02.0</programlisting>Inside QEMU, run
948 testpmd: <programlisting>testpmd -c 0x3 -n 2 -d librte_pmd_virtio.so.1.1 /
949-- --burst 64 --disable-hw-vlan --disable-rss -i --portmask=0x1 /
950--coremask=0x2 --nb-cores=1 --rxq=1 --txq=1 --txd=512 --rxd=512 /
951--txqflags=0xf00 --port-topology=chained</programlisting>For the <emphasis
952 role="bold">Forwarding test</emphasis>, start testpmd
953 directly:<programlisting>start</programlisting>For the <emphasis
954 role="bold">Termination test</emphasis>, set testpmd to only
955 receive, then start it:<programlisting>set fwd rxonly
956start</programlisting>On target 1, you may start pktgen traffic
957 now:<programlisting>start 0</programlisting>On target 2, use this
958 command to refresh the testpmd display and note the highest
959 values:<programlisting>show port stats 0</programlisting>To stop
960 traffic from pktgen, in order to choose a different frame
961 size:<programlisting>stop 0</programlisting>To clear numbers in
962 testpmd:<programlisting>clear port stats
963show port stats 0</programlisting><table>
964 <title>Results in forwarding mode</title>
965
966 <tgroup cols="8">
967 <tbody>
968 <row>
969 <entry align="center"><emphasis
970 role="bold">Bytes</emphasis></entry>
971
972 <entry align="center"><emphasis role="bold">pktgen pps
973 RX</emphasis></entry>
974
975 <entry align="center"><emphasis role="bold">pktgen pps
976 TX</emphasis></entry>
977
978 <entry align="center"><emphasis role="bold">testpmd pps
979 RX</emphasis></entry>
980
981 <entry align="center"><emphasis role="bold">testpmd pps
982 TX</emphasis></entry>
983
984 <entry align="center"><emphasis role="bold">pktgen MBits/s
985 RX</emphasis></entry>
986
987 <entry align="center"><emphasis role="bold">pktgen MBits/s
988 TX</emphasis></entry>
989
990 <entry align="center"><emphasis role="bold">throughput
991 (%)</emphasis></entry>
992 </row>
993
994 <row>
995 <entry role="bold"><emphasis
996 role="bold">64</emphasis></entry>
997
998 <entry>7755769</entry>
999
1000 <entry>14858714</entry>
1001
1002 <entry>7755447</entry>
1003
1004 <entry>7755447</entry>
1005
1006 <entry>5207</entry>
1007
1008 <entry>9984</entry>
1009
1010 <entry>52.2</entry>
1011 </row>
1012
1013 <row>
1014 <entry><emphasis role="bold">128</emphasis></entry>
1015
1016 <entry>7714626</entry>
1017
1018 <entry>8435184</entry>
1019
1020 <entry>7520349</entry>
1021
1022 <entry>6932520</entry>
1023
1024 <entry>8204</entry>
1025
1026 <entry>9986</entry>
1027
1028 <entry>82.1</entry>
1029 </row>
1030
1031 <row>
1032 <entry role="bold"><emphasis
1033 role="bold">256</emphasis></entry>
1034
1035 <entry>4528847</entry>
1036
1037 <entry>4528854</entry>
1038
1039 <entry>4529030</entry>
1040
1041 <entry>4529034</entry>
1042
1043 <entry>9999</entry>
1044
1045 <entry>9999</entry>
1046
1047 <entry>99.9</entry>
1048 </row>
1049 </tbody>
1050 </tgroup>
1051 </table><table>
1052 <title>Results in termination mode</title>
1053
1054 <tgroup cols="5">
1055 <tbody>
1056 <row>
1057 <entry align="center"><emphasis
1058 role="bold">Bytes</emphasis></entry>
1059
1060 <entry align="center"><emphasis role="bold">pktgen pps
1061 TX</emphasis></entry>
1062
1063 <entry align="center"><emphasis role="bold">testpmd pps
1064 RX</emphasis></entry>
1065
1066 <entry align="center"><emphasis role="bold">pktgen MBits/s
1067 TX</emphasis></entry>
1068
1069 <entry align="center"><emphasis role="bold">throughput
1070 (%)</emphasis></entry>
1071 </row>
1072
1073 <row>
1074 <entry role="bold"><emphasis
1075 role="bold">64</emphasis></entry>
1076
1077 <entry>15138992</entry>
1078
1079 <entry>7290663</entry>
1080
1081 <entry>10063</entry>
1082
1083 <entry>48.2</entry>
1084 </row>
1085
1086 <row>
1087 <entry><emphasis role="bold">128</emphasis></entry>
1088
1089 <entry>8426825</entry>
1090
1091 <entry>6902646</entry>
1092
1093 <entry>9977</entry>
1094
1095 <entry>81.9</entry>
1096 </row>
1097
1098 <row>
1099 <entry role="bold"><emphasis
1100 role="bold">256</emphasis></entry>
1101
1102 <entry>4528957</entry>
1103
1104 <entry>4528912</entry>
1105
1106 <entry>9999</entry>
1107
1108 <entry>100</entry>
1109 </row>
1110 </tbody>
1111 </tgroup>
1112 </table></para>
1113 </section>
1114 </section>
1115
1116 <section id="usecase-six">
1117 <title>Forward traffic between two VMs</title>
1118
1119      <para>Benchmark a combo test using two virtual machines: the first
1120      forwards traffic to the second, which terminates it.</para>
1121
1122 <para>Measurements are made in:</para>
1123
1124 <itemizedlist>
1125 <listitem>
1126 <para>pktgen TX in pps and Mbits/s</para>
1127 </listitem>
1128
1129 <listitem>
1130 <para>testpmd TX and RX pps in VM1</para>
1131 </listitem>
1132
1133 <listitem>
1134 <para>testpmd RX pps in VM2</para>
1135 </listitem>
1136
1137 <listitem>
1138          <para>throughput as a percentage, by dividing <emphasis role="bold">
1139          VM2 testpmd RX pps</emphasis> by <emphasis role="bold">pktgen TX
1140          pps</emphasis>; see the worked check after this list</para>
1141 </listitem>
1142 </itemizedlist>
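      <para>As a quick cross-check against the forwarding results at the end
      of this use case, the 128-byte row gives:</para>

      <programlisting># VM2 testpmd RX pps / pktgen TX pps, as a percentage
echo "scale=2; 5384530 * 100 / 8426683" | bc
# 63.89, i.e. the 63.9% shown in the throughput column</programlisting>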
1143
1144 <section id="targetone-usecase-five">
1145 <title>Test Setup for Target 1</title>
1146
1147 <para>Start by doing the following:</para>
1148
1149 <para>SSD boot using the following <literal>grub.cfg</literal>
1150 entry: <programlisting>linux (hd0,gpt3)/boot/bzImage root=/dev/sda3 ip=dhcp nohz_full=1-7 /
1151isolcpus=1-7 rcu-nocbs=1-7 rcu_nocb_poll intel_pstate=disable /
1152clocksource=tsc tsc=reliable nohpet nosoftlockup intel_idle.max_cstate=0 /
1153processor.max_cstate=0 mce=ignore_ce audit=0 nmi_watchdog=0 iommu=pt /
1154intel_iommu=on hugepagesz=1GB hugepages=8 default_hugepagesz=1GB /
1155hugepagesz=2M hugepages=2048 vfio_iommu_type1.allow_unsafe_interrupts=1</programlisting>Kill
1156 Services:<programlisting>killall ovsdb-server ovs-vswitchd
1157rm -rf /etc/openvswitch/*
1158mkdir -p /var/run/openvswitch</programlisting>Configure DPDK:<programlisting>mkdir -p /mnt/huge
1159mount -t hugetlbfs nodev /mnt/huge
1160modprobe igb_uio
1161dpdk-devbind --bind=igb_uio 0000:03:00.0</programlisting>Run
1162 pktgen:<programlisting>cd /usr/share/apps/pktgen/
1163./pktgen -c 0x7 -n 4 --proc-type auto --socket-mem 256 /
1164-w 0000:03:00.0 -- -P -m "[1:2].0"</programlisting>Set the pktgen frame size,
1165          choosing from [64, 128, 256, 512]:<programlisting>set 0 size 64</programlisting></para>
1166 </section>
1167
1168 <section id="targettwo-usecase-six">
1169 <title>Test Setup for Target 2</title>
1170
1171 <para>Start by doing the following:</para>
1172
1173 <para>SSD boot using the following <literal>grub.cfg</literal>
1174 entry: <programlisting>linux (hd0,gpt3)/boot/bzImage root=/dev/sda3 ip=dhcp nohz_full=1-7 /
1175isolcpus=1-7 rcu-nocbs=1-7 rcu_nocb_poll intel_pstate=disable /
1176clocksource=tsc tsc=reliable nohpet nosoftlockup intel_idle.max_cstate=0 /
1177processor.max_cstate=0 mce=ignore_ce audit=0 nmi_watchdog=0 iommu=pt /
1178intel_iommu=on hugepagesz=1GB hugepages=8 default_hugepagesz=1GB /
1179hugepagesz=2M hugepages=2048 vfio_iommu_type1.allow_unsafe_interrupts=1</programlisting>Kill
1180 Services:<programlisting>killall ovsdb-server ovs-vswitchd
1181rm -rf /etc/openvswitch/*
1182mkdir -p /var/run/openvswitch</programlisting>Configure DPDK:<programlisting>mkdir -p /mnt/huge
1183mount -t hugetlbfs nodev /mnt/huge
1184modprobe igb_uio
1185dpdk-devbind --bind=igb_uio 0000:03:00.0</programlisting>Configure
1186 OVS:<programlisting>export DB_SOCK=/var/run/openvswitch/db.sock
1187ovsdb-tool create /etc/openvswitch/conf.db /
1188/usr/share/openvswitch/vswitch.ovsschema
1189ovsdb-server --remote=punix:$DB_SOCK /
1190--remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach
1191ovs-vsctl --no-wait init
1192ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x10
1193ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0xc
1194ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem=2048
1195ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
1196ovs-vswitchd unix:$DB_SOCK --pidfile /
1197--detach --log-file=/var/log/openvswitch/ovs-vswitchd.log
1198
1199
1200ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
1201ovs-vsctl add-port ovsbr0 dpdk0 /
1202-- set Interface dpdk0 type=dpdk options:dpdk-devargs=0000:03:00.0 ofport_request=1
1203ovs-vsctl add-port ovsbr0 vhost-user1 /
1204-- set Interface vhost-user1 type=dpdkvhostuser ofport_request=2
1205ovs-vsctl add-port ovsbr0 vhost-user2 /
1206-- set Interface vhost-user2 type=dpdkvhostuser ofport_request=3
1207
1208
1209ovs-ofctl del-flows ovsbr0
1210ovs-ofctl add-flow ovsbr0 in_port=1,action=output:2
1211ovs-ofctl add-flow ovsbr0 in_port=2,action=output:3</programlisting>Launch
1212 first QEMU instance, VM1:<programlisting>taskset -c 0,1 qemu-system-x86_64 -cpu host,+invtsc,migratable=no -M q35 /
1213-smp cores=2,sockets=1 -vcpu 0,affinity=0 -vcpu 1,affinity=1 -enable-kvm /
1214-nographic -realtime mlock=on -kernel /home/root/qemu/bzImage /
1215-drive file=/home/root/qemu/enea-image-virtualization-guest-qemux86-64.ext4,/
1216if=virtio,format=raw -m 2048 -object memory-backend-file,id=mem,/
1217size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem /
1218-mem-prealloc -chardev socket,id=char0,path=/var/run/openvswitch/vhost-user1 /
1219-netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce /
1220-device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,/
1221mrg_rxbuf=on,rx_queue_size=1024,csum=off,gso=off,guest_tso4=off,/
1222guest_tso6=off,guest_ecn=off -append 'root=/dev/vda console=ttyS0 /
1223hugepagesz=2M hugepages=512 isolcpus=1 nohz_full=1 rcu_nocbs=1 /
1224irqaffinity=0 rcu_nocb_poll intel_pstate=disable intel_idle.max_cstate=0 /
1225processor.max_cstate=0 mce=ignore_ce audit=0'</programlisting>Connect to
1226 Target 2 through a new SSH session and run a second QEMU instance
1227 (to get its own console, separate from instance VM1). We shall call
1228 this VM2:<programlisting>taskset -c 4,5 qemu-system-x86_64 -cpu host,+invtsc,migratable=no /
1229-M q35 -smp cores=2,sockets=1 -vcpu 0,affinity=4 -vcpu 1,affinity=5 /
1230-enable-kvm -nographic -realtime mlock=on -kernel /home/root/qemu2/bzImage /
1231-drive file=/home/root/qemu2/enea-image-virtualization-guest-qemux86-64.ext4,/
1232if=virtio,format=raw -m 2048 -object memory-backend-file,id=mem,size=2048M,/
1233mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc /
1234-chardev socket,id=char1,path=/var/run/openvswitch/vhost-user2 /
1235-netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce /
1236-device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet1,/
1237mrg_rxbuf=on,rx_queue_size=1024,csum=off,gso=off,guest_tso4=off,/
1238guest_tso6=off,guest_ecn=off -append 'root=/dev/vda console=ttyS0 /
1239hugepagesz=2M hugepages=512 isolcpus=1 nohz_full=1 rcu_nocbs=1 /
1240irqaffinity=0 rcu_nocb_poll intel_pstate=disable intel_idle.max_cstate=0 /
1241processor.max_cstate=0 mce=ignore_ce audit=0'</programlisting>Configure DPDK
1242 inside VM1:<programlisting>mkdir -p /mnt/huge
1243mount -t hugetlbfs nodev /mnt/huge
1244modprobe igb_uio
1245dpdk-devbind --bind=igb_uio 0000:00:02.0</programlisting>Run testpmd inside
1246 VM1:<programlisting>testpmd -c 0x3 -n 2 -d librte_pmd_virtio.so.1.1 /
1247-- --burst 64 --disable-hw-vlan --disable-rss -i /
1248--portmask=0x1 --coremask=0x2 --nb-cores=1 --rxq=1 /
1249--txq=1 --txd=512 --rxd=512 --txqflags=0xf00 --port-topology=chained</programlisting>Start
1250 testpmd inside VM1:<programlisting>start</programlisting>Configure
1251 DPDK inside VM2:<programlisting>mkdir -p /mnt/huge
1252mount -t hugetlbfs nodev /mnt/huge
1253modprobe igb_uio
1254dpdk-devbind --bind=igb_uio 0000:00:02.0</programlisting>Run testpmd inside
1255 VM2:<programlisting>testpmd -c 0x3 -n 2 -d librte_pmd_virtio.so.1.1 /
1256-- --burst 64 --disable-hw-vlan --disable-rss -i --portmask=0x1 /
1257--coremask=0x2 --nb-cores=1 --rxq=1 --txq=1 --txd=512 /
1258--rxd=512 --txqflags=0xf00 --port-topology=chained</programlisting>Set VM2 for
1259 termination and start testpmd:<programlisting>set fwd rxonly
1260start</programlisting>On target 1, start pktgen traffic:<programlisting>start 0</programlisting>Use
1261 this command to refresh testpmd display in VM1 and VM2 and note the
1262 highest values:<programlisting>show port stats 0</programlisting>To
1263 stop traffic from pktgen, in order to choose a different frame
1264 size:<programlisting>stop 0</programlisting>To clear numbers in
1265 testpmd:<programlisting>clear port stats
1266show port stats 0</programlisting>For VM1, we record the stats relevant for
1267 <emphasis role="bold">forwarding</emphasis>:</para>
1268
1269 <itemizedlist>
1270 <listitem>
1271 <para>RX, TX in pps</para>
1272 </listitem>
1273 </itemizedlist>
1274
1275        <para>Only the Rx-pps and Tx-pps numbers are important here; they change
1276 every time stats are displayed as long as there is traffic. Run the
1277 command a few times and pick the best (maximum) values seen.</para>
1278
1279 <para>For VM2, we record the stats relevant for <emphasis
1280 role="bold">termination</emphasis>:</para>
1281
1282 <itemizedlist>
1283 <listitem>
1284 <para>RX in pps (TX will be 0)</para>
1285 </listitem>
1286 </itemizedlist>
1287
1288        <para>For pktgen, we record only the TX side, because the flow is
1289        terminated and no RX traffic reaches pktgen:</para>
1290
1291 <itemizedlist>
1292 <listitem>
1293 <para>TX in pps and Mbit/s</para>
1294 </listitem>
1295 </itemizedlist>
1296
1297 <table>
1298 <title>Results in forwarding mode</title>
1299
1300 <tgroup cols="7">
1301 <tbody>
1302 <row>
1303 <entry align="center"><emphasis
1304 role="bold">Bytes</emphasis></entry>
1305
1306 <entry align="center"><emphasis role="bold">pktgen pps
1307 TX</emphasis></entry>
1308
1309 <entry align="center"><emphasis role="bold">VM1 testpmd pps
1310 RX</emphasis></entry>
1311
1312 <entry align="center"><emphasis role="bold">VM1 testpmd pps
1313 TX</emphasis></entry>
1314
1315 <entry align="center"><emphasis role="bold">VM2 testpmd pps
1316 RX</emphasis></entry>
1317
1318 <entry align="center"><emphasis role="bold">pktgen MBits/s
1319 TX</emphasis></entry>
1320
1321 <entry align="center"><emphasis role="bold">throughput
1322 (%)</emphasis></entry>
1323 </row>
1324
1325 <row>
1326 <entry role="bold"><emphasis
1327 role="bold">64</emphasis></entry>
1328
1329 <entry>14845113</entry>
1330
1331 <entry>6826540</entry>
1332
1333 <entry>5389680</entry>
1334
1335 <entry>5383577</entry>
1336
1337 <entry>9975</entry>
1338
1339 <entry>36.2</entry>
1340 </row>
1341
1342 <row>
1343 <entry><emphasis role="bold">128</emphasis></entry>
1344
1345 <entry>8426683</entry>
1346
1347 <entry>6825857</entry>
1348
1349 <entry>5386971</entry>
1350
1351 <entry>5384530</entry>
1352
1353 <entry>9976</entry>
1354
1355 <entry>63.9</entry>
1356 </row>
1357
1358 <row>
1359 <entry role="bold"><emphasis
1360 role="bold">256</emphasis></entry>
1361
1362 <entry>4528894</entry>
1363
1364 <entry>4507975</entry>
1365
1366 <entry>4507958</entry>
1367
1368 <entry>4532457</entry>
1369
1370 <entry>9999</entry>
1371
1372 <entry>100</entry>
1373 </row>
1374 </tbody>
1375 </tgroup>
1376 </table>
1377 </section>
1378 </section>
1379
1380 <section id="pxe-config-vm">
1381 <title>SR-IOV in Virtual Machines</title>
1382
1383 <para>PCI passthrough tests using pktgen and testpmd in virtual
1384 machines.</para>
1385
1386 <para>pktgen[DPDK]VM - PHY - VM[DPDK] testpmd.</para>
1387
1388 <para>Measurements:</para>
1389
1390 <itemizedlist>
1391 <listitem>
1392 <para>pktgen to testpmd in <emphasis
1393 role="bold">forwarding</emphasis> mode.</para>
1394 </listitem>
1395
1396 <listitem>
1397 <para>pktgen to testpmd in <emphasis
1398 role="bold">termination</emphasis> mode.</para>
1399 </listitem>
1400 </itemizedlist>
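      <para>As in the Docker SR-IOV case, one VF is passed through to each
      virtual machine. After the <literal>ip link set ... vf ... mac</literal>
      commands in the test setup below, the assignment can be verified from
      the host with the standard iproute2 tool:</para>

      <programlisting>ip link show eno3
# vf 0 and vf 1 should be listed with the configured MAC addresses</programlisting>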
1401
1402 <section id="test-setup-target-four">
1403 <title>Test Setup</title>
1404
1405 <para>SSD boot using the following <literal>grub.cfg</literal>
1406 entry: <programlisting>linux (hd0,gpt3)/boot/bzImage root=/dev/sda3 ip=dhcp nohz_full=1-7 /
1407isolcpus=1-7 rcu-nocbs=1-7 rcu_nocb_poll intel_pstate=disable /
1408clocksource=tsc tsc=reliable nohpet nosoftlockup intel_idle.max_cstate=0 /
1409processor.max_cstate=0 mce=ignore_ce audit=0 nmi_watchdog=0 iommu=pt /
1410intel_iommu=on hugepagesz=1GB hugepages=8 default_hugepagesz=1GB /
1411hugepagesz=2M hugepages=2048 vfio_iommu_type1.allow_unsafe_interrupts=1</programlisting>Stop
1412 other services and mount hugepages: <programlisting>systemctl stop openvswitch
1413mkdir -p /mnt/huge
1414mount -t hugetlbfs hugetlbfs /mnt/huge</programlisting>Configure SR-IOV
1415 interfaces:<programlisting>/usr/share/usertools/dpdk-devbind.py --bind=ixgbe 0000:03:00.0
1416echo 2 &gt; /sys/class/net/eno3/device/sriov_numvfs
1417ifconfig eno3 10.0.0.1
1418modprobe vfio_pci
1419/usr/share/usertools/dpdk-devbind.py --bind=vfio-pci 0000:03:10.0
1420/usr/share/usertools/dpdk-devbind.py --bind=vfio-pci 0000:03:10.2
1421ip link set eno3 vf 0 mac 0c:c4:7a:E5:0F:48
1422ip link set eno3 vf 1 mac 0c:c4:7a:BF:52:E7</programlisting>Launch two QEMU
1423 instances: <programlisting>taskset -c 4,5 qemu-system-x86_64 -cpu host,+invtsc,migratable=no -M /
1424q35 -smp cores=2,sockets=1 -vcpu 0,affinity=4 -vcpu 1,affinity=5 -enable-kvm /
1425-nographic -kernel /mnt/qemu/bzImage /
1426-drive file=/mnt/qemu/enea-image-virtualization-guest-qemux86-64.ext4,if=virtio,/
1427format=raw -m 4096 -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,/
1428share=on -numa node,memdev=mem -mem-prealloc -device vfio-pci,host=03:10.0 /
1429-append 'root=/dev/vda console=ttyS0 hugepagesz=2M hugepages=1024 /
1430isolcpus=1 nohz_full=1 rcu_nocbs=1 irqaffinity=0 rcu_nocb_poll /
1431intel_pstate=disable intel_idle.max_cstate=0 /
1432processor.max_cstate=0 mce=ignore_ce audit=0'
1433
1434
1435taskset -c 2,3 qemu-system-x86_64 -cpu host,+invtsc,migratable=no -M /
1436q35 -smp cores=2,sockets=1 -vcpu 0,affinity=2 -vcpu 1,affinity=3 -enable-kvm /
1437-nographic -kernel /mnt/qemu/bzImage /
1438-drive file=/mnt/qemu/enea-image2-virtualization-guest-qemux86-64.ext4,if=virtio,/
1439format=raw -m 4096 -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,/
1440share=on -numa node,memdev=mem -mem-prealloc -device vfio-pci,host=03:10.2 /
1441-append 'root=/dev/vda console=ttyS0 hugepagesz=2M hugepages=1024 /
1442isolcpus=1 nohz_full=1 rcu_nocbs=1 irqaffinity=0 rcu_nocb_poll /
1443intel_pstate=disable intel_idle.max_cstate=0 processor.max_cstate=0 /
1444mce=ignore_ce audit=0'</programlisting>In the first VM, mount hugepages and
1445 start pktgen:<programlisting>mkdir -p /mnt/huge &amp;&amp; \
1446mount -t hugetlbfs hugetlbfs /mnt/huge
1447modprobe igb_uio
1448/usr/share/usertools/dpdk-devbind.py --bind=igb_uio 0000:00:03.0
1449cd /usr/share/apps/pktgen
1450./pktgen -c 0x3 -- -P -m "1.0"</programlisting>In the pktgen console set the
1451          destination MAC address and start generating
1452          packets:<programlisting>set mac 0 0C:C4:7A:BF:52:E7
1453str</programlisting>In the second VM, mount hugepages and start
1454 testpmd:<programlisting>mkdir -p /mnt/huge &amp;&amp; \
1455mount -t hugetlbfs hugetlbfs /mnt/huge
1456modprobe igb_uio
1457/usr/share/usertools/dpdk-devbind.py --bind=igb_uio 0000:00:03.0
1458testpmd -c 0x3 -n 2 -- -i --txd=512 --rxd=512 --port-topology=chained /
1459--eth-peer=0,0c:c4:7a:e5:0f:48</programlisting>In order to enable <emphasis
1460 role="bold">forwarding</emphasis> mode, in the testpmd console,
1461 run:<programlisting>set fwd mac
1462start</programlisting>In order to enable <emphasis
1463 role="bold">termination</emphasis> mode, in the testpmd console,
1464 run:<programlisting>set fwd rxonly
1465start</programlisting><table>
1466 <title>Results in forwarding mode</title>
1467
1468 <tgroup cols="5">
1469 <tbody>
1470 <row>
1471 <entry align="center"><emphasis
1472 role="bold">Bytes</emphasis></entry>
1473
1474 <entry align="center"><emphasis role="bold">VM1 pktgen pps
1475 TX</emphasis></entry>
1476
1477 <entry align="center"><emphasis role="bold">VM1 pktgen pps
1478 RX</emphasis></entry>
1479
1480 <entry align="center"><emphasis role="bold">VM2 testpmd
1481 pps RX</emphasis></entry>
1482
1483                    <entry align="center"><emphasis role="bold">VM2 testpmd
1484                    pps TX</emphasis></entry>
1485 </row>
1486
1487 <row>
1488 <entry role="bold"><emphasis
1489 role="bold">64</emphasis></entry>
1490
1491 <entry>7105645</entry>
1492
1493 <entry>7103976</entry>
1494
1495 <entry>7101487</entry>
1496
1497 <entry>7101487</entry>
1498 </row>
1499
1500 <row>
1501 <entry><emphasis role="bold">128</emphasis></entry>
1502
1503 <entry>5722795</entry>
1504
1505 <entry>5722252</entry>
1506
1507 <entry>5704219</entry>
1508
1509 <entry>5704219</entry>
1510 </row>
1511
1512 <row>
1513 <entry role="bold"><emphasis
1514 role="bold">256</emphasis></entry>
1515
1516 <entry>3454075</entry>
1517
1518 <entry>3455144</entry>
1519
1520 <entry>3452020</entry>
1521
1522 <entry>3452020</entry>
1523 </row>
1524
1525 <row>
1526 <entry role="bold"><emphasis
1527 role="bold">512</emphasis></entry>
1528
1529 <entry>1847751</entry>
1530
1531 <entry>1847751</entry>
1532
1533 <entry>1847751</entry>
1534
1535 <entry>1847751</entry>
1536 </row>
1537
1538 <row>
1539 <entry role="bold"><emphasis
1540 role="bold">1024</emphasis></entry>
1541
1542 <entry>956214</entry>
1543
1544 <entry>956214</entry>
1545
1546 <entry>956214</entry>
1547
1548 <entry>956214</entry>
1549 </row>
1550
1551 <row>
1552 <entry role="bold"><emphasis
1553 role="bold">1500</emphasis></entry>
1554
1555 <entry>797174</entry>
1556
1557 <entry>797174</entry>
1558
1559 <entry>797174</entry>
1560
1561 <entry>797174</entry>
1562 </row>
1563 </tbody>
1564 </tgroup>
1565 </table><table>
1566 <title>Results in termination mode</title>
1567
1568 <tgroup cols="3">
1569 <tbody>
1570 <row>
1571 <entry align="center"><emphasis
1572 role="bold">Bytes</emphasis></entry>
1573
1574 <entry align="center"><emphasis role="bold">VM1 pktgen pps
1575 TX</emphasis></entry>
1576
1577 <entry align="center"><emphasis role="bold">VM2 testpmd
1578 RX</emphasis></entry>
1579 </row>
1580
1581 <row>
1582 <entry role="bold"><emphasis
1583 role="bold">64</emphasis></entry>
1584
1585 <entry>14204580</entry>
1586
1587 <entry>14205063</entry>
1588 </row>
1589
1590 <row>
1591 <entry><emphasis role="bold">128</emphasis></entry>
1592
1593 <entry>8424611</entry>
1594
1595 <entry>8424611</entry>
1596 </row>
1597
1598 <row>
1599 <entry role="bold"><emphasis
1600 role="bold">256</emphasis></entry>
1601
1602 <entry>4529024</entry>
1603
1604 <entry>4529024</entry>
1605 </row>
1606
1607 <row>
1608 <entry><emphasis role="bold">512</emphasis></entry>
1609
1610 <entry>2348640</entry>
1611
1612 <entry>2348640</entry>
1613 </row>
1614
1615 <row>
1616 <entry><emphasis role="bold">1024</emphasis></entry>
1617
1618 <entry>1197101</entry>
1619
1620 <entry>1197101</entry>
1621 </row>
1622
1623 <row>
1624 <entry><emphasis role="bold">1500</emphasis></entry>
1625
1626 <entry>822244</entry>
1627
1628 <entry>822244</entry>
1629 </row>
1630 </tbody>
1631 </tgroup>
1632 </table></para>
1633 </section>
1634 </section>
1635 </section>
1636 </section>
1637</chapter> \ No newline at end of file
diff --git a/doc/book-enea-nfv-access-guide/doc/book.xml b/doc/book-enea-nfv-access-guide/doc/book.xml
new file mode 100644
index 0000000..f23213c
--- /dev/null
+++ b/doc/book-enea-nfv-access-guide/doc/book.xml
@@ -0,0 +1,30 @@
1<?xml version="1.0" encoding="ISO-8859-1"?>
2<!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
3"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd" [
4]>
5<book id="book_enea_nfv_access_guide">
6 <title><trademark class="registered">Enea</trademark> NFV Access Guide</title>
7 <subtitle>Release Version
8 <xi:include href="eltf_params_updated.xml" xpointer="element(EneaLinux_REL_VER/1)"
9 xmlns:xi="http://www.w3.org/2001/XInclude" /></subtitle>
10 <xi:include href="../../s_docbuild/template/docsrc_common/bookinfo_userdoc.xml"
11 xmlns:xi="http://www.w3.org/2001/XInclude" />
12 <xi:include href="overview.xml"
13 xmlns:xi="http://www.w3.org/2001/XInclude" />
14 <xi:include href="getting_started.xml"
15 xmlns:xi="http://www.w3.org/2001/XInclude" />
16 <xi:include href="hypervisor_virtualization.xml"
17 xmlns:xi="http://www.w3.org/2001/XInclude" />
18 <xi:include href="container_virtualization.xml"
19 xmlns:xi="http://www.w3.org/2001/XInclude" />
20 <xi:include href="ovs.xml"
21 xmlns:xi="http://www.w3.org/2001/XInclude" />
22 <xi:include href="dpdk.xml"
23 xmlns:xi="http://www.w3.org/2001/XInclude" />
24 <xi:include href="benchmarks.xml"
25 xmlns:xi="http://www.w3.org/2001/XInclude" />
26 <xi:include href="using_nfv_access_sdks.xml"
27 xmlns:xi="http://www.w3.org/2001/XInclude" />
28 <xi:include href="../../s_docbuild/template/docsrc_common/contacting_enea_enea_linux.xml"
29 xmlns:xi="http://www.w3.org/2001/XInclude" />
30</book>
diff --git a/doc/book-enea-nfv-access-guide/doc/container_virtualization.xml b/doc/book-enea-nfv-access-guide/doc/container_virtualization.xml
new file mode 100644
index 0000000..58133ae
--- /dev/null
+++ b/doc/book-enea-nfv-access-guide/doc/container_virtualization.xml
@@ -0,0 +1,136 @@
1<?xml version="1.0" encoding="ISO-8859-1"?>
2<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
3"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
4<chapter id="container-virtualization">
5 <title>Container Virtualization</title>
6
7 <section id="docker">
8 <title>Docker</title>
9
10 <para>Docker is an open-source project that automates the deployment of
11 applications inside software containers, by providing an additional layer
12 of abstraction and automation of operating-system-level virtualization on
13 Linux.</para>
14
15 <para>The software container mechanism uses resource isolation features
16 inside the Linux kernel, such as cgroups and kernel namespaces to allow
17 multiple containers to run within a single Linux instance, avoiding the
18 overhead of starting and maintaining virtual machines.</para>
19
20 <para>Containers are lightweight and include everything needed to run
21 themselves: code, runtime, system tools, system libraries and settings.
22 The main advantage provided by containers is that the encapsulated
23 software is isolated from its surroundings. For example, differences
24 between development and staging environments can be kept separate in order
25 to reduce conflicts between teams running different software on the same
26 infrastructure.</para>
27
28 <para>For a better understanding of what Docker is and how it works, the
29 official documentation provided on the Docker website should be consulted:
30 <ulink
31 url="https://docs.docker.com/">https://docs.docker.com/</ulink>.</para>
32
33 <section id="launch-docker-container">
34 <title>Launching a Docker container</title>
35
36 <para>Docker provides a hello-world container which checks whether your
37 system is running the daemon correctly. This container can be launched
38 by simply running:</para>
39
40 <programlisting>docker run hello-world</programlisting>
41
42      <para>If your installation is working correctly, the following message
43      should be displayed:<programlisting>Hello from Docker!</programlisting></para>
44 </section>
45
46 <section id="run-enfv-guest-image">
47 <title>Run an Enea NFV Access guest image</title>
48
49      <para>Enea NFV Access guest images can run inside Docker just like any
50      other container. Before starting an Enea NFV Access guest
51      image, its root filesystem has to be imported into Docker:</para>
52
53 <programlisting>docker import enea-linux-virtualization-guest-qemux86-64.tar.gz el7guest</programlisting>
54
55 <para>To check that the Docker image has been imported successfully,
56 run:</para>
57
58 <programlisting>docker images</programlisting>
59
60 <para>Finally, start an Enea NFV Access container with
61 <literal>bash</literal> running as the shell, by running:</para>
62
63 <programlisting>docker run -it el7guest /bin/bash</programlisting>
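      <para>From another shell on the host, the running container can be
      verified with:</para>

      <programlisting>docker ps</programlisting>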
64 </section>
65
66 <section id="attach-ext-resources-docker-containers">
67 <title>Attach external resources to Docker containers</title>
68
69 <para>Any system resource present on the host machine can be attached or
70 accessed by a Docker container.</para>
71
72      <para>Typically, if a file or folder on the host machine needs to be
73      attached to a container, that container should be launched with the
74      <literal>-v</literal> parameter. For example, to attach the home
75      folder of the <literal>root</literal> user to a container, the command
76      line for Docker should have the following format:</para>
77
78 <programlisting>docker run -it -v /home/root:/home/host_root/ el7guest /bin/bash</programlisting>
79
80      <para>To check that folders have been properly passed from the host to
81      the container, create a file in the source folder on the host root
82      filesystem and check for its existence inside the container's
83      destination location, as in the example below.</para>
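      <para>A minimal check, assuming the mapping above and a hypothetical
      file name <literal>marker.txt</literal>:</para>

      <programlisting># on the host
touch /home/root/marker.txt
# inside the container
ls /home/host_root/marker.txt</programlisting>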
84
85 <section id="attach-vhost-descriptors">
86 <title>Attach vhost file descriptors</title>
87
88 <para>If OVS is running on the host and vhost file descriptors need to
89 be passed to the container, this can be done by either mapping the
90 folder where all the file descriptors are located or mapping the file
91 descriptor itself:</para>
92
93 <itemizedlist>
94 <listitem>
95 <para>Mapping the folder can be done as exemplified above:</para>
96
97 <programlisting>docker run -it --rm -v /var/run/openvswitch/:/var/run/openvswitch/ el7guest /bin/bash</programlisting>
98 </listitem>
99
100 <listitem>
101          <para>Mapping a file descriptor is done in a similar way, but the
102          <literal>-v</literal> flag must map it to a path inside the container:</para>
103
104          <programlisting>docker run -it --rm -v /var/run/openvswitch/vhost-user1:/var/run/openvswitch/vhost-user1 el7guest /bin/bash</programlisting>
105 </listitem>
106 </itemizedlist>
107 </section>
108
109 <section id="attach-hugepages-mount-folders">
110 <title>Attach hugepages mount folders</title>
111
112        <para>Hugepages mount folders can also be accessed by a container
113        similarly to how a plain folder is mapped, as shown above.</para>
114
115 <para>For example, if the host system has hugepages mounted in the
116 <literal>/mnt/huge</literal> location, a container can also access
117 hugepages by being launched with:</para>
118
119        <programlisting>docker run -it -v /mnt/huge:/mnt/huge el7guest /bin/bash</programlisting>
120 </section>
121
122 <section id="access-pci-bus">
123 <title>Access the PCI bus</title>
124
125        <para>If the host machine has multiple SR-IOV instances created, a
126 container can access the instances by being given privileged access to
127 the host system. Unlike folders, PCI devices do not have to be mounted
128 explicitly in order to be accessed and will be available to the
129 container if the <literal>--privileged</literal> flag is passed to the
130 command line:</para>
131
132 <programlisting>docker run --privileged -it el7guest /bin/bash</programlisting>
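      <para>Inside a privileged container, the SR-IOV instances can then be
      inspected directly, for example (assuming the <literal>lspci</literal>
      utility is available in the image):</para>

      <programlisting>lspci | grep -i ethernet</programlisting>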
133 </section>
134 </section>
135 </section>
136</chapter> \ No newline at end of file
diff --git a/doc/book-enea-nfv-access-guide/doc/dpdk.xml b/doc/book-enea-nfv-access-guide/doc/dpdk.xml
new file mode 100644
index 0000000..c9746a4
--- /dev/null
+++ b/doc/book-enea-nfv-access-guide/doc/dpdk.xml
@@ -0,0 +1,119 @@
1<?xml version="1.0" encoding="ISO-8859-1"?>
2<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
3"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
4<chapter id="dpdk">
5 <title>Data Plane Development Kit</title>
6
7 <para>The Intel Data Plane Development Kit (DPDK) is a set of user-space
8 libraries and drivers that provides a programming framework for high-speed
9 packet processing applications. The DPDK includes a number of Poll Mode
10 Drivers that enable direct packet transfer between the physical NIC and
11 user-space without using interrupts, bypassing the Linux kernel network
12 stack entirely.</para>
13
14 <para>In order to take advantage of DPDK, Linux <ulink
15 url="https://www.kernel.org/doc/Documentation/vm/hugetlbpage.txt">huge
16 pages</ulink> must be enabled in the system. The allocation of huge pages
17 should preferably be done at boot time by passing parameters on the kernel
18 command line. Add the following to the kernel boot parameters:</para>
19
20 <programlisting>default_hugepagesz=1GB hugepagesz=1GB hugepages=8 hugepagesz=2M hugepages=2048</programlisting>
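  <para>After booting, the resulting allocation can be verified with, for
  example:</para>

  <programlisting>grep Huge /proc/meminfo</programlisting>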
21
22 <para>For DPDK documentation, see <ulink
23 url="http://dpdk.org/doc/guides-17.02/index.html">http://dpdk.org/doc/guides-17.02/index.html</ulink></para>
24
25 <section id="pktgen">
26 <title>Pktgen</title>
27
28 <para>In addition to DPDK, Enea NFV Access includes Pktgen, a
29 software traffic generator that is powered by the DPDK packet processing
30 framework. Pktgen can act as a transmitter or receiver and is capable of
31 generating 10Gbit wire rate traffic with 64 byte frames.</para>
32
33 <para>Pktgen is installed in <literal>/usr/share/apps/pktgen/</literal>
34 and needs to be executed from this directory.</para>
35
36 <para>For Pktgen documentation, see <ulink
37 url="http://pktgen-dpdk.readthedocs.io">http://pktgen-dpdk.readthedocs.io</ulink></para>
38 </section>
39
40 <section id="dpdk-setup">
41 <title>DPDK setup instructions</title>
42
43 <para>The following setup instructions apply to both host and
44 guest.</para>
45
46 <orderedlist>
47 <listitem>
48 <para>To make the hugepage memory available for DPDK, it must be
49 mounted:</para>
50
51 <programlisting>mkdir /mnt/huge
52mount -t hugetlbfs nodev /mnt/huge</programlisting>
53 </listitem>
54
55 <listitem>
56 <para>Load the DPDK igb_uio kernel module:</para>
57
58 <programlisting>modprobe igb_uio</programlisting>
59 </listitem>
60
61 <listitem>
62 <para>Bind the device to the igb_uio driver:</para>
63
64 <para><programlisting>dpdk-devbind --bind=igb_uio &lt;PCI device number&gt;</programlisting>The
65 DPDK provides the dpdk-devbind tool to help binding/unbinding devices
66 from specific drivers. See <ulink
67 url="http://dpdk.org/doc/guides-17.02/tools/devbind.html">http://dpdk.org/doc/guides-17.02/tools/devbind.html</ulink>
68 for more information.</para>
69 </listitem>
70 </orderedlist>
71
72 <para>To print the current status of all known network
73 interfaces:<programlisting>dpdk-devbind --status</programlisting></para>
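    <para>For example, to bind a device at a hypothetical PCI address
    0000:03:00.0 and confirm the result:</para>

    <programlisting>dpdk-devbind --bind=igb_uio 0000:03:00.0
dpdk-devbind --status</programlisting>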
74
75 <para>At this point the system is ready to run DPDK applications.</para>
76 </section>
77
78 <section id="dpdk-example-test-setup">
79 <title>DPDK example test setup</title>
80
81 <para>This is a simple DPDK test setup using two boards connected
82 back-to-back. One board generates traffic using the Pktgen application,
83 and the other board runs the DPDK testpmd example to forward packets back
84 on the same interface.</para>
85
86 <programlisting>Pktgen [DPDK] - Board 1 PHY &lt;--&gt; Board 2 PHY - [DPDK] testpmd</programlisting>
87
88 <orderedlist>
89 <listitem>
90        <para>Set up DPDK on both boards, following the instructions in
91        the <emphasis>DPDK setup instructions</emphasis> section above.</para>
92 </listitem>
93
94 <listitem>
95 <para>On board 1, start the Pktgen application:</para>
96
97 <programlisting>cd /usr/share/apps/pktgen/
98./pktgen -c 0x7 -n 4 --socket-mem 1024 -- -P -m "[1:2].0"</programlisting>
99
100 <para>In the Pktgen console, run:</para>
101
102 <programlisting>start 0</programlisting>
103
104        <para>The Pktgen output will display the traffic configuration and
105        statistics; a console example for adjusting traffic follows this list.</para>
106 </listitem>
107
108 <listitem>
109 <para>On board 2, start the testpmd application:</para>
110
111 <programlisting>testpmd -c 0x7 -n 4 -- --txd=512 --rxd=512 --port-topology=chained</programlisting>
112
113 <para>For more information, refer to the testpmd application user
114 guide: <ulink
115 url="http://dpdk.org/doc/guides-17.02/testpmd_app_ug/index.html">http://dpdk.org/doc/guides-17.02/testpmd_app_ug/index.html</ulink>.</para>
116 </listitem>
117 </orderedlist>
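    <para>As referenced above, the packet size and transmit rate can be
    adjusted from the Pktgen console before starting traffic. The lines
    below are a sketch; verify the exact command syntax against the Pktgen
    documentation linked earlier:</para>

    <programlisting>set 0 size 256
set 0 rate 50
start 0</programlisting>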
118 </section>
119</chapter> \ No newline at end of file
diff --git a/doc/book-enea-nfv-access-guide/doc/eltf_params_template.xml b/doc/book-enea-nfv-access-guide/doc/eltf_params_template.xml
new file mode 100644
index 0000000..278ad71
--- /dev/null
+++ b/doc/book-enea-nfv-access-guide/doc/eltf_params_template.xml
@@ -0,0 +1,151 @@
1<?xml version="1.0" encoding="ISO-8859-1"?>
2<!DOCTYPE section PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
3"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
4<section id="eltf_created_params">
5  <title>File with Parameters in the Book Auto-updated by ELTF</title>
6
7 <note>
8    <para>See the <emphasis
9    role="bold">eltf_params_updated_template_how_to_use.txt</emphasis> text
10    file for a description of how to create the final <emphasis
11    role="bold">eltf_params_updated.xml</emphasis> from this template and for
12    all <emphasis role="bold">REQUIREMENTS</emphasis>. Use the command
13    "<emphasis role="bold">make eltf</emphasis>" to extract a full list of all
14    ELTF variables, which always begin with ELTF_; do not rely only on the
15    howto text file list. The plan is that ELTF will auto-update this when
16    needed.</para>
17 </note>
18
19 <section id="host_prereq">
20 <title>Common Parameters</title>
21
22 <bridgehead>A programlisting, ID
23 "eltf-prereq-apt-get-commands-host"</bridgehead>
24
25 <para id="eltf-prereq-apt-get-commands-host"><programlisting>ELTF_PL_HOST_PREREQ</programlisting></para>
26
27 <bridgehead>A programlisting, ID
28 "eltf-getting-repo-install-command"</bridgehead>
29
30 <para id="eltf-getting-repo-install-command"><programlisting>ELTF_PL_GET_REPO</programlisting></para>
31
32 <bridgehead>Several phrase elements, various IDs. Ensure EL_REL_VER is
33 correct also compared to the "previous" REL VER in pardoc-distro.xml
34 "prev_baseline".</bridgehead>
35
36 <para id="EneaLinux_REL_VER"><phrase>ELTF_EL_REL_VER</phrase></para>
37
38 <para id="Yocto_VER"><phrase>ELTF_YOCTO_VER</phrase></para>
39
40 <para id="Yocto_NAME"><phrase>ELTF_YOCTO_NAME</phrase></para>
41
42 <para id="ULINK_YOCTO_PROJECT_DOWNLOAD"><ulink
43 url="ELTF_YOCTO_PROJ_DOWNLOAD_URL">ELTF_YOCTO_PROJ_DOWNLOAD_TXTURL</ulink></para>
44
45 <para id="ULINK_ENEA_LINUX_URL"><ulink
46 url="ELTF_EL_DOWNLOAD_URL">ELTF_EL_DOWNLOAD_TXTURL</ulink></para>
47
48 <bridgehead>A programlisting, ID "eltf-repo-cloning-enea-linux". Use
49 $MACHINE/default.xml as parameter, where MACHINE is one of the target
50 directory names in the manifest.</bridgehead>
51
52 <para id="eltf-repo-cloning-enea-linux"><programlisting>ELTF_PL_CLONE_W_REPO</programlisting></para>
53
54 <bridgehead>A table with ONE row, only the row with ID
55 "eltf-eclipse-version-row" is included in the book. MANUALLY BOTH in the
56 template.xml and in the updated.xml, set condition hidden on the
57 &lt;row&gt;, if eclipse is not in the release.</bridgehead>
58
59 <informaltable>
60 <tgroup cols="1">
61 <tbody>
62 <row id="eltf-eclipse-version-row">
63 <entry>Eclipse version ELTF_ECLIPSE_VERSION plus command line
64 development tools are included in this Enea Linux release.</entry>
65 </row>
66 </tbody>
67 </tgroup>
68 </informaltable>
69
70 <bridgehead>Below is one big section with title "Supported Targets with
71 Parameters". The entire section is included completely in the book via ID
72 "eltf-target-tables-section" and shall be LAST in the template. The
73 template contains ONE target subsection. COPY/APPEND it, if multiple
74 targets exist in the release and optionally add rows with additional
75 target parameters in each target subsection table.</bridgehead>
76 </section>
77
78 <section id="eltf-target-tables-section">
79 <title>Supported Targets with Parameters</title>
80
81    <para>The tables below describe the target(s) supported in this Enea
82 Linux release.</para>
83
84 <section id="eltf-target-table-ELTF_T_MANIFEST_DIR">
85 <title>MACHINE ELTF_T_MANIFEST_DIR - Information</title>
86
87 <para><informaltable>
88 <tgroup cols="2">
89 <colspec colwidth="6*" />
90
91 <colspec colwidth="9*" />
92
93 <tbody>
94 <row>
95 <entry>Target official name</entry>
96
97 <entry>ELTF_T_NAME</entry>
98 </row>
99
100 <row>
101 <entry>Architecture and Description</entry>
102
103 <entry>ELTF_T_ARC_DESC</entry>
104 </row>
105
106 <row>
107 <entry>Link to target datasheet</entry>
108
109 <entry>See <ulink
110 url="ELTF_T_DS_URL">ELTF_T_DS_TXTURL</ulink></entry>
111 </row>
112
113 <row>
114 <entry>Poky version</entry>
115
116 <entry>ELTF_T_POKY_VER</entry>
117 </row>
118
119 <row>
120 <entry>GCC version</entry>
121
122 <entry>ELTF_T_GCC_VER</entry>
123 </row>
124
125 <row>
126 <entry>Linux Kernel Version</entry>
127
128 <entry>ELTF_T_KERN_VER</entry>
129 </row>
130
131 <row>
132 <entry>Supported Drivers</entry>
133
134 <entry>ELTF_T_DRIVERS</entry>
135 </row>
136
137 <row>
138 <entry>Enea rpm folder for downloading RPM packages for this
139 target</entry>
140
141 <entry><ulink
142 url="ELTF_T_EL_RPM_URL">ELTF_T_EL_RPM_TXTURL</ulink></entry>
143 </row>
144 </tbody>
145 </tgroup>
146 </informaltable></para>
147 </section>
148
149 <!-- ELTFADD_MORE_TARGET_SECTIONS_BELOW_IF_NEEDED -->
150 </section>
151</section> \ No newline at end of file
diff --git a/doc/book-enea-nfv-access-guide/doc/eltf_params_updated.xml b/doc/book-enea-nfv-access-guide/doc/eltf_params_updated.xml
new file mode 100644
index 0000000..31a251d
--- /dev/null
+++ b/doc/book-enea-nfv-access-guide/doc/eltf_params_updated.xml
@@ -0,0 +1,165 @@
1<?xml version="1.0" encoding="ISO-8859-1"?>
2<!DOCTYPE section PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
3"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
4<section id="eltf_created_params">
5  <title>File with Parameters in the Book Auto-updated by ELTF</title>
6
7 <note>
8    <para>See the <emphasis
9    role="bold">eltf_params_updated_template_how_to_use.txt</emphasis> text
10    file for a description of how to create the final <emphasis
11    role="bold">eltf_params_updated.xml</emphasis> from this template and for
12    all <emphasis role="bold">REQUIREMENTS</emphasis>. Use the command
13    "<emphasis role="bold">make eltf</emphasis>" to extract a full list of all
14    ELTF variables, which always begin with ELTF_; do not rely only on the
15    howto text file list. The plan is that ELTF will auto-update this when
16    needed.</para>
17 </note>
18
19 <section id="host_prereq">
20 <title>Common Parameters</title>
21
22 <bridgehead>A programlisting, ID
23 "eltf-prereq-apt-get-commands-host"</bridgehead>
24
25 <para id="eltf-prereq-apt-get-commands-host"><programlisting># Host Ubuntu 14.04.5 LTS 64bit
26sudo apt-get -y update
27sudo apt-get -y install sed wget subversion git-core coreutils unzip texi2html \
28 texinfo libsdl1.2-dev docbook-utils fop gawk python-pysqlite2 diffstat \
29 make gcc build-essential xsltproc g++ desktop-file-utils chrpath \
30 libgl1-mesa-dev libglu1-mesa-dev autoconf automake groff libtool xterm \
31 libxml-parser-perl</programlisting></para>
32
33 <bridgehead>A programlisting, ID
34 "eltf-getting-repo-install-command"</bridgehead>
35
36 <para id="eltf-getting-repo-install-command"><programlisting>mkdir -p ~/bin
37curl https://storage.googleapis.com/git-repo-downloads/repo &gt; ~/bin/repo
38chmod a+x ~/bin/repo
39export PATH=~/bin:$PATH</programlisting></para>
40
41 <bridgehead>Several phrase elements, various IDs. Ensure EL_REL_VER is
42 correct also compared to the "previous" REL VER in pardoc-distro.xml
43 "prev_baseline".</bridgehead>
44
45 <para id="EneaLinux_REL_VER"><phrase>1.0</phrase></para>
46
47 <para id="Yocto_VER"><phrase>2.1</phrase></para>
48
49 <para id="Yocto_NAME"><phrase>krogoth</phrase></para>
50
51 <para id="ULINK_YOCTO_PROJECT_DOWNLOAD"><ulink
52 url="http://www.yoctoproject.org/downloads/core/krogoth/21">http://www.yoctoproject.org/downloads/core/krogoth/21</ulink></para>
53
54 <para id="ULINK_ENEA_LINUX_URL"><ulink
55 url="https://linux.enea.com/6">https://linux.enea.com/6</ulink></para>
56
57 <bridgehead>A programlisting, ID "eltf-repo-cloning-enea-linux". Use
58 $MACHINE/default.xml as parameter, where MACHINE is one of the target
59 directory names in the manifest.</bridgehead>
60
61 <para id="eltf-repo-cloning-enea-linux"><programlisting>mkdir enea-linux
62cd enea-linux
63repo init -u git://git.enea.com/linux/el_manifests-networking.git \
64 -b refs/tags/EL6 -m $MACHINE/default.xml
65repo sync</programlisting></para>
66
67 <bridgehead>A table with ONE row, only the row with ID
68 "eltf-eclipse-version-row" is included in the book. MANUALLY in book, set
69 condition hidden if eclipse is not in the release. Do this both in
70 template.xml and updated.xml.</bridgehead>
71
72 <informaltable>
73 <tgroup cols="1">
74 <tbody>
75 <row condition="hidden" id="eltf-eclipse-version-row">
76            <entry>Eclipse version 4.5 (Mars) plus command line development
77 tools are included in this Enea Linux release.</entry>
78 </row>
79 </tbody>
80 </tgroup>
81 </informaltable>
82
83 <bridgehead>Below is one big section with title "Supported Targets with
84 Parameters". The entire section is included completely in the book via ID
85 "eltf-target-tables-section" and shall be LAST in the template. The
86 template contains ONE target subsection. COPY/APPEND it, if multiple
87 targets exist in the release and optionally add rows with additional
88 target parameters in each target subsection table.</bridgehead>
89 </section>
90
91 <section id="eltf-target-tables-section">
92 <title>Supported Targets with Parameters</title>
93
94    <para>The tables below describe the target(s) supported in this Enea
95 Linux release.</para>
96
97 <section id="eltf-target-table-p2041rdb">
98 <title>MACHINE p2041rdb - Information</title>
99
100 <para><informaltable>
101 <tgroup cols="2">
102 <colspec colwidth="6*" />
103
104 <colspec colwidth="9*" />
105
106 <tbody>
107 <row>
108 <entry>Target official name</entry>
109
110 <entry>P2041RDB</entry>
111 </row>
112
113 <row>
114 <entry>Architecture and Description</entry>
115
116 <entry>Power, e500mc</entry>
117 </row>
118
119 <row>
120 <entry>Link to target datasheet</entry>
121
122 <entry>See <ulink
123 url="http://www.nxp.com/products/microcontrollers-and-processors/power-architecture-processors/qoriq-power-architecture-processors/p2041-qoriq-reference-design-board:RDP2041BOARD">link
124 to NXP's datasheet</ulink></entry>
125 </row>
126
127 <row>
128 <entry>Poky version</entry>
129
130 <entry>Git-commit-id:
131 75ca53211488a3e268037a44ee2a7ac5c7181bd2</entry>
132 </row>
133
134 <row>
135 <entry>GCC version</entry>
136
137 <entry>5.3</entry>
138 </row>
139
140 <row>
141 <entry>Linux Kernel Version</entry>
142
143 <entry>3.12</entry>
144 </row>
145
146 <row>
147 <entry>Supported Drivers</entry>
148
149 <entry>Ethernet, I2C, SPI, PCI Express, USB, Flash,
150 SD/SDHC/SDXC, RTC</entry>
151 </row>
152
153 <row>
154 <entry>Enea rpm folder for downloading RPM packages for this
155 target</entry>
156
157 <entry><ulink
158 url="https://linux.enea.com/6/p2041rgb/rpm">https://linux.enea.com/6/p2041rgb/rpm</ulink></entry>
159 </row>
160 </tbody>
161 </tgroup>
162 </informaltable></para>
163 </section>
164 </section>
165</section> \ No newline at end of file
diff --git a/doc/book-enea-nfv-access-guide/doc/eltf_params_updated_template_how_to_use.txt b/doc/book-enea-nfv-access-guide/doc/eltf_params_updated_template_how_to_use.txt
new file mode 100644
index 0000000..7f1d3cb
--- /dev/null
+++ b/doc/book-enea-nfv-access-guide/doc/eltf_params_updated_template_how_to_use.txt
@@ -0,0 +1,320 @@
1eltf_params_updated_template_how_to_use.txt
2
3This is a way to collect all parameters for an Enea Linux release
4in one parameter file, easy to automatically update by ELTF regularly.
5
6NOTE: Both the release info AND the open source books use parameters from
7 here, but the XML file is inside the release info book directory.
8
9NOTE: The manifest_conf.mk, or the environment variable MANIFESTHASH
10      overriding it, contains the full tag (or hash value) for downloading
11      the manifest when the books are built. The list of target
12      directories is fetched from the manifest into the book.
13      The eltf_params_updated.xml can at all times contain
14      the final next complete tag e.g. refs/tags/EL6 or similar
15      in the ELTF_PL_CLONE_W_REPO parameter command lines.
16
17The ordinary book XML files use xi:include statements to include elements
18from this parameter file. The book XML files can thus be manually edited.
19Before editing, you must run "make init".
20Any other text in the template or updated.xml file, outside the parts that
21are included in the book, is not used, but the files must still be correct
22DocBook XML.
23
24ELTF work:
25 template => ELTF replaces ALL ELTF_xxx variables => updated XML file
26 => push to git only if changed
27
28
29eltf_params_template.xml (in git)
30 File used by ELTF to autocreate/update the real parameter
31 file eltf_params_updated.xml.
32
33eltf_params_updated.xml (in git)
34 Real parameter file where ELTF has replaced all ELTF_xx variables with
35 strings, in several cases with multiline strings.
36 No spaces or linefeed allowed in beginning or end of the variable values!
37
38
39xi:include: Each parameter is xi:include'ed in various book files, using
40 the IDs existing in the parameter files.
41 In most cases the 1:st element inside an element with an ID is included
42 using a format like eltf-prereq-apt-get-commands-host/1.
43 In very few cases the element with the ID is included in the book, one
44 example is the target section which has an ID, but which contains
45 multiple subsections, one per target.
46 All IDs in a book must be unique.
47
48DocBook XML: All XML files must be correct DocBook XML files.
49
50Do NOT edit/save the real *updated.xml file with XMLmind to avoid changes
51 not done by ELTF. But it is OK to open the real file in XMLmind to
52 check that the format is correct.
53
54ELTF should autocreate a temporary "real" file but only replace
55 and push the eltf_params_updated.xml if it is changed.
56
57
58make eltf
59 This lists all ELTF_xxx variables and some rules how to treat them
60
61DocBook Format: All elements - rules:
62 Several strict generic XML rules apply for all strings:
63 1. No TABs allowed or any other control chr than "linefeed"
64 2. Only 7-bit ASCII
65 3. Any < > & must be converted to &lt; &gt; and &amp;
66 Similar for any other non-7-bit-ASCII but avoid those!
67 4. No leading spaces or linefeeds when replacing the ELTF_* variable
68 5. No trailing spaces or linefeeds when replacing the ELTF_* variable
69 6. Note: Keep existing spaces before/efter ELTF_* in a few cases.
70
71DocBook Format: <programlisting> - rules: ELTF*PL* variables
72 Several strict rules apply for the multiline string in programlisting
73 in addition to the general XML rules above:
74 7. Max line length < 80 char
75 8. Use backslash (\) to break longer lines
76 9. Use spaces (e.g. 4) to indent continuation lines in programlistings
77 10. No trailing spaces on any line
78 11. No spaces or linefeed immediately after leading <programlisting>
79 12. No spaces or linefeed before trailing </programlisting>
80
81DocBook Format: <ulink> - rules: ELTF_*URL* variables
82 13. ELTF_*URL and corresponding ELTF_*TXTURL shall be identical strings
83 14. Only if the URL is extremely long, the TXTURL can be a separate string
84
85Each target has one section with target parameters:
86 <section id="eltf-target-table-ELTF_T_MANIFEST_DIR">
87 <title>MACHINE ELTF_T_MANIFEST_DIR - Information</title>
88 ..... with many ELTF_ variables ....
89 </section>
90
91 15. If there is only one target, ELTF just replaces the ELTF parameters
92
93 16. If there are multiple targets, ELTF copies the section and appends
94     it the required number of times.
95 Each section ID will become unique: eltf-target-table-ELTF_T_MANIFEST_DIR
96 Each section title will become unique
97
98Tables with target parameters in each target section:
99 17. It is possible for ELTF to append more rows with one parameter each
100 to these tables, because the entire tables are included in the book
101
102Special - NOT YET READY DEFINED how to handle the optionally included
103 Eclipse and its version, but this is a first suggestion:
104 18. Just now ELTF can define ELTF_ECLIPSE_VERSION as a full string
105 with both version number and name,
106 19. MANUALLY if Eclipse is NOT included in the release,
107 the release manager should manually set condition="hidden" on
108 the entire section in the book XML about Eclipse
109
110
111
112BELOW WE TRY TO EXPLAIN EACH ELTF_* variable, but always check with make eltf
113whether there are new variables missing from this description file.
114
115_____________________________________________________________________________
116ELTF_PL_HOST_PREREQ Multiline list of host prerequisites, e.g. commands
117 like sudo apt-get install xxxx or similar.
118 First line = comment with the complete host name!
119 It is possible to include multiple hosts by just
120 adding an empty line, comment with host name, etc.
121 xi:include eltf-prereq-apt-get-commands-host/1
122 This is a <programlisting>...</programlisting>
123 Example:
124# Host Ubuntu 14.04.5 LTS 64bit
125sudo apt-get update
126sudo apt-get install sed wget subversion git-core coreutils unzip texi2html \
127 texinfo libsdl1.2-dev docbook-utils fop gawk python-pysqlite2 diffstat \
128 make gcc build-essential xsltproc g++ desktop-file-utils chrpath \
129 libgl1-mesa-dev libglu1-mesa-dev autoconf automake groff libtool xterm \
130 libxml-parser-perl
131
132_____________________________________________________________________________
133ELTF_PL_GET_REPO Multiline commands to download the repo tool
134 xi:include eltf-getting-repo-install-command/1
135 This is a <programlisting>...</programlisting>
136 Example:
137mkdir -p ~/bin
138curl https://storage.googleapis.com/git-repo-downloads/repo > ~/bin/repo
139chmod a+x ~/bin/repo
140export PATH=~/bin:$PATH
141
142_____________________________________________________________________________
143ELTF_EL_REL_VER General parameter string: The version of this Enea
144 Linux release. Major version and optional .Minor
145 Typically created from MAJOR and MINOR in enea.conf
146 MINOR in enea.conf is empty or contains a dot+minor
147 xi_include EneaLinux_REL_VER/1
148 This is a <phrase>X.x</phrase> used in many places.
149 Examples:
1506
151 or
1526.1
153
154_____________________________________________________________________________
155ELTF_YOCTO_VER General parameter string: Yocto version, created
156 from DISTRO in poky.ent
157 xi:include Yocto_VER/1
158 This is a <phrase>X.x</phrase> used in many places.
159 Example:
1602.1
161
162_____________________________________________________________________________
163ELTF_YOCTO_NAME General parameter string: Yocto name (branch), created
164 from DISTRO_NAME_NO_CAP in poky.ent
165 xi:include Yocto_NAME/1
166 This is a <phrase>X.x</phrase> used in many places.
167 Example:
168krogoth
169
170_____________________________________________________________________________
171ELTF_YOCTO_PROJ_DOWNLOAD_TXTURL General parameters. These two are IDENTICAL
172ELTF_YOCTO_PROJ_DOWNLOAD_URL strings with correct Yocto version string
173 at the end, typically without "dot".
174 xi:include ULINK_YOCTO_PROJECT_DOWNLOAD/1
175 This is an <ulink url="...">...</ulink>
176 Example:
177http://www.yoctoproject.org/downloads/core/krogoth/21
178
179_____________________________________________________________________________
180ELTF_EL_DOWNLOAD_TXTURL General parameters. These two are IDENTICAL strings
181ELTF_EL_DOWNLOAD_URL and shall be the http:/..... address where
182 Enea Linux can be downloaded
183 Often containing same version as in ELTF_EL_REL_VER
184 xi:include ULINK_ENEA_LINUX_URL/1
185 This is an <ulink url="...">...</ulink>
186 Example:
187http://linux.enea.com/6
188
189_____________________________________________________________________________
190ELTF_PL_CLONE_W_REPO Multiline commands to run repo to clone everything.
191 Use the variable $MACHINE/default.xml (the text in
192                        the book will list the available values of MACHINE,
193 taken from the manifest repository)
194 xi:include eltf-repo-cloning-enea-linux/1
195 This is a <programlisting>...</programlisting>
196 Example:
197mkdir enea-linux
198cd enea-linux
199repo init -u git://git.enea.com/linux/el_manifests-standard.git \
200 -b refs/tags/EL6 -m $MACHINE/default.xml
201repo sync
202
203_____________________________________________________________________________
204ELTF_ECLIPSE_VERSION Optional general parameter string.
205 NOT YET READY DEFINED
206 Just now a release manage must manually set
207 condition="hidden" on the Eclipse section,
208 if Eclipse is not included in the release.
209 ELTF just replaces ELTF_ECLIPSE_VERSION with a full
210 string with "X.Y (name)"
211 It includes the ID and can only be ONCE in the book.
212 xi:include eltf-eclipse-version-row
213 Example.
2144.5 (Mars)
215
216
217_____________________________________________________________________________
218ELTF_T_* All these are in each target (MACHINE) and ELTF
219 must separately replace them with strings for
220 each target
221 NOTE: All (except the MANIFEST_DIR) are in rows
222 in a table and ELTF can select to append
223 more parameters by adding more rows
224
225_____________________________________________________________________________
226ELTF_T_MANIFEST_DIR This happens to be in two places. Must be exactly
227ELTF_T_MANIFEST_DIR the directory name in the manifest, e.g. same
228 as the MACHINE names in $MACHINE/default.xml.
229 In book: a) Part of section ID
230 b) Part of section title
231 Examples:
232p2041rgb
233 or
234ls1021aiot
235 or
236qemuarm
237
238_____________________________________________________________________________
239ELTF_T_NAME Target specific: "Target Official Name"
240 NOT same as the target directory name in most cases.
241 In book: An <entry> element in a row
242 Examples:
243P2041RGB
244 or
245LS1021a-IoT
246 or
247qemuarm
248
249_____________________________________________________________________________
250ELTF_T_ARC_DESC Target specific: "Architecture and Description"
251 It can be a short identification string or
252 it can be a longer descriptive sentence.
253 In book: An <entry> element in a row
254 Examples:
255Power, e500mc
256 or
257ARM Cortex-A7
258
259_____________________________________________________________________________
260 ELTF_T_DS_TXTURL Target specific: "Link to target datasheet". These
261ELTF_T_DS_URL two usually are IDENTICAL strings with correct
262 hyperlink to the target's official datasheet.
263 In book: an <ulink url="...">...</ulink>
264 Only if the link is VERY LONG, the text part shall
265 instead be a descriptive string (see 2:nd example).
266 NOTE: Also here no spaces or line-feeds!
267 Examples:
268url="http://wiki.qemu.org">http://wiki.qemu.org
269or
270url="http://www.nxp.com/products/microcontrollers-and-processors/arm-processors/qoriq-arm-processors/qoriq-ls1021a-iot-gateway-reference-design:LS1021A-IoT">link to NXP's datasheet
271
272_____________________________________________________________________________
273ELTF_T_POKY_VER Target specific: "Poky version" created either
274 from POKYVERSION in poky.ent
275 or using a hashvalue with a leading string, in
276 which case it may be different per target.
277 In book: An <entry> in a row
278 Examples:
27915.0.0
280or
281Git commit id: 75ca53211488a3e268037a44ee2a7ac5c7181bd2
282
283_____________________________________________________________________________
284ELTF_T_GCC_VER Target specific: "GCC Version". Should be in poky
285 but not easy to find among various parameters.
286 ELTF would extract it from build logs building SDK
287 and it is possibly different per target.
288 In book: An <entry> in a row
289 Example:
2905.3
291
292_____________________________________________________________________________
293ELTF_T_KERN_VER Target specific: "Linux Kernel Version". Often
294 different per target.
295 In book: An <entry> in a row
296 Example:
2973.12
298
299_____________________________________________________________________________
300ELTF_T_DRIVERS Target specific: "Supported Drivers". This is a
301 comma-separated list of driver names.
302 ELTF should create the list in same order for each
303                        target, e.g. alphabetic might be OK.
304 In book: An <entry> in a row
305 Example:
306Ethernet, I2C, SPI, PCI, USB, SD/SDHC/SDXC
307
308
309_____________________________________________________________________________
310ELTF_T_EL_RPM_TXTURL Target specific: "Enea rpm folder for downloading
311ELTF_T_EL_RPM_URL RPM packages for this target". These two are
312                        IDENTICAL strings with hyperlink to the web site
313 at Enea where the customer can download RPMs
314                        Note: Often the ELTF_EL_REL_VER value and
315 the ELTF_T_MANIFEST_DIR are used in the link.
316 In book: an <ulink url="...">...</ulink>
317 Example:
318url="https://linux.enea.com/6/ls1021aiot/rpm">https://linux.enea.com/6/ls1021aiot/rpm
319
320_____________________________________________________________________________
diff --git a/doc/book-enea-nfv-access-guide/doc/getting_started.xml b/doc/book-enea-nfv-access-guide/doc/getting_started.xml
new file mode 100644
index 0000000..524741a
--- /dev/null
+++ b/doc/book-enea-nfv-access-guide/doc/getting_started.xml
@@ -0,0 +1,340 @@
1<?xml version="1.0" encoding="ISO-8859-1"?>
2<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
3"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
4<chapter id="plat-release-content">
5  <title>Getting Started with Enea NFV Access</title>
6
7 <section id="release-content">
8 <title>NFV Access Release content</title>
9
10    <para>The NFV Access 1.0 Release contains, along with other items,
11    documentation, pre-built kernels and images, a bootloader and an
12    SDK.</para>
13
14    <para>The directory structure is detailed below:</para>
15
16 <programlisting>-- documentation/
17 /* NFV Access documentation */
18-- inteld1521/
19 /* artifacts for the host side */
20 -- deb/
21 /* deb packages */
22 -- images/
23 -- enea-image-virtualization-host
24 /* precompiled artifacts for the Host release image */
25 -- various artifacts
26 -- enea-image-virtualization-host-sdk
27 /* precompiled artifacts for the Host SDK image.
28 The SDK image contains userspace tools and kernel
29 configurations necessary for developing, debugging
30 and profiling applications and kernel modules */
31 -- various artifacts
32 -- sdk
33 /* NFV Access SDK for the host */
34 -- enea-glibc-x86_64-enea-image-virtualization-host-sdk /
35 -corei7-64-toolchain-7.0.sh
36 /* self-extracting archive installing
37 cross-compilation toolchain for the host */
38-- qemux86-64
39 /* artifacts for the guest side */
40 -- deb/
41 /* deb packages */
42 -- images/
43 -- enea-image-virtualization-guest
44 /* precompiled artifacts for the Guest image */
45 -- various artifacts
46 -- sdk
47 /* NFV Access SDK for the guest */
48 -- enea-glibc-x86_64-enea-image-virtualization-guest-sdk /
49 -core2-64-toolchain-7.0.sh
50 /* self-extracting archive installing cross-compilation
51 toolchain for the guest (QEMU x86-64) */
52</programlisting>
53
54 <para>For each combination of image and target, the following set of
55 artifacts is available:</para>
56
57 <programlisting>-- bzImage
58 /* kernel image */
59-- bzImage-&lt;target&gt;.bin
60 /* kernel image, same as above */
61-- config-&lt;target&gt;.config
62 /* kernel configuration file */
63-- core-image-minimal-initramfs-&lt;target&gt;.cpio.gz
64 /* cpio archive of the initramfs */
65-- core-image-minimal-initramfs-&lt;target&gt;.qemuboot.conf
66 /* qemu config file for the initramfs image */
67-- &lt;image-name&gt;-&lt;target&gt;.ext4
68 /* EXT4 image of the rootfs */
69-- &lt;image-name&gt;-&lt;target&gt;.hddimg
70 /* msdos filesystem containing syslinux, kernel, initrd and rootfs image */
71-- &lt;image-name&gt;-&lt;target&gt;.iso
72 /* CD .iso image */
73-- &lt;image-name&gt;-&lt;target&gt;.qemuboot.conf
74 /* qemu config file for the image */
75-- &lt;image-name&gt;-&lt;target&gt;.tar.gz
76 /* tar archive of the image */
77-- &lt;image-name&gt;-&lt;target&gt;.wic
78 /* Wic image */
79-- microcode.cpio
80 /* kernel microcode data */
81-- modules-&lt;target&gt;.tgz
82 /* external kernel modules */
83-- ovmf.*.qcow2
84 /* ovmf firmware for uefi support in qemu */
85-- rmc.db
86 /* Central RMC Database */
87-- systemd-bootx64.efi
88 /* systemd-boot EFI file */
89-- grub-efi-bootx64.efi
90 /* GRUB EFI file */</programlisting>
91 </section>
92
93 <section id="docs">
94    <title>Included Documentation</title>
95
96 <para>Enea NFV Access is provided with the following set of
97 documents:</para>
98
99 <itemizedlist>
100 <listitem>
101 <para>Enea NFV Access Guide &ndash; A document describing the Enea NFV
102 Access release content and how to use it, as well as benchmark
103 results.</para>
104 </listitem>
105
106 <listitem>
107 <para>Enea NFV Access Open Source Report &ndash; A document containing
108 the open source and license information pertaining to packages
109 provided with Enea NFV Access 1.0.</para>
110 </listitem>
111
112 <listitem>
113 <para>Enea NFV Access Test Report &ndash; The document that summarizes
114 the test results for the Enea NFV Access release.</para>
115 </listitem>
116
117 <listitem>
118 <para>Enea NFV Access Security Report &ndash; The document that lists
119 all security fixes included in the Enea NFV Access 1.0 release.</para>
120 </listitem>
121 </itemizedlist>
122 </section>
123
124 <section id="prebuilt-artifacts">
125 <title>How to use the Prebuilt Artifacts</title>
126
127 <section id="boot-ramdisk">
128 <title>Booting Enea NFV Access using RAMDISK</title>
129
130      <para>There may be use cases, especially at first target ramp-up, where
131      the HDD/SSD has no partitions and you need to prepare the disks for
132 boot. Booting from ramdisk can help with this task.</para>
133
134 <para>The prerequisites needed to proceed:</para>
135
136 <itemizedlist>
137 <listitem>
138 <para>Enea Linux ext4 rootfs image -
139 enea-image-virtualization-host-inteld1521.ext4</para>
140 </listitem>
141
142 <listitem>
143 <para>Enea Linux kernel image - bzImage</para>
144 </listitem>
145
146 <listitem>
147 <para>BIOS has PXE boot enabled</para>
148 </listitem>
149
150 <listitem>
151          <para>PXE/tftp server configured and connected (via Ethernet) to
152          the target.</para>
153 </listitem>
154 </itemizedlist>
155
156      <para>Copy the bzImage and enea-image-virtualization-host-inteld1521.ext4
157      images to the tftp server configured for PXE boot.</para>
158
159 <para>Use the following as an example for the PXE configuration
160 file:</para>
161
162 <programlisting>default vesamenu.c32
163prompt 1
164timeout 0
165
166label el_ramfs
167 menu label ^EneaLinux_RAMfs
168 kernel bzImage
169  append root=/dev/ram0 initrd=enea-image-virtualization-host-inteld1521.ext4 \
170    ramdisk_size=1200000 console=ttyS0,115200 earlyprintk=ttyS0,115200</programlisting>
171
172      <para>Restart the target, press F11 to enter the Boot Menu, and select
173      the Ethernet interface used for PXE boot. From the PXE Boot Menu select
174      <emphasis role="bold">EneaLinux_RAMfs</emphasis>. Once Enea NFV
175      Access has started you can partition the HDD/SSD and install
176      GRUB as described in the following section.</para>
177 </section>
178
179 <section id="install-grub">
180      <title>Partitioning a new hard disk and installing GRUB</title>
181
182 <para>The prerequisites needed:</para>
183
184 <itemizedlist>
185 <listitem>
186          <para>grub (<literal>grub-efi-bootx64.efi</literal>) - available as
187 a pre-built artifact under
188 <literal>inteld1521/images/enea-image-virtualization-host</literal>.</para>
189 </listitem>
190
191 <listitem>
192          <para><literal>e2fsprogs-mke2fs_1.43.4-r0.0_amd64.deb</literal> and</para>
193
194 <para><literal>dosfstools_4.1-r0.0_amd64.deb</literal> - available
195 under <literal>inteld1521/deb</literal>.</para>
196 </listitem>
197 </itemizedlist>
198
199 <para>Proceed using the following steps:</para>
200
201 <orderedlist>
202 <listitem>
203 <para>Boot target with Enea NFV Access from RAMDISK</para>
204 </listitem>
205
206 <listitem>
207 <para>Install prerequisite packages:</para>
208
209 <programlisting>&gt; dpkg -i e2fsprogs-mke2fs_1.43.4-r0.0_amd64.deb
210&gt; dpkg -i dosfstools_4.1-r0.0_amd64.deb</programlisting>
211 </listitem>
212
213 <listitem>
214 <para>Partition the disk:</para>
215
216 <programlisting>&gt; fdisk /dev/sda
217fdisk&gt; g {GPT partition type}
218fdisk&gt; n
219fdisk&gt; 1
220fdisk&gt; {default start part}
221fdisk&gt; +512M
222fdisk&gt; t
223fdisk&gt; 1 {ESP/EFI partition}
224fdisk&gt; n
225fdisk&gt; 2
226fdisk&gt; {default start part}
227fdisk&gt; +18G
fdisk&gt; n
228fdisk&gt; 3
229fdisk&gt; {default start part}
230fdisk&gt; +20G
231...
fdisk&gt; n
232fdisk&gt; 7
233fdisk&gt; {default start part}
234fdisk&gt; {default end part}
235
236fdisk&gt; p {print partition table}
237fdisk&gt; w {write to disk}
238fdisk&gt; q</programlisting>
239 </listitem>
240
241 <listitem>
242 <para>Format the partitions:</para>
243
244 <programlisting>&gt; mkfs.fat -F32 -nEFI /dev/sda1
245&gt; mkfs.ext4 -LROOT /dev/sda2
246&gt; mkfs.ext4 -LROOT /dev/sda3
247&gt; mkfs.ext4 -LROOT /dev/sda4
248&gt; mkfs.ext4 -LROOT /dev/sda5
249&gt; mkfs.ext4 -LROOT /dev/sda6
250&gt; mkfs.ext4 -LROOT /dev/sda7</programlisting>
251 </listitem>
252
253 <listitem>
254 <para>Create a GRUB partition:</para>
255
256 <programlisting>&gt; mkdir /mnt/boot
257&gt; mount /dev/sda1 /mnt/boot
258&gt; mkdir -p /mnt/boot/EFI/boot
259
260&gt; cp grub-efi-bootx64.efi /mnt/boot/EFI/boot/bootx64.efi
261&gt; vi /mnt/boot/EFI/boot/grub.cfg
262default=1
263
264menuentry "Linux Reference Image" {
265 linux (hd0,gpt2)/boot/bzImage root=/dev/sda2 ip=dhcp
266}
267
268menuentry "Linux sda3" {
269 linux (hd0,gpt3)/boot/bzImage root=/dev/sda3 ip=dhcp
270}
271
272menuentry "Linux sda4" {
273 linux (hd0,gpt4)/boot/bzImage root=/dev/sda4 ip=dhcp
274}
275
276menuentry "Linux sda5" {
277 linux (hd0,gpt5)/boot/bzImage root=/dev/sda5 ip=dhcp
278}
279
280menuentry "Linux sda6" {
281 linux (hd0,gpt6)/boot/bzImage root=/dev/sda6 ip=dhcp
282}
283
284menuentry "Linux sda7" {
285 linux (hd0,gpt7)/boot/bzImage root=/dev/sda7 ip=dhcp
286}</programlisting>
287 </listitem>
288 </orderedlist>
289 </section>
290
291 <section id="boot-hdd">
292 <title>Installing and booting Enea NFV Access on the
293 harddisk</title>
294
295 <para>After partitioning the harddisk, boot Enea NFV Access
296 from RAMFS or from a reference image installed on one of the
297 partitions.</para>
298
299 <para>To install Enea NFV Access image on a partition follow these
300 steps:</para>
301
302 <orderedlist>
303 <listitem>
304 <para>Copy your image on target:</para>
305
306          <programlisting>server&gt; scp ./enea-image-virtualization-host-inteld1521.tar.gz \
307root@&lt;target_ip&gt;:/home/root/</programlisting>
308 </listitem>
309
310 <listitem>
311 <para>Extract image onto the desired partition:</para>
312
313          <programlisting>target&gt; mount /dev/sda3 /mnt/sda
314target&gt; tar -pzxf /home/root/enea-image-virtualization-host-inteld1521.tar.gz \
315-C /mnt/sda</programlisting>
316
317          <para>Alternatively, you can do both steps in one command from the
318          server:</para>
319
320          <programlisting>server&gt; cat ./enea-image-virtualization-host-inteld1521.tar.gz | \
321ssh root@&lt;target_ip&gt; "cd /mnt/sda; tar -zxf -"</programlisting>
322 </listitem>
323
324 <listitem>
325 <para>Reboot</para>
326 </listitem>
327
328 <listitem>
329 <para>From the GRUB menu select your partition</para>
330 </listitem>
331 </orderedlist>
332
333 <note>
334 <para>In order to change kernel boot parameters you need to mount the
335 GRUB partition (i.e. <literal>/dev/sda1</literal>) and change the
336 <literal>EFI/boot/grub.cfg</literal> file.</para>
337 </note>
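      <para>For example, assuming the GRUB partition created earlier:</para>

      <programlisting>&gt; mount /dev/sda1 /mnt/boot
&gt; vi /mnt/boot/EFI/boot/grub.cfg</programlisting>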
338 </section>
339 </section>
340</chapter> \ No newline at end of file
diff --git a/doc/book-enea-nfv-access-guide/doc/hypervisor_virtualization.xml b/doc/book-enea-nfv-access-guide/doc/hypervisor_virtualization.xml
new file mode 100644
index 0000000..f7f186c
--- /dev/null
+++ b/doc/book-enea-nfv-access-guide/doc/hypervisor_virtualization.xml
@@ -0,0 +1,741 @@
1<?xml version="1.0" encoding="ISO-8859-1"?>
2<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
3"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
4<chapter id="hypervisor_virt">
5 <title>Hypervisor Virtualization</title>
6
7  <para>KVM (Kernel-based Virtual Machine) is a virtualization
8  infrastructure for the Linux kernel which turns it into a hypervisor. KVM
9  requires a processor with hardware virtualization extensions.</para>
10
11 <para>KVM uses QEMU, an open source machine emulator and virtualizer, to
12 virtualize a complete system. With KVM it is possible to run multiple guests
13 of a variety of operating systems, each with a complete set of virtualized
14 hardware.</para>
15
16 <section id="launch_virt_machine">
17 <title>Launching a Virtual Machine</title>
18
19 <para>QEMU can make use of KVM when running a target architecture that is
20 the same as the host architecture. For instance, when running
21 qemu-system-x86_64 on an x86-64 compatible processor (containing
22 virtualization extensions Intel VT or AMD-V), you can take advantage of
23 the KVM acceleration, giving you benefit for your host and your guest
24 system.</para>
25
26    <para>Enea NFV Access includes an optimized version of QEMU with KVM-only
27    support. To use KVM, pass <command>--enable-kvm</command> to QEMU.</para>
28
29 <para>The following is an example of starting a guest:</para>
30
31 <programlisting>taskset -c 0,1 qemu-system-x86_64 \
32-cpu host -M q35 -smp cores=2,sockets=1 \
33-vcpu 0,affinity=0 -vcpu 1,affinity=1 \
34-enable-kvm -nographic \
35-kernel bzImage \
36-drive file=enea-image-virtualization-guest-qemux86-64.ext4,if=virtio,format=raw \
37-append 'root=/dev/vda console=ttyS0,115200' \
38-m 4096 \
39-object memory-backend-file,id=mem,size=4096M,mem-path=/dev/hugepages,share=on \
40-numa node,memdev=mem -mem-prealloc</programlisting>
41 </section>
42
43 <section id="qemu_boot">
44 <title>Main QEMU boot options</title>
45
46 <para>Below are detailed all the pertinent boot options for the QEMU
47 emulator:</para>
48
49 <itemizedlist>
50 <listitem>
51 <para>SMP - at least 2 cores should be enabled in order to isolate
52 application(s) running in virtual machine(s) on specific cores for
53 better performance.</para>
54
55 <programlisting>-smp cores=2,threads=1,sockets=1 \</programlisting>
56 </listitem>
57
58 <listitem>
59 <para>CPU affinity - associate virtual CPUs with physical CPUs and
60 optionally assign a default real time priority to the virtual CPU
61 process in the host kernel. This option allows you to start qemu vCPUs
62 on isolated physical CPUs.</para>
63
64 <programlisting>-vcpu 0,affinity=0 \</programlisting>
65 </listitem>
66
67 <listitem>
68 <para>Hugepages - KVM guests can be deployed with huge page memory
69 support in order to reduce memory consumption and improve performance,
70 by reducing CPU cache usage. By using huge pages for a KVM guest, less
71 memory is used for page tables and TLB (Translation Lookaside Buffer)
72 misses are reduced, thereby significantly increasing performance,
73 especially for memory-intensive situations.</para>
74
75 <programlisting>-object memory-backend-file,id=mem,size=4096M,mem-path=/dev/hugepages,share=on \</programlisting>
76 </listitem>
77
78 <listitem>
79        <para>Memory preallocation - preallocating huge pages at startup time
80        can improve performance, but it may increase the QEMU boot time.</para>
81
82 <programlisting>-mem-prealloc \</programlisting>
83 </listitem>
84
85 <listitem>
86 <para>Enable realtime characteristics - run qemu with realtime
87 features. While that mildly implies that "-realtime" alone might do
88 something, it's just an identifier for options that are partially
89 realtime. If you're running in a realtime or low latency environment,
90 you don't want your pages to be swapped out and mlock does that, thus
91 mlock=on. If you want VM density, then you may want swappable VMs,
92 thus mlock=off.</para>
93
94 <programlisting>-realtime mlock=on \</programlisting>
95 </listitem>
96 </itemizedlist>
97
98 <para>If the hardware does not have an IOMMU (known as "Intel VT-d" on
99 Intel-based machines and "AMD I/O Virtualization Technology" on AMD-based
100 machines), it will not be possible to assign devices in KVM.
101 Virtualization Technology features (VT-d, VT-x, etc.) must be enabled from
102 BIOS on the host target before starting a virtual machine.</para>
103 </section>
104
105 <section id="net_in_guest">
106 <title>Networking in guest</title>
107
108 <section id="vhost-user-support">
109 <title>Using vhost-user support</title>
110
111 <para>The goal of vhost-user is to implement a Virtio transport, staying
112 as close as possible to the vhost paradigm of using shared memory,
113      ioeventfds and irqfds. A UNIX domain socket based mechanism allows the
114      setup of resources used by a number of Vrings shared between two
115 userspace processes, which will be placed in shared memory.</para>
116
117 <para>To run QEMU with the vhost-user backend, you have to provide the
118 named UNIX domain socket which needs to be already opened by the
119 backend:</para>
120
121 <programlisting>-object memory-backend-file,id=mem,size=4096M,mem-path=/dev/hugepages,share=on \
122-chardev socket,id=char0,path=/var/run/openvswitch/vhost-user1 \
123-netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
124-device virtio-net-pci,netdev=mynet1,mac=52:54:00:00:00:01 \</programlisting>
125
126 <para>The vHost User standard uses a client-server model. The server
127 creates and manages the vHost User sockets and the client connects to
128 the sockets created by the server. It is recommended to use QEMU as
129 server so the vhost-user client can be restarted without affecting the
130 server, otherwise if the server side dies all clients need to be
131 restarted.</para>
132
133      <para>Using vhost-user in QEMU as server offers the flexibility to
134      stop and start the virtual machine with no impact on the virtual
135      switch running on the host (vhost-user-client).</para>
136
137 <programlisting>-chardev socket,id=char0,path=/var/run/openvswitch/vhost-user1,server \</programlisting>
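      <para>On the host side, a matching vhost-user-client port can then be
      added to the virtual switch. A minimal sketch, assuming an OVS bridge
      named <literal>br0</literal> and an OVS version with vhost-user-client
      support:</para>

      <programlisting>ovs-vsctl add-port br0 vhost-client-1 -- \
    set Interface vhost-client-1 type=dpdkvhostuserclient \
    options:vhost-server-path=/var/run/openvswitch/vhost-user1</programlisting>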
138 </section>
139
140 <section id="tap-interface">
141 <title>Using TAP Interfaces</title>
142
143 <para>QEMU can use TAP interfaces to provide full networking capability
144 for the guest OS:</para>
145
146 <programlisting>-netdev tap,id=net0,ifname=tap0,script=no,downscript=no \
147-device virtio-net-pci,netdev=net0,mac=22:EA:FB:A8:25:AE \</programlisting>
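      <para>The TAP interface referenced above must already exist on the
      host. A minimal sketch for creating it with iproute2, assuming the
      name <literal>tap0</literal>:</para>

      <programlisting>ip tuntap add dev tap0 mode tap
ip link set dev tap0 up</programlisting>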
148 </section>
149
150 <section id="vfio-passthrough">
151 <title>VFIO passthrough VF (SR-IOV) to guest</title>
152
153      <para>The KVM hypervisor supports attaching PCI devices on the host
154      system to guests. PCI passthrough gives guests exclusive access
155      to PCI devices for a range of tasks, and allows PCI devices
156      to appear and behave as if they were physically attached to the guest
157      operating system.</para>
158
159 <para>Preparing an Intel system for PCI passthrough:</para>
160
161 <itemizedlist>
162 <listitem>
163 <para>Enable the Intel VT-d extensions in BIOS</para>
164 </listitem>
165
166 <listitem>
167 <para>Activate Intel VT-d in the kernel by using
168 <literal>intel_iommu=on</literal> as a kernel boot parameter</para>
169 </listitem>
170
171 <listitem>
172 <para>Allow unsafe interrupts in case the system doesn't support
173 interrupt remapping. This can be done using
174 <literal>vfio_iommu_type1.allow_unsafe_interrupts=1</literal> as a
175 boot kernel parameter.</para>
176 </listitem>
177 </itemizedlist>
178
179 <para>Create guest with direct passthrough via VFIO framework like
180 so:</para>
181
182 <programlisting>-device vfio-pci,host=0000:03:10.2 \</programlisting>
183
184      <para>On the host, one or more Virtual Functions (VFs) must be created
185      before starting QEMU, so that they can be allocated to a guest for
186      network access:</para>
187
188 <programlisting>$ echo 2 &gt; /sys/class/net/eno3/device/sriov_numvfs
189$ modprobe vfio_pci
190$ dpdk-devbind.py --bind=vfio-pci 0000:03:10.2</programlisting>
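      <para>The newly created VFs and their PCI addresses can be checked
      before binding, for example with:</para>

      <programlisting>lspci | grep -i ethernet
ip link show eno3</programlisting>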
191 </section>
192
193 <section id="multiqueue">
194 <title>Multi-queue</title>
195
196 <section id="qemu-multiqueue-support">
197        <title>QEMU multi-queue support configuration</title>
198
199 <programlisting>-chardev socket,id=char0,path=/var/run/openvswitch/vhost-user1 \
200-netdev type=vhost-user,id=net0,chardev=char0,queues=2 \
201-device virtio-net-pci,netdev=net0,mac=22:EA:FB:A8:25:AE,mq=on,vectors=6</programlisting>
202
 <para>where <literal>vectors</literal> is calculated as: 2 + 2 * the
 number of queues.</para>
203 </section>
204
205 <section id="inside-guest">
206 <title>Inside guest</title>
207
208 <para>Linux kernel virtio-net driver (one queue is enabled by
209 default):</para>
210
211 <programlisting>$ ethtool -L eth0 combined 2</programlisting>
212
 <para>DPDK Virtio PMD:</para>

213 <programlisting>$ testpmd -c 0x7 -- -i --rxq=2 --txq=2 --nb-cores=2 ...</programlisting>
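
 <para>The number of channels actually enabled can be checked from
 inside the guest; a quick sketch:</para>

 <programlisting>$ ethtool -l eth0</programlisting>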
214
215 <para>For QEMU documentation please see: <ulink
216 url="https://qemu.weilnetz.de/doc/qemu-doc.html">https://qemu.weilnetz.de/doc/qemu-doc.html</ulink>.</para>
217 </section>
218 </section>
219 </section>
220
221 <section id="libvirt">
222 <title>Libvirt</title>
223
224 <para>One way to manage guests in Enea NFV Access is by using
225 <literal>libvirt</literal>. Libvirt is used in conjunction with a daemon
226 (<literal>libvirtd</literal>) and a command line utility (virsh) to manage
227 virtualized environments.</para>
228
229 <para>The libvirt library is a hypervisor-independent virtualization API
230 and toolkit that is able to interact with the virtualization capabilities
231 of a range of operating systems. Libvirt provides a common, generic and
232 stable layer to securely manage domains on a node. As nodes may be
233 remotely located, libvirt provides all methods required to provision,
234 create, modify, monitor, control, migrate and stop the domains, within the
235 limits of hypervisor support for these operations.</para>
236
237 <para>The libvirt daemon runs on the Enea NFV Access host. All tools built
238 on libvirt API connect to the daemon to request the desired operation, and
239 to collect information about the configuration and resources of the host
240 system and guests. <literal>virsh</literal> is a command line interface
241 tool for managing guests and the hypervisor. The virsh tool is built on
242 the libvirt management API.</para>
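
 <para>As a quick sketch of everyday use, a few common
 <command>virsh</command> operations are shown below; the domain name
 <literal>vm_vhost</literal> is an assumption taken from the
 configuration examples later in this chapter:</para>

 <programlisting>virsh list --all # list defined and running guests
virsh dominfo vm_vhost # show basic information about a guest
virsh shutdown vm_vhost # request a graceful guest shutdown
virsh destroy vm_vhost # forcibly stop a guest</programlisting>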
243
244 <para><emphasis role="bold">Major functionality provided by
245 libvirt</emphasis></para>
246
247 <para>The following is a summary from the libvirt <ulink
248 url="http://wiki.libvirt.org/page/FAQ#What_is_libvirt.3F">home
249 page</ulink> describing the major libvirt features:</para>
250
251 <itemizedlist>
252 <listitem>
253 <para><emphasis role="bold">VM management:</emphasis> Various domain
254 lifecycle operations such as start, stop, pause, save, restore, and
255 migrate. Hotplug operations for many device types including disk and
256 network interfaces, memory, and cpus.</para>
257 </listitem>
258
259 <listitem>
260 <para><emphasis role="bold">Remote machine support:</emphasis> All
261 libvirt functionality is accessible on any machine running the libvirt
262 daemon, including remote machines. A variety of network transports are
263 supported for connecting remotely, with the simplest being
264 <literal>SSH</literal>, which requires no extra explicit
265 configuration. For more information, see: <ulink
266 url="http://libvirt.org/remote.html">http://libvirt.org/remote.html</ulink>.</para>
267 </listitem>
268
269 <listitem>
270 <para><emphasis role="bold">Network interface management:</emphasis>
271 Any host running the libvirt daemon can be used to manage physical and
272 logical network interfaces. Enumerate existing interfaces, as well as
273 configure (and create) interfaces, bridges, vlans, and bond devices.
274 For more details see: <ulink
275 url="https://fedorahosted.org/netcf/">https://fedorahosted.org/netcf/</ulink>.</para>
276 </listitem>
277
278 <listitem>
279 <para><emphasis role="bold">Virtual NAT and Route based
280 networking:</emphasis> Any host running the libvirt daemon can manage
281 and create virtual networks. Libvirt virtual networks use firewall
282 rules to act as a router, providing VMs transparent access to the host
283 machine's network. For more information, see: <ulink
284 url="http://libvirt.org/archnetwork.html">http://libvirt.org/archnetwork.html</ulink>.</para>
285 </listitem>
286
287 <listitem>
288 <para><emphasis role="bold">Storage management:</emphasis> Any host
289 running the libvirt daemon can be used to manage various types of
290 storage: create file images of various formats (raw, qcow2, etc.),
291 mount NFS shares, enumerate existing LVM volume groups, create new LVM
292 volume groups and logical volumes, partition raw disk devices, mount
293 iSCSI shares, and much more. For more details, see: <ulink
294 url="http://libvirt.org/storage.html">http://libvirt.org/storage.html</ulink>.</para>
295 </listitem>
296
297 <listitem>
298 <para><emphasis role="bold">Libvirt Configuration:</emphasis> A
299 properly running libvirt requires that the following elements be in
300 place:</para>
301
302 <itemizedlist>
303 <listitem>
304 <para>Configuration files, located in the directory
305 <literal>/etc/libvirt</literal>. They include the daemon's
306 configuration file <literal>libvirtd.conf</literal>, and
307 hypervisor-specific configuration files, like
308 <literal>qemu.conf</literal> for QEMU.</para>
309 </listitem>
310
311 <listitem>
312 <para>A running libvirtd daemon. The daemon is started
313 automatically on the Enea NFV Access host.</para>
314 </listitem>
315
316 <listitem>
317 <para>Configuration files for the libvirt domains, or guests, to
318 be managed by the KVM host. The specifics for guest domains shall
319 be defined in an XML file of a format specified at <ulink
320 url="http://libvirt.org/formatdomain.html">http://libvirt.org/formatdomain.html</ulink>.
321 XML formats for other structures are specified at <ulink type=""
322 url="http://libvirt.org/format.html">http://libvirt.org/format.html</ulink>.</para>
323 </listitem>
324 </itemizedlist>
325 </listitem>
326 </itemizedlist>
327
328 <section id="boot-kvm-guest">
329 <title>Booting a KVM Guest</title>
330
331 <para>There are several ways to boot a KVM guest. Here we describe how
332 to boot using a raw image. A direct kernel boot can be performed by
333 transferring the guest kernel and the file system files to the host and
334 specifying a <literal>&lt;kernel&gt;</literal> and an
335 <literal>&lt;initrd&gt;</literal> element inside the
336 <literal>&lt;os&gt;</literal> element of the guest XML file, as in the
337 following example:</para>
338
339 <programlisting>&lt;os&gt;
340 &lt;kernel&gt;bzImage&lt;/kernel&gt;
341&lt;/os&gt;
342&lt;devices&gt;
343 &lt;disk type='file' device='disk'&gt;
344 &lt;driver name='qemu' type='raw' cache='none'/&gt;
345 &lt;source file='enea-image-virtualization-guest-qemux86-64.ext4'/&gt;
346 &lt;target dev='vda' bus='virtio'/&gt;
347 &lt;/disk&gt;
348&lt;/devices&gt;</programlisting>
349 </section>
350
351 <section id="start-guest">
352 <title>Starting a Guest</title>
353
354 <para>Command <command>virsh create</command> starts a guest:</para>
355
356 <programlisting>virsh create example-guest-x86.xml</programlisting>
357
358 <para>If further configuration is needed before the guest is reachable
359 through <literal>ssh</literal>, a console can be started using command
360 <command>virsh console</command>. The example below shows how to start a
361 console where kvm-example-guest is the name of the guest defined in the
362 guest XML file:</para>
363
364 <programlisting>virsh console kvm-example-guest</programlisting>
365
366 <para>This requires that the guest domain has a console configured in
367 the guest XML file:</para>
368
369 <programlisting>&lt;os&gt;
370 &lt;cmdline&gt;console=ttyS0,115200&lt;/cmdline&gt;
371&lt;/os&gt;
372&lt;devices&gt;
373 &lt;console type='pty'&gt;
374 &lt;target type='serial' port='0'/&gt;
375 &lt;/console&gt;
376&lt;/devices&gt;</programlisting>
377 </section>
378
379 <section id="isolation">
380 <title>Isolation</title>
381
382 <para>It may be desirable to isolate execution in a guest, to a specific
383 guest core. It might also be desirable to run a guest on a specific host
384 core.</para>
385
386 <para>To pin the virtual CPUs of the guest to specific cores, configure
387 the <literal>&lt;cputune&gt;</literal> contents as follows:</para>
388
389 <orderedlist>
390 <listitem>
391 <para>First explicitly state on which host core each guest core
392 shall run, by mapping <literal>vcpu</literal> to
393 <literal>cpuset</literal> in the <literal>&lt;vcpupin&gt;</literal>
394 tag.</para>
395 </listitem>
396
397 <listitem>
398 <para>In the <literal>&lt;cputune&gt;</literal> tag it is further
399 possible to specify on which CPU the emulator shall run by adding
400 the cpuset to the <literal>&lt;emulatorpin&gt;</literal> tag.</para>
401
402 <programlisting>&lt;vcpu placement='static'&gt;2&lt;/vcpu&gt;
403&lt;cputune&gt;
404 &lt;vcpupin vcpu='0' cpuset='2'/&gt;
405 &lt;vcpupin vcpu='1' cpuset='3'/&gt;
406 &lt;emulatorpin cpuset="2"/&gt;
407&lt;/cputune&gt;</programlisting>
408
409 <para><literal>libvirt</literal> will group all threads belonging to
410 a qemu instance into cgroups that will be created for that purpose.
411 It is possible to supply a base name for those cgroups using the
412 <literal>&lt;resource&gt;</literal> tag:</para>
413
414 <programlisting>&lt;resource&gt;
415 &lt;partition&gt;/rt&lt;/partition&gt;
416&lt;/resource&gt;</programlisting>
417 </listitem>
418 </orderedlist>
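
 <para>The resulting pinning can be verified at runtime; a quick
 sketch, assuming the guest name used in the configuration examples
 later in this chapter:</para>

 <programlisting>virsh vcpupin vm_vhost
virsh emulatorpin vm_vhost</programlisting>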
419 </section>
420
421 <section id="network-libvirt">
422 <title>Networking using libvirt</title>
423
424 <para>Command <command>virsh net-create</command> starts a network. If
425 any networks are listed in the guest XML file, those networks must be
426 started before the guest is started. As an example, if the network is
427 defined in a file named example-net.xml, it is started as
428 follows:</para>
429
430 <programlisting>virsh net-create example-net.xml</programlisting>

 <para>where <literal>example-net.xml</literal> contains:</para>

431 <programlisting>&lt;network&gt;
432 &lt;name&gt;sriov&lt;/name&gt;
433 &lt;forward mode='hostdev' managed='yes'&gt;
434 &lt;pf dev='eno3'/&gt;
435 &lt;/forward&gt;
436&lt;/network&gt;</programlisting>
437
438 <para><literal>libvirt</literal> is a virtualization API that supports
439 virtual network creation. These networks can be connected to guests and
440 containers by referencing the network in the guest XML file. It is
441 possible to have a virtual network persistently running on the host by
442 starting the network with command <command>virsh net-define</command>
443 instead of the previously mentioned <command>virsh
444 net-create</command>.</para>
445
446 <para>An example for the sample network defined in
447 <literal>meta-vt/recipes-example/virt-example/files/example-net.xml</literal>:</para>
448
449 <programlisting>virsh net-define example-net.xml</programlisting>
450
451 <para>Command <command>virsh net-autostart</command> enables a
452 persistent network to start automatically when the libvirt daemon
453 starts:</para>
454
455 <programlisting>virsh net-autostart example-net</programlisting>
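
 <para>Defined networks and their autostart state can be listed as
 follows (a quick sketch):</para>

 <programlisting>virsh net-list --all</programlisting>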
456
457 <para>The guest XML configuration file must be updated to access the
458 newly created network, like so:</para>
459
460 <programlisting> &lt;interface type='network'&gt;
461 &lt;source network='sriov'/&gt;
462 &lt;/interface&gt;</programlisting>
463
464 <para>Presented below are a few modes of network access from a guest
465 using <command>virsh</command>:</para>
466
467 <itemizedlist>
468 <listitem>
469 <para><emphasis role="bold">vhost-user interface</emphasis></para>
470
471 <para>See the Open vSwitch chapter on how to create a vhost-user
472 interface using Open vSwitch. Currently there is no Open vSwitch
473 support for networks that are managed by libvirt (e.g. NAT). As of
474 now, only bridged networks are supported (those where the user has
475 to manually create the bridge).</para>
476
477 <programlisting> &lt;interface type='vhostuser'&gt;
478 &lt;mac address='00:00:00:00:00:01'/&gt;
479 &lt;source type='unix' path='/var/run/openvswitch/vhost-user1' mode='client'/&gt;
480 &lt;model type='virtio'/&gt;
481 &lt;driver queues='1'&gt;
482 &lt;host mrg_rxbuf='off'/&gt;
483 &lt;/driver&gt;
484 &lt;/interface&gt;</programlisting>
485 </listitem>
486
487 <listitem>
488 <para><emphasis role="bold">PCI passthrough
489 (SR-IOV)</emphasis></para>
490
491 <para>The KVM hypervisor supports attaching PCI devices on the host
492 system to guests. PCI passthrough gives guests exclusive access to
493 PCI devices for a range of tasks, and allows PCI devices to appear
494 and behave as if they were physically attached to the guest operating
495 system.</para>
496
497 <para>Preparing an Intel system for PCI passthrough is done like
498 so:</para>
499
500 <itemizedlist>
501 <listitem>
502 <para>Enable the Intel VT-d extensions in BIOS</para>
503 </listitem>
504
505 <listitem>
506 <para>Activate Intel VT-d in the kernel by using
507 <literal>intel_iommu=on</literal> as a kernel boot
508 parameter</para>
509 </listitem>
510
511 <listitem>
512 <para>Allow unsafe interrupts in case the system doesn't support
513 interrupt remapping. This can be done using
514 <literal>vfio_iommu_type1.allow_unsafe_interrupts=1</literal> as
515 a boot kernel parameter.</para>
516 </listitem>
517 </itemizedlist>
518
519 <para>VFs must be created on the host before starting the
520 guest:</para>
521
522 <programlisting>$ echo 2 &gt; /sys/class/net/eno3/device/sriov_numvfs
523$ modprobe vfio_pci
524$ dpdk-devbind.py --bind=vfio-pci 0000:03:10.0</programlisting>

 <para>The VF is then referenced from the guest XML file:</para>

525 <programlisting> &lt;interface type='hostdev' managed='yes'&gt;
526 &lt;source&gt;
527 &lt;address type='pci' domain='0x0' bus='0x03' slot='0x10' function='0x0'/&gt;
528 &lt;/source&gt;
529 &lt;mac address='52:54:00:6d:90:02'/&gt;
530 &lt;/interface&gt;</programlisting>
531 </listitem>
532
533 <listitem>
534 <para><emphasis role="bold">Bridge interface</emphasis></para>
535
536 <para>In case an OVS bridge exists on host, it can be used to
537 connect the guest:</para>
538
539 <programlisting> &lt;interface type='bridge'&gt;
540 &lt;mac address='52:54:00:71:b1:b6'/&gt;
541 &lt;source bridge='ovsbr0'/&gt;
542 &lt;virtualport type='openvswitch'/&gt;
543 &lt;address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/&gt;
544 &lt;/interface&gt;</programlisting>
545
546 <para>For further details on the network XML format, see <ulink
547 url="http://libvirt.org/formatnetwork.html">http://libvirt.org/formatnetwork.html</ulink>.</para>
548 </listitem>
549 </itemizedlist>
550 </section>
551
552 <section id="libvirt-guest-config-ex">
553 <title>Libvirt guest configuration examples</title>
554
555 <section id="guest-config-vhost-user-interface">
556 <title>Guest configuration with vhost-user interface</title>
557
558 <programlisting>&lt;domain type='kvm'&gt;
559 &lt;name&gt;vm_vhost&lt;/name&gt;
560 &lt;uuid&gt;4a9b3f53-fa2a-47f3-a757-dd87720d9d1d&lt;/uuid&gt;
561 &lt;memory unit='KiB'&gt;4194304&lt;/memory&gt;
562 &lt;currentMemory unit='KiB'&gt;4194304&lt;/currentMemory&gt;
563 &lt;memoryBacking&gt;
564 &lt;hugepages&gt;
565 &lt;page size='1' unit='G' nodeset='0'/&gt;
566 &lt;/hugepages&gt;
567 &lt;/memoryBacking&gt;
568 &lt;vcpu placement='static'&gt;2&lt;/vcpu&gt;
569 &lt;cputune&gt;
570 &lt;shares&gt;4096&lt;/shares&gt;
571 &lt;vcpupin vcpu='0' cpuset='4'/&gt;
572 &lt;vcpupin vcpu='1' cpuset='5'/&gt;
573 &lt;emulatorpin cpuset='4,5'/&gt;
574 &lt;/cputune&gt;
575 &lt;os&gt;
576 &lt;type arch='x86_64' machine='pc'&gt;hvm&lt;/type&gt;
577 &lt;kernel&gt;/mnt/qemu/bzImage&lt;/kernel&gt;
578 &lt;cmdline&gt;root=/dev/vda console=ttyS0,115200&lt;/cmdline&gt;
579 &lt;boot dev='hd'/&gt;
580 &lt;/os&gt;
581 &lt;features&gt;
582 &lt;acpi/&gt;
583 &lt;apic/&gt;
584 &lt;/features&gt;
585 &lt;cpu mode='host-model'&gt;
586 &lt;model fallback='allow'/&gt;
587 &lt;topology sockets='2' cores='1' threads='1'/&gt;
588 &lt;numa&gt;
589 &lt;cell id='0' cpus='0-1' memory='4194304' unit='KiB' memAccess='shared'/&gt;
590 &lt;/numa&gt;
591 &lt;/cpu&gt;
592 &lt;on_poweroff&gt;destroy&lt;/on_poweroff&gt;
593 &lt;on_reboot&gt;restart&lt;/on_reboot&gt;
594 &lt;on_crash&gt;destroy&lt;/on_crash&gt;
595 &lt;devices&gt;
596 &lt;emulator&gt;/usr/bin/qemu-system-x86_64&lt;/emulator&gt;
597 &lt;disk type='file' device='disk'&gt;
598 &lt;driver name='qemu' type='raw' cache='none'/&gt;
599 &lt;source file='/mnt/qemu/enea-image-virtualization-guest-qemux86-64.ext4'/&gt;
600 &lt;target dev='vda' bus='virtio'/&gt;
601 &lt;/disk&gt;
602 &lt;interface type='vhostuser'&gt;
603 &lt;mac address='00:00:00:00:00:01'/&gt;
604 &lt;source type='unix' path='/var/run/openvswitch/vhost-user1' mode='client'/&gt;
605 &lt;model type='virtio'/&gt;
606 &lt;driver queues='1'&gt;
607 &lt;host mrg_rxbuf='off'/&gt;
608 &lt;/driver&gt;
609 &lt;/interface&gt;
610 &lt;serial type='pty'&gt;
611 &lt;target port='0'/&gt;
612 &lt;/serial&gt;
613 &lt;console type='pty'&gt;
614 &lt;target type='serial' port='0'/&gt;
615 &lt;/console&gt;
616 &lt;/devices&gt;
617&lt;/domain&gt;</programlisting>
618 </section>
619
620 <section id="guest-config-pci-passthrough">
621 <title>Guest configuration with PCI passthrough</title>
622
623 <programlisting>&lt;domain type='kvm'&gt;
624 &lt;name&gt;vm_sriov1&lt;/name&gt;
625 &lt;uuid&gt;4a9b3f53-fa2a-47f3-a757-dd87720d9d1d&lt;/uuid&gt;
626 &lt;memory unit='KiB'&gt;4194304&lt;/memory&gt;
627 &lt;currentMemory unit='KiB'&gt;4194304&lt;/currentMemory&gt;
628 &lt;memoryBacking&gt;
629 &lt;hugepages&gt;
630 &lt;page size='1' unit='G' nodeset='0'/&gt;
631 &lt;/hugepages&gt;
632 &lt;/memoryBacking&gt;
633 &lt;vcpu&gt;2&lt;/vcpu&gt;
634 &lt;os&gt;
635 &lt;type arch='x86_64' machine='q35'&gt;hvm&lt;/type&gt;
636 &lt;kernel&gt;/mnt/qemu/bzImage&lt;/kernel&gt;
637 &lt;cmdline&gt;root=/dev/vda console=ttyS0,115200&lt;/cmdline&gt;
638 &lt;boot dev='hd'/&gt;
639 &lt;/os&gt;
640 &lt;features&gt;
641 &lt;acpi/&gt;
642 &lt;apic/&gt;
643 &lt;/features&gt;
644 &lt;cpu mode='host-model'&gt;
645 &lt;model fallback='allow'/&gt;
646 &lt;topology sockets='1' cores='2' threads='1'/&gt;
647 &lt;numa&gt;
648 &lt;cell id='0' cpus='0' memory='4194304' unit='KiB' memAccess='shared'/&gt;
649 &lt;/numa&gt;
650 &lt;/cpu&gt;
651 &lt;on_poweroff&gt;destroy&lt;/on_poweroff&gt;
652 &lt;on_reboot&gt;restart&lt;/on_reboot&gt;
653 &lt;on_crash&gt;destroy&lt;/on_crash&gt;
654 &lt;devices&gt;
655 &lt;emulator&gt;/usr/bin/qemu-system-x86_64&lt;/emulator&gt;
656 &lt;disk type='file' device='disk'&gt;
657 &lt;driver name='qemu' type='raw' cache='none'/&gt;
658 &lt;source file='/mnt/qemu/enea-image-virtualization-guest-qemux86-64.ext4'/&gt;
659 &lt;target dev='vda' bus='virtio'/&gt;
660 &lt;/disk&gt;
661 &lt;interface type='hostdev' managed='yes'&gt;
662 &lt;source&gt;
663 &lt;address type='pci' domain='0x0' bus='0x03' slot='0x10' function='0x0'/&gt;
664 &lt;/source&gt;
665 &lt;mac address='52:54:00:6d:90:02'/&gt;
666 &lt;/interface&gt;
667 &lt;serial type='pty'&gt;
668 &lt;target port='0'/&gt;
669 &lt;/serial&gt;
670 &lt;console type='pty'&gt;
671 &lt;target type='serial' port='0'/&gt;
672 &lt;/console&gt;
673 &lt;/devices&gt;
674&lt;/domain&gt;</programlisting>
675 </section>
676
677 <section id="guest-config-bridge-interface">
678 <title>Guest configuration with bridge interface</title>
679
680 <programlisting>&lt;domain type='kvm'&gt;
681 &lt;name&gt;vm_bridge&lt;/name&gt;
682 &lt;uuid&gt;4a9b3f53-fa2a-47f3-a757-dd87720d9d1d&lt;/uuid&gt;
683 &lt;memory unit='KiB'&gt;4194304&lt;/memory&gt;
684 &lt;currentMemory unit='KiB'&gt;4194304&lt;/currentMemory&gt;
685 &lt;memoryBacking&gt;
686 &lt;hugepages&gt;
687 &lt;page size='1' unit='G' nodeset='0'/&gt;
688 &lt;/hugepages&gt;
689 &lt;/memoryBacking&gt;
690 &lt;vcpu placement='static'&gt;2&lt;/vcpu&gt;
691 &lt;cputune&gt;
692 &lt;shares&gt;4096&lt;/shares&gt;
693 &lt;vcpupin vcpu='0' cpuset='4'/&gt;
694 &lt;vcpupin vcpu='1' cpuset='5'/&gt;
695 &lt;emulatorpin cpuset='4,5'/&gt;
696 &lt;/cputune&gt;
697 &lt;os&gt;
698 &lt;type arch='x86_64' machine='q35'&gt;hvm&lt;/type&gt;
699 &lt;kernel&gt;/mnt/qemu/bzImage&lt;/kernel&gt;
700 &lt;cmdline&gt;root=/dev/vda console=ttyS0,115200&lt;/cmdline&gt;
701 &lt;boot dev='hd'/&gt;
702 &lt;/os&gt;
703 &lt;features&gt;
704 &lt;acpi/&gt;
705 &lt;apic/&gt;
706 &lt;/features&gt;
707 &lt;cpu mode='host-model'&gt;
708 &lt;model fallback='allow'/&gt;
709 &lt;topology sockets='2' cores='1' threads='1'/&gt;
710 &lt;numa&gt;
711 &lt;cell id='0' cpus='0-1' memory='4194304' unit='KiB' memAccess='shared'/&gt;
712 &lt;/numa&gt;
713 &lt;/cpu&gt;
714 &lt;on_poweroff&gt;destroy&lt;/on_poweroff&gt;
715 &lt;on_reboot&gt;restart&lt;/on_reboot&gt;
716 &lt;on_crash&gt;destroy&lt;/on_crash&gt;
717 &lt;devices&gt;
718 &lt;emulator&gt;/usr/bin/qemu-system-x86_64&lt;/emulator&gt;
719 &lt;disk type='file' device='disk'&gt;
720 &lt;driver name='qemu' type='raw' cache='none'/&gt;
721 &lt;source file='/mnt/qemu/enea-image-virtualization-guest-qemux86-64.ext4'/&gt;
722 &lt;target dev='vda' bus='virtio'/&gt;
723 &lt;/disk&gt;
724 &lt;interface type='bridge'&gt;
725 &lt;mac address='52:54:00:71:b1:b6'/&gt;
726 &lt;source bridge='ovsbr0'/&gt;
727 &lt;virtualport type='openvswitch'/&gt;
728 &lt;address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/&gt;
729 &lt;/interface&gt;
730 &lt;serial type='pty'&gt;
731 &lt;target port='0'/&gt;
732 &lt;/serial&gt;
733 &lt;console type='pty'&gt;
734 &lt;target type='serial' port='0'/&gt;
735 &lt;/console&gt;
736 &lt;/devices&gt;
737&lt;/domain&gt;</programlisting>
738 </section>
739 </section>
740 </section>
741</chapter> \ No newline at end of file
diff --git a/doc/book-enea-nfv-access-guide/doc/images/virtual_network_functions.png b/doc/book-enea-nfv-access-guide/doc/images/virtual_network_functions.png
new file mode 100644
index 0000000..4011de8
--- /dev/null
+++ b/doc/book-enea-nfv-access-guide/doc/images/virtual_network_functions.png
Binary files differ
diff --git a/doc/book-enea-nfv-access-guide/doc/overview.xml b/doc/book-enea-nfv-access-guide/doc/overview.xml
new file mode 100644
index 0000000..d3a0d8a
--- /dev/null
+++ b/doc/book-enea-nfv-access-guide/doc/overview.xml
@@ -0,0 +1,164 @@
1<?xml version="1.0" encoding="ISO-8859-1"?>
2<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
3"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
4<chapter id="overview">
5 <title>Overview</title>
6
7 <para>The NFV Access Guide available with this release of Enea
8 Linux seeks to provide further information that will help all intended
9 users make the most of the virtualization features.</para>
10
11 <section id="description">
12 <title>NFV Access Description</title>
13
14 <para>Enea NFV Access is lightweight virtualization software
15 designed for deployment on edge devices at customer premises. Streamlined
16 for high networking performance and minimal footprints for both the host and VNFs, it enables very high compute density.</para>
17
18 <para>Enea NFV Access also provides a foundation for vCPE agility and
19 innovation, reducing cost and complexity for computing at the network
20 edge. It supports multiple architectures and scales from small white box
21 edge devices up to high-end network servers. Thanks to the streamlined
22 footprint, Enea NFV Access can be deployed on systems as small as single
23 2-core ARM devices. It scales up to clustered 24 core x86 Xeon servers and
24 beyond, allowing multiple VNFs on the same machine, and eliminating the
25 need to use different virtualization software for different hardware
26 platforms, saving costs through single source provisioning.</para>
27
28 <para>Optimized virtual networking performance provides low virtualized
29 networking latency, high virtualized networking throughput (10 Gb wire
30 speed), and low processing overhead. It allows high compute density on
31 white box hardware, maintaining performance when moving functionality from
32 application specific appliances to software on standard hardware. The
33 optimized boot speed minimizes the time from reboot to active services,
34 improving availability.</para>
35
36 <para>Enea NFV Access provides virtualization using both containers and
37 virtual machines. Containers provide lightweight virtualization for a
38 smaller VNF footprint and a very short time interval from start to enabled
39 network services. VMs provide virtualization with secure VNF sandboxing
40 and is the preferred virtualization method for OPNFV compliance. Enea NFV
41 Access allows combinations of containers and VMs for highest possible user
42 adaptability.</para>
43
44 <para>Flexible interfaces for VNF lifecycle management and service
45 function chaining are important to allow a smooth transition from
46 traditional network appliances to virtualized network functions in
47 existing networks, as they plug into a variety of interfaces. Enea NFV
48 Access supports VNF lifecycle management and service function chaining
49 through OpenStack, NETCONF, REST, CLI and Docker. It integrates a powerful
50 device management framework that enables full FCAPS functionality for
51 powerful management of the platform.</para>
52
53 <para>Building on open source, Enea NFV Access prevents vendor lock-in
54 thanks to its completely open standards and interfaces. Unlike proprietary
55 platforms that either do not allow decoupling of software from hardware,
56 or prevent VNF portability, Enea NFV Access includes optimized components
57 with open interfaces to allow full portability and
58 interoperability.</para>
59 </section>
60
61 <section id="components">
62 <title>NFV Access Components</title>
63
64 <para>Enea NFV Access is built on highly optimized open source and
65 value-adding components that provide standard interfaces but with boosted
66 performance.</para>
67
68 <mediaobject>
69 <imageobject>
70 <imagedata align="center"
71 fileref="images/virtual_network_functions.png" />
72 </imageobject>
73 </mediaobject>
74
75 <para>Enea NFV Access includes the following key components:</para>
76
77 <itemizedlist>
78 <listitem>
79 <para>Linux Kernel &ndash; Optimized Linux kernel with the focus on
80 vCPE systems characteristics.</para>
81 </listitem>
82
83 <listitem>
84 <para>KVM &ndash; Virtualization with virtual machines. KVM is the
85 standard virtualization engine for Linux based systems.</para>
86 </listitem>
87
88 <listitem>
89 <para>Docker &ndash; Docker provides a lightweight configuration using
90 containers. Docker is the standard platform for container
91 virtualization.</para>
92 </listitem>
93
94 <listitem>
95 <para>Virtual switching &ndash; Optimized OVS-DPDK provides high
96 throughput and low latency.</para>
97 </listitem>
98
99 <listitem>
100 <para>Edge Link &ndash; Edge Link provides interfaces to orchestration
101 for centralized VNF lifecycle management and service function
102 chaining:</para>
103
104 <orderedlist>
105 <listitem>
106 <para>NETCONF</para>
107 </listitem>
108
109 <listitem>
110 <para>OpenStack</para>
111 </listitem>
112
113 <listitem>
114 <para>Docker</para>
115 </listitem>
116
117 <listitem>
118 <para>REST</para>
119 </listitem>
120
121 <listitem>
122 <para>CLI</para>
123 </listitem>
124 </orderedlist>
125 </listitem>
126
127 <listitem>
128 <para>APT packet management &ndash; Feature rich repository of
129 prebuilt open source packages for extending and adapting the platform
130 using APT Package Management.</para>
131 </listitem>
132
133 <listitem>
134 <para>CLI based VNF management &ndash; CLI access over virsh and
135 libvirt.</para>
136 </listitem>
137
138 <listitem>
139 <para>FCAPS framework &ndash; The device management framework for
140 managing the platform is capable of providing full FCAPS functionality
141 to orchestration or network management systems.</para>
142 </listitem>
143
144 <listitem>
145 <para>Data plane &ndash; High performance data plane that includes the
146 following optimized data plane drivers:</para>
147
148 <orderedlist>
149 <listitem>
150 <para>DPDK</para>
151 </listitem>
152
153 <listitem>
154 <para>OpenFastPath (OFP)</para>
155 </listitem>
156
157 <listitem>
158 <para>ODP</para>
159 </listitem>
160 </orderedlist>
161 </listitem>
162 </itemizedlist>
163 </section>
164</chapter> \ No newline at end of file
diff --git a/doc/book-enea-nfv-access-guide/doc/ovs.xml b/doc/book-enea-nfv-access-guide/doc/ovs.xml
new file mode 100644
index 0000000..3400975
--- /dev/null
+++ b/doc/book-enea-nfv-access-guide/doc/ovs.xml
@@ -0,0 +1,161 @@
1<?xml version="1.0" encoding="ISO-8859-1"?>
2<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
3"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
4<chapter id="ovs">
5 <title>Open Virtual Switch</title>
6
7 <para>Open vSwitch (OVS) is an open-source multilayer virtual switch
8 designed to be used in virtualized environments to forward traffic between
9 different VMs on the same host, and also between VMs and the physical
10 network.</para>
11
12 <para>Native OVS forwarding is handled by two major components: a user-space
13 daemon called <literal>ovs-vswitchd</literal> and a
14 <literal>fastpath</literal> kernel module used to accelerate the data path.
15 The fastpath kernel module will handle packets received on the NIC by simply
16 consulting a flow table with corresponding action rules (e.g. to forward the
17 packet or modify its headers). If no matching entry is found in the flow
18 table, the packet is copied to user-space and sent to the ovs-vswitchd
19 daemon, which determines how it should be handled ("slowpath").</para>
20
21 <para>The packet is then passed back to the kernel module together with the
22 desired action and the flow table is updated, so that subsequent packets in
23 the same flow can be handled in fastpath without any user-space interaction.
24 In this way, OVS eliminates a lot of the context switching between
25 kernel-space and user-space, but the throughput is still limited by the
26 capacity of the Linux kernel stack.</para>
27
28 <section id="ovs-dpdk">
29 <title>OVS-DPDK</title>
30
31 <para>To improve performance, OVS supports integration with Intel DPDK
32 libraries to operate entirely in user-space (OVS-DPDK). DPDK Poll Mode
33 Drivers (PMDs) enable direct transfers of packets between the physical NIC
34 and user-space, thereby eliminating the overhead of interrupt handling and
35 Linux kernel network stack processing. OVS-DPDK provides DPDK-backed
36 vhost-user ports as the primary way to connect guests to this datapath.
37 The vhost-user interfaces are transparent to the guest.</para>
38 </section>
39
40 <section id="ovs-commands">
41 <title>OVS commands</title>
42
43 <para>OVS provides a rich set of command line management tools, most
44 importantly:</para>
45
46 <itemizedlist>
47 <listitem>
48 <para>ovs-vsctl: Used to manage and inspect switch configurations,
49 e.g. to create bridges and to add/remove ports.</para>
50 </listitem>
51
52 <listitem>
53 <para>ovs-ofctl: Used to configure and monitor flows.</para>
54 </listitem>
55 </itemizedlist>
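
 <para>A few common inspection commands are shown below as a quick
 sketch; the bridge name <literal>ovsbr0</literal> is an assumption
 matching the setup example later in this chapter:</para>

 <programlisting>ovs-vsctl show # display bridges and their ports
ovs-ofctl dump-flows ovsbr0 # list flows installed on a bridge
ovs-ofctl dump-ports ovsbr0 # show per-port traffic statistics</programlisting>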
56
57 <para>For more information about Open vSwitch, see <ulink
58 url="http://openvswitch.org">http://openvswitch.org</ulink>.</para>
59 </section>
60
61 <section id="config-ovs-dpdk">
62 <title>Configuring OVS-DPDK for improved performance</title>
63
64 <section id="dpdk-lcore-mask">
65 <title>dpdk-lcore-mask</title>
66
67 <para>Specifies the CPU core affinity for DPDK lcore threads. The lcore
68 threads are used for DPDK library tasks. For performance it is best to
69 set this to a single core on the system, and it should not overlap the
70 pmd-cpu-mask, as seen in the example below.</para>
71
72 <para>Example: To use core 1:</para>
73
74 <programlisting>ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x1</programlisting>
75 </section>
76
77 <section id="pmd-cpu-mask">
78 <title>pmd-cpu-mask</title>
79
80 <para>The DPDK PMD threads polling for incoming packets are CPU bound
81 and should be pinned to isolated cores for optimal performance.</para>
82
83 <para>If OVS-DPDK receives traffic on multiple ports, for example when
84 DPDK and vhost-user ports are used for bi-directional traffic, the
85 performance can be significantly improved by creating multiple PMD
86 threads and affinitizing them to separate cores in order to share the
87 workload, with each thread responsible for an individual port. The cores
88 should not be hyperthreads on the same CPU.</para>
89
90 <para>The PMD core affinity is specified by setting an appropriate core
91 mask. Example: using cores 2 and 3:</para>
92
93 <programlisting>ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0xc</programlisting>
94 </section>
95 </section>
96
97 <section id="setup-ovs-dpdk">
98 <title>How to set up OVS-DPDK</title>
99
100 <para>The DPDK must be configured prior to setting up OVS-DPDK. See
101 [FIXME] for DPDK setup instructions, then follow these steps:</para>
102
103 <orderedlist>
104 <listitem>
105 <para>Clean up the environment:</para>
106
107 <programlisting>killall ovsdb-server ovs-vswitchd
108rm -f /var/run/openvswitch/vhost-user*
109rm -f /etc/openvswitch/conf.db</programlisting>
110 </listitem>
111
112 <listitem>
113 <para>Start the ovsdb-server:</para>
114
115 <programlisting>export DB_SOCK=/var/run/openvswitch/db.sock
116ovsdb-tool create /etc/openvswitch/conf.db /usr/share/openvswitch/vswitch.ovsschema
117ovsdb-server --remote=punix:$DB_SOCK \
118--remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach</programlisting>
119 </listitem>
120
121 <listitem>
122 <para>Start ovs-vswitchd with DPDK support enabled:</para>
123
124 <programlisting>ovs-vsctl --no-wait init
125ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x1
126ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0xc
127ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
128ovs-vswitchd unix:$DB_SOCK --pidfile --detach \
129--log-file=/var/log/openvswitch/ovs-vswitchd.log</programlisting>
130 </listitem>
131
132 <listitem>
133 <para>Create the OVS bridge and attach ports:</para>
134
135 <programlisting>ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
136ovs-vsctl add-port ovsbr0 dpdk0 -- set Interface dpdk0 type=dpdk \
137options:dpdk-devargs=&lt;PCI device&gt;</programlisting>
138 </listitem>
139
140 <listitem>
141 <para>Add DPDK vhost-user ports:</para>
142
143 <programlisting>ovs-vsctl add-port ovsbr0 vhost-user1 -- set Interface vhost-user1 type=dpdkvhostuser</programlisting>
144
145 <para>This command creates a socket at
146 <literal>/var/run/openvswitch/vhost-user1</literal>, which can be
147 provided to the VM on the QEMU command line. See [FIXME] for
148 details.</para>
149 </listitem>
150
151 <listitem>
152 <para>Define flows:</para>
153
154 <programlisting>ovs-ofctl del-flows ovsbr0
155ovs-ofctl show ovsbr0
156ovs-ofctl add-flow ovsbr0 in_port=1,action=output:2
157ovs-ofctl add-flow ovsbr0 in_port=2,action=output:1</programlisting>
158 </listitem>
159 </orderedlist>
160 </section>
161</chapter> \ No newline at end of file
diff --git a/doc/book-enea-nfv-access-guide/doc/using_nfv_access_sdks.xml b/doc/book-enea-nfv-access-guide/doc/using_nfv_access_sdks.xml
new file mode 100644
index 0000000..18602a1
--- /dev/null
+++ b/doc/book-enea-nfv-access-guide/doc/using_nfv_access_sdks.xml
@@ -0,0 +1,203 @@
1<?xml version="1.0" encoding="ISO-8859-1"?>
2<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
3"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
4<chapter id="workflow">
5 <title>Using NFV Access SDKs</title>
6
7 <para>Enea NFV Access comes with two different toolchains, one for
8 developing applications for the host and one for applications running in the
9 guest VM. Each is wrapped together with an environment-setup script into a
10 shell archive and is available under the Download section on
11 portal.enea.com. They have self-explanatory names.</para>
12
13 <itemizedlist>
14 <listitem>
15 <para><literal>inteld1521/sdk/enea-glibc-x86_64-enea-image-virtualization-host-sdk-corei7-64-toolchain-7.0.sh</literal>
16 - for host applications.</para>
17 </listitem>
18
19 <listitem>
20 <para><literal>qemux86-64/sdk/enea-glibc-x86_64-enea-image-virtualization-guest-sdk-core2-64-toolchain-7.0.sh</literal>
21 - for guest applications.</para>
22 </listitem>
23 </itemizedlist>
24
25 <section id="install-crosscomp">
26 <title>Installing the Cross-Compilation Toolchain</title>
27
28 <para>Before cross-compiling applications for your target, you need to
29 install the corresponding toolchain on your workstation. To do that,
30 simply run the installer and follow the steps included with it:</para>
31
32 <orderedlist>
33 <listitem>
34 <para><programlisting>$ ./enea-glibc-x86_64-enea-image-virtualization-guest-sdk-core2-64-toolchain-7.0.sh</programlisting>When
35 prompted, choose the directory in which to install the toolchain,
36 referred to as <literal>&lt;sdkdir&gt;</literal>. </para>
37
38 <para>A default path where the toolchain will be installed will be
39 shown in the prompt. The installer unpacks the environment setup
40 script in <literal>&lt;sdkdir&gt;</literal> and the toolchain under
41 <literal>&lt;sdkdir&gt;/sysroots</literal>.</para>
42
43 <note>
44 <para>Choose a unique directory for each toolchain. Installing a
45 second toolchain of any type in the same directory as a previously
46 installed one will break the <literal>$PATH</literal> variable of
47 the first one.</para>
48 </note>
49 </listitem>
50
51 <listitem>
52 <para>Set up the toolchain environment for your target by sourcing the
53 environment-setup script. Example: <programlisting>$ source &lt;sdkdir&gt;/environment-setup-core2-64-enea-linux</programlisting></para>
54 </listitem>
55 </orderedlist>
56 </section>
57
58 <section id="crosscomp-apps">
59 <title>Cross-Compiling Applications from Command Line</title>
60
61 <para>Once the environment-setup script is sourced, you can make your
62 applications as usual and have them compiled for your target. Below you
63 see how to cross-compile from the command line.</para>
64
65 <orderedlist>
66 <listitem>
67 <para>Create a Makefile for your application. Example: a simple
68 Makefile and application:</para>
69
70 <programlisting>helloworld:helloworld.o
71 $(CC) -o helloworld helloworld.o
72clean:
73 rm -f *.o helloworld</programlisting>

 <para>helloworld.c:</para>

74 <programlisting>#include &lt;stdio.h&gt;
75int main(void) {
76 printf("Hello World\n");
77 return 0;
78}</programlisting>
79 </listitem>
80
81 <listitem>
82 <para>Run <command>make</command> to cross-compile your application
83 according to the environment set up:</para>
84
85 <programlisting>$ make</programlisting>
86 </listitem>
87
88 <listitem>
89 <para>Deploy the helloworld program to your target and run it:</para>
90
91 <programlisting># ./helloworld
92Hello World</programlisting>
93 </listitem>
94 </orderedlist>
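
 <para>To confirm that the binary was cross-compiled for the target
 rather than for the workstation, it can be inspected on the build host
 before deployment; the output should report an ELF executable for the
 target architecture. A quick sketch:</para>

 <programlisting>$ file helloworld</programlisting>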
95 </section>
96
97 <section id="crosscomp-kern-mod">
98 <title>Cross-Compiling Kernel Modules</title>
99
100 <para>Before cross-compiling kernel modules, you need to make sure the
101 installed toolchain includes the kernel source tree, which should be
102 available at:
103 <literal>&lt;sdkdir&gt;/sysroots/&lt;targetarch&gt;-enea-linux/usr/src/kernel</literal>.</para>
104
105 <para>Once the environment-setup script is sourced, you can make your
106 kernel modules as usual and get them compiled for your target. Below you
107 see how to cross-compile a kernel module.</para>
108
109 <orderedlist>
110 <listitem>
111 <para>Create a Makefile for the kernel module. Example: a simple
112 Makefile and kernel module:</para>
113
114 <programlisting>obj-m := hello.o
115PWD := $(shell pwd)
116
117KERNEL_SRC := &lt;full path to kernel source tree&gt;
118
119all: scripts
120 $(MAKE) -C $(KERNEL_SRC) M=$(PWD) LDFLAGS="" modules
121scripts:
122 $(MAKE) -C $(KERNEL_SRC) scripts
123clean:
124 $(MAKE) -C $(KERNEL_SRC) M=$(PWD) LDFLAGS="" clean</programlisting>

 <para>hello.c:</para>

125 <programlisting>#include &lt;linux/module.h&gt; /* Needed by all modules */
126#include &lt;linux/kernel.h&gt; /* Needed for KERN_INFO */
127#include &lt;linux/init.h&gt; /* Needed for the macros */
128
129static int __init hello_start(void)
130{
131 printk(KERN_INFO "Loading hello module...\n");
132 printk(KERN_INFO "Hello, world\n");
133 return 0;
134}
135
136static void __exit hello_end(void)
137{
138 printk(KERN_INFO "Goodbye, world\n");
139}
140
141module_init(hello_start);
142module_exit(hello_end);
143
144MODULE_LICENSE("GPL");</programlisting>
145 </listitem>
146
147 <listitem>
148 <para>Run <command>make</command> to cross-compile your kernel module
149 according to the environment set up:</para>
150
151 <programlisting>$ make</programlisting>
152 </listitem>
153
154 <listitem>
155 <para>Deploy the kernel module <literal>hello.ko</literal> to your
156 target and install/remove it:</para>
157
158 <programlisting># insmod hello.ko
159# rmmod hello.ko
160# dmesg
161[...] Loading hello module...
162[...] Hello, world
163[...] Goodbye, world</programlisting>
164 </listitem>
165 </orderedlist>
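
 <para>Before loading, the module metadata can be inspected to confirm
 that it was built against the target kernel; a quick sketch:</para>

 <programlisting># modinfo hello.ko</programlisting>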
166 </section>
167
168 <section id="deploy-artifacts">
169 <title>Deploying your artifacts</title>
170
171 <orderedlist>
172 <listitem>
173 <para>Deploying on host</para>
174
175 <para>You can use <literal>ssh</literal> to deploy your artifacts on
176 the host target. For this you will need a network connection to the
177 target, and can then use <literal>scp</literal> to copy the files to
178 the desired location, as in the sketch after this list.</para>
179 </listitem>
180
181 <listitem>
182 <para>Deploying on guest</para>
183
184 <para>You can deploy your artifacts onto the guest VM running on the
185 target in two steps:</para>
186
187 <itemizedlist>
188 <listitem>
189 <para>Deploy the artifacts onto the target by using the method
190 described above or any other method.</para>
191 </listitem>
192
193 <listitem>
194 <para>On the target, copy the artifacts to the guest rootfs. For
195 this, you will need to shut down the guest VM, mount the file
196 system on the target, copy your files onto it, unmount it and then
197 restart the guest VM as usual.</para>
198 </listitem>
199 </itemizedlist>
200 </listitem>
201 </orderedlist>
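
 <para>The following sketch illustrates both cases; the target IP
 address, file names and mount point are assumptions made purely for
 illustration:</para>

 <programlisting># On the workstation: copy the artifact to the host target
scp helloworld root@192.168.1.100:/home/root/

# On the target: with the guest VM shut down, mount the guest root
# file system image, copy the file in, then unmount and restart the
# guest as usual
mkdir -p /mnt/guest
mount -o loop enea-image-virtualization-guest-qemux86-64.ext4 /mnt/guest
cp /home/root/helloworld /mnt/guest/home/root/
umount /mnt/guest</programlisting>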
202 </section>
203</chapter> \ No newline at end of file