authorMiruna Paun <Miruna.Paun@enea.com>2017-06-30 17:35:58 +0200
committerMiruna Paun <Miruna.Paun@enea.com>2017-06-30 17:35:58 +0200
commiteeb28fcfbb693e00df2f5d1fd100dcbb548179fc (patch)
tree81c2fbdf2f6dbb5655dfa51c92919afb996de1af
parent34f088b7bf0ab2d368a4c9c0d238183140c7f42b (diff)
downloadel_releases-virtualization-eeb28fcfbb693e00df2f5d1fd100dcbb548179fc.tar.gz
Updated remaining content save for benchmarks
LXCR-7844 Signed-off-by: Miruna Paun <Miruna.Paun@enea.com>
-rw-r--r--doc/book-enea-nfv-access-platform-guide/doc/benchmarks.xml2
-rw-r--r--doc/book-enea-nfv-access-platform-guide/doc/container_virtualization.xml136
-rw-r--r--doc/book-enea-nfv-access-platform-guide/doc/getting_started.xml214
-rw-r--r--doc/book-enea-nfv-access-platform-guide/doc/hypervisor_virtualization.xml413
4 files changed, 759 insertions, 6 deletions
diff --git a/doc/book-enea-nfv-access-platform-guide/doc/benchmarks.xml b/doc/book-enea-nfv-access-platform-guide/doc/benchmarks.xml
index 0db4fa4..5d6e268 100644
--- a/doc/book-enea-nfv-access-platform-guide/doc/benchmarks.xml
+++ b/doc/book-enea-nfv-access-platform-guide/doc/benchmarks.xml
@@ -1,7 +1,7 @@
1<?xml version="1.0" encoding="ISO-8859-1"?> 1<?xml version="1.0" encoding="ISO-8859-1"?>
2<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN" 2<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
3"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd"> 3"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
4<chapter id="workflow"> 4<chapter condition="hidden" id="benchmarks">
5 <title>Benchmarks</title> 5 <title>Benchmarks</title>
6 6
7 <para></para> 7 <para></para>
diff --git a/doc/book-enea-nfv-access-platform-guide/doc/container_virtualization.xml b/doc/book-enea-nfv-access-platform-guide/doc/container_virtualization.xml
index 6f74061..c6ce223 100644
--- a/doc/book-enea-nfv-access-platform-guide/doc/container_virtualization.xml
+++ b/doc/book-enea-nfv-access-platform-guide/doc/container_virtualization.xml
@@ -1,8 +1,138 @@
1<?xml version="1.0" encoding="UTF-8"?> 1<?xml version="1.0" encoding="ISO-8859-1"?>
2<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN" 2<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
3"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd"> 3"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
4<chapter condition="hidden" id="workflow"> 4<chapter id="container-virtualization">
5 <title>Container Virtualization</title> 5 <title>Container Virtualization</title>
6 6
7 <para></para> 7 <section id="docker">
8 <title>Docker</title>
9
10 <para>Docker is an open-source project that automates the deployment of
11 applications inside software containers, by providing an additional layer
12 of abstraction and automation of operating-system-level virtualization on
13 Linux.</para>
14
15 <para>The software container mechanism uses resource isolation features
16 inside the Linux kernel, such as cgroups and kernel namespaces to allow
17 multiple containers to run within a single Linux instance, avoiding the
18 overhead of starting and maintaining virtual machines. </para>
19
20 <para>Containers are lightweight and include everything needed to run
21 themselves: code, runtime, system tools, system libraries and settings.
22 The main advantage provided by containers is that the encapsulated
23 software is isolated from its surroundings. For example, differences
24 between development and staging environments can be kept separate in order
25 to reduce conflicts between teams running different software on the same
26 infrastructure. </para>
27
28   <para>For a better understanding of what Docker is and how it works,
29   consult the official documentation provided on the Docker website:
30 <ulink
31 url="https://docs.docker.com/">https://docs.docker.com/</ulink>.</para>
32
33 <section id="launch-docker-container">
34 <title>Launching a Docker container</title>
35
36 <para>Docker provides a hello-world container which checks whether your
37 system is running the daemon correctly. This container can be launched
38 by simply running:</para>
39
40 <programlisting>&gt;docker run hello-world
41
42Hello from Docker!</programlisting>
43
44 <para>This message shows that your installation appears to be working
45 correctly.</para>
46 </section>
47
48 <section id="run-enfv-guest-image">
49 <title>Run an Enea NFV Access Platform guest image</title>
50
51 <para>Enea NFV Access Platform guest images can run inside Docker as any
52 other container can. Before starting an Enea NFV Access Platform guest
53   image, a root filesystem has to be imported into Docker:</para>
54
55 <programlisting>&gt;docker import enea-linux-virtualization-guest-x86-64.tar.gz el7guest</programlisting>
56
57 <para>To check that the Docker image has been imported successfully,
58 run:</para>
59
60 <programlisting>&gt;docker images</programlisting>
61
62 <para>Finally, start an Enea NFV Access Platform container with
63 <literal>bash</literal> running as the shell, by running:</para>
64
65 <programlisting>&gt;docker run -it el7guest /bin/bash</programlisting>
66 </section>
67
68 <section id="attach-ext-resources-docker-containers">
69 <title>Attach external resources to Docker containers</title>
70
71   <para>Any system resource present on the host machine can be attached
72   to, or accessed by, a Docker container.</para>
73
74 <para>Typically, if a file or folder on the host machine needs to be
75 attached to a container, that container should be launched with the
76 <literal>-v</literal> parameter. For example, to attach the
77   <literal>root</literal> user's home folder to a container, the command line
78 for Docker should have the following format:</para>
79
80 <programlisting>&gt;docker run -it -v /home/root:/home/host_root/ el7guest /bin/bash</programlisting>
81
82 <para>To check that folders have been properly passed from the host to
83 the container, create a file in the source folder on the host root
84   filesystem and check for its existence inside the container's destination
85 location.</para>
86
87 <section id="attach-vhost-descriptors">
88 <title>Attach vhost file descriptors</title>
89
90 <para>If OVS is running on the host and vhost file descriptors need to
91 be passed to the container, this can be done by either mapping the
92 folder where all the file descriptors are located or mapping the file
93 descriptor itself:</para>
94
95 <itemizedlist>
96 <listitem>
97 <para>Mapping the folder can be done as exemplified above:</para>
98
99   <programlisting>&gt;docker run -it --rm -v /var/run/openvswitch/:/var/run/openvswitch/ el7guest /bin/bash</programlisting>
100 </listitem>
101
102 <listitem>
103 <para>Mapping a file descriptor is done in a similar way, but the
104 <literal>-v</literal> flag needs to point directly to it:</para>
105
106   <programlisting>&gt;docker run -it --rm -v /var/run/openvswitch/vhost-user1:/var/run/openvswitch/vhost-user1 el7guest /bin/bash</programlisting>
107 </listitem>
108 </itemizedlist>
109 </section>
110
111 <section id="attach-hugepages-mount-folders">
112 <title>Attach hugepages mount folders</title>
113
114 <para>Hugepages mount folders can also be accessed by a container
115   similarly to how a plain folder is mapped, as shown above.</para>
116
117 <para>For example, if the host system has hugepages mounted in the
118 <literal>/mnt/huge</literal> location, a container can also access
119 hugepages by being launched with:</para>
120
121   <programlisting>&gt;docker run -it -v /mnt/huge:/mnt/huge el7guest /bin/bash</programlisting>
122 </section>
123
124 <section id="access-pci-bus">
125 <title>Access the PCI bus</title>
126
127   <para>If the host machine has multiple SR-IOV instances created, a
128 container can access the instances by being given privileged access to
129 the host system. Unlike folders, PCI devices do not have to be mounted
130 explicitly in order to be accessed and will be available to the
131 container if the <literal>--privileged</literal> flag is passed to the
132 command line:</para>
133
134 <programlisting>&gt;docker run --privileged -it el7guest /bin/bash</programlisting>
135 </section>
136 </section>
137 </section>
8</chapter> \ No newline at end of file 138</chapter> \ No newline at end of file
diff --git a/doc/book-enea-nfv-access-platform-guide/doc/getting_started.xml b/doc/book-enea-nfv-access-platform-guide/doc/getting_started.xml
index b534e20..aa0dcb5 100644
--- a/doc/book-enea-nfv-access-platform-guide/doc/getting_started.xml
+++ b/doc/book-enea-nfv-access-platform-guide/doc/getting_started.xml
@@ -120,8 +120,218 @@
120 </section> 120 </section>
121 121
122 <section id="prebuilt-artifacts"> 122 <section id="prebuilt-artifacts">
123 <title>How to use Prebuilt Artifacts</title> 123 <title>How to use the Prebuilt Artifacts</title>
124 124
125 <para></para> 125 <section id="boot-ramdisk">
126 <title>Booting Enea NFV Access Platform using RAMDISK</title>
127
128 <para>There may be use cases, especially at first target ramp-up, where
129   the HDD/SSD has no partitions and you need to prepare the disks for
130 boot. Booting from ramdisk can help with this task.</para>
131
132 <para>The prerequisites needed to proceed:</para>
133
134 <itemizedlist>
135 <listitem>
136 <para>Enea Linux ext4 rootfs image -
137 enea-image-virtualization-host-inteld1521.ext4</para>
138 </listitem>
139
140 <listitem>
141 <para>Enea Linux kernel image - bzImage</para>
142 </listitem>
143
144 <listitem>
145 <para>BIOS has PXE boot enabled</para>
146 </listitem>
147
148 <listitem>
149 <para>PXE/tftp server configured and connected (ethernet) to
150 target.</para>
151 </listitem>
152 </itemizedlist>
153
154 <para>Copy bzImage and enea-image-virtualization-host-inteld1521.ext4.gz
155   images to the tftp server configured for PXE boot.</para>
156
157 <para>Use the following as an example for the PXE configuration
158 file:</para>
159
160 <programlisting>default vesamenu.c32
161prompt 1
162timeout 0
163
164label el_ramfs
165 menu label ^EneaLinux_RAMfs
166 kernel bzImage
167   append root=/dev/ram0 initrd=enea-image-virtualization-host-inteld1521.ext4 \
168 ramdisk_size=1200000 console=ttyS0,115200 earlyprintk=ttyS0,115200</programlisting>
169
170   <para>Restart the target, then press F11 to enter the Boot Menu and select
171   the Ethernet interface used for PXE boot. From the PXE Boot Menu select
172   <emphasis role="bold">EneaLinux_RAMfs</emphasis>. Once the Enea NFV
173   Access Platform has started, you can partition the HDD/SSD and install
174   GRUB as described in the following section.</para>
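As a sketch, the prerequisite files might be laid out on the tftp server like this (the tftp root path and the pxelinux.cfg/default location are assumptions; adjust them to your PXE server setup):

```text
/var/lib/tftpboot/
|-- bzImage
|-- enea-image-virtualization-host-inteld1521.ext4
|-- vesamenu.c32
`-- pxelinux.cfg/
    `-- default        <- holds the PXE configuration shown above
```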
175 </section>
176
177 <section id="install-grub">
178 <title>Partitioning a new harddisk and installing GRUB</title>
179
180 <para>The prerequisites needed:</para>
181
182 <itemizedlist>
183 <listitem>
184           <para>grub (<literal>grub-efi-bootx64.efi</literal>) - available as
185 a pre-built artifact under
186 <literal>inteld1521/images/enea-image-virtualization-host</literal>.</para>
187 </listitem>
188
189 <listitem>
190 <para><literal>e2fsprogs-mke2fs_1.43.4-r0.0_amd64.deb,
191 dosfstools_4.1-r0.0_amd64.deb</literal> - available under
192 <literal>inteld1521/deb</literal>.</para>
193 </listitem>
194 </itemizedlist>
195
196 <para>Proceed using the following steps:</para>
197
198 <orderedlist>
199 <listitem>
200 <para>Boot target with Enea NFV Access Platform from RAMDISK</para>
201 </listitem>
202
203 <listitem>
204 <para>Install prerequisite packages:</para>
205
206 <programlisting>&gt; dpkg -i e2fsprogs-mke2fs_1.43.4-r0.0_amd64.deb
207&gt; dpkg -i dosfstools_4.1-r0.0_amd64.deb</programlisting>
208 </listitem>
209
210 <listitem>
211 <para>Partition the disk:</para>
212
213 <programlisting>&gt; fdisk /dev/sda
214fdisk&gt; g {GPT partition type}
215fdisk&gt; n
216fdisk&gt; 1
217fdisk&gt; {default start part}
218fdisk&gt; +512M
219fdisk&gt; t
220fdisk&gt; 1 {ESP/EFI partition}
221fdisk&gt; n
222fdisk&gt; 2
223fdisk&gt; {default start part}
224fdisk&gt; +18G
225fdisk&gt; 3
226fdisk&gt; {default start part}
227fdisk&gt; +20G
228...
229fdisk&gt; 7
230fdisk&gt; {default start part}
231 fdisk&gt; {default end part}
232
233 fdisk&gt; p {print partition table}
234fdisk&gt; w {write to disk}
235fdisk&gt; q</programlisting>
236 </listitem>
237
238 <listitem>
239 <para>Format the partitions:</para>
240
241 <programlisting>&gt; mkfs.fat -F32 -nEFI /dev/sda1
242&gt; mkfs.ext4 -LROOT /dev/sda2
243&gt; mkfs.ext4 -LROOT /dev/sda3
244&gt; mkfs.ext4 -LROOT /dev/sda4
245&gt; mkfs.ext4 -LROOT /dev/sda5
246&gt; mkfs.ext4 -LROOT /dev/sda6
247&gt; mkfs.ext4 -LROOT /dev/sda7</programlisting>
248 </listitem>
249
250 <listitem>
251 <para>Create a GRUB partition:</para>
252
253 <programlisting>&gt; mkdir /mnt/boot
254&gt; mount /dev/sda1 /mnt/boot
255&gt; mkdir -p /mnt/boot/EFI/boot
256
257&gt; cp grub-efi-bootx64.efi /mnt/boot/EFI/boot/bootx64.efi
258&gt; vi /mnt/boot/EFI/boot/grub.cfg
259default=1
260
261menuentry "Linux Reference Image" {
262 linux (hd0,gpt2)/boot/bzImage root=/dev/sda2 ip=dhcp
263}
264
265menuentry "Linux sda3" {
266 linux (hd0,gpt3)/boot/bzImage root=/dev/sda3 ip=dhcp
267}
268
269menuentry "Linux sda4" {
270 linux (hd0,gpt4)/boot/bzImage root=/dev/sda4 ip=dhcp
271}
272
273menuentry "Linux sda5" {
274 linux (hd0,gpt5)/boot/bzImage root=/dev/sda5 ip=dhcp
275}
276
277menuentry "Linux sda6" {
278 linux (hd0,gpt6)/boot/bzImage root=/dev/sda6 ip=dhcp
279}
280
281menuentry "Linux sda7" {
282 linux (hd0,gpt7)/boot/bzImage root=/dev/sda7 ip=dhcp
283}</programlisting>
284 </listitem>
285 </orderedlist>
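The six near-identical menuentry stanzas in the GRUB step can be generated rather than typed out by hand; a minimal sketch, assuming the partition layout from the fdisk step (root filesystems on /dev/sda2 through /dev/sda7):

```shell
#!/bin/sh
# Sketch: generate the repetitive grub.cfg shown above instead of typing it.
# Assumes root filesystems on sda2..sda7, matching the fdisk steps.
{
  echo 'default=1'
  echo ''
  printf 'menuentry "Linux Reference Image" {\n'
  printf '    linux (hd0,gpt2)/boot/bzImage root=/dev/sda2 ip=dhcp\n}\n\n'
  for n in 3 4 5 6 7; do
    printf 'menuentry "Linux sda%s" {\n' "$n"
    printf '    linux (hd0,gpt%s)/boot/bzImage root=/dev/sda%s ip=dhcp\n' "$n" "$n"
    printf '}\n\n'
  done
} > grub.cfg
```

The resulting grub.cfg can then be copied to the mounted GRUB partition (e.g. /mnt/boot/EFI/boot/grub.cfg) as in the step above.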
286 </section>
287
288 <section id="boot-hdd">
289 <title>Installing and booting Enea NFV Access Platform on the
290 harddisk</title>
291
292 <para>After partitioning the harddisk, boot Enea NFV Access Platform
293 from RAMFS or from a reference image installed on one of the
294 partitions.</para>
295
296         <para>To install an Enea Linux image on a partition, follow these
297         steps:</para>
298
299 <orderedlist>
300 <listitem>
301             <para>Copy your platform image to the target:</para>
302
303             <programlisting>server&gt; scp ./enea-image-virtualization-host-inteld1521.tar.gz \
304root@&lt;target_ip&gt;:/home/root/</programlisting>
305 </listitem>
306
307 <listitem>
308             <para>Extract the image onto the desired partition:</para>
309
310 <programlisting>target&gt; mount /dev/sda3 /mnt/sda
311target&gt; tar -pzxf /home/root/enea-image-virtualization-host-inteld1521.tar.gz \
312-C /mnt/sda</programlisting>
313
314             <para>Alternatively, you can perform both steps in one command
315             from the server:</para>
316
317             <programlisting>server&gt; cat ./enea-image-virtualization-host-inteld1521.tar.gz | \
318ssh root@&lt;target_ip&gt; "cd /mnt/sda6; tar -zxf -"</programlisting>
319 </listitem>
320
321 <listitem>
322 <para>Reboot</para>
323 </listitem>
324
325 <listitem>
326 <para>From the GRUB menu select your partition</para>
327 </listitem>
328 </orderedlist>
329
330 <note>
331 <para>In order to change kernel boot parameters you need to mount the
332 GRUB partition (i.e. <literal>/dev/sda1</literal>) and change the
333 <literal>EFI/boot/grub.cfg</literal> file.</para>
334 </note>
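To take one hypothetical example, booting the sda3 entry with preallocated hugepages would mean extending its stanza in EFI/boot/grub.cfg with additional kernel parameters (the hugepage values below are illustrative assumptions, not required settings):

```text
menuentry "Linux sda3" {
    linux (hd0,gpt3)/boot/bzImage root=/dev/sda3 ip=dhcp default_hugepagesz=1G hugepagesz=1G hugepages=8
}
```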
335 </section>
126 </section> 336 </section>
127</chapter> \ No newline at end of file 337</chapter> \ No newline at end of file
diff --git a/doc/book-enea-nfv-access-platform-guide/doc/hypervisor_virtualization.xml b/doc/book-enea-nfv-access-platform-guide/doc/hypervisor_virtualization.xml
index 092b52f..6242de4 100644
--- a/doc/book-enea-nfv-access-platform-guide/doc/hypervisor_virtualization.xml
+++ b/doc/book-enea-nfv-access-platform-guide/doc/hypervisor_virtualization.xml
@@ -324,5 +324,418 @@ $ testpmd -c 0x7 -- -i --rxq=2 --txq=2 --nb-cores=2 ...</programlisting>
324 </itemizedlist> 324 </itemizedlist>
325 </listitem> 325 </listitem>
326 </itemizedlist> 326 </itemizedlist>
327
328 <section id="boot-kvm-guest">
329 <title>Booting a KVM Guest</title>
330
331 <para>There are several ways to boot a KVM guest. Here we describe how
332 to boot using a raw image. A direct kernel boot can be performed by
333 transferring the guest kernel and the file system files to the host and
334     specifying a <literal>&lt;kernel&gt;</literal> element (and, when booting
335     from a ramdisk, an <literal>&lt;initrd&gt;</literal> element) inside the
336     <literal>&lt;os&gt;</literal> element of the guest XML file, as in the
337 following example:</para>
338
339 <programlisting>&lt;os&gt;
340 &lt;kernel&gt;bzImage&lt;/kernel&gt;
341&lt;/os&gt;
342&lt;devices&gt;
343 &lt;disk type='file' device='disk'&gt;
344 &lt;driver name='qemu' type='raw' cache='none'/&gt;
345 &lt;source file='enea-image-virtualization-guest-qemux86-64.ext4'/&gt;
346 &lt;target dev='vda' bus='virtio'/&gt;
347 &lt;/disk&gt;
348&lt;/devices&gt;</programlisting>
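For completeness, a guest booting entirely from a ramdisk would carry an initrd element next to the kernel element; a sketch, where the initramfs file name and command line are assumptions:

```xml
<os>
  <kernel>bzImage</kernel>
  <initrd>initramfs.cpio.gz</initrd>
  <cmdline>root=/dev/ram0 console=ttyS0,115200</cmdline>
</os>
```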
349 </section>
350
351 <section id="start-guest">
352 <title>Starting a Guest</title>
353
354 <para>Command <command>virsh create</command> starts a guest:</para>
355
356 <programlisting>virsh create example-guest-x86.xml</programlisting>
357
358 <para>If further configurations are needed before the guest is reachable
359 through <literal>ssh</literal>, a console can be started using command
360 <command>virsh console</command>. The example below shows how to start a
361 console where kvm-example-guest is the name of the guest defined in the
362 guest XML file:</para>
363
364 <programlisting>virsh console kvm-example-guest</programlisting>
365
366 <para>This requires that the guest domain has a console configured in
367 the guest XML file:</para>
368
369 <programlisting>&lt;os&gt;
370 &lt;cmdline&gt;console=ttyS0,115200&lt;/cmdline&gt;
371&lt;/os&gt;
372&lt;devices&gt;
373 &lt;console type='pty'&gt;
374 &lt;target type='serial' port='0'/&gt;
375 &lt;/console&gt;
376&lt;/devices&gt;</programlisting>
377 </section>
378
379 <section id="isolation">
380 <title>Isolation</title>
381
382       <para>It may be desirable to isolate execution in a guest to a specific
383       guest core. It might also be desirable to run a guest on a specific host
384       core.</para>
385
386 <para>To pin the virtual CPUs of the guest to specific cores, configure
387 the <literal>&lt;cputune&gt;</literal> contents as follows:</para>
388
389 <orderedlist>
390 <listitem>
391 <para>First explicitly state on which host core each guest core
392 shall run, by mapping <literal>vcpu</literal> to
393 <literal>cpuset</literal> in the <literal>&lt;vcpupin&gt;</literal>
394 tag.</para>
395 </listitem>
396
397 <listitem>
398 <para>In the <literal>&lt;cputune&gt;</literal> tag it is further
399 possible to specify on which CPU the emulator shall run by adding
400 the cpuset to the <literal>&lt;emulatorpin&gt;</literal> tag.</para>
401
402 <programlisting>&lt;vcpu placement='static'&gt;2&lt;/vcpu&gt;
403&lt;cputune&gt;
404 &lt;vcpupin vcpu='0' cpuset='2'/&gt;
405 &lt;vcpupin vcpu='1' cpuset='3'/&gt;
406 &lt;emulatorpin cpuset="2"/&gt;
407&lt;/cputune&gt;</programlisting>
408
409 <para><literal>libvirt</literal> will group all threads belonging to
410 a qemu instance into cgroups that will be created for that purpose.
411 It is possible to supply a base name for those cgroups using the
412 <literal>&lt;resource&gt;</literal> tag:</para>
413
414 <programlisting>&lt;resource&gt;
415 &lt;partition&gt;/rt&lt;/partition&gt;
416&lt;/resource&gt;</programlisting>
417 </listitem>
418 </orderedlist>
419 </section>
420
421 <section id="network-libvirt">
422 <title>Networking using libvirt</title>
423
424 <para>Command <command>virsh net-create</command> starts a network. If
425 any networks are listed in the guest XML file, those networks must be
426 started before the guest is started. As an example, if the network is
427 defined in a file named example-net.xml, it is started as
428 follows:</para>
429
430 <programlisting>virsh net-create example-net.xml
431&lt;network&gt;
432 &lt;name&gt;sriov&lt;/name&gt;
433 &lt;forward mode='hostdev' managed='yes'&gt;
434 &lt;pf dev='eno3'/&gt;
435 &lt;/forward&gt;
436&lt;/network&gt;</programlisting>
437
438 <para><literal>libvirt</literal> is a virtualization API that supports
439 virtual network creation. These networks can be connected to guests and
440 containers by referencing the network in the guest XML file. It is
441 possible to have a virtual network persistently running on the host by
442 starting the network with command <command>virsh net-define</command>
443 instead of the previously mentioned <command>virsh net-create</command>.
444 </para>
445
446 <para>An example for the sample network defined in
447 <literal>meta-vt/recipes-example/virt-example/files/example-net.xml</literal>:</para>
448
449 <programlisting>virsh net-define example-net.xml</programlisting>
450
451 <para>Command <command>virsh net-autostart</command> enables a
452 persistent network to start automatically when the libvirt daemon
453 starts:</para>
454
455 <programlisting>virsh net-autostart example-net</programlisting>
456
457       <para>The guest configuration file (XML) must be updated to access the
458       newly created network, as follows:</para>
459
460 <programlisting> &lt;interface type='network'&gt;
461 &lt;source network='sriov'/&gt;
462 &lt;/interface&gt;</programlisting>
463
464       <para>Presented below are a few modes of network access from a guest
465       using virsh:</para>
466
467 <itemizedlist>
468 <listitem>
469 <para><emphasis role="bold">vhost-user interface</emphasis></para>
470
471           <para>See the Open vSwitch chapter on how to create a vhost-user
472           interface using Open vSwitch. Currently there is no Open vSwitch
473           support for networks that are managed by libvirt (e.g. NAT). As of
474           now, only bridged networks are supported (those where the user has
475           to manually create the bridge).</para>
476
477 <programlisting> &lt;interface type='vhostuser'&gt;
478 &lt;mac address='00:00:00:00:00:01'/&gt;
479 &lt;source type='unix' path='/var/run/openvswitch/vhost-user1' mode='client'/&gt;
480 &lt;model type='virtio'/&gt;
481 &lt;driver queues='1'&gt;
482 &lt;host mrg_rxbuf='off'/&gt;
483 &lt;/driver&gt;
484 &lt;/interface&gt;</programlisting>
485 </listitem>
486
487 <listitem>
488 <para><emphasis role="bold">PCI passthrough
489 (SR-IOV)</emphasis></para>
490
491           <para>The KVM hypervisor supports attaching PCI devices on the host
492           system to guests. PCI passthrough gives guests exclusive access to
493           PCI devices for a range of tasks, allowing those devices to appear
494           and behave as if they were physically attached to the guest
495           operating system.</para>
496
497           <para>Preparing an Intel system for PCI passthrough is done as
498           follows:</para>
499
500 <itemizedlist>
501 <listitem>
502 <para>Enable the Intel VT-d extensions in BIOS</para>
503 </listitem>
504
505 <listitem>
506 <para>Activate Intel VT-d in the kernel by using
507 <literal>intel_iommu=on</literal> as a kernel boot
508 parameter</para>
509 </listitem>
510
511 <listitem>
512 <para>Allow unsafe interrupts in case the system doesn't support
513 interrupt remapping. This can be done using
514 <literal>vfio_iommu_type1.allow_unsafe_interrupts=1</literal> as
515               a kernel boot parameter.</para>
516 </listitem>
517 </itemizedlist>
518
519 <para>VFs must be created on the host before starting the
520 guest:</para>
521
522 <programlisting>$ echo 2 &gt; /sys/class/net/eno3/device/sriov_numvfs
523$ modprobe vfio_pci
524$ dpdk-devbind.py --bind=vfio-pci 0000:03:10.0
525 &lt;interface type='hostdev' managed='yes'&gt;
526 &lt;source&gt;
527 &lt;address type='pci' domain='0x0' bus='0x03' slot='0x10' function='0x0'/&gt;
528 &lt;/source&gt;
529 &lt;mac address='52:54:00:6d:90:02'/&gt;
530 &lt;/interface&gt;</programlisting>
531 </listitem>
532
533 <listitem>
534 <para><emphasis role="bold">Bridge interface</emphasis></para>
535
536 <para>In case an OVS bridge exists on host, it can be used to
537 connect the guest:</para>
538
539 <programlisting> &lt;interface type='bridge'&gt;
540 &lt;mac address='52:54:00:71:b1:b6'/&gt;
541 &lt;source bridge='ovsbr0'/&gt;
542 &lt;virtualport type='openvswitch'/&gt;
543 &lt;address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/&gt;
544 &lt;/interface&gt;</programlisting>
545
546 <para>For further details on the network XML format, see <ulink
547 url="http://libvirt.org/formatnetwork.html">http://libvirt.org/formatnetwork.html</ulink>.</para>
548 </listitem>
549 </itemizedlist>
550 </section>
551
552 <section id="libvirt-guest-config-ex">
553 <title>Libvirt guest configuration examples</title>
554
555 <section id="guest-config-vhost-user-interface">
556 <title>Guest configuration with vhost-user interface</title>
557
558 <programlisting>&lt;domain type='kvm'&gt;
559 &lt;name&gt;vm_vhost&lt;/name&gt;
560 &lt;uuid&gt;4a9b3f53-fa2a-47f3-a757-dd87720d9d1d&lt;/uuid&gt;
561 &lt;memory unit='KiB'&gt;4194304&lt;/memory&gt;
562 &lt;currentMemory unit='KiB'&gt;4194304&lt;/currentMemory&gt;
563 &lt;memoryBacking&gt;
564 &lt;hugepages&gt;
565 &lt;page size='1' unit='G' nodeset='0'/&gt;
566 &lt;/hugepages&gt;
567 &lt;/memoryBacking&gt;
568 &lt;vcpu placement='static'&gt;2&lt;/vcpu&gt;
569 &lt;cputune&gt;
570 &lt;shares&gt;4096&lt;/shares&gt;
571 &lt;vcpupin vcpu='0' cpuset='4'/&gt;
572 &lt;vcpupin vcpu='1' cpuset='5'/&gt;
573 &lt;emulatorpin cpuset='4,5'/&gt;
574 &lt;/cputune&gt;
575 &lt;os&gt;
576 &lt;type arch='x86_64' machine='pc'&gt;hvm&lt;/type&gt;
577 &lt;kernel&gt;/mnt/qemu/bzImage&lt;/kernel&gt;
578 &lt;cmdline&gt;root=/dev/vda console=ttyS0,115200&lt;/cmdline&gt;
579 &lt;boot dev='hd'/&gt;
580 &lt;/os&gt;
581 &lt;features&gt;
582 &lt;acpi/&gt;
583 &lt;apic/&gt;
584 &lt;/features&gt;
585 &lt;cpu mode='host-model'&gt;
586 &lt;model fallback='allow'/&gt;
587 &lt;topology sockets='2' cores='1' threads='1'/&gt;
588 &lt;numa&gt;
589 &lt;cell id='0' cpus='0-1' memory='4194304' unit='KiB' memAccess='shared'/&gt;
590 &lt;/numa&gt;
591 &lt;/cpu&gt;
592 &lt;on_poweroff&gt;destroy&lt;/on_poweroff&gt;
593 &lt;on_reboot&gt;restart&lt;/on_reboot&gt;
594 &lt;on_crash&gt;destroy&lt;/on_crash&gt;
595 &lt;devices&gt;
596 &lt;emulator&gt;/usr/bin/qemu-system-x86_64&lt;/emulator&gt;
597 &lt;disk type='file' device='disk'&gt;
598 &lt;driver name='qemu' type='raw' cache='none'/&gt;
599 &lt;source file='/mnt/qemu/enea-image-virtualization-guest-qemux86-64.ext4'/&gt;
600 &lt;target dev='vda' bus='virtio'/&gt;
601 &lt;/disk&gt;
602 &lt;interface type='vhostuser'&gt;
603 &lt;mac address='00:00:00:00:00:01'/&gt;
604 &lt;source type='unix' path='/var/run/openvswitch/vhost-user1' mode='client'/&gt;
605 &lt;model type='virtio'/&gt;
606 &lt;driver queues='1'&gt;
607 &lt;host mrg_rxbuf='off'/&gt;
608 &lt;/driver&gt;
609 &lt;/interface&gt;
610 &lt;serial type='pty'&gt;
611 &lt;target port='0'/&gt;
612 &lt;/serial&gt;
613 &lt;console type='pty'&gt;
614 &lt;target type='serial' port='0'/&gt;
615 &lt;/console&gt;
616 &lt;/devices&gt;
617&lt;/domain&gt;</programlisting>
618 </section>
619
620 <section id="guest-config-pci-passthrough">
621 <title>Guest configuration with PCI passthrough</title>
622
623 <programlisting>&lt;domain type='kvm'&gt;
624 &lt;name&gt;vm_sriov1&lt;/name&gt;
625 &lt;uuid&gt;4a9b3f53-fa2a-47f3-a757-dd87720d9d1d&lt;/uuid&gt;
626 &lt;memory unit='KiB'&gt;4194304&lt;/memory&gt;
627 &lt;currentMemory unit='KiB'&gt;4194304&lt;/currentMemory&gt;
628 &lt;memoryBacking&gt;
629 &lt;hugepages&gt;
630 &lt;page size='1' unit='G' nodeset='0'/&gt;
631 &lt;/hugepages&gt;
632 &lt;/memoryBacking&gt;
633 &lt;vcpu&gt;2&lt;/vcpu&gt;
634 &lt;os&gt;
635 &lt;type arch='x86_64' machine='q35'&gt;hvm&lt;/type&gt;
636 &lt;kernel&gt;/mnt/qemu/bzImage&lt;/kernel&gt;
637 &lt;cmdline&gt;root=/dev/vda console=ttyS0,115200&lt;/cmdline&gt;
638 &lt;boot dev='hd'/&gt;
639 &lt;/os&gt;
640 &lt;features&gt;
641 &lt;acpi/&gt;
642 &lt;apic/&gt;
643 &lt;/features&gt;
644 &lt;cpu mode='host-model'&gt;
645 &lt;model fallback='allow'/&gt;
646 &lt;topology sockets='1' cores='2' threads='1'/&gt;
647 &lt;numa&gt;
648 &lt;cell id='0' cpus='0' memory='4194304' unit='KiB' memAccess='shared'/&gt;
649 &lt;/numa&gt;
650 &lt;/cpu&gt;
651 &lt;on_poweroff&gt;destroy&lt;/on_poweroff&gt;
652 &lt;on_reboot&gt;restart&lt;/on_reboot&gt;
653 &lt;on_crash&gt;destroy&lt;/on_crash&gt;
654 &lt;devices&gt;
655 &lt;emulator&gt;/usr/bin/qemu-system-x86_64&lt;/emulator&gt;
656 &lt;disk type='file' device='disk'&gt;
657 &lt;driver name='qemu' type='raw' cache='none'/&gt;
658 &lt;source file='/mnt/qemu/enea-image-virtualization-guest-qemux86-64.ext4'/&gt;
659 &lt;target dev='vda' bus='virtio'/&gt;
660 &lt;/disk&gt;
661 &lt;interface type='hostdev' managed='yes'&gt;
662 &lt;source&gt;
663 &lt;address type='pci' domain='0x0' bus='0x03' slot='0x10' function='0x0'/&gt;
664 &lt;/source&gt;
665 &lt;mac address='52:54:00:6d:90:02'/&gt;
666 &lt;/interface&gt;
667 &lt;serial type='pty'&gt;
668 &lt;target port='0'/&gt;
669 &lt;/serial&gt;
670 &lt;console type='pty'&gt;
671 &lt;target type='serial' port='0'/&gt;
672 &lt;/console&gt;
673 &lt;/devices&gt;
674&lt;/domain&gt;</programlisting>
675 </section>
676
677 <section id="guest-config-bridge-interface">
678 <title>Guest configuration with bridge interface</title>
679
680 <programlisting>&lt;domain type='kvm'&gt;
681 &lt;name&gt;vm_bridge&lt;/name&gt;
682 &lt;uuid&gt;4a9b3f53-fa2a-47f3-a757-dd87720d9d1d&lt;/uuid&gt;
683 &lt;memory unit='KiB'&gt;4194304&lt;/memory&gt;
684 &lt;currentMemory unit='KiB'&gt;4194304&lt;/currentMemory&gt;
685 &lt;memoryBacking&gt;
686 &lt;hugepages&gt;
687 &lt;page size='1' unit='G' nodeset='0'/&gt;
688 &lt;/hugepages&gt;
689 &lt;/memoryBacking&gt;
690 &lt;vcpu placement='static'&gt;2&lt;/vcpu&gt;
691 &lt;cputune&gt;
692 &lt;shares&gt;4096&lt;/shares&gt;
693 &lt;vcpupin vcpu='0' cpuset='4'/&gt;
694 &lt;vcpupin vcpu='1' cpuset='5'/&gt;
695 &lt;emulatorpin cpuset='4,5'/&gt;
696 &lt;/cputune&gt;
697 &lt;os&gt;
698 &lt;type arch='x86_64' machine='q35'&gt;hvm&lt;/type&gt;
699 &lt;kernel&gt;/mnt/qemu/bzImage&lt;/kernel&gt;
700 &lt;cmdline&gt;root=/dev/vda console=ttyS0,115200&lt;/cmdline&gt;
701 &lt;boot dev='hd'/&gt;
702 &lt;/os&gt;
703 &lt;features&gt;
704 &lt;acpi/&gt;
705 &lt;apic/&gt;
706 &lt;/features&gt;
707 &lt;cpu mode='host-model'&gt;
708 &lt;model fallback='allow'/&gt;
709 &lt;topology sockets='2' cores='1' threads='1'/&gt;
710 &lt;numa&gt;
711 &lt;cell id='0' cpus='0-1' memory='4194304' unit='KiB' memAccess='shared'/&gt;
712 &lt;/numa&gt;
713 &lt;/cpu&gt;
714 &lt;on_poweroff&gt;destroy&lt;/on_poweroff&gt;
715 &lt;on_reboot&gt;restart&lt;/on_reboot&gt;
716 &lt;on_crash&gt;destroy&lt;/on_crash&gt;
717 &lt;devices&gt;
718 &lt;emulator&gt;/usr/bin/qemu-system-x86_64&lt;/emulator&gt;
719 &lt;disk type='file' device='disk'&gt;
720 &lt;driver name='qemu' type='raw' cache='none'/&gt;
721 &lt;source file='/mnt/qemu/enea-image-virtualization-guest-qemux86-64.ext4'/&gt;
722 &lt;target dev='vda' bus='virtio'/&gt;
723 &lt;/disk&gt;
724 &lt;interface type='bridge'&gt;
725 &lt;mac address='52:54:00:71:b1:b6'/&gt;
726 &lt;source bridge='ovsbr0'/&gt;
727 &lt;virtualport type='openvswitch'/&gt;
728 &lt;address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/&gt;
729 &lt;/interface&gt;
730 &lt;serial type='pty'&gt;
731 &lt;target port='0'/&gt;
732 &lt;/serial&gt;
733 &lt;console type='pty'&gt;
734 &lt;target type='serial' port='0'/&gt;
735 &lt;/console&gt;
736 &lt;/devices&gt;
737&lt;/domain&gt;</programlisting>
738 </section>
739 </section>
327 </section> 740 </section>
328</chapter> \ No newline at end of file 741</chapter> \ No newline at end of file