Diffstat (limited to 'doc/book-enea-nfv-access-guide/doc/hypervisor_virtualization.xml')
-rw-r--r--  doc/book-enea-nfv-access-guide/doc/hypervisor_virtualization.xml  741
1 file changed, 741 insertions, 0 deletions
diff --git a/doc/book-enea-nfv-access-guide/doc/hypervisor_virtualization.xml b/doc/book-enea-nfv-access-guide/doc/hypervisor_virtualization.xml
new file mode 100644
index 0000000..f7f186c
--- /dev/null
+++ b/doc/book-enea-nfv-access-guide/doc/hypervisor_virtualization.xml
@@ -0,0 +1,741 @@
1<?xml version="1.0" encoding="ISO-8859-1"?>
2<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
3"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
4<chapter id="hypervisor_virt">
5 <title>Hypervisor Virtualization</title>
6
  <para>KVM (Kernel-based Virtual Machine) is a virtualization
  infrastructure for the Linux kernel that turns it into a hypervisor. KVM
  requires a processor with hardware virtualization extensions.</para>
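  <para>Whether the host processor provides such extensions and whether the
  KVM device is available can be checked, for example, as follows (a non-zero
  count and an existing <literal>/dev/kvm</literal> indicate that KVM can be
  used):</para>

  <programlisting>$ grep -cE 'vmx|svm' /proc/cpuinfo
$ ls /dev/kvm</programlisting>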
10
11 <para>KVM uses QEMU, an open source machine emulator and virtualizer, to
12 virtualize a complete system. With KVM it is possible to run multiple guests
13 of a variety of operating systems, each with a complete set of virtualized
14 hardware.</para>
15
16 <section id="launch_virt_machine">
17 <title>Launching a Virtual Machine</title>
18
    <para>QEMU can make use of KVM when running a target architecture that is
    the same as the host architecture. For instance, when running
    qemu-system-x86_64 on an x86-64 compatible processor (with the
    virtualization extensions Intel VT or AMD-V), you can take advantage of
    KVM acceleration, which benefits both the host and the guest
    system.</para>
25
    <para>Enea Linux includes an optimized version of QEMU with KVM-only
    support. To use KVM, pass <command>--enable-kvm</command> to QEMU.</para>
28
29 <para>The following is an example of starting a guest:</para>
30
31 <programlisting>taskset -c 0,1 qemu-system-x86_64 \
32-cpu host -M q35 -smp cores=2,sockets=1 \
33-vcpu 0,affinity=0 -vcpu 1,affinity=1 \
34-enable-kvm -nographic \
35-kernel bzImage \
36-drive file=enea-image-virtualization-guest-qemux86-64.ext4,if=virtio,format=raw \
37-append 'root=/dev/vda console=ttyS0,115200' \
38-m 4096 \
39-object memory-backend-file,id=mem,size=4096M,mem-path=/dev/hugepages,share=on \
40-numa node,memdev=mem -mem-prealloc</programlisting>
41 </section>
42
43 <section id="qemu_boot">
44 <title>Main QEMU boot options</title>
45
    <para>All pertinent boot options for the QEMU emulator are detailed
    below:</para>
48
49 <itemizedlist>
50 <listitem>
51 <para>SMP - at least 2 cores should be enabled in order to isolate
52 application(s) running in virtual machine(s) on specific cores for
53 better performance.</para>
54
55 <programlisting>-smp cores=2,threads=1,sockets=1 \</programlisting>
56 </listitem>
57
58 <listitem>
59 <para>CPU affinity - associate virtual CPUs with physical CPUs and
60 optionally assign a default real time priority to the virtual CPU
61 process in the host kernel. This option allows you to start qemu vCPUs
62 on isolated physical CPUs.</para>
63
64 <programlisting>-vcpu 0,affinity=0 \</programlisting>
65 </listitem>
66
67 <listitem>
        <para>Hugepages - KVM guests can be deployed with huge page memory
        support in order to reduce memory consumption and improve performance,
        by reducing CPU cache usage. By using huge pages for a KVM guest, less
        memory is used for page tables and TLB (Translation Lookaside Buffer)
        misses are reduced, thereby significantly increasing performance,
        especially in memory-intensive situations. Reserving and mounting huge
        pages on the host is sketched after this list.</para>
74
75 <programlisting>-object memory-backend-file,id=mem,size=4096M,mem-path=/dev/hugepages,share=on \</programlisting>
76 </listitem>
77
78 <listitem>
79 <para>Memory preallocation - preallocate huge pages at startup time
80 can improve performance but it may affect the qemu boot time.</para>
81
82 <programlisting>-mem-prealloc \</programlisting>
83 </listitem>
84
85 <listitem>
86 <para>Enable realtime characteristics - run qemu with realtime
87 features. While that mildly implies that "-realtime" alone might do
88 something, it's just an identifier for options that are partially
89 realtime. If you're running in a realtime or low latency environment,
90 you don't want your pages to be swapped out and mlock does that, thus
91 mlock=on. If you want VM density, then you may want swappable VMs,
92 thus mlock=off.</para>
93
94 <programlisting>-realtime mlock=on \</programlisting>
95 </listitem>
96 </itemizedlist>
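    <para>The hugepage-backed memory options above assume that huge pages have
    already been reserved and mounted on the host. A minimal host-side sketch
    (the page count is only an example; 1 GiB pages are typically reserved at
    boot via kernel parameters such as
    <literal>default_hugepagesz=1G hugepagesz=1G hugepages=4</literal>):</para>

    <programlisting>$ echo 2048 &gt; /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
$ mkdir -p /dev/hugepages
$ mount -t hugetlbfs hugetlbfs /dev/hugepages</programlisting>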
97
    <para>If the hardware does not have an IOMMU (known as "Intel VT-d" on
    Intel-based machines and "AMD I/O Virtualization Technology" on AMD-based
    machines), it will not be possible to assign devices to KVM guests.
    Virtualization Technology features (VT-d, VT-x, etc.) must be enabled in
    the BIOS of the host target before starting a virtual machine.</para>
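    <para>One way to verify that the IOMMU is enabled after boot is to check
    for populated IOMMU groups or for DMAR/IOMMU messages in the kernel log,
    for example:</para>

    <programlisting>$ ls /sys/kernel/iommu_groups/
$ dmesg | grep -iE 'dmar|iommu'</programlisting>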
103 </section>
104
105 <section id="net_in_guest">
106 <title>Networking in guest</title>
107
108 <section id="vhost-user-support">
109 <title>Using vhost-user support</title>
110
      <para>The goal of vhost-user is to implement a Virtio transport, staying
      as close as possible to the vhost paradigm of using shared memory,
      ioeventfds and irqfds. A UNIX domain socket based mechanism allows
      setting up the resources used by a number of Vrings, which are shared
      between two userspace processes and placed in shared memory.</para>
116
117 <para>To run QEMU with the vhost-user backend, you have to provide the
118 named UNIX domain socket which needs to be already opened by the
119 backend:</para>
120
121 <programlisting>-object memory-backend-file,id=mem,size=4096M,mem-path=/dev/hugepages,share=on \
122-chardev socket,id=char0,path=/var/run/openvswitch/vhost-user1 \
123-netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
124-device virtio-net-pci,netdev=mynet1,mac=52:54:00:00:00:01 \</programlisting>
125
      <para>The vhost-user standard uses a client-server model. The server
      creates and manages the vhost-user sockets, and the client connects to
      the sockets created by the server. It is recommended to use QEMU as the
      server, so that a vhost-user client can be restarted without affecting
      the server; otherwise, if the server side dies, all clients need to be
      restarted.</para>
132
      <para>Using QEMU as the vhost-user server offers the flexibility to stop
      and start the virtual machine with no impact on the virtual switch on
      the host (vhost-user-client).</para>
136
137 <programlisting>-chardev socket,id=char0,path=/var/run/openvswitch/vhost-user1,server \</programlisting>
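      <para>When QEMU acts as the vhost-user server, the corresponding port on
      the host virtual switch is created in client mode. A sketch using Open
      vSwitch with DPDK, assuming a bridge named <literal>ovsbr0</literal>
      (see the Open vSwitch chapter for details):</para>

      <programlisting>$ ovs-vsctl add-port ovsbr0 vhost-client-1 -- \
  set Interface vhost-client-1 type=dpdkvhostuserclient \
  options:vhost-server-path=/var/run/openvswitch/vhost-user1</programlisting>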
138 </section>
139
140 <section id="tap-interface">
141 <title>Using TAP Interfaces</title>
142
143 <para>QEMU can use TAP interfaces to provide full networking capability
144 for the guest OS:</para>
145
146 <programlisting>-netdev tap,id=net0,ifname=tap0,script=no,downscript=no \
147-device virtio-net-pci,netdev=net0,mac=22:EA:FB:A8:25:AE \</programlisting>
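      <para>With <literal>script=no</literal> and
      <literal>downscript=no</literal>, the TAP interface must be created and
      attached to a host bridge beforehand. A minimal sketch using a Linux
      bridge (the bridge name <literal>br0</literal> is only an
      example):</para>

      <programlisting>$ ip tuntap add dev tap0 mode tap
$ ip link set dev tap0 up
$ ip link set dev tap0 master br0</programlisting>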
148 </section>
149
150 <section id="vfio-passthrough">
151 <title>VFIO passthrough VF (SR-IOV) to guest</title>
152
      <para>The KVM hypervisor supports attaching PCI devices on the host
      system to guests. PCI passthrough allows guests to have exclusive access
      to PCI devices for a range of tasks, and allows PCI devices to appear
      and behave as if they were physically attached to the guest operating
      system.</para>
158
159 <para>Preparing an Intel system for PCI passthrough:</para>
160
161 <itemizedlist>
162 <listitem>
163 <para>Enable the Intel VT-d extensions in BIOS</para>
164 </listitem>
165
166 <listitem>
167 <para>Activate Intel VT-d in the kernel by using
168 <literal>intel_iommu=on</literal> as a kernel boot parameter</para>
169 </listitem>
170
171 <listitem>
172 <para>Allow unsafe interrupts in case the system doesn't support
173 interrupt remapping. This can be done using
174 <literal>vfio_iommu_type1.allow_unsafe_interrupts=1</literal> as a
175 boot kernel parameter.</para>
176 </listitem>
177 </itemizedlist>
178
      <para>Create a guest with direct passthrough via the VFIO framework like
      so:</para>
181
182 <programlisting>-device vfio-pci,host=0000:03:10.2 \</programlisting>
183
      <para>On the host, one or more Virtual Functions (VFs) must be created
      and bound to the VFIO driver before starting QEMU, so that they can be
      allocated to the guest network:</para>
187
188 <programlisting>$ echo 2 &gt; /sys/class/net/eno3/device/sriov_numvfs
189$ modprobe vfio_pci
190$ dpdk-devbind.py --bind=vfio-pci 0000:03:10.2</programlisting>
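      <para>The PCI address of a newly created VF can be looked up, for
      example, through the parent device in sysfs or with
      <command>lspci</command>:</para>

      <programlisting>$ readlink /sys/class/net/eno3/device/virtfn0
$ lspci -nn | grep -i "virtual function"</programlisting>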
191 </section>
192
193 <section id="multiqueue">
194 <title>Multi-queue</title>
195
196 <section id="qemu-multiqueue-support">
        <title>QEMU multi-queue support configuration</title>
198
        <programlisting>-chardev socket,id=char0,path=/var/run/openvswitch/vhost-user1 \
-netdev type=vhost-user,id=net0,chardev=char0,queues=2 \
-device virtio-net-pci,netdev=net0,mac=22:EA:FB:A8:25:AE,mq=on,vectors=6</programlisting>

        <para>where <literal>vectors</literal> is calculated as: 2 + 2 *
        number of queues.</para>
203 </section>
204
205 <section id="inside-guest">
206 <title>Inside guest</title>
207
208 <para>Linux kernel virtio-net driver (one queue is enabled by
209 default):</para>
210
        <programlisting>$ ethtool -L eth0 combined 2</programlisting>

        <para>DPDK virtio PMD:</para>

        <programlisting>$ testpmd -c 0x7 -- -i --rxq=2 --txq=2 --nb-cores=2 ...</programlisting>
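        <para>The number of queues actually enabled inside the guest can be
        verified with, for example:</para>

        <programlisting>$ ethtool -l eth0</programlisting>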
214
215 <para>For QEMU documentation please see: <ulink
216 url="https://qemu.weilnetz.de/doc/qemu-doc.html">https://qemu.weilnetz.de/doc/qemu-doc.html</ulink>.</para>
217 </section>
218 </section>
219 </section>
220
221 <section id="libvirt">
222 <title>Libvirt</title>
223
224 <para>One way to manage guests in Enea NFV Access is by using
225 <literal>libvirt</literal>. Libvirt is used in conjunction with a daemon
226 (<literal>libvirtd</literal>) and a command line utility (virsh) to manage
227 virtualized environments.</para>
228
229 <para>The libvirt library is a hypervisor-independent virtualization API
230 and toolkit that is able to interact with the virtualization capabilities
231 of a range of operating systems. Libvirt provides a common, generic and
232 stable layer to securely manage domains on a node. As nodes may be
233 remotely located, libvirt provides all methods required to provision,
234 create, modify, monitor, control, migrate and stop the domains, within the
235 limits of hypervisor support for these operations.</para>
236
237 <para>The libvirt daemon runs on the Enea NFV Access host. All tools built
238 on libvirt API connect to the daemon to request the desired operation, and
239 to collect information about the configuration and resources of the host
240 system and guests. <literal>virsh</literal> is a command line interface
241 tool for managing guests and the hypervisor. The virsh tool is built on
242 the libvirt management API.</para>
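    <para>A few common <command>virsh</command> operations are shown below as
    an example (<literal>vm_vhost</literal> is the example domain name used
    later in this chapter):</para>

    <programlisting>virsh list --all          # list defined and running domains
virsh dominfo vm_vhost    # show basic information about a domain
virsh shutdown vm_vhost   # request a clean shutdown of a domain
virsh destroy vm_vhost    # force-stop a domain</programlisting>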
243
244 <para><emphasis role="bold">Major functionality provided by
245 libvirt</emphasis></para>
246
247 <para>The following is a summary from the libvirt <ulink
248 url="http://wiki.libvirt.org/page/FAQ#What_is_libvirt.3F">home
249 page</ulink> describing the major libvirt features:</para>
250
251 <itemizedlist>
252 <listitem>
253 <para><emphasis role="bold">VM management:</emphasis> Various domain
254 lifecycle operations such as start, stop, pause, save, restore, and
255 migrate. Hotplug operations for many device types including disk and
256 network interfaces, memory, and cpus.</para>
257 </listitem>
258
259 <listitem>
260 <para><emphasis role="bold">Remote machine support:</emphasis> All
261 libvirt functionality is accessible on any machine running the libvirt
262 daemon, including remote machines. A variety of network transports are
263 supported for connecting remotely, with the simplest being
264 <literal>SSH</literal>, which requires no extra explicit
265 configuration. For more information, see: <ulink
266 url="http://libvirt.org/remote.html">http://libvirt.org/remote.html</ulink>.</para>
267 </listitem>
268
269 <listitem>
270 <para><emphasis role="bold">Network interface management:</emphasis>
271 Any host running the libvirt daemon can be used to manage physical and
272 logical network interfaces. Enumerate existing interfaces, as well as
273 configure (and create) interfaces, bridges, vlans, and bond devices.
274 For more details see: <ulink
275 url="https://fedorahosted.org/netcf/">https://fedorahosted.org/netcf/</ulink>.</para>
276 </listitem>
277
278 <listitem>
        <para><emphasis role="bold">Virtual NAT and Route based
        networking:</emphasis> Any host running the libvirt daemon can manage
        and create virtual networks. Libvirt virtual networks use firewall
        rules to act as a router, providing VMs transparent access to the host
        machine's network. For more information, see: <ulink
        url="http://libvirt.org/archnetwork.html">http://libvirt.org/archnetwork.html</ulink>.</para>
285 </listitem>
286
287 <listitem>
288 <para><emphasis role="bold">Storage management:</emphasis> Any host
289 running the libvirt daemon can be used to manage various types of
290 storage: create file images of various formats (raw, qcow2, etc.),
291 mount NFS shares, enumerate existing LVM volume groups, create new LVM
292 volume groups and logical volumes, partition raw disk devices, mount
293 iSCSI shares, and much more. For more details, see: <ulink
294 url="http://libvirt.org/storage.html">http://libvirt.org/storage.html</ulink>.</para>
295 </listitem>
296
297 <listitem>
298 <para><emphasis role="bold">Libvirt Configuration:</emphasis> A
299 properly running libvirt requires that the following elements be in
300 place:</para>
301
302 <itemizedlist>
303 <listitem>
          <para>Configuration files, located in the directory
          <literal>/etc/libvirt</literal>. They include the daemon's
          configuration file <literal>libvirtd.conf</literal>, and
          hypervisor-specific configuration files, like
          <literal>qemu.conf</literal> for QEMU.</para>
309 </listitem>
310
311 <listitem>
          <para>A running libvirtd daemon. The daemon is started
          automatically on the Enea NFV Access host.</para>
314 </listitem>
315
316 <listitem>
317 <para>Configuration files for the libvirt domains, or guests, to
318 be managed by the KVM host. The specifics for guest domains shall
319 be defined in an XML file of a format specified at <ulink
320 url="http://libvirt.org/formatdomain.html">http://libvirt.org/formatdomain.html</ulink>.
321 XML formats for other structures are specified at <ulink type=""
322 url="http://libvirt.org/format.html">http://libvirt.org/format.html</ulink>.</para>
323 </listitem>
324 </itemizedlist>
325 </listitem>
326 </itemizedlist>
327
328 <section id="boot-kvm-guest">
329 <title>Booting a KVM Guest</title>
330
331 <para>There are several ways to boot a KVM guest. Here we describe how
332 to boot using a raw image. A direct kernel boot can be performed by
333 transferring the guest kernel and the file system files to the host and
334 specifying a <literal>&lt;kernel&gt;</literal> and an
335 <literal>&lt;initrd&gt;</literal> element inside the
336 <literal>&lt;os&gt;</literal> element of the guest XML file, as in the
337 following example:</para>
338
339 <programlisting>&lt;os&gt;
340 &lt;kernel&gt;bzImage&lt;/kernel&gt;
341&lt;/os&gt;
342&lt;devices&gt;
343 &lt;disk type='file' device='disk'&gt;
344 &lt;driver name='qemu' type='raw' cache='none'/&gt;
345 &lt;source file='enea-image-virtualization-guest-qemux86-64.ext4'/&gt;
346 &lt;target dev='vda' bus='virtio'/&gt;
347 &lt;/disk&gt;
348&lt;/devices&gt;</programlisting>
349 </section>
350
351 <section id="start-guest">
352 <title>Starting a Guest</title>
353
354 <para>Command <command>virsh create</command> starts a guest:</para>
355
356 <programlisting>virsh create example-guest-x86.xml</programlisting>
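      <para>Note that <command>virsh create</command> starts a transient
      guest, which is removed from libvirt once it is shut down. To make the
      guest persistent, it can instead be defined and then started; a sketch
      (the domain name is taken from the <literal>&lt;name&gt;</literal>
      element in the XML file):</para>

      <programlisting>virsh define example-guest-x86.xml
virsh start example-guest-x86</programlisting>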
357
358 <para>If further configurations are needed before the guest is reachable
359 through <literal>ssh</literal>, a console can be started using command
360 <command>virsh console</command>. The example below shows how to start a
361 console where kvm-example-guest is the name of the guest defined in the
362 guest XML file:</para>
363
364 <programlisting>virsh console kvm-example-guest</programlisting>
365
366 <para>This requires that the guest domain has a console configured in
367 the guest XML file:</para>
368
369 <programlisting>&lt;os&gt;
370 &lt;cmdline&gt;console=ttyS0,115200&lt;/cmdline&gt;
371&lt;/os&gt;
372&lt;devices&gt;
373 &lt;console type='pty'&gt;
374 &lt;target type='serial' port='0'/&gt;
375 &lt;/console&gt;
376&lt;/devices&gt;</programlisting>
377 </section>
378
379 <section id="isolation">
380 <title>Isolation</title>
381
      <para>It may be desirable to isolate execution in a guest to a specific
      guest core. It might also be desirable to run a guest on a specific host
      core.</para>
385
386 <para>To pin the virtual CPUs of the guest to specific cores, configure
387 the <literal>&lt;cputune&gt;</literal> contents as follows:</para>
388
389 <orderedlist>
390 <listitem>
391 <para>First explicitly state on which host core each guest core
392 shall run, by mapping <literal>vcpu</literal> to
393 <literal>cpuset</literal> in the <literal>&lt;vcpupin&gt;</literal>
394 tag.</para>
395 </listitem>
396
397 <listitem>
398 <para>In the <literal>&lt;cputune&gt;</literal> tag it is further
399 possible to specify on which CPU the emulator shall run by adding
400 the cpuset to the <literal>&lt;emulatorpin&gt;</literal> tag.</para>
401
402 <programlisting>&lt;vcpu placement='static'&gt;2&lt;/vcpu&gt;
403&lt;cputune&gt;
404 &lt;vcpupin vcpu='0' cpuset='2'/&gt;
405 &lt;vcpupin vcpu='1' cpuset='3'/&gt;
406 &lt;emulatorpin cpuset="2"/&gt;
407&lt;/cputune&gt;</programlisting>
408
409 <para><literal>libvirt</literal> will group all threads belonging to
410 a qemu instance into cgroups that will be created for that purpose.
411 It is possible to supply a base name for those cgroups using the
412 <literal>&lt;resource&gt;</literal> tag:</para>
413
414 <programlisting>&lt;resource&gt;
415 &lt;partition&gt;/rt&lt;/partition&gt;
416&lt;/resource&gt;</programlisting>
417 </listitem>
418 </orderedlist>
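      <para>The resulting pinning of a running domain can be inspected from
      the host with, for example (<literal>vm_vhost</literal> being the
      example domain name used later in this chapter):</para>

      <programlisting>virsh vcpupin vm_vhost
virsh emulatorpin vm_vhost</programlisting>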
419 </section>
420
421 <section id="network-libvirt">
422 <title>Networking using libvirt</title>
423
424 <para>Command <command>virsh net-create</command> starts a network. If
425 any networks are listed in the guest XML file, those networks must be
426 started before the guest is started. As an example, if the network is
427 defined in a file named example-net.xml, it is started as
428 follows:</para>
429
      <programlisting>virsh net-create example-net.xml</programlisting>

      <para>where example-net.xml contains, for example:</para>

      <programlisting>&lt;network&gt;
  &lt;name&gt;sriov&lt;/name&gt;
  &lt;forward mode='hostdev' managed='yes'&gt;
    &lt;pf dev='eno3'/&gt;
  &lt;/forward&gt;
&lt;/network&gt;</programlisting>
437
438 <para><literal>libvirt</literal> is a virtualization API that supports
439 virtual network creation. These networks can be connected to guests and
440 containers by referencing the network in the guest XML file. It is
441 possible to have a virtual network persistently running on the host by
442 starting the network with command <command>virsh net-define</command>
443 instead of the previously mentioned <command>virsh
444 net-create</command>.</para>
445
446 <para>An example for the sample network defined in
447 <literal>meta-vt/recipes-example/virt-example/files/example-net.xml</literal>:</para>
448
449 <programlisting>virsh net-define example-net.xml</programlisting>
450
451 <para>Command <command>virsh net-autostart</command> enables a
452 persistent network to start automatically when the libvirt daemon
453 starts:</para>
454
455 <programlisting>virsh net-autostart example-net</programlisting>
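      <para>The state of the defined networks can be checked, and an inactive
      persistent network started, with for example:</para>

      <programlisting>virsh net-list --all
virsh net-start example-net</programlisting>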
456
      <para>The guest configuration file (XML) must be updated to access the
      newly created network, like so:</para>
459
460 <programlisting> &lt;interface type='network'&gt;
461 &lt;source network='sriov'/&gt;
462 &lt;/interface&gt;</programlisting>
463
      <para>Presented here are a few modes of network access from the guest
      using <command>virsh</command>:</para>
466
467 <itemizedlist>
468 <listitem>
469 <para><emphasis role="bold">vhost-user interface</emphasis></para>
470
          <para>See the Open vSwitch chapter on how to create a vhost-user
          interface using Open vSwitch. Currently there is no Open vSwitch
          support for networks that are managed by libvirt (e.g. NAT). As of
          now, only bridged networks are supported (those where the user has
          to manually create the bridge).</para>
476
477 <programlisting> &lt;interface type='vhostuser'&gt;
478 &lt;mac address='00:00:00:00:00:01'/&gt;
479 &lt;source type='unix' path='/var/run/openvswitch/vhost-user1' mode='client'/&gt;
480 &lt;model type='virtio'/&gt;
481 &lt;driver queues='1'&gt;
482 &lt;host mrg_rxbuf='off'/&gt;
483 &lt;/driver&gt;
484 &lt;/interface&gt;</programlisting>
485 </listitem>
486
487 <listitem>
488 <para><emphasis role="bold">PCI passthrough
489 (SR-IOV)</emphasis></para>
490
          <para>The KVM hypervisor supports attaching PCI devices on the host
          system to guests. PCI passthrough allows guests to have exclusive
          access to PCI devices for a range of tasks, and allows PCI devices
          to appear and behave as if they were physically attached to the
          guest operating system.</para>
496
497 <para>Preparing an Intel system for PCI passthrough is done like
498 so:</para>
499
500 <itemizedlist>
501 <listitem>
502 <para>Enable the Intel VT-d extensions in BIOS</para>
503 </listitem>
504
505 <listitem>
506 <para>Activate Intel VT-d in the kernel by using
507 <literal>intel_iommu=on</literal> as a kernel boot
508 parameter</para>
509 </listitem>
510
511 <listitem>
512 <para>Allow unsafe interrupts in case the system doesn't support
513 interrupt remapping. This can be done using
514 <literal>vfio_iommu_type1.allow_unsafe_interrupts=1</literal> as
515 a boot kernel parameter.</para>
516 </listitem>
517 </itemizedlist>
518
519 <para>VFs must be created on the host before starting the
520 guest:</para>
521
522 <programlisting>$ echo 2 &gt; /sys/class/net/eno3/device/sriov_numvfs
523$ modprobe vfio_pci
524$ dpdk-devbind.py --bind=vfio-pci 0000:03:10.0
525 &lt;interface type='hostdev' managed='yes'&gt;
526 &lt;source&gt;
527 &lt;address type='pci' domain='0x0' bus='0x03' slot='0x10' function='0x0'/&gt;
528 &lt;/source&gt;
529 &lt;mac address='52:54:00:6d:90:02'/&gt;
530 &lt;/interface&gt;</programlisting>
531 </listitem>
532
533 <listitem>
534 <para><emphasis role="bold">Bridge interface</emphasis></para>
535
          <para>In case an OVS bridge exists on the host, it can be used to
          connect the guest:</para>
538
539 <programlisting> &lt;interface type='bridge'&gt;
540 &lt;mac address='52:54:00:71:b1:b6'/&gt;
541 &lt;source bridge='ovsbr0'/&gt;
542 &lt;virtualport type='openvswitch'/&gt;
543 &lt;address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/&gt;
544 &lt;/interface&gt;</programlisting>
545
546 <para>For further details on the network XML format, see <ulink
547 url="http://libvirt.org/formatnetwork.html">http://libvirt.org/formatnetwork.html</ulink>.</para>
548 </listitem>
549 </itemizedlist>
550 </section>
551
552 <section id="libvirt-guest-config-ex">
553 <title>Libvirt guest configuration examples</title>
554
555 <section id="guest-config-vhost-user-interface">
556 <title>Guest configuration with vhost-user interface</title>
557
558 <programlisting>&lt;domain type='kvm'&gt;
559 &lt;name&gt;vm_vhost&lt;/name&gt;
560 &lt;uuid&gt;4a9b3f53-fa2a-47f3-a757-dd87720d9d1d&lt;/uuid&gt;
561 &lt;memory unit='KiB'&gt;4194304&lt;/memory&gt;
562 &lt;currentMemory unit='KiB'&gt;4194304&lt;/currentMemory&gt;
563 &lt;memoryBacking&gt;
564 &lt;hugepages&gt;
565 &lt;page size='1' unit='G' nodeset='0'/&gt;
566 &lt;/hugepages&gt;
567 &lt;/memoryBacking&gt;
568 &lt;vcpu placement='static'&gt;2&lt;/vcpu&gt;
569 &lt;cputune&gt;
570 &lt;shares&gt;4096&lt;/shares&gt;
571 &lt;vcpupin vcpu='0' cpuset='4'/&gt;
572 &lt;vcpupin vcpu='1' cpuset='5'/&gt;
573 &lt;emulatorpin cpuset='4,5'/&gt;
574 &lt;/cputune&gt;
575 &lt;os&gt;
576 &lt;type arch='x86_64' machine='pc'&gt;hvm&lt;/type&gt;
577 &lt;kernel&gt;/mnt/qemu/bzImage&lt;/kernel&gt;
578 &lt;cmdline&gt;root=/dev/vda console=ttyS0,115200&lt;/cmdline&gt;
579 &lt;boot dev='hd'/&gt;
580 &lt;/os&gt;
581 &lt;features&gt;
582 &lt;acpi/&gt;
583 &lt;apic/&gt;
584 &lt;/features&gt;
585 &lt;cpu mode='host-model'&gt;
586 &lt;model fallback='allow'/&gt;
587 &lt;topology sockets='2' cores='1' threads='1'/&gt;
588 &lt;numa&gt;
589 &lt;cell id='0' cpus='0-1' memory='4194304' unit='KiB' memAccess='shared'/&gt;
590 &lt;/numa&gt;
591 &lt;/cpu&gt;
592 &lt;on_poweroff&gt;destroy&lt;/on_poweroff&gt;
593 &lt;on_reboot&gt;restart&lt;/on_reboot&gt;
594 &lt;on_crash&gt;destroy&lt;/on_crash&gt;
595 &lt;devices&gt;
596 &lt;emulator&gt;/usr/bin/qemu-system-x86_64&lt;/emulator&gt;
597 &lt;disk type='file' device='disk'&gt;
598 &lt;driver name='qemu' type='raw' cache='none'/&gt;
599 &lt;source file='/mnt/qemu/enea-image-virtualization-guest-qemux86-64.ext4'/&gt;
600 &lt;target dev='vda' bus='virtio'/&gt;
601 &lt;/disk&gt;
602 &lt;interface type='vhostuser'&gt;
603 &lt;mac address='00:00:00:00:00:01'/&gt;
604 &lt;source type='unix' path='/var/run/openvswitch/vhost-user1' mode='client'/&gt;
605 &lt;model type='virtio'/&gt;
606 &lt;driver queues='1'&gt;
607 &lt;host mrg_rxbuf='off'/&gt;
608 &lt;/driver&gt;
609 &lt;/interface&gt;
610 &lt;serial type='pty'&gt;
611 &lt;target port='0'/&gt;
612 &lt;/serial&gt;
613 &lt;console type='pty'&gt;
614 &lt;target type='serial' port='0'/&gt;
615 &lt;/console&gt;
616 &lt;/devices&gt;
617&lt;/domain&gt;</programlisting>
618 </section>
619
620 <section id="guest-config-pci-passthrough">
621 <title>Guest configuration with PCI passthrough</title>
622
623 <programlisting>&lt;domain type='kvm'&gt;
624 &lt;name&gt;vm_sriov1&lt;/name&gt;
625 &lt;uuid&gt;4a9b3f53-fa2a-47f3-a757-dd87720d9d1d&lt;/uuid&gt;
626 &lt;memory unit='KiB'&gt;4194304&lt;/memory&gt;
627 &lt;currentMemory unit='KiB'&gt;4194304&lt;/currentMemory&gt;
628 &lt;memoryBacking&gt;
629 &lt;hugepages&gt;
630 &lt;page size='1' unit='G' nodeset='0'/&gt;
631 &lt;/hugepages&gt;
632 &lt;/memoryBacking&gt;
633 &lt;vcpu&gt;2&lt;/vcpu&gt;
634 &lt;os&gt;
635 &lt;type arch='x86_64' machine='q35'&gt;hvm&lt;/type&gt;
636 &lt;kernel&gt;/mnt/qemu/bzImage&lt;/kernel&gt;
637 &lt;cmdline&gt;root=/dev/vda console=ttyS0,115200&lt;/cmdline&gt;
638 &lt;boot dev='hd'/&gt;
639 &lt;/os&gt;
640 &lt;features&gt;
641 &lt;acpi/&gt;
642 &lt;apic/&gt;
643 &lt;/features&gt;
644 &lt;cpu mode='host-model'&gt;
645 &lt;model fallback='allow'/&gt;
646 &lt;topology sockets='1' cores='2' threads='1'/&gt;
647 &lt;numa&gt;
648 &lt;cell id='0' cpus='0' memory='4194304' unit='KiB' memAccess='shared'/&gt;
649 &lt;/numa&gt;
650 &lt;/cpu&gt;
651 &lt;on_poweroff&gt;destroy&lt;/on_poweroff&gt;
652 &lt;on_reboot&gt;restart&lt;/on_reboot&gt;
653 &lt;on_crash&gt;destroy&lt;/on_crash&gt;
654 &lt;devices&gt;
655 &lt;emulator&gt;/usr/bin/qemu-system-x86_64&lt;/emulator&gt;
656 &lt;disk type='file' device='disk'&gt;
657 &lt;driver name='qemu' type='raw' cache='none'/&gt;
658 &lt;source file='/mnt/qemu/enea-image-virtualization-guest-qemux86-64.ext4'/&gt;
659 &lt;target dev='vda' bus='virtio'/&gt;
660 &lt;/disk&gt;
661 &lt;interface type='hostdev' managed='yes'&gt;
662 &lt;source&gt;
663 &lt;address type='pci' domain='0x0' bus='0x03' slot='0x10' function='0x0'/&gt;
664 &lt;/source&gt;
665 &lt;mac address='52:54:00:6d:90:02'/&gt;
666 &lt;/interface&gt;
667 &lt;serial type='pty'&gt;
668 &lt;target port='0'/&gt;
669 &lt;/serial&gt;
670 &lt;console type='pty'&gt;
671 &lt;target type='serial' port='0'/&gt;
672 &lt;/console&gt;
673 &lt;/devices&gt;
674&lt;/domain&gt;</programlisting>
675 </section>
676
677 <section id="guest-config-bridge-interface">
678 <title>Guest configuration with bridge interface</title>
679
680 <programlisting>&lt;domain type='kvm'&gt;
681 &lt;name&gt;vm_bridge&lt;/name&gt;
682 &lt;uuid&gt;4a9b3f53-fa2a-47f3-a757-dd87720d9d1d&lt;/uuid&gt;
683 &lt;memory unit='KiB'&gt;4194304&lt;/memory&gt;
684 &lt;currentMemory unit='KiB'&gt;4194304&lt;/currentMemory&gt;
685 &lt;memoryBacking&gt;
686 &lt;hugepages&gt;
687 &lt;page size='1' unit='G' nodeset='0'/&gt;
688 &lt;/hugepages&gt;
689 &lt;/memoryBacking&gt;
690 &lt;vcpu placement='static'&gt;2&lt;/vcpu&gt;
691 &lt;cputune&gt;
692 &lt;shares&gt;4096&lt;/shares&gt;
693 &lt;vcpupin vcpu='0' cpuset='4'/&gt;
694 &lt;vcpupin vcpu='1' cpuset='5'/&gt;
695 &lt;emulatorpin cpuset='4,5'/&gt;
696 &lt;/cputune&gt;
697 &lt;os&gt;
698 &lt;type arch='x86_64' machine='q35'&gt;hvm&lt;/type&gt;
699 &lt;kernel&gt;/mnt/qemu/bzImage&lt;/kernel&gt;
700 &lt;cmdline&gt;root=/dev/vda console=ttyS0,115200&lt;/cmdline&gt;
701 &lt;boot dev='hd'/&gt;
702 &lt;/os&gt;
703 &lt;features&gt;
704 &lt;acpi/&gt;
705 &lt;apic/&gt;
706 &lt;/features&gt;
707 &lt;cpu mode='host-model'&gt;
708 &lt;model fallback='allow'/&gt;
709 &lt;topology sockets='2' cores='1' threads='1'/&gt;
710 &lt;numa&gt;
711 &lt;cell id='0' cpus='0-1' memory='4194304' unit='KiB' memAccess='shared'/&gt;
712 &lt;/numa&gt;
713 &lt;/cpu&gt;
714 &lt;on_poweroff&gt;destroy&lt;/on_poweroff&gt;
715 &lt;on_reboot&gt;restart&lt;/on_reboot&gt;
716 &lt;on_crash&gt;destroy&lt;/on_crash&gt;
717 &lt;devices&gt;
718 &lt;emulator&gt;/usr/bin/qemu-system-x86_64&lt;/emulator&gt;
719 &lt;disk type='file' device='disk'&gt;
720 &lt;driver name='qemu' type='raw' cache='none'/&gt;
721 &lt;source file='/mnt/qemu/enea-image-virtualization-guest-qemux86-64.ext4'/&gt;
722 &lt;target dev='vda' bus='virtio'/&gt;
723 &lt;/disk&gt;
724 &lt;interface type='bridge'&gt;
725 &lt;mac address='52:54:00:71:b1:b6'/&gt;
726 &lt;source bridge='ovsbr0'/&gt;
727 &lt;virtualport type='openvswitch'/&gt;
728 &lt;address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/&gt;
729 &lt;/interface&gt;
730 &lt;serial type='pty'&gt;
731 &lt;target port='0'/&gt;
732 &lt;/serial&gt;
733 &lt;console type='pty'&gt;
734 &lt;target type='serial' port='0'/&gt;
735 &lt;/console&gt;
736 &lt;/devices&gt;
737&lt;/domain&gt;</programlisting>
738 </section>
739 </section>
740 </section>
</chapter>
\ No newline at end of file