diff options
Diffstat (limited to 'doc')
4 files changed, 759 insertions, 6 deletions
diff --git a/doc/book-enea-nfv-access-platform-guide/doc/benchmarks.xml b/doc/book-enea-nfv-access-platform-guide/doc/benchmarks.xml index 0db4fa4..5d6e268 100644 --- a/doc/book-enea-nfv-access-platform-guide/doc/benchmarks.xml +++ b/doc/book-enea-nfv-access-platform-guide/doc/benchmarks.xml | |||
| @@ -1,7 +1,7 @@ | |||
| 1 | <?xml version="1.0" encoding="ISO-8859-1"?> | 1 | <?xml version="1.0" encoding="ISO-8859-1"?> |
| 2 | <!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN" | 2 | <!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN" |
| 3 | "http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd"> | 3 | "http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd"> |
| 4 | <chapter id="workflow"> | 4 | <chapter condition="hidden" id="benchmarks"> |
| 5 | <title>Benchmarks</title> | 5 | <title>Benchmarks</title> |
| 6 | 6 | ||
| 7 | <para></para> | 7 | <para></para> |
diff --git a/doc/book-enea-nfv-access-platform-guide/doc/container_virtualization.xml b/doc/book-enea-nfv-access-platform-guide/doc/container_virtualization.xml index 6f74061..c6ce223 100644 --- a/doc/book-enea-nfv-access-platform-guide/doc/container_virtualization.xml +++ b/doc/book-enea-nfv-access-platform-guide/doc/container_virtualization.xml | |||
| @@ -1,8 +1,138 @@ | |||
| 1 | <?xml version="1.0" encoding="UTF-8"?> | 1 | <?xml version="1.0" encoding="ISO-8859-1"?> |
| 2 | <!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN" | 2 | <!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN" |
| 3 | "http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd"> | 3 | "http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd"> |
| 4 | <chapter condition="hidden" id="workflow"> | 4 | <chapter id="container-virtualization"> |
| 5 | <title>Container Virtualization</title> | 5 | <title>Container Virtualization</title> |
| 6 | 6 | ||
| 7 | <para></para> | 7 | <section id="docker"> |
| 8 | <title>Docker</title> | ||
| 9 | |||
| 10 | <para>Docker is an open-source project that automates the deployment of | ||
| 11 | applications inside software containers, by providing an additional layer | ||
| 12 | of abstraction and automation of operating-system-level virtualization on | ||
| 13 | Linux.</para> | ||
| 14 | |||
| 15 | <para>The software container mechanism uses resource isolation features | ||
| 16 | inside the Linux kernel, such as cgroups and kernel namespaces to allow | ||
| 17 | multiple containers to run within a single Linux instance, avoiding the | ||
| 18 | overhead of starting and maintaining virtual machines. </para> | ||
| 19 | |||
| 20 | <para>Containers are lightweight and include everything needed to run | ||
| 21 | themselves: code, runtime, system tools, system libraries and settings. | ||
| 22 | The main advantage provided by containers is that the encapsulated | ||
| 23 | software is isolated from its surroundings. For example, differences | ||
| 24 | between development and staging environments can be kept separate in order | ||
| 25 | to reduce conflicts between teams running different software on the same | ||
| 26 | infrastructure. </para> | ||
| 27 | |||
| 28 | <para>For a better understanding of what Docker is and how it works, the | ||
| 29 | official documentation provided on the Docker website should be consulted: | ||
| 30 | <ulink | ||
| 31 | url="https://docs.docker.com/">https://docs.docker.com/</ulink>.</para> | ||
| 32 | |||
| 33 | <section id="launch-docker-container"> | ||
| 34 | <title>Launching a Docker container</title> | ||
| 35 | |||
| 36 | <para>Docker provides a hello-world container which checks whether your | ||
| 37 | system is running the daemon correctly. This container can be launched | ||
| 38 | by simply running:</para> | ||
| 39 | |||
| 40 | <programlisting>>docker run hello-world | ||
| 41 | |||
| 42 | Hello from Docker!</programlisting> | ||
| 43 | |||
| 44 | <para>This message shows that your installation appears to be working | ||
| 45 | correctly.</para> | ||
| 46 | </section> | ||
| 47 | |||
| 48 | <section id="run-enfv-guest-image"> | ||
| 49 | <title>Run an Enea NFV Access Platform guest image</title> | ||
| 50 | |||
| 51 | <para>Enea NFV Access Platform guest images can run inside Docker as any | ||
| 52 | other container can. Before starting an Enea NFV Access Platform guest | ||
| 53 | image, a root filesystem has to be imported into Docker:</para> | ||
| 54 | |||
| 55 | <programlisting>>docker import enea-linux-virtualization-guest-x86-64.tar.gz el7guest</programlisting> | ||
| 56 | |||
| 57 | <para>To check that the Docker image has been imported successfully, | ||
| 58 | run:</para> | ||
| 59 | |||
| 60 | <programlisting>>docker images</programlisting> | ||
| 61 | |||
| 62 | <para>Finally, start an Enea NFV Access Platform container with | ||
| 63 | <literal>bash</literal> running as the shell, by running:</para> | ||
| 64 | |||
| 65 | <programlisting>>docker run -it el7guest /bin/bash</programlisting> | ||
| 66 | </section> | ||
| 67 | |||
| 68 | <section id="attach-ext-resources-docker-containers"> | ||
| 69 | <title>Attach external resources to Docker containers</title> | ||
| 70 | |||
| 71 | <para>Any system resource present on the host machine can be attached to, | ||
| 72 | or accessed by, a Docker container.</para> | ||
| 73 | |||
| 74 | <para>Typically, if a file or folder on the host machine needs to be | ||
| 75 | attached to a container, that container should be launched with the | ||
| 76 | <literal>-v</literal> parameter. For example, to attach the home | ||
| 77 | folder of the <literal>root</literal> user to a container, the command line | ||
| 78 | for Docker should have the following format:</para> | ||
| 79 | |||
| 80 | <programlisting>>docker run -it -v /home/root:/home/host_root/ el7guest /bin/bash</programlisting> | ||
| 81 | |||
| 82 | <para>To check that folders have been properly passed from the host to | ||
| 83 | the container, create a file in the source folder on the host root | ||
| 84 | filesystem and check for its existence inside the container's destination | ||
| 85 | location.</para> | ||
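The mapping logic above is easy to script. A minimal sketch, where the build_docker_run helper is illustrative (not part of Docker or Enea NFV Access Platform) and only prints the command line so it can be inspected before running:

```shell
# Illustrative helper: assemble a 'docker run' command line from an image
# name followed by HOST:CONTAINER path pairs.
build_docker_run() {
  image="$1"; shift
  cmd="docker run -it"
  for pair in "$@"; do
    cmd="$cmd -v $pair"
  done
  printf '%s %s /bin/bash\n' "$cmd" "$image"
}

build_docker_run el7guest /home/root:/home/host_root/
# -> docker run -it -v /home/root:/home/host_root/ el7guest /bin/bash
```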
| 86 | |||
| 87 | <section id="attach-vhost-descriptors"> | ||
| 88 | <title>Attach vhost file descriptors</title> | ||
| 89 | |||
| 90 | <para>If OVS is running on the host and vhost file descriptors need to | ||
| 91 | be passed to the container, this can be done by either mapping the | ||
| 92 | folder where all the file descriptors are located or mapping the file | ||
| 93 | descriptor itself:</para> | ||
| 94 | |||
| 95 | <itemizedlist> | ||
| 96 | <listitem> | ||
| 97 | <para>Mapping the folder can be done as exemplified above:</para> | ||
| 98 | |||
| 99 | <programlisting>>docker run -it --rm -v /var/run/openvswitch/:/var/run/openvswitch/ el7guest /bin/bash</programlisting> | ||
| 100 | </listitem> | ||
| 101 | |||
| 102 | <listitem> | ||
| 103 | <para>Mapping a file descriptor is done in a similar way, but the | ||
| 104 | <literal>-v</literal> flag needs to point directly to it:</para> | ||
| 105 | |||
| 106 | <programlisting>>docker run -it --rm -v /var/run/openvswitch/vhost-user1:/var/run/openvswitch/vhost-user1 el7guest /bin/bash</programlisting> | ||
| 107 | </listitem> | ||
| 108 | </itemizedlist> | ||
| 109 | </section> | ||
| 110 | |||
| 111 | <section id="attach-hugepages-mount-folders"> | ||
| 112 | <title>Attach hugepages mount folders</title> | ||
| 113 | |||
| 114 | <para>Hugepages mount folders can also be accessed by a container | ||
| 115 | similarly to how a plain folder is mapped, as shown above.</para> | ||
| 116 | |||
| 117 | <para>For example, if the host system has hugepages mounted in the | ||
| 118 | <literal>/mnt/huge</literal> location, a container can also access | ||
| 119 | hugepages by being launched with:</para> | ||
| 120 | |||
| 121 | <programlisting>>docker run -it -v /mnt/huge:/mnt/huge el7guest /bin/bash</programlisting> | ||
| 122 | </section> | ||
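Before passing a hugepages path to docker run -v, it can help to confirm where hugetlbfs is actually mounted. A self-contained sketch: the find_hugetlbfs helper is hypothetical, and a sample mount table is inlined here; on a real host you would pass /proc/mounts instead.

```shell
# Hypothetical helper: print the mount point of every hugetlbfs entry in a
# /proc/mounts-style file (fields: device, mountpoint, fstype, options, ...).
find_hugetlbfs() {
  awk '$3 == "hugetlbfs" { print $2 }' "$1"
}

# Sample data so the sketch runs anywhere; use /proc/mounts on the host.
mounts=$(mktemp)
printf '%s\n' \
  'proc /proc proc rw 0 0' \
  'hugetlbfs /mnt/huge hugetlbfs rw,relatime,pagesize=2M 0 0' > "$mounts"
find_hugetlbfs "$mounts"   # -> /mnt/huge
```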
| 123 | |||
| 124 | <section id="access-pci-bus"> | ||
| 125 | <title>Access the PCI bus</title> | ||
| 126 | |||
| 127 | <para>If the host machine has multiple SR-IOV instances created, a | ||
| 128 | container can access the instances by being given privileged access to | ||
| 129 | the host system. Unlike folders, PCI devices do not have to be mounted | ||
| 130 | explicitly in order to be accessed and will be available to the | ||
| 131 | container if the <literal>--privileged</literal> flag is passed to the | ||
| 132 | command line:</para> | ||
| 133 | |||
| 134 | <programlisting>>docker run --privileged -it el7guest /bin/bash</programlisting> | ||
| 135 | </section> | ||
| 136 | </section> | ||
| 137 | </section> | ||
| 8 | </chapter> \ No newline at end of file | 138 | </chapter> \ No newline at end of file |
diff --git a/doc/book-enea-nfv-access-platform-guide/doc/getting_started.xml b/doc/book-enea-nfv-access-platform-guide/doc/getting_started.xml index b534e20..aa0dcb5 100644 --- a/doc/book-enea-nfv-access-platform-guide/doc/getting_started.xml +++ b/doc/book-enea-nfv-access-platform-guide/doc/getting_started.xml | |||
| @@ -120,8 +120,218 @@ | |||
| 120 | </section> | 120 | </section> |
| 121 | 121 | ||
| 122 | <section id="prebuilt-artifacts"> | 122 | <section id="prebuilt-artifacts"> |
| 123 | <title>How to use Prebuilt Artifacts</title> | 123 | <title>How to use the Prebuilt Artifacts</title> |
| 124 | 124 | ||
| 125 | <para></para> | 125 | <section id="boot-ramdisk"> |
| 126 | <title>Booting Enea NFV Access Platform using RAMDISK</title> | ||
| 127 | |||
| 128 | <para>There may be use cases, especially at first target ramp-up, where | ||
| 129 | the HDD/SSD has no partitions and you need to prepare the disks for | ||
| 130 | boot. Booting from ramdisk can help with this task.</para> | ||
| 131 | |||
| 132 | <para>The prerequisites needed to proceed:</para> | ||
| 133 | |||
| 134 | <itemizedlist> | ||
| 135 | <listitem> | ||
| 136 | <para>Enea Linux ext4 rootfs image - | ||
| 137 | enea-image-virtualization-host-inteld1521.ext4</para> | ||
| 138 | </listitem> | ||
| 139 | |||
| 140 | <listitem> | ||
| 141 | <para>Enea Linux kernel image - bzImage</para> | ||
| 142 | </listitem> | ||
| 143 | |||
| 144 | <listitem> | ||
| 145 | <para>BIOS has PXE boot enabled</para> | ||
| 146 | </listitem> | ||
| 147 | |||
| 148 | <listitem> | ||
| 149 | <para>PXE/tftp server configured and connected (Ethernet) to the | ||
| 150 | target.</para> | ||
| 151 | </listitem> | ||
| 152 | </itemizedlist> | ||
| 153 | |||
| 154 | <para>Copy the bzImage and enea-image-virtualization-host-inteld1521.ext4.gz | ||
| 155 | images to the tftp server configured for PXE boot.</para> | ||
| 156 | |||
| 157 | <para>Use the following as an example for the PXE configuration | ||
| 158 | file:</para> | ||
| 159 | |||
| 160 | <programlisting>default vesamenu.c32 | ||
| 161 | prompt 1 | ||
| 162 | timeout 0 | ||
| 163 | |||
| 164 | label el_ramfs | ||
| 165 | menu label ^EneaLinux_RAMfs | ||
| 166 | kernel bzImage | ||
| 167 | append root=/dev/ram0 initrd=enea-image-virtualization-host-inteld1521.ext4 \ | ||
| 168 | ramdisk_size=1200000 console=ttyS0,115200 earlyprintk=ttyS0,115200</programlisting> | ||
| 169 | |||
| 170 | <para>Restart the target, then press F11 to enter the Boot Menu and select | ||
| 171 | the Ethernet interface used for PXE boot. From the PXE Boot Menu select | ||
| 172 | <emphasis role="bold">EneaLinux_RAMfs</emphasis>. Once Enea NFV | ||
| 173 | Access Platform has started you can partition the HDD/SSD and install | ||
| 174 | GRUB as described in the following section.</para> | ||
| 175 | </section> | ||
| 176 | |||
| 177 | <section id="install-grub"> | ||
| 178 | <title>Partitioning a new harddisk and installing GRUB</title> | ||
| 179 | |||
| 180 | <para>The prerequisites needed:</para> | ||
| 181 | |||
| 182 | <itemizedlist> | ||
| 183 | <listitem> | ||
| 184 | <para>grub (<literal>grub-efi-bootx64.efi</literal>) - available as | ||
| 185 | a pre-built artifact under | ||
| 186 | <literal>inteld1521/images/enea-image-virtualization-host</literal>.</para> | ||
| 187 | </listitem> | ||
| 188 | |||
| 189 | <listitem> | ||
| 190 | <para><literal>e2fsprogs-mke2fs_1.43.4-r0.0_amd64.deb, | ||
| 191 | dosfstools_4.1-r0.0_amd64.deb</literal> - available under | ||
| 192 | <literal>inteld1521/deb</literal>.</para> | ||
| 193 | </listitem> | ||
| 194 | </itemizedlist> | ||
| 195 | |||
| 196 | <para>Proceed using the following steps:</para> | ||
| 197 | |||
| 198 | <orderedlist> | ||
| 199 | <listitem> | ||
| 200 | <para>Boot target with Enea NFV Access Platform from RAMDISK</para> | ||
| 201 | </listitem> | ||
| 202 | |||
| 203 | <listitem> | ||
| 204 | <para>Install prerequisite packages:</para> | ||
| 205 | |||
| 206 | <programlisting>> dpkg -i e2fsprogs-mke2fs_1.43.4-r0.0_amd64.deb | ||
| 207 | > dpkg -i dosfstools_4.1-r0.0_amd64.deb</programlisting> | ||
| 208 | </listitem> | ||
| 209 | |||
| 210 | <listitem> | ||
| 211 | <para>Partition the disk:</para> | ||
| 212 | |||
| 213 | <programlisting>> fdisk /dev/sda | ||
| 214 | fdisk> g {GPT partition type} | ||
| 215 | fdisk> n | ||
| 216 | fdisk> 1 | ||
| 217 | fdisk> {default start part} | ||
| 218 | fdisk> +512M | ||
| 219 | fdisk> t | ||
| 220 | fdisk> 1 {ESP/EFI partition} | ||
| 221 | fdisk> n | ||
| 222 | fdisk> 2 | ||
| 223 | fdisk> {default start part} | ||
| 224 | fdisk> +18G | ||
| 225 | fdisk> 3 | ||
| 226 | fdisk> {default start part} | ||
| 227 | fdisk> +20G | ||
| 228 | ... | ||
| 229 | fdisk> 7 | ||
| 230 | fdisk> {default start part} | ||
| 231 | fdisk> {default end part} | ||
| 232 | |||
| 233 | fdisk> p {print partition table} | ||
| 234 | fdisk> w {write to disk} | ||
| 235 | fdisk> q</programlisting> | ||
| 236 | </listitem> | ||
| 237 | |||
| 238 | <listitem> | ||
| 239 | <para>Format the partitions:</para> | ||
| 240 | |||
| 241 | <programlisting>> mkfs.fat -F32 -nEFI /dev/sda1 | ||
| 242 | > mkfs.ext4 -LROOT /dev/sda2 | ||
| 243 | > mkfs.ext4 -LROOT /dev/sda3 | ||
| 244 | > mkfs.ext4 -LROOT /dev/sda4 | ||
| 245 | > mkfs.ext4 -LROOT /dev/sda5 | ||
| 246 | > mkfs.ext4 -LROOT /dev/sda6 | ||
| 247 | > mkfs.ext4 -LROOT /dev/sda7</programlisting> | ||
| 248 | </listitem> | ||
| 249 | |||
| 250 | <listitem> | ||
| 251 | <para>Create a GRUB partition:</para> | ||
| 252 | |||
| 253 | <programlisting>> mkdir /mnt/boot | ||
| 254 | > mount /dev/sda1 /mnt/boot | ||
| 255 | > mkdir -p /mnt/boot/EFI/boot | ||
| 256 | |||
| 257 | > cp grub-efi-bootx64.efi /mnt/boot/EFI/boot/bootx64.efi | ||
| 258 | > vi /mnt/boot/EFI/boot/grub.cfg | ||
| 259 | default=1 | ||
| 260 | |||
| 261 | menuentry "Linux Reference Image" { | ||
| 262 | linux (hd0,gpt2)/boot/bzImage root=/dev/sda2 ip=dhcp | ||
| 263 | } | ||
| 264 | |||
| 265 | menuentry "Linux sda3" { | ||
| 266 | linux (hd0,gpt3)/boot/bzImage root=/dev/sda3 ip=dhcp | ||
| 267 | } | ||
| 268 | |||
| 269 | menuentry "Linux sda4" { | ||
| 270 | linux (hd0,gpt4)/boot/bzImage root=/dev/sda4 ip=dhcp | ||
| 271 | } | ||
| 272 | |||
| 273 | menuentry "Linux sda5" { | ||
| 274 | linux (hd0,gpt5)/boot/bzImage root=/dev/sda5 ip=dhcp | ||
| 275 | } | ||
| 276 | |||
| 277 | menuentry "Linux sda6" { | ||
| 278 | linux (hd0,gpt6)/boot/bzImage root=/dev/sda6 ip=dhcp | ||
| 279 | } | ||
| 280 | |||
| 281 | menuentry "Linux sda7" { | ||
| 282 | linux (hd0,gpt7)/boot/bzImage root=/dev/sda7 ip=dhcp | ||
| 283 | }</programlisting> | ||
| 284 | </listitem> | ||
| 285 | </orderedlist> | ||
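The menuentry stanzas for sda3 through sda7 are repetitive, so they can be generated rather than typed out. A sketch assuming the partition layout created above; gen_grub_entries is an illustrative helper, not a shipped tool:

```shell
# Illustrative generator for the repetitive grub.cfg stanzas; partitions
# 3-7 follow the fdisk layout created in the previous step.
gen_grub_entries() {
  for n in 3 4 5 6 7; do
    printf 'menuentry "Linux sda%s" {\n' "$n"
    printf '    linux (hd0,gpt%s)/boot/bzImage root=/dev/sda%s ip=dhcp\n' "$n" "$n"
    printf '}\n\n'
  done
}

gen_grub_entries
```

In the workflow above the output would be appended to the file being edited, e.g. gen_grub_entries >> /mnt/boot/EFI/boot/grub.cfg.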
| 286 | </section> | ||
| 287 | |||
| 288 | <section id="boot-hdd"> | ||
| 289 | <title>Installing and booting Enea NFV Access Platform on the | ||
| 290 | harddisk</title> | ||
| 291 | |||
| 292 | <para>After partitioning the harddisk, boot Enea NFV Access Platform | ||
| 293 | from RAMFS or from a reference image installed on one of the | ||
| 294 | partitions.</para> | ||
| 295 | |||
| 296 | <para>To install an Enea Linux image on a partition, follow these | ||
| 297 | steps:</para> | ||
| 298 | |||
| 299 | <orderedlist> | ||
| 300 | <listitem> | ||
| 301 | <para>Copy your platform image onto the target:</para> | ||
| 302 | |||
| 303 | <programlisting>server> scp ./enea-image-virtualization-host-inteld1521.tar.gz \ | ||
| 304 | root@<target_ip>:/home/root/</programlisting> | ||
| 305 | </listitem> | ||
| 306 | |||
| 307 | <listitem> | ||
| 308 | <para>Extract the image onto the desired partition:</para> | ||
| 309 | |||
| 310 | <programlisting>target> mount /dev/sda3 /mnt/sda | ||
| 311 | target> tar -pzxf /home/root/enea-image-virtualization-host-inteld1521.tar.gz \ | ||
| 312 | -C /mnt/sda</programlisting> | ||
| 313 | |||
| 314 | <para>Alternatively, you can do both steps in one command from the | ||
| 315 | server:</para> | ||
| 316 | |||
| 317 | <programlisting>server> cat ./enea-image-virtualization-host-inteld1521.tar.gz | \ | ||
| 318 | ssh root@<target_ip> "cd /mnt/sda6; tar -zxf -"</programlisting> | ||
| 319 | </listitem> | ||
| 320 | |||
| 321 | <listitem> | ||
| 322 | <para>Reboot</para> | ||
| 323 | </listitem> | ||
| 324 | |||
| 325 | <listitem> | ||
| 326 | <para>From the GRUB menu select your partition</para> | ||
| 327 | </listitem> | ||
| 328 | </orderedlist> | ||
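The extract step can be rehearsed on a throwaway tarball to check the tar flags without target hardware. A sketch where a temporary directory stands in for the target's /home/root and /mnt/sda:

```shell
# Rehearse the extract step: -p preserves permissions, -C selects the
# destination directory (standing in for the mounted /mnt/sda).
workdir=$(mktemp -d)
mkdir -p "$workdir/rootfs/etc"
echo "inteld1521" > "$workdir/rootfs/etc/hostname"

# Pack a miniature "platform image" ...
tar -pczf "$workdir/image.tar.gz" -C "$workdir/rootfs" .

# ... and unpack it the same way the real image is unpacked on target.
mkdir "$workdir/mnt"
tar -pzxf "$workdir/image.tar.gz" -C "$workdir/mnt"
ls "$workdir/mnt/etc"   # -> hostname
```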
| 329 | |||
| 330 | <note> | ||
| 331 | <para>In order to change kernel boot parameters you need to mount the | ||
| 332 | GRUB partition (i.e. <literal>/dev/sda1</literal>) and change the | ||
| 333 | <literal>EFI/boot/grub.cfg</literal> file.</para> | ||
| 334 | </note> | ||
| 335 | </section> | ||
| 126 | </section> | 336 | </section> |
| 127 | </chapter> \ No newline at end of file | 337 | </chapter> \ No newline at end of file |
diff --git a/doc/book-enea-nfv-access-platform-guide/doc/hypervisor_virtualization.xml b/doc/book-enea-nfv-access-platform-guide/doc/hypervisor_virtualization.xml index 092b52f..6242de4 100644 --- a/doc/book-enea-nfv-access-platform-guide/doc/hypervisor_virtualization.xml +++ b/doc/book-enea-nfv-access-platform-guide/doc/hypervisor_virtualization.xml | |||
| @@ -324,5 +324,418 @@ $ testpmd -c 0x7 -- -i --rxq=2 --txq=2 --nb-cores=2 ...</programlisting> | |||
| 324 | </itemizedlist> | 324 | </itemizedlist> |
| 325 | </listitem> | 325 | </listitem> |
| 326 | </itemizedlist> | 326 | </itemizedlist> |
| 327 | |||
| 328 | <section id="boot-kvm-guest"> | ||
| 329 | <title>Booting a KVM Guest</title> | ||
| 330 | |||
| 331 | <para>There are several ways to boot a KVM guest. Here we describe how | ||
| 332 | to boot using a raw image. A direct kernel boot can be performed by | ||
| 333 | transferring the guest kernel and the file system files to the host and | ||
| 334 | specifying a <literal><kernel></literal> and an | ||
| 335 | <literal><initrd></literal> element inside the | ||
| 336 | <literal><os></literal> element of the guest XML file, as in the | ||
| 337 | following example:</para> | ||
| 338 | |||
| 339 | <programlisting><os> | ||
| 340 | <kernel>bzImage</kernel> | ||
| 341 | </os> | ||
| 342 | <devices> | ||
| 343 | <disk type='file' device='disk'> | ||
| 344 | <driver name='qemu' type='raw' cache='none'/> | ||
| 345 | <source file='enea-image-virtualization-guest-qemux86-64.ext4'/> | ||
| 346 | <target dev='vda' bus='virtio'/> | ||
| 347 | </disk> | ||
| 348 | </devices></programlisting> | ||
| 349 | </section> | ||
| 350 | |||
| 351 | <section id="start-guest"> | ||
| 352 | <title>Starting a Guest</title> | ||
| 353 | |||
| 354 | <para>Command <command>virsh create</command> starts a guest:</para> | ||
| 355 | |||
| 356 | <programlisting>virsh create example-guest-x86.xml</programlisting> | ||
| 357 | |||
| 358 | <para>If further configurations are needed before the guest is reachable | ||
| 359 | through <literal>ssh</literal>, a console can be started using command | ||
| 360 | <command>virsh console</command>. The example below shows how to start a | ||
| 361 | console where kvm-example-guest is the name of the guest defined in the | ||
| 362 | guest XML file:</para> | ||
| 363 | |||
| 364 | <programlisting>virsh console kvm-example-guest</programlisting> | ||
| 365 | |||
| 366 | <para>This requires that the guest domain has a console configured in | ||
| 367 | the guest XML file:</para> | ||
| 368 | |||
| 369 | <programlisting><os> | ||
| 370 | <cmdline>console=ttyS0,115200</cmdline> | ||
| 371 | </os> | ||
| 372 | <devices> | ||
| 373 | <console type='pty'> | ||
| 374 | <target type='serial' port='0'/> | ||
| 375 | </console> | ||
| 376 | </devices></programlisting> | ||
| 377 | </section> | ||
| 378 | |||
| 379 | <section id="isolation"> | ||
| 380 | <title>Isolation</title> | ||
| 381 | |||
| 382 | <para>It may be desirable to isolate execution in a guest to a specific | ||
| 383 | guest core. It might also be desirable to run a guest on a specific host | ||
| 384 | core.</para> | ||
| 385 | |||
| 386 | <para>To pin the virtual CPUs of the guest to specific cores, configure | ||
| 387 | the <literal><cputune></literal> contents as follows:</para> | ||
| 388 | |||
| 389 | <orderedlist> | ||
| 390 | <listitem> | ||
| 391 | <para>First explicitly state on which host core each guest core | ||
| 392 | shall run, by mapping <literal>vcpu</literal> to | ||
| 393 | <literal>cpuset</literal> in the <literal><vcpupin></literal> | ||
| 394 | tag.</para> | ||
| 395 | </listitem> | ||
| 396 | |||
| 397 | <listitem> | ||
| 398 | <para>In the <literal><cputune></literal> tag it is further | ||
| 399 | possible to specify on which CPU the emulator shall run by adding | ||
| 400 | the cpuset to the <literal><emulatorpin></literal> tag.</para> | ||
| 401 | |||
| 402 | <programlisting><vcpu placement='static'>2</vcpu> | ||
| 403 | <cputune> | ||
| 404 | <vcpupin vcpu='0' cpuset='2'/> | ||
| 405 | <vcpupin vcpu='1' cpuset='3'/> | ||
| 406 | <emulatorpin cpuset="2"/> | ||
| 407 | </cputune></programlisting> | ||
| 408 | |||
| 409 | <para><literal>libvirt</literal> will group all threads belonging to | ||
| 410 | a qemu instance into cgroups that will be created for that purpose. | ||
| 411 | It is possible to supply a base name for those cgroups using the | ||
| 412 | <literal><resource></literal> tag:</para> | ||
| 413 | |||
| 414 | <programlisting><resource> | ||
| 415 | <partition>/rt</partition> | ||
| 416 | </resource></programlisting> | ||
| 417 | </listitem> | ||
| 418 | </orderedlist> | ||
| 419 | </section> | ||
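As a quick sanity check before defining the domain, the vcpu-to-host-cpu pinning can be pulled out of the XML with sed. A sketch using the cputune fragment from this section, written to a temp file purely for illustration:

```shell
# The <cputune> fragment from the example above, saved to a temp file so
# the sketch is self-contained; on a real host you would use the domain XML.
tmpxml=$(mktemp)
cat > "$tmpxml" <<'EOF'
<cputune>
  <vcpupin vcpu='0' cpuset='2'/>
  <vcpupin vcpu='1' cpuset='3'/>
  <emulatorpin cpuset="2"/>
</cputune>
EOF

# Print each vcpu -> host cpu mapping declared by <vcpupin> tags.
sed -n "s/.*vcpupin vcpu='\([0-9]*\)' cpuset='\([0-9]*\)'.*/vcpu \1 -> host cpu \2/p" "$tmpxml"
```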
| 420 | |||
| 421 | <section id="network-libvirt"> | ||
| 422 | <title>Networking using libvirt</title> | ||
| 423 | |||
| 424 | <para>Command <command>virsh net-create</command> starts a network. If | ||
| 425 | any networks are listed in the guest XML file, those networks must be | ||
| 426 | started before the guest is started. As an example, if the network is | ||
| 427 | defined in a file named example-net.xml with the contents shown, it | ||
| 428 | is started as follows:</para> | ||
| 429 | |||
| 430 | <programlisting>virsh net-create example-net.xml | ||
| 431 | <network> | ||
| 432 | <name>sriov</name> | ||
| 433 | <forward mode='hostdev' managed='yes'> | ||
| 434 | <pf dev='eno3'/> | ||
| 435 | </forward> | ||
| 436 | </network></programlisting> | ||
| 437 | |||
| 438 | <para><literal>libvirt</literal> is a virtualization API that supports | ||
| 439 | virtual network creation. These networks can be connected to guests and | ||
| 440 | containers by referencing the network in the guest XML file. It is | ||
| 441 | possible to have a virtual network persistently running on the host by | ||
| 442 | starting the network with command <command>virsh net-define</command> | ||
| 443 | instead of the previously mentioned <command>virsh net-create</command>. | ||
| 444 | </para> | ||
| 445 | |||
| 446 | <para>An example for the sample network defined in | ||
| 447 | <literal>meta-vt/recipes-example/virt-example/files/example-net.xml</literal>:</para> | ||
| 448 | |||
| 449 | <programlisting>virsh net-define example-net.xml</programlisting> | ||
| 450 | |||
| 451 | <para>Command <command>virsh net-autostart</command> enables a | ||
| 452 | persistent network to start automatically when the libvirt daemon | ||
| 453 | starts:</para> | ||
| 454 | |||
| 455 | <programlisting>virsh net-autostart example-net</programlisting> | ||
| 456 | |||
| 457 | <para>The guest configuration file (XML) must be updated to access the | ||
| 458 | newly created network, like so:</para> | ||
| 459 | |||
| 460 | <programlisting> <interface type='network'> | ||
| 461 | <source network='sriov'/> | ||
| 462 | </interface></programlisting> | ||
| 463 | |||
| 464 | <para>Presented here are a few modes of network access from a guest | ||
| 465 | using virsh:</para> | ||
| 466 | |||
| 467 | <itemizedlist> | ||
| 468 | <listitem> | ||
| 469 | <para><emphasis role="bold">vhost-user interface</emphasis></para> | ||
| 470 | |||
| 471 | <para>See the Open vSwitch chapter on how to create a vhost-user | ||
| 472 | interface. Currently there is no Open vSwitch support for | ||
| 473 | networks that are managed by libvirt (e.g. NAT). As of now, only | ||
| 474 | bridged networks are supported (those where the user has to manually | ||
| 475 | create the bridge).</para> | ||
| 476 | |||
| 477 | <programlisting> <interface type='vhostuser'> | ||
| 478 | <mac address='00:00:00:00:00:01'/> | ||
| 479 | <source type='unix' path='/var/run/openvswitch/vhost-user1' mode='client'/> | ||
| 480 | <model type='virtio'/> | ||
| 481 | <driver queues='1'> | ||
| 482 | <host mrg_rxbuf='off'/> | ||
| 483 | </driver> | ||
| 484 | </interface></programlisting> | ||
| 485 | </listitem> | ||
| 486 | |||
| 487 | <listitem> | ||
| 488 | <para><emphasis role="bold">PCI passthrough | ||
| 489 | (SR-IOV)</emphasis></para> | ||
| 490 | |||
| 491 | <para>The KVM hypervisor supports attaching PCI devices on the host | ||
| 492 | system to guests. PCI passthrough gives guests exclusive | ||
| 493 | access to PCI devices for a range of tasks, and allows | ||
| 494 | PCI devices to appear and behave as if they were physically attached | ||
| 495 | to the guest operating system.</para> | ||
| 496 | |||
| 497 | <para>Preparing an Intel system for PCI passthrough is done as | ||
| 498 | follows:</para> | ||
| 499 | |||
| 500 | <itemizedlist> | ||
| 501 | <listitem> | ||
| 502 | <para>Enable the Intel VT-d extensions in BIOS</para> | ||
| 503 | </listitem> | ||
| 504 | |||
| 505 | <listitem> | ||
| 506 | <para>Activate Intel VT-d in the kernel by using | ||
| 507 | <literal>intel_iommu=on</literal> as a kernel boot | ||
| 508 | parameter</para> | ||
| 509 | </listitem> | ||
| 510 | |||
| 511 | <listitem> | ||
| 512 | <para>Allow unsafe interrupts in case the system doesn't support | ||
| 513 | interrupt remapping. This can be done using | ||
| 514 | <literal>vfio_iommu_type1.allow_unsafe_interrupts=1</literal> as | ||
| 515 | a kernel boot parameter.</para> | ||
| 516 | </listitem> | ||
| 517 | </itemizedlist> | ||
| 518 | |||
| 519 | <para>VFs must be created on the host before starting the | ||
| 520 | guest:</para> | ||
| 521 | |||
| 522 | <programlisting>$ echo 2 > /sys/class/net/eno3/device/sriov_numvfs | ||
| 523 | $ modprobe vfio_pci | ||
| 524 | $ dpdk-devbind.py --bind=vfio-pci 0000:03:10.0 | ||
| 525 | <interface type='hostdev' managed='yes'> | ||
| 526 | <source> | ||
| 527 | <address type='pci' domain='0x0' bus='0x03' slot='0x10' function='0x0'/> | ||
| 528 | </source> | ||
| 529 | <mac address='52:54:00:6d:90:02'/> | ||
| 530 | </interface></programlisting> | ||
| 531 | </listitem> | ||
| 532 | |||
| 533 | <listitem> | ||
| 534 | <para><emphasis role="bold">Bridge interface</emphasis></para> | ||
| 535 | |||
| 536 | <para>In case an OVS bridge exists on host, it can be used to | ||
| 537 | connect the guest:</para> | ||
| 538 | |||
| 539 | <programlisting> <interface type='bridge'> | ||
| 540 | <mac address='52:54:00:71:b1:b6'/> | ||
| 541 | <source bridge='ovsbr0'/> | ||
| 542 | <virtualport type='openvswitch'/> | ||
| 543 | <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> | ||
| 544 | </interface></programlisting> | ||
| 545 | |||
| 546 | <para>For further details on the network XML format, see <ulink | ||
| 547 | url="http://libvirt.org/formatnetwork.html">http://libvirt.org/formatnetwork.html</ulink>.</para> | ||
| 548 | </listitem> | ||
| 549 | </itemizedlist> | ||
| 550 | </section> | ||
| 551 | |||
| 552 | <section id="libvirt-guest-config-ex"> | ||
| 553 | <title>Libvirt guest configuration examples</title> | ||
| 554 | |||
| 555 | <section id="guest-config-vhost-user-interface"> | ||
| 556 | <title>Guest configuration with vhost-user interface</title> | ||
| 557 | |||
| 558 | <programlisting><domain type='kvm'> | ||
| 559 | <name>vm_vhost</name> | ||
| 560 | <uuid>4a9b3f53-fa2a-47f3-a757-dd87720d9d1d</uuid> | ||
| 561 | <memory unit='KiB'>4194304</memory> | ||
| 562 | <currentMemory unit='KiB'>4194304</currentMemory> | ||
| 563 | <memoryBacking> | ||
| 564 | <hugepages> | ||
| 565 | <page size='1' unit='G' nodeset='0'/> | ||
| 566 | </hugepages> | ||
| 567 | </memoryBacking> | ||
| 568 | <vcpu placement='static'>2</vcpu> | ||
| 569 | <cputune> | ||
| 570 | <shares>4096</shares> | ||
| 571 | <vcpupin vcpu='0' cpuset='4'/> | ||
| 572 | <vcpupin vcpu='1' cpuset='5'/> | ||
| 573 | <emulatorpin cpuset='4,5'/> | ||
| 574 | </cputune> | ||
| 575 | <os> | ||
| 576 | <type arch='x86_64' machine='pc'>hvm</type> | ||
| 577 | <kernel>/mnt/qemu/bzImage</kernel> | ||
| 578 | <cmdline>root=/dev/vda console=ttyS0,115200</cmdline> | ||
| 579 | <boot dev='hd'/> | ||
| 580 | </os> | ||
| 581 | <features> | ||
| 582 | <acpi/> | ||
| 583 | <apic/> | ||
| 584 | </features> | ||
| 585 | <cpu mode='host-model'> | ||
| 586 | <model fallback='allow'/> | ||
| 587 | <topology sockets='2' cores='1' threads='1'/> | ||
| 588 | <numa> | ||
| 589 | <cell id='0' cpus='0-1' memory='4194304' unit='KiB' memAccess='shared'/> | ||
| 590 | </numa> | ||
| 591 | </cpu> | ||
| 592 | <on_poweroff>destroy</on_poweroff> | ||
| 593 | <on_reboot>restart</on_reboot> | ||
| 594 | <on_crash>destroy</on_crash> | ||
| 595 | <devices> | ||
| 596 | <emulator>/usr/bin/qemu-system-x86_64</emulator> | ||
| 597 | <disk type='file' device='disk'> | ||
| 598 | <driver name='qemu' type='raw' cache='none'/> | ||
| 599 | <source file='/mnt/qemu/enea-image-virtualization-guest-qemux86-64.ext4'/> | ||
| 600 | <target dev='vda' bus='virtio'/> | ||
| 601 | </disk> | ||
| 602 | <interface type='vhostuser'> | ||
| 603 | <mac address='00:00:00:00:00:01'/> | ||
| 604 | <source type='unix' path='/var/run/openvswitch/vhost-user1' mode='client'/> | ||
| 605 | <model type='virtio'/> | ||
| 606 | <driver queues='1'> | ||
| 607 | <host mrg_rxbuf='off'/> | ||
| 608 | </driver> | ||
| 609 | </interface> | ||
| 610 | <serial type='pty'> | ||
| 611 | <target port='0'/> | ||
| 612 | </serial> | ||
| 613 | <console type='pty'> | ||
| 614 | <target type='serial' port='0'/> | ||
| 615 | </console> | ||
| 616 | </devices> | ||
| 617 | </domain></programlisting> | ||
| 618 | </section> | ||
| 619 | |||
| 620 | <section id="guest-config-pci-passthrough"> | ||
| 621 | <title>Guest configuration with PCI passthrough</title> | ||
| 622 | |||
| 623 | <programlisting><domain type='kvm'> | ||
| 624 | <name>vm_sriov1</name> | ||
| 625 | <uuid>4a9b3f53-fa2a-47f3-a757-dd87720d9d1d</uuid> | ||
| 626 | <memory unit='KiB'>4194304</memory> | ||
| 627 | <currentMemory unit='KiB'>4194304</currentMemory> | ||
| 628 | <memoryBacking> | ||
| 629 | <hugepages> | ||
| 630 | <page size='1' unit='G' nodeset='0'/> | ||
| 631 | </hugepages> | ||
| 632 | </memoryBacking> | ||
| 633 | <vcpu>2</vcpu> | ||
| 634 | <os> | ||
| 635 | <type arch='x86_64' machine='q35'>hvm</type> | ||
| 636 | <kernel>/mnt/qemu/bzImage</kernel> | ||
| 637 | <cmdline>root=/dev/vda console=ttyS0,115200</cmdline> | ||
| 638 | <boot dev='hd'/> | ||
| 639 | </os> | ||
| 640 | <features> | ||
| 641 | <acpi/> | ||
| 642 | <apic/> | ||
| 643 | </features> | ||
| 644 | <cpu mode='host-model'> | ||
| 645 | <model fallback='allow'/> | ||
| 646 | <topology sockets='1' cores='2' threads='1'/> | ||
| 647 | <numa> | ||
| 648 | <cell id='0' cpus='0' memory='4194304' unit='KiB' memAccess='shared'/> | ||
| 649 | </numa> | ||
| 650 | </cpu> | ||
| 651 | <on_poweroff>destroy</on_poweroff> | ||
| 652 | <on_reboot>restart</on_reboot> | ||
| 653 | <on_crash>destroy</on_crash> | ||
| 654 | <devices> | ||
| 655 | <emulator>/usr/bin/qemu-system-x86_64</emulator> | ||
| 656 | <disk type='file' device='disk'> | ||
| 657 | <driver name='qemu' type='raw' cache='none'/> | ||
| 658 | <source file='/mnt/qemu/enea-image-virtualization-guest-qemux86-64.ext4'/> | ||
| 659 | <target dev='vda' bus='virtio'/> | ||
| 660 | </disk> | ||
| 661 | <interface type='hostdev' managed='yes'> | ||
| 662 | <source> | ||
| 663 | <address type='pci' domain='0x0' bus='0x03' slot='0x10' function='0x0'/> | ||
| 664 | </source> | ||
| 665 | <mac address='52:54:00:6d:90:02'/> | ||
| 666 | </interface> | ||
| 667 | <serial type='pty'> | ||
| 668 | <target port='0'/> | ||
| 669 | </serial> | ||
| 670 | <console type='pty'> | ||
| 671 | <target type='serial' port='0'/> | ||
| 672 | </console> | ||
| 673 | </devices> | ||
| 674 | </domain></programlisting> | ||
| 675 | </section> | ||
| 676 | |||
| 677 | <section id="guest-config-bridge-interface"> | ||
| 678 | <title>Guest configuration with bridge interface</title> | ||
| 679 | |||
| 680 | <programlisting><domain type='kvm'> | ||
| 681 | <name>vm_bridge</name> | ||
| 682 | <uuid>4a9b3f53-fa2a-47f3-a757-dd87720d9d1d</uuid> | ||
| 683 | <memory unit='KiB'>4194304</memory> | ||
| 684 | <currentMemory unit='KiB'>4194304</currentMemory> | ||
| 685 | <memoryBacking> | ||
| 686 | <hugepages> | ||
| 687 | <page size='1' unit='G' nodeset='0'/> | ||
| 688 | </hugepages> | ||
| 689 | </memoryBacking> | ||
| 690 | <vcpu placement='static'>2</vcpu> | ||
| 691 | <cputune> | ||
| 692 | <shares>4096</shares> | ||
| 693 | <vcpupin vcpu='0' cpuset='4'/> | ||
| 694 | <vcpupin vcpu='1' cpuset='5'/> | ||
| 695 | <emulatorpin cpuset='4,5'/> | ||
| 696 | </cputune> | ||
| 697 | <os> | ||
| 698 | <type arch='x86_64' machine='q35'>hvm</type> | ||
| 699 | <kernel>/mnt/qemu/bzImage</kernel> | ||
| 700 | <cmdline>root=/dev/vda console=ttyS0,115200</cmdline> | ||
| 701 | <boot dev='hd'/> | ||
| 702 | </os> | ||
| 703 | <features> | ||
| 704 | <acpi/> | ||
| 705 | <apic/> | ||
| 706 | </features> | ||
| 707 | <cpu mode='host-model'> | ||
| 708 | <model fallback='allow'/> | ||
| 709 | <topology sockets='2' cores='1' threads='1'/> | ||
| 710 | <numa> | ||
| 711 | <cell id='0' cpus='0-1' memory='4194304' unit='KiB' memAccess='shared'/> | ||
| 712 | </numa> | ||
| 713 | </cpu> | ||
| 714 | <on_poweroff>destroy</on_poweroff> | ||
| 715 | <on_reboot>restart</on_reboot> | ||
| 716 | <on_crash>destroy</on_crash> | ||
| 717 | <devices> | ||
| 718 | <emulator>/usr/bin/qemu-system-x86_64</emulator> | ||
| 719 | <disk type='file' device='disk'> | ||
| 720 | <driver name='qemu' type='raw' cache='none'/> | ||
| 721 | <source file='/mnt/qemu/enea-image-virtualization-guest-qemux86-64.ext4'/> | ||
| 722 | <target dev='vda' bus='virtio'/> | ||
| 723 | </disk> | ||
| 724 | <interface type='bridge'> | ||
| 725 | <mac address='52:54:00:71:b1:b6'/> | ||
| 726 | <source bridge='ovsbr0'/> | ||
| 727 | <virtualport type='openvswitch'/> | ||
| 728 | <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> | ||
| 729 | </interface> | ||
| 730 | <serial type='pty'> | ||
| 731 | <target port='0'/> | ||
| 732 | </serial> | ||
| 733 | <console type='pty'> | ||
| 734 | <target type='serial' port='0'/> | ||
| 735 | </console> | ||
| 736 | </devices> | ||
| 737 | </domain></programlisting> | ||
| 738 | </section> | ||
| 739 | </section> | ||
| 327 | </section> | 740 | </section> |
| 328 | </chapter> \ No newline at end of file | 741 | </chapter> \ No newline at end of file |
