| author | Miruna Paun <Miruna.Paun@enea.com> | 2017-12-19 18:53:06 +0100 |
|---|---|---|
| committer | Miruna Paun <Miruna.Paun@enea.com> | 2017-12-19 18:53:06 +0100 |
| commit | 7a534b1b68bea3ce41355b736d7df778fd873d80 (patch) | |
| tree | 8c472c336dac434ed28652d458f7aba8a97ca4a5 | |
| parent | 7d22f83f0b3af1a5a93cd7d1775601297c96e89f (diff) | |
| download | nfv-access-documentation-7a534b1b68bea3ce41355b736d7df778fd873d80.tar.gz | |
LXCR-8047 finished corrections to the latest version of the arm guide
| -rw-r--r-- | doc/book-enea-nfv-access-guide/doc/benchmarks.xml | 112 | ||||
| -rw-r--r-- | doc/book-enea-nfv-access-guide/doc/dpdk.xml | 15 | ||||
| -rw-r--r-- | doc/book-enea-nfv-access-guide/doc/getting_started.xml | 150 | ||||
| -rw-r--r-- | doc/book-enea-nfv-access-guide/doc/hypervisor_virtualization.xml | 223 | ||||
| -rw-r--r-- | doc/book-enea-nfv-access-guide/doc/overview.xml | 88 | ||||
| -rw-r--r-- | doc/book-enea-nfv-access-guide/doc/ovs.xml | 15 | ||||
| -rw-r--r-- | doc/book-enea-nfv-access-guide/doc/using_nfv_access_sdks.xml | 22 |
7 files changed, 338 insertions, 287 deletions
diff --git a/doc/book-enea-nfv-access-guide/doc/benchmarks.xml b/doc/book-enea-nfv-access-guide/doc/benchmarks.xml index def0f89..c9f042e 100644 --- a/doc/book-enea-nfv-access-guide/doc/benchmarks.xml +++ b/doc/book-enea-nfv-access-guide/doc/benchmarks.xml | |||
| @@ -14,7 +14,7 @@ | |||
| 14 | <title>Hardware Setup</title> | 14 | <title>Hardware Setup</title> |
| 15 | 15 | ||
| 16 | <tgroup cols="2"> | 16 | <tgroup cols="2"> |
| 17 | <colspec align="left"/> | 17 | <colspec align="left" /> |
| 18 | 18 | ||
| 19 | <thead> | 19 | <thead> |
| 20 | <row> | 20 | <row> |
| @@ -124,7 +124,7 @@ rcu-nocbs=1-23 rcu_nocb_poll clocksource=tsc tsc=reliable nohpet \ | |||
| 124 | nosoftlockup audit=0 nmi_watchdog=0; setenv satapart 2; run bootsata' | 124 | nosoftlockup audit=0 nmi_watchdog=0; setenv satapart 2; run bootsata' |
| 125 | run boot_board</programlisting></para> | 125 | run boot_board</programlisting></para> |
| 126 | 126 | ||
| 127 | <para>Configure hugepages and set up DPDK:<programlisting>echo 4 > /proc/sys/vm/nr_hugepages | 127 | <para>Configure hugepages and set up the DPDK:<programlisting>echo 4 > /proc/sys/vm/nr_hugepages |
| 128 | modprobe vfio-pci | 128 | modprobe vfio-pci |
| 129 | ifconfig enP1p1s0f1 down | 129 | ifconfig enP1p1s0f1 down |
| 130 | dpdk-devbind -b vfio-pci 0001:01:00.1</programlisting>Run | 130 | dpdk-devbind -b vfio-pci 0001:01:00.1</programlisting>Run |
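A quick sanity check at this point (a sketch; the hugepage count and PCI address follow the example above) confirms the setup before any DPDK application is started:

    cat /proc/sys/vm/nr_hugepages                # expect the 4 pages reserved above
    dpdk-devbind --status | grep 0001:01:00.1    # the port should show drv=vfio-pci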
| @@ -148,9 +148,9 @@ nosoftlockup audit=0 nmi_watchdog=0; setenv satapart 2; run bootsata' | |||
| 148 | run boot_board</programlisting> | 148 | run boot_board</programlisting> |
| 149 | 149 | ||
| 150 | <para>It is expected that a NFV Access guest image is present on the | 150 | <para>It is expected that a NFV Access guest image is present on the |
| 151 | target. </para> | 151 | target.</para> |
| 152 | 152 | ||
| 153 | <para>Set up DPDK and configure the OVS bridge:<programlisting># Clean up old OVS old config | 153 | <para>Set up the DPDK and configure the OVS bridge:<programlisting># Clean up old OVS config
| 154 | killall ovsdb-server ovs-vswitchd | 154 | killall ovsdb-server ovs-vswitchd |
| 155 | rm -rf /etc/openvswitch/* | 155 | rm -rf /etc/openvswitch/* |
| 156 | rm -rf /var/run/openvswitch/* | 156 | rm -rf /var/run/openvswitch/* |
| @@ -439,14 +439,14 @@ start</programlisting><table> | |||
| 439 | 439 | ||
| 440 | <para>Start by following the steps below:</para> | 440 | <para>Start by following the steps below:</para> |
| 441 | 441 | ||
| 442 | <para>Boot the board using the following U-Boot commands: </para> | 442 | <para>Boot the board using the following U-Boot commands:</para> |
| 443 | 443 | ||
| 444 | <programlisting>setenv boot_board 'setenv userbootparams nohz_full=1-23 isolcpus=1-23 \ | 444 | <programlisting>setenv boot_board 'setenv userbootparams nohz_full=1-23 isolcpus=1-23 \ |
| 445 | rcu-nocbs=1-23 rcu_nocb_poll clocksource=tsc tsc=reliable nohpet \ | 445 | rcu-nocbs=1-23 rcu_nocb_poll clocksource=tsc tsc=reliable nohpet \ |
| 446 | nosoftlockup audit=0 nmi_watchdog=0; setenv satapart 2; run bootsata' | 446 | nosoftlockup audit=0 nmi_watchdog=0; setenv satapart 2; run bootsata' |
| 447 | run boot_board</programlisting> | 447 | run boot_board</programlisting> |
| 448 | 448 | ||
| 449 | <para>Configure hugepages and set up DPDK:<programlisting>echo 4 > /proc/sys/vm/nr_hugepages | 449 | <para>Configure hugepages and set up the DPDK:<programlisting>echo 4 > /proc/sys/vm/nr_hugepages |
| 450 | modprobe vfio-pci | 450 | modprobe vfio-pci |
| 451 | ifconfig enP1p1s0f1 down | 451 | ifconfig enP1p1s0f1 down |
| 452 | dpdk-devbind -b vfio-pci 0001:01:00.1</programlisting>Run | 452 | dpdk-devbind -b vfio-pci 0001:01:00.1</programlisting>Run |
| @@ -469,7 +469,7 @@ rcu-nocbs=1-23 rcu_nocb_poll clocksource=tsc tsc=reliable nohpet \ | |||
| 469 | nosoftlockup audit=0 nmi_watchdog=0; setenv satapart 2; run bootsata' | 469 | nosoftlockup audit=0 nmi_watchdog=0; setenv satapart 2; run bootsata' |
| 470 | run boot_board</programlisting> | 470 | run boot_board</programlisting> |
| 471 | 471 | ||
| 472 | <para>Set up DPDK and configure the OVS bridge:<programlisting># Clean up old OVS old config | 472 | <para>Set up the DPDK and configure the OVS bridge:<programlisting># Clean up old OVS config
| 473 | killall ovsdb-server ovs-vswitchd | 473 | killall ovsdb-server ovs-vswitchd |
| 474 | rm -rf /etc/openvswitch/* | 474 | rm -rf /etc/openvswitch/* |
| 475 | rm -rf /var/run/openvswitch/* | 475 | rm -rf /var/run/openvswitch/* |
| @@ -520,8 +520,8 @@ ovs-ofctl add-flow ovsbr0 in_port=2,action=output:1</programlisting>Import a | |||
| 520 | new console to the host and start the second Docker | 520 | new console to the host and start the second Docker |
| 521 | instance:<programlisting>docker run -v /var/run/openvswitch/:/var/run/openvswitch/ \ | 521 | instance:<programlisting>docker run -v /var/run/openvswitch/:/var/run/openvswitch/ \ |
| 522 | -v /dev/hugepages/:/dev/hugepages/ nfv_container /bin/bash</programlisting>In | 522 | -v /dev/hugepages/:/dev/hugepages/ nfv_container /bin/bash</programlisting>In |
| 523 | the second container start testpmd:<programlisting>testpmd -c 0x0F --file-prefix prog2 --socket-mem 512 --no-pci / | 523 | the second container start testpmd:<programlisting>testpmd -c 0x0F --file-prefix prog2 --socket-mem 512 --no-pci \ |
| 524 | --vdev=virtio_user0,path=/var/run/openvswitch/vhost-user2 / | 524 | --vdev=virtio_user0,path=/var/run/openvswitch/vhost-user2 \ |
| 525 | -d /usr/lib/librte_pmd_virtio.so.1.1 -- -i --disable-hw-vlan</programlisting>Start | 525 | -d /usr/lib/librte_pmd_virtio.so.1.1 -- -i --disable-hw-vlan</programlisting>Start |
| 526 | testpmd in the second Docker container:<programlisting>testpmd -c 0x30 -n 4 --file-prefix prog1 --socket-mem 512 \ | 526 | testpmd in the second Docker container:<programlisting>testpmd -c 0x30 -n 4 --file-prefix prog1 --socket-mem 512 \ |
| 527 | --no-pci --vdev=virtio_user0,path=/var/run/openvswitch/vhost-user2 \ | 527 | --no-pci --vdev=virtio_user0,path=/var/run/openvswitch/vhost-user2 \ |
| @@ -693,7 +693,7 @@ rcu-nocbs=1-23 rcu_nocb_poll clocksource=tsc tsc=reliable nohpet \ | |||
| 693 | nosoftlockup audit=0 nmi_watchdog=0; setenv satapart 2; run bootsata' | 693 | nosoftlockup audit=0 nmi_watchdog=0; setenv satapart 2; run bootsata' |
| 694 | run boot_board</programlisting></para> | 694 | run boot_board</programlisting></para> |
| 695 | 695 | ||
| 696 | <para>Configure hugepages and set up DPDK:<programlisting>echo 4 > /proc/sys/vm/nr_hugepages | 696 | <para>Configure hugepages and set up the DPDK:<programlisting>echo 4 > /proc/sys/vm/nr_hugepages |
| 697 | modprobe vfio-pci | 697 | modprobe vfio-pci |
| 698 | ifconfig enP1p1s0f1 down | 698 | ifconfig enP1p1s0f1 down |
| 699 | dpdk-devbind -b vfio-pci 0001:01:00.1</programlisting>Run | 699 | dpdk-devbind -b vfio-pci 0001:01:00.1</programlisting>Run |
| @@ -715,7 +715,7 @@ nosoftlockup audit=0 nmi_watchdog=0; setenv satapart 2; run bootsata' | |||
| 715 | run boot_board</programlisting>Kill unnecessary services: <programlisting>killall ovsdb-server ovs-vswitchd | 715 | run boot_board</programlisting>Kill unnecessary services: <programlisting>killall ovsdb-server ovs-vswitchd |
| 716 | rm -rf /etc/openvswitch/* | 716 | rm -rf /etc/openvswitch/* |
| 717 | rm -rf /var/run/openvswitch/* | 717 | rm -rf /var/run/openvswitch/* |
| 718 | mkdir -p /var/run/openvswitch</programlisting>Configure hugepages, set up | 718 | mkdir -p /var/run/openvswitch</programlisting>Configure hugepages, set up the |
| 719 | DPDK:<programlisting>echo 20 > /proc/sys/vm/nr_hugepages | 719 | DPDK:<programlisting>echo 20 > /proc/sys/vm/nr_hugepages |
| 720 | modprobe vfio-pci | 720 | modprobe vfio-pci |
| 721 | dpdk-devbind --bind=vfio-pci 0001:01:00.1</programlisting>Configure | 721 | dpdk-devbind --bind=vfio-pci 0001:01:00.1</programlisting>Configure |
| @@ -742,7 +742,7 @@ ovs-ofctl del-flows ovsbr0 | |||
| 742 | ovs-ofctl add-flow ovsbr0 in_port=1,action=output:2 | 742 | ovs-ofctl add-flow ovsbr0 in_port=1,action=output:2 |
| 743 | ovs-ofctl add-flow ovsbr0 in_port=2,action=output:1</programlisting>Create an | 743 | ovs-ofctl add-flow ovsbr0 in_port=2,action=output:1</programlisting>Create an |
| 744 | XML file with the content below (e.g. | 744 | XML file with the content below (e.g. |
| 745 | /home/root/guest.xml):<programlisting><domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'> | 745 | <filename>/home/root/guest.xml</filename>):<programlisting><domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'> |
| 746 | <name>nfv-ovs-vm</name> | 746 | <name>nfv-ovs-vm</name> |
| 747 | <uuid>ed204646-1ad5-11e7-93ae-92361f002671</uuid> | 747 | <uuid>ed204646-1ad5-11e7-93ae-92361f002671</uuid> |
| 748 | <memory unit='KiB'>4194304</memory> | 748 | <memory unit='KiB'>4194304</memory> |
| @@ -757,7 +757,9 @@ ovs-ofctl add-flow ovsbr0 in_port=2,action=output:1</programlisting>Create an | |||
| 757 | <os> | 757 | <os> |
| 758 | <type arch='aarch64' machine='virt,gic_version=3'>hvm</type> | 758 | <type arch='aarch64' machine='virt,gic_version=3'>hvm</type> |
| 759 | <kernel>Image</kernel> | 759 | <kernel>Image</kernel> |
| 760 | <cmdline>root=/dev/vda console=ttyAMA0,115200n8 maxcpus=24 coherent_pool=16M debug hugepagesz=512M hugepages=3 audit=0 isolcpus=1 nohz_full=1 rcu_nocbs=1 irqaffinity=0</cmdline> | 760 | <cmdline>root=/dev/vda console=ttyAMA0,115200n8 maxcpus=24 coherent_pool=16M \ |
| 761 | debug hugepagesz=512M hugepages=3 audit=0 isolcpus=1 nohz_full=1 rcu_nocbs=1 \ | ||
| 762 | irqaffinity=0</cmdline> | ||
| 761 | <boot dev='hd'/> | 763 | <boot dev='hd'/> |
| 762 | </os> | 764 | </os> |
| 763 | 765 | ||
| @@ -810,19 +812,20 @@ ovs-ofctl add-flow ovsbr0 in_port=2,action=output:1</programlisting>Create an | |||
| 810 | <qemu:arg value='type=vhost-user,id=hostnet0,chardev=charnet0'/> | 812 | <qemu:arg value='type=vhost-user,id=hostnet0,chardev=charnet0'/> |
| 811 | 813 | ||
| 812 | <qemu:arg value='-device'/> | 814 | <qemu:arg value='-device'/> |
| 813 | <qemu:arg value='virtio-net-pci,netdev=hostnet0,id=net0,mac=00:00:00:00:00:01,bus=pcie.0,addr=0x2'/> | 815 | <qemu:arg value='virtio-net-pci,netdev=hostnet0,id=net0,mac=00:00:00:00:00:01, |
| 816 | bus=pcie.0,addr=0x2'/> | ||
| 814 | </qemu:commandline> | 817 | </qemu:commandline> |
| 815 | </domain></programlisting></para> | 818 | </domain></programlisting></para> |
| 816 | 819 | ||
| 817 | <para>Start the virtual machine, by running:</para> | 820 | <para>Start the virtual machine by running:</para> |
| 818 | 821 | ||
| 819 | <para><programlisting>virsh create /home/root/guest.xml</programlisting></para> | 822 | <para><programlisting>virsh create /home/root/guest.xml</programlisting></para> |
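If the console does not come up in the next step, the domain state can be checked first (a sketch; virsh ships with the libvirt tooling described in the Hypervisor Virtualization chapter):

    virsh list --all    # nfv-ovs-vm should be listed as running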
| 820 | 823 | ||
| 821 | <para>Connect to the virtual machines console:</para> | 824 | <para>Connect to the virtual machine console:</para> |
| 822 | 825 | ||
| 823 | <para><programlisting>virsh console nfv-ovs-vm</programlisting></para> | 826 | <para><programlisting>virsh console nfv-ovs-vm</programlisting></para> |
| 824 | 827 | ||
| 825 | <para>Inside the VM, configure DPDK: <programlisting>ifconfig enp0s2 down | 828 | <para>Inside the VM, configure the DPDK: <programlisting>ifconfig enp0s2 down |
| 826 | echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode | 829 | echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode |
| 827 | modprobe vfio-pci | 830 | modprobe vfio-pci |
| 828 | dpdk-devbind -b vfio-pci 0000:00:02.0</programlisting>Inside the VM, start | 831 | dpdk-devbind -b vfio-pci 0000:00:02.0</programlisting>Inside the VM, start |
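Since the guest has no IOMMU, the unsafe no-IOMMU toggle above is what allows vfio-pci to bind; whether it took effect can be verified with (a sketch):

    cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode    # expect Y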
| @@ -1090,7 +1093,7 @@ show port stats 0</programlisting><table> | |||
| 1090 | <programlisting>setenv boot_board 'setenv userbootparams nohz_full=1-23 isolcpus=1-23 \ | 1093 | <programlisting>setenv boot_board 'setenv userbootparams nohz_full=1-23 isolcpus=1-23 \ |
| 1091 | rcu-nocbs=1-23 rcu_nocb_poll clocksource=tsc tsc=reliable nohpet \ | 1094 | rcu-nocbs=1-23 rcu_nocb_poll clocksource=tsc tsc=reliable nohpet \ |
| 1092 | nosoftlockup audit=0 nmi_watchdog=0; setenv satapart 2; run bootsata' | 1095 | nosoftlockup audit=0 nmi_watchdog=0; setenv satapart 2; run bootsata' |
| 1093 | run boot_board</programlisting>Configure hugepages and set up | 1096 | run boot_board</programlisting>Configure hugepages and set up the |
| 1094 | DPDK:<programlisting>echo 4 > /proc/sys/vm/nr_hugepages | 1097 | DPDK:<programlisting>echo 4 > /proc/sys/vm/nr_hugepages |
| 1095 | modprobe vfio-pci | 1098 | modprobe vfio-pci |
| 1096 | ifconfig enP1p1s0f1 down | 1099 | ifconfig enP1p1s0f1 down |
| @@ -1107,28 +1110,29 @@ dpdk-devbind -b vfio-pci 0001:01:00.1</programlisting>Run | |||
| 1107 | <para>Start by doing the following:</para> | 1110 | <para>Start by doing the following:</para> |
| 1108 | 1111 | ||
| 1109 | <para>SSD boot using the following <literal>grub.cfg</literal> | 1112 | <para>SSD boot using the following <literal>grub.cfg</literal> |
| 1110 | entry: <programlisting>linux (hd0,gpt3)/boot/bzImage root=/dev/sda3 ip=dhcp nohz_full=1-7 / | 1113 | entry: <programlisting>linux (hd0,gpt3)/boot/bzImage root=/dev/sda3 ip=dhcp nohz_full=1-7 \ |
| 1111 | isolcpus=1-7 rcu-nocbs=1-7 rcu_nocb_poll intel_pstate=disable / | 1114 | isolcpus=1-7 rcu-nocbs=1-7 rcu_nocb_poll intel_pstate=disable \ |
| 1112 | clocksource=tsc tsc=reliable nohpet nosoftlockup intel_idle.max_cstate=0 / | 1115 | clocksource=tsc tsc=reliable nohpet nosoftlockup intel_idle.max_cstate=0 \ |
| 1113 | processor.max_cstate=0 mce=ignore_ce audit=0 nmi_watchdog=0 iommu=pt / | 1116 | processor.max_cstate=0 mce=ignore_ce audit=0 nmi_watchdog=0 iommu=pt \ |
| 1114 | intel_iommu=on hugepagesz=1GB hugepages=8 default_hugepagesz=1GB / | 1117 | intel_iommu=on hugepagesz=1GB hugepages=8 default_hugepagesz=1GB \ |
| 1115 | hugepagesz=2M hugepages=2048 vfio_iommu_type1.allow_unsafe_interrupts=1</programlisting>Kill | 1118 | hugepagesz=2M hugepages=2048 vfio_iommu_type1.allow_unsafe_interrupts=1</programlisting>Kill |
| 1116 | Services:<programlisting>killall ovsdb-server ovs-vswitchd | 1119 | Services:<programlisting>killall ovsdb-server ovs-vswitchd |
| 1117 | rm -rf /etc/openvswitch/* | 1120 | rm -rf /etc/openvswitch/* |
| 1118 | mkdir -p /var/run/openvswitch</programlisting>Configure hugepages, set up | 1121 | mkdir -p /var/run/openvswitch</programlisting>Configure hugepages, set up the |
| 1119 | DPDK:<programlisting>echo 20 > /proc/sys/vm/nr_hugepages | 1122 | DPDK:<programlisting>echo 20 > /proc/sys/vm/nr_hugepages |
| 1120 | modprobe vfio-pci | 1123 | modprobe vfio-pci |
| 1121 | dpdk-devbind --bind=vfio-pci 0001:01:00.1</programlisting>Configure | 1124 | dpdk-devbind --bind=vfio-pci 0001:01:00.1</programlisting>Configure the |
| 1122 | OVS:<programlisting>export DB_SOCK=/var/run/openvswitch/db.sock | 1125 | OVS:<programlisting>export DB_SOCK=/var/run/openvswitch/db.sock |
| 1123 | ovsdb-tool create /etc/openvswitch/conf.db /usr/share/openvswitch/vswitch.ovsschema | 1126 | ovsdb-tool create /etc/openvswitch/conf.db /usr/share/openvswitch/vswitch.ovsschema |
| 1124 | ovsdb-server --remote=punix:$DB_SOCK --remote=db:Open_vSwitch,Open_vSwitch,manager_options \ | 1127 | ovsdb-server --remote=punix:$DB_SOCK \ |
| 1125 | --pidfile --detach | 1128 | --remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach |
| 1126 | ovs-vsctl --no-wait init | 1129 | ovs-vsctl --no-wait init |
| 1127 | ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x10 | 1130 | ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x10 |
| 1128 | ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0xc | 1131 | ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0xc |
| 1129 | ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem=2048 | 1132 | ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem=2048 |
| 1130 | ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true | 1133 | ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true |
| 1131 | ovs-vswitchd unix:$DB_SOCK --pidfile --detach --log-file=/var/log/openvswitch/ovs-vswitchd.log | 1134 | ovs-vswitchd unix:$DB_SOCK --pidfile --detach \ |
| 1135 | --log-file=/var/log/openvswitch/ovs-vswitchd.log | ||
| 1132 | 1136 | ||
| 1133 | ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev | 1137 | ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev |
| 1134 | ovs-vsctl add-port ovsbr0 dpdk0 -- set Interface dpdk0 type=dpdk \ | 1138 | ovs-vsctl add-port ovsbr0 dpdk0 -- set Interface dpdk0 type=dpdk \ |
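Once the bridge and its ports are created, the datapath can be inspected before adding flows (a sketch; the bridge name follows the listing above):

    ovs-vsctl show         # ovsbr0 with its dpdk and vhost-user ports
    ovs-appctl dpif/show   # port numbers as used by the in_port/output flows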
| @@ -1157,7 +1161,9 @@ ovs-ofctl add-flow ovsbr0 in_port=2,action=output:3</programlisting>Create an | |||
| 1157 | <os> | 1161 | <os> |
| 1158 | <type arch='aarch64' machine='virt,gic_version=3'>hvm</type> | 1162 | <type arch='aarch64' machine='virt,gic_version=3'>hvm</type> |
| 1159 | <kernel>Image</kernel> | 1163 | <kernel>Image</kernel> |
| 1160 | <cmdline>root=/dev/vda console=ttyAMA0,115200n8 maxcpus=24 coherent_pool=16M debug hugepagesz=512M hugepages=3 audit=0 isolcpus=1 nohz_full=1 rcu_nocbs=1 irqaffinity=0</cmdline> | 1164 | <cmdline>root=/dev/vda console=ttyAMA0,115200n8 maxcpus=24 coherent_pool=16M \ |
| 1165 | debug hugepagesz=512M hugepages=3 audit=0 isolcpus=1 nohz_full=1 rcu_nocbs=1 \ | ||
| 1166 | irqaffinity=0</cmdline> | ||
| 1161 | <boot dev='hd'/> | 1167 | <boot dev='hd'/> |
| 1162 | </os> | 1168 | </os> |
| 1163 | 1169 | ||
| @@ -1210,21 +1216,20 @@ ovs-ofctl add-flow ovsbr0 in_port=2,action=output:3</programlisting>Create an | |||
| 1210 | <qemu:arg value='type=vhost-user,id=hostnet0,chardev=charnet0'/> | 1216 | <qemu:arg value='type=vhost-user,id=hostnet0,chardev=charnet0'/> |
| 1211 | 1217 | ||
| 1212 | <qemu:arg value='-device'/> | 1218 | <qemu:arg value='-device'/> |
| 1213 | <qemu:arg value='virtio-net-pci,netdev=hostnet0,id=net0,mac=00:00:00:00:00:01,bus=pcie.0,addr=0x2'/> | 1219 | <qemu:arg value='virtio-net-pci,netdev=hostnet0,id=net0,mac=00:00:00:00:00:01,
| 1220 | bus=pcie.0,addr=0x2'/> | ||
| 1214 | </qemu:commandline> | 1221 | </qemu:commandline> |
| 1215 | </domain> | 1222 | </domain></programlisting></para> |
| 1216 | </programlisting></para> | ||
| 1217 | 1223 | ||
| 1218 | <para>Connect to the first virtual machines console, by | 1224 | <para>The first virtual machine shall be called VM1. Connect to the |
| 1219 | running:</para> | 1225 | first virtual machine console by running:</para>
| 1220 | 1226 | ||
| 1221 | <para><programlisting>virsh console nfv-ovs-vm1</programlisting></para> | 1227 | <para><programlisting>virsh console nfv-ovs-vm1</programlisting></para> |
| 1222 | 1228 | ||
| 1223 | <para>The first virtual machine shall be called VM1.</para> | 1229 | <para>Connect to Target 2 through a new <literal>SSH</literal> |
| 1224 | 1230 | session, and launch a second VM by creating another XML file and | |
| 1225 | <para>Connect to Target 2 through a new SSH session and run launch a | 1231 | running <command>virsh create |
| 1226 | second VM by creating another XML file and running <command>virsh | 1232 | <XML_FILE2></command>:<programlisting><domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
| 1227 | create <XML_FILE2></command><programlisting><domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'> | ||
| 1228 | <name>nfv-ovs-vm2</name> | 1233 | <name>nfv-ovs-vm2</name> |
| 1229 | <uuid>ed204646-1ad5-11e7-93ae-92361f002623</uuid> | 1234 | <uuid>ed204646-1ad5-11e7-93ae-92361f002623</uuid> |
| 1230 | <memory unit='KiB'>4194304</memory> | 1235 | <memory unit='KiB'>4194304</memory> |
| @@ -1239,7 +1244,9 @@ ovs-ofctl add-flow ovsbr0 in_port=2,action=output:3</programlisting>Create an | |||
| 1239 | <os> | 1244 | <os> |
| 1240 | <type arch='aarch64' machine='virt,gic_version=3'>hvm</type> | 1245 | <type arch='aarch64' machine='virt,gic_version=3'>hvm</type> |
| 1241 | <kernel>Image</kernel> | 1246 | <kernel>Image</kernel> |
| 1242 | <cmdline>root=/dev/vda console=ttyAMA0,115200n8 maxcpus=24 coherent_pool=16M debug hugepagesz=512M hugepages=3 audit=0 isolcpus=1 nohz_full=1 rcu_nocbs=1 irqaffinity=0</cmdline> | 1247 | <cmdline>root=/dev/vda console=ttyAMA0,115200n8 maxcpus=24 coherent_pool=16M \ |
| 1248 | debug hugepagesz=512M hugepages=3 audit=0 isolcpus=1 nohz_full=1 rcu_nocbs=1 \ | ||
| 1249 | irqaffinity=0</cmdline> | ||
| 1243 | <boot dev='hd'/> | 1250 | <boot dev='hd'/> |
| 1244 | </os> | 1251 | </os> |
| 1245 | 1252 | ||
| @@ -1292,26 +1299,24 @@ ovs-ofctl add-flow ovsbr0 in_port=2,action=output:3</programlisting>Create an | |||
| 1292 | <qemu:arg value='type=vhost-user,id=hostnet0,chardev=charnet1'/> | 1299 | <qemu:arg value='type=vhost-user,id=hostnet0,chardev=charnet1'/> |
| 1293 | 1300 | ||
| 1294 | <qemu:arg value='-device'/> | 1301 | <qemu:arg value='-device'/> |
| 1295 | <qemu:arg value='virtio-net-pci,netdev=hostnet0,id=net0,mac=00:00:00:00:00:02,bus=pcie.0,addr=0x2'/> | 1302 | <qemu:arg value='virtio-net-pci,netdev=hostnet0,id=net0,mac=00:00:00:00:00:02,
| 1303 | bus=pcie.0,addr=0x2'/> | ||
| 1296 | </qemu:commandline> | 1304 | </qemu:commandline> |
| 1297 | </domain> | 1305 | </domain></programlisting></para> |
| 1298 | </programlisting></para> | ||
| 1299 | 1306 | ||
| 1300 | <para>Connect to the second virtual machines console, by | 1307 | <para>The second virtual machine shall be called VM2. Connect to the |
| 1301 | running:</para> | 1308 | second virtual machine console by running:</para>
| 1302 | 1309 | ||
| 1303 | <para><programlisting>virsh console nfv-ovs-vm2</programlisting></para> | 1310 | <para><programlisting>virsh console nfv-ovs-vm2</programlisting></para> |
| 1304 | 1311 | ||
| 1305 | <para>The second virtual machine shall be called VM2.</para> | 1312 | <para>Configure the DPDK inside VM1:<programlisting>ifconfig enp0s2 down |
| 1306 | |||
| 1307 | <para>Configure DPDK inside VM1:<programlisting>ifconfig enp0s2 down | ||
| 1308 | echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode | 1313 | echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode |
| 1309 | modprobe vfio-pci | 1314 | modprobe vfio-pci |
| 1310 | dpdk-devbind -b vfio-pci 0000:00:02.0</programlisting>Run testpmd inside | 1315 | dpdk-devbind -b vfio-pci 0000:00:02.0</programlisting>Run testpmd inside |
| 1311 | VM1:<programlisting>testpmd -v -c 0x3 -n 4 -d /usr/lib/librte_pmd_virtio.so.1.1 \ | 1316 | VM1:<programlisting>testpmd -v -c 0x3 -n 4 -d /usr/lib/librte_pmd_virtio.so.1.1 \ |
| 1312 | -w 0000:00:02.0 -- -i --disable-hw-vlan-filter \ | 1317 | -w 0000:00:02.0 -- -i --disable-hw-vlan-filter \ |
| 1313 | --no-flush-rx --port-topology=chained</programlisting>Start testpmd inside | 1318 | --no-flush-rx --port-topology=chained</programlisting>Start testpmd inside |
| 1314 | VM1:<programlisting>start</programlisting>Configure DPDK inside | 1319 | VM1:<programlisting>start</programlisting>Configure the DPDK inside |
| 1315 | VM2:<programlisting>ifconfig enp0s2 down | 1320 | VM2:<programlisting>ifconfig enp0s2 down |
| 1316 | echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode | 1321 | echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode |
| 1317 | modprobe vfio-pci | 1322 | modprobe vfio-pci |
| @@ -1321,8 +1326,8 @@ dpdk-devbind -b vfio-pci 0000:00:02.0</programlisting>Run testpmd inside | |||
| 1321 | --no-flush-rx --port-topology=chained</programlisting>Set VM2 for | 1326 | --no-flush-rx --port-topology=chained</programlisting>Set VM2 for |
| 1322 | termination and start testpmd:<programlisting>set fwd rxonly | 1327 | termination and start testpmd:<programlisting>set fwd rxonly |
| 1323 | start</programlisting>On target 1, start pktgen traffic:<programlisting>start 0</programlisting>Use | 1328 | start</programlisting>On target 1, start pktgen traffic:<programlisting>start 0</programlisting>Use |
| 1324 | this command to refresh testpmd display in VM1 and VM2 and note the | 1329 | this command to refresh the testpmd display in VM1 and VM2 and note |
| 1325 | highest values:<programlisting>show port stats 0</programlisting>To | 1330 | the highest values:<programlisting>show port stats 0</programlisting>To |
| 1326 | stop traffic from pktgen, in order to choose a different frame | 1331 | stop traffic from pktgen, in order to choose a different frame |
| 1327 | size:<programlisting>stop 0</programlisting>To clear numbers in | 1332 | size:<programlisting>stop 0</programlisting>To clear numbers in |
| 1328 | testpmd:<programlisting>clear port stats | 1333 | testpmd:<programlisting>clear port stats |
| @@ -1337,7 +1342,8 @@ show port stats 0</programlisting>For VM1, we record the stats relevant for | |||
| 1337 | 1342 | ||
| 1338 | <para>Only Rx-pps and Tx-pps numbers are important here; they change | 1343 | <para>Only Rx-pps and Tx-pps numbers are important here; they change
| 1339 | every time stats are displayed as long as there is traffic. Run the | 1344 | every time stats are displayed as long as there is traffic. Run the |
| 1340 | command a few times and pick the best (maximum) values seen.</para> | 1345 | command a few times and pick the best (maximum) values |
| 1346 | observed.</para> | ||
| 1341 | 1347 | ||
| 1342 | <para>For VM2, we record the stats relevant for <emphasis | 1348 | <para>For VM2, we record the stats relevant for <emphasis |
| 1343 | role="bold">termination</emphasis>:</para> | 1349 | role="bold">termination</emphasis>:</para> |
| @@ -1473,4 +1479,4 @@ show port stats 0</programlisting>For VM1, we record the stats relevant for | |||
| 1473 | </section> | 1479 | </section> |
| 1474 | </section> | 1480 | </section> |
| 1475 | </section> | 1481 | </section> |
| 1476 | </chapter> | 1482 | </chapter> \ No newline at end of file |
diff --git a/doc/book-enea-nfv-access-guide/doc/dpdk.xml b/doc/book-enea-nfv-access-guide/doc/dpdk.xml index 736c8f1..bc3f479 100644 --- a/doc/book-enea-nfv-access-guide/doc/dpdk.xml +++ b/doc/book-enea-nfv-access-guide/doc/dpdk.xml | |||
| @@ -67,6 +67,10 @@ mount -t hugetlbfs nodev /mnt/huge</programlisting> | |||
| 67 | url="http://dpdk.org/doc/guides-17.08/tools/devbind.html">http://dpdk.org/doc/guides-17.08/tools/devbind.html</ulink> | 67 | url="http://dpdk.org/doc/guides-17.08/tools/devbind.html">http://dpdk.org/doc/guides-17.08/tools/devbind.html</ulink> |
| 68 | for more information.</para> | 68 | for more information.</para> |
| 69 | </listitem> | 69 | </listitem> |
| 70 | |||
| 71 | <listitem> | ||
| 72 | <para>VFIO-NOIOMMU mode needs to be set if running in a VM: <programlisting>echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode</programlisting></para>
| 73 | </listitem> | ||
| 70 | </orderedlist> | 74 | </orderedlist> |
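Both module steps can also be combined when loading vfio (a sketch; enable_unsafe_noiommu_mode is the same vfio parameter written through sysfs above):

    modprobe vfio enable_unsafe_noiommu_mode=1
    modprobe vfio-pci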
| 71 | 75 | ||
| 72 | <para>To print the current status of all known network | 76 | <para>To print the current status of all known network |
| @@ -88,14 +92,15 @@ mount -t hugetlbfs nodev /mnt/huge</programlisting> | |||
| 88 | <orderedlist> | 92 | <orderedlist> |
| 89 | <listitem> | 93 | <listitem> |
| 90 | <para>Set up DPDK on both boards, following the instructions in <xref | 94 | <para>Set up DPDK on both boards, following the instructions in <xref
| 91 | linkend="dpdk-setup"/>.</para> | 95 | linkend="dpdk-setup" />.</para> |
| 92 | </listitem> | 96 | </listitem> |
| 93 | 97 | ||
| 94 | <listitem> | 98 | <listitem> |
| 95 | <para>On board 1, start the Pktgen application:</para> | 99 | <para>On board 1, start the Pktgen application:</para> |
| 96 | 100 | ||
| 97 | <programlisting>cd /usr/share/apps/pktgen/ | 101 | <programlisting>cd /usr/share/apps/pktgen/ |
| 98 | ./pktgen -v -c 0x7 -n 4 --proc-type auto -d /usr/lib/librte_pmd_thunderx_nicvf.so.1.1 -w <PCI device number> -- -P -m "[1:2].0"</programlisting> | 102 | ./pktgen -v -c 0x7 -n 4 --proc-type auto -d \ |
| 103 | /usr/lib/librte_pmd_thunderx_nicvf.so.1.1 -w <PCI device number> -- -P -m "[1:2].0"</programlisting> | ||
| 99 | 104 | ||
| 100 | <para>In the Pktgen console, run:</para> | 105 | <para>In the Pktgen console, run:</para> |
| 101 | 106 | ||
| @@ -108,8 +113,8 @@ mount -t hugetlbfs nodev /mnt/huge</programlisting> | |||
| 108 | <listitem> | 113 | <listitem> |
| 109 | <para>On board 2, start the testpmd application:</para> | 114 | <para>On board 2, start the testpmd application:</para> |
| 110 | 115 | ||
| 111 | <programlisting>testpmd -v -c 0x3 -n 4 -d /usr/lib/librte_pmd_thunderx_nicvf.so.1.1 -w <PCI device number> -- -i --disable-hw-vlan-filter --no-flush-rx --port-topology=chained | 116 | <programlisting>testpmd -v -c 0x3 -n 4 -d /usr/lib/librte_pmd_thunderx_nicvf.so.1.1 \
| 112 | </programlisting> | 117 | -w <PCI device number> -- -i --disable-hw-vlan-filter --no-flush-rx --port-topology=chained</programlisting>
| 113 | 118 | ||
| 114 | <para>For more information, refer to the testpmd application user | 119 | <para>For more information, refer to the testpmd application user |
| 115 | guide: <ulink | 120 | guide: <ulink |
| @@ -117,4 +122,4 @@ mount -t hugetlbfs nodev /mnt/huge</programlisting> | |||
| 117 | </listitem> | 122 | </listitem> |
| 118 | </orderedlist> | 123 | </orderedlist> |
| 119 | </section> | 124 | </section> |
| 120 | </chapter> | 125 | </chapter> \ No newline at end of file |
diff --git a/doc/book-enea-nfv-access-guide/doc/getting_started.xml b/doc/book-enea-nfv-access-guide/doc/getting_started.xml index cb1de6d..74cad71 100644 --- a/doc/book-enea-nfv-access-guide/doc/getting_started.xml +++ b/doc/book-enea-nfv-access-guide/doc/getting_started.xml | |||
| @@ -1,4 +1,4 @@ | |||
| 1 | <?xml version="1.0" encoding="UTF-8"?> | 1 | <?xml version="1.0" encoding="ISO-8859-1"?> |
| 2 | <!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN" | 2 | <!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN" |
| 3 | "http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd"> | 3 | "http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd"> |
| 4 | <chapter id="plat-release-content"> | 4 | <chapter id="plat-release-content"> |
| @@ -96,71 +96,96 @@ | |||
| 96 | /* GRUB EFI file */</programlisting> | 96 | /* GRUB EFI file */</programlisting> |
| 97 | </section> | 97 | </section> |
| 98 | 98 | ||
| 99 | <section condition="" id="prebuilt-artifacts"> | 99 | <section id="prebuilt-artifacts"> |
| 100 | <title>How to use the Prebuilt Artifacts</title> | 100 | <title>How to use the Prebuilt Artifacts</title> |
| 101 | 101 | ||
| 102 | <section id="sysshell_config"> | 102 | <section id="sysshell_config"> |
| 103 | <title>Booting NFV Access to RAM</title> | 103 | <title>Booting NFV Access to RAM</title> |
| 104 | 104 | ||
| 105 | <para>NFV Access can be booted on target using the RAMDISK images. | 105 | <para>Enea NFV Access can be booted on a target using the RAMDISK |
| 106 | Following is described how to set the environment, configure the | 106 | images. How to set up the environment, configure the bootloader (U-Boot),
| 107 | bootloader(U-Boot), load and boot the NFV Access on target. Please check | 107 | load and boot on the target, is detailed below. Please check the <link |
| 108 | the <link linkend="boot_prereq">Prerequisites</link> subchapter before | 108 | linkend="boot_prereq">Prerequisites</link> section before starting the |
| 109 | starting boot process.</para> | 109 | boot process.</para> |
| 110 | 110 | ||
| 111 | <para>Connect to target over serial and stop default boot process in | 111 | <para><emphasis role="bold">Setting up the environment and booting on |
| 112 | U-Boot command line interface. Set U-Boot network configuration for the | 112 | the target</emphasis></para> |
| 113 | ethernet port which connects the target to the network. If target is not | 113 | |
| 114 | connected on a specific network, Enea provides DHCP and TFTP servers. | 114 | <itemizedlist> |
| 115 | Please see <link linkend="boot_docker">Docker Installer</link> | 115 | <listitem> |
| 116 | subchapter about how to install them on a development host.</para> | 116 | <para>Connect to the target over serial and stop the default boot |
| 117 | 117 | process in the U-Boot command line interface.</para> | |
| 118 | <programlisting>> setenv ethact <vnic0/vnic1> | 118 | </listitem> |
| 119 | |||
| 120 | <listitem> | ||
| 121 | <para>Set up the U-Boot network configuration for the ethernet port
| 122 | that connects the target to the network.</para>
| 123 | </listitem> | ||
| 124 | |||
| 125 | <listitem> | ||
| 126 | <para>If the target is not connected to a specific network, Enea
| 127 | provides DHCP and TFTP servers. Please see the <link
| 128 | linkend="boot_docker">Docker Installer</link> section for how to
| 129 | install these two servers on a development host.</para>
| 130 | |||
| 131 | <programlisting>> setenv ethact <vnic0/vnic1> | ||
| 119 | > setenv gatewayip <GatewayIP> | 132 | > setenv gatewayip <GatewayIP> |
| 120 | > setenv serverip <TFTPserverIP> | 133 | > setenv serverip <TFTPserverIP> |
| 121 | > setenv netmask <netmask> | 134 | > setenv netmask <netmask> |
| 122 | > setenv ipaddr <target IP></programlisting> | 135 | > setenv ipaddr <target IP></programlisting> |
| 136 | </listitem> | ||
| 123 | 137 | ||
| 124 | <para>Boot NFV Access images:</para> | 138 | <listitem> |
| 139 | <para>Boot the Enea NFV Access images:</para> | ||
| 125 | 140 | ||
| 126 | <programlisting>> tftpboot $kernel_addr Image | 141 | <programlisting>> tftpboot $kernel_addr Image |
| 127 | > setenv rootfs_addr 0x60000000 | 142 | > setenv rootfs_addr 0x60000000 |
| 128 | > tftpboot $rootfs_addr enea-nfv-access-cn8304.ext4.gz.u-boot | 143 | > tftpboot $rootfs_addr enea-nfv-access-cn8304.ext4.gz.u-boot |
| 129 | > setenv bootargs root=/dev/ram0 rw ramdisk_size=1000000 console=ttyAMA0,115200n8 \ | 144 | > setenv bootargs root=/dev/ram0 rw ramdisk_size=1000000 console=ttyAMA0,115200n8 \
| 130 | earlycon=pl011,0x87e028000000 coherent_pool=16M | 145 | earlycon=pl011,0x87e028000000 coherent_pool=16M
| 131 | > booti $kernel_addr $rootfs_addr $fdtcontroladdr</programlisting> | 146 | > booti $kernel_addr $rootfs_addr $fdtcontroladdr</programlisting>
| 147 | </listitem> | ||
| 148 | </itemizedlist> | ||
| 132 | 149 | ||
| 133 | <section id="boot_prereq"> | 150 | <section id="boot_prereq"> |
| 134 | <title>Prerequisites:</title> | 151 | <title>Prerequisites</title> |
| 152 | |||
| 153 | <para>The following are required in order to successfully
| 154 | boot Enea NFV Access to RAM:</para> | ||
| 135 | 155 | ||
| 136 | <itemizedlist> | 156 | <itemizedlist> |
| 137 | <listitem> | 157 | <listitem> |
| 138 | <para>NFV Acccess images - see NFV Access release content</para> | 158 | <para>Enea NFV Access images - see the <xi:include
| 159 | href="../../s_docbuild/olinkdb/pardoc-common.xml" | ||
| 160 | xmlns:xi="http://www.w3.org/2001/XInclude" | ||
| 161 | xpointer="element(book_enea_nfv_access_release_info/1)" />, under | ||
| 162 | section <emphasis role="bold">Release Content</emphasis>, for | ||
| 163 | details on the images provided.</para> | ||
| 139 | </listitem> | 164 | </listitem> |
| 140 | </itemizedlist> | 165 | </itemizedlist> |
| 141 | 166 | ||
| 142 | <itemizedlist> | 167 | <itemizedlist> |
| 143 | <listitem> | 168 | <listitem> |
| 144 | <para>DHCP server - If the board is not connected into a specific | 169 | <para>DHCP server - If the board is not connected to a specific |
| 145 | network Enea provides a docker image with DHCP server. Please see | 170 | network, Enea provides a Docker image with a DHCP server. Please |
| 146 | <link linkend="boot_docker">Docker Installer</link> | 171 | see the <link linkend="boot_docker">Docker Installer</link> |
| 147 | subchapter.</para> | 172 | section for further details.</para> |
| 148 | </listitem> | 173 | </listitem> |
| 149 | </itemizedlist> | 174 | </itemizedlist> |
| 150 | 175 | ||
| 151 | <itemizedlist> | 176 | <itemizedlist> |
| 152 | <listitem> | 177 | <listitem> |
| 153 | <para>TFTP server - If the board is not connected into a specific | 178 | <para>TFTP server - If the board is not connected to a specific |
| 154 | network Enea provides a docker image with TFTP server. Please see | 179 | network, Enea provides a Docker image with a TFTP server. Please |
| 155 | <link linkend="boot_docker">Docker Installer</link> | 180 | see the <link linkend="boot_docker">Docker Installer</link> |
| 156 | subchapter.</para> | 181 | section for further details.</para> |
| 157 | </listitem> | 182 | </listitem> |
| 158 | </itemizedlist> | 183 | </itemizedlist> |
| 159 | 184 | ||
| 160 | <itemizedlist> | 185 | <itemizedlist> |
| 161 | <listitem> | 186 | <listitem> |
| 162 | <para>The board with U-Boot connected to a development host over | 187 | <para>The reference board, with U-Boot connected to a development |
| 163 | serial and ethernet.</para> | 188 | host over serial and ethernet.</para> |
| 164 | </listitem> | 189 | </listitem> |
| 165 | </itemizedlist> | 190 | </itemizedlist> |
| 166 | </section> | 191 | </section> |
| @@ -168,41 +193,52 @@ | |||
| 168 | <section id="boot_docker"> | 193 | <section id="boot_docker"> |
| 169 | <title>Docker Installer</title> | 194 | <title>Docker Installer</title> |
| 170 | 195 | ||
| 171 | <para>Enea provides a suite of tools in order to create a complete | 196 | <para>A suite of tools is provided in order to create a complete boot
| 172 | boot process setup. System requirements for the development host are | 197 | process setup. System requirements for the development host are |
| 173 | detailed in the Enea NFV Access Release Information document included | 198 | detailed in the <xi:include |
| 174 | with this release. All tools are leveraged by a docker image. The | 199 | href="../../s_docbuild/olinkdb/pardoc-common.xml" |
| 175 | docker image must be built and ran on development host. DHCP and TFTP | 200 | xmlns:xi="http://www.w3.org/2001/XInclude" |
| 176 | servers will be installed and configured in order to facilitate a | 201 | xpointer="element(book_enea_nfv_access_release_info/1)" /> included |
| 177 | RAMDISK boot process on the target. Following is an example of how to | 202 | with this release. </para> |
| 178 | build and run Enea provided docker image. In this case host is | 203 | |
| 179 | directly connected to target on eth1. For more details about docker | 204 | <para>All tools are leveraged by a Docker image, which must be built |
| 180 | installer please see README file from docker installater | 205 | and run on a development host. DHCP and TFTP servers will be installed |
| 181 | folder.</para> | 206 | and configured in order to facilitate a RAMDISK boot process on the |
| 207 | target. </para> | ||
| 208 | |||
| 209 | <para>The example procedure below details how to build and run a | ||
| 210 | provided Docker image. In this case, the host is directly connected to | ||
| 211 | a target on <literal>eth1</literal>. For more details about the Docker | ||
| 212 | installer, please see the <filename>README</filename> file from the Docker
| 213 | installer folder. </para> | ||
| 214 | |||
| 215 | <para>Prior to using this example setup, U-Boot on the target side
| 216 | needs to be configured with these values before booting Linux. This
| 217 | example assumes that the first eth port (<literal>vnic0</literal>) is
| 218 | connected to the network:</para>
| 219 | |||
| 220 | <programlisting>> setenv ethact vnic0 | ||
| 221 | > setenv gatewayip 192.168.1.1 | ||
| 222 | > setenv serverip 192.168.1.1 | ||
| 223 | > setenv netmask 255.255.255.0 | ||
| 224 | > setenv ipaddr 192.168.1.150</programlisting> | ||
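If the configuration should survive a board reset, it can optionally be persisted (a sketch; this assumes the board has persistent U-Boot environment storage):

    > saveenv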
| 225 | |||
| 226 | <para><emphasis role="bold">How to build and run a provided Docker | ||
| 227 | image</emphasis></para> | ||
| 182 | 228 | ||
| 183 | <programlisting>> cd nfv-access-tools/nfv-installer/docker-pxe-ramboot/ | 229 | <programlisting>> cd nfv-access-tools/nfv-installer/docker-pxe-ramboot/ |
| 184 | > mkdir -p ./images | 230 | > mkdir -p ./images |
| 185 | > cp <NFVAccessReleasePath>/Image $(pwd)/images/Image | 231 | > cp <NFVAccessReleasePath>/Image $(pwd)/images/Image |
| 186 | > cp <NFVAccessReleasePath>/enea-nfv-access-cn8304.ext4.gz.u-boot \ | 232 | > cp <NFVAccessReleasePath>/enea-nfv-access-cn8304.ext4.gz.u-boot \ |
| 187 | $(pwd)/images/enea-nfv-access.ext4.gz.u-boot</programlisting> | 233 | $(pwd)/images/enea-nfv-access.ext4.gz.u-boot |
| 188 | 234 | > docker build . -t el_installer | |
| 189 | <programlisting>> docker build . -t el_installer</programlisting> | 235 | > docker run -it --net=host --privileged \ |
| 190 | |||
| 191 | <programlisting>> docker run -it --net=host --privileged \ | ||
| 192 | -v $(pwd)/dhcpd.conf:/etc/dhcp/dhcpd.conf \ | 236 | -v $(pwd)/dhcpd.conf:/etc/dhcp/dhcpd.conf \ |
| 193 | -v $(pwd)/images/Image:/var/lib/tftpboot/Image \ | 238 | -v $(pwd)/images/enea-nfv-access.ext4.gz.u-boot:\
| 194 | -v $(pwd)/images/enea-nfv-access.ext4.gz.u-boot:/var/lib/tftpboot/enea-nfv-access.ext4.gz.u-boot \ | 239 | /var/lib/tftpboot/enea-nfv-access.ext4.gz.u-boot \
| 195 | el_installer eth1</programlisting> | 240 | el_installer eth1</programlisting>
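Whether both servers came up can be checked from the host (a sketch; the container ID is whatever docker ps reports for el_installer):

    docker ps                    # the el_installer container should be running
    docker logs <container ID>   # dhcpd and tftpd startup messages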
| 197 | <para>Using this setup, on target side, U-Boot need to be configured | 241 | el_installer eth1</programlisting> |
| 198 | as following before starting to boot Linux. It was considered that the | ||
| 199 | first eth port(vnic0) is connected to network:</para> | ||
| 200 | |||
| 201 | <programlisting>> setenv ethact vnic0 | ||
| 202 | > setenv gatewayip 192.168.1.1 | ||
| 203 | > setenv serverip 192.168.1.1 | ||
| 204 | > setenv netmask 255.255.255.0 | ||
| 205 | > setenv ipaddr 192.168.1.150</programlisting> | ||
| 206 | </section> | 242 | </section> |
| 207 | </section> | 243 | </section> |
| 208 | </section> | 244 | </section> |
diff --git a/doc/book-enea-nfv-access-guide/doc/hypervisor_virtualization.xml b/doc/book-enea-nfv-access-guide/doc/hypervisor_virtualization.xml index 76a2568..69057f3 100644 --- a/doc/book-enea-nfv-access-guide/doc/hypervisor_virtualization.xml +++ b/doc/book-enea-nfv-access-guide/doc/hypervisor_virtualization.xml | |||
| @@ -4,8 +4,8 @@ | |||
| 4 | <chapter id="hypervisor_virt"> | 4 | <chapter id="hypervisor_virt"> |
| 5 | <title>Hypervisor Virtualization</title> | 5 | <title>Hypervisor Virtualization</title> |
| 6 | 6 | ||
| 7 | <para>The KVM, Kernel-based Virtual Machine, is a virtualization | 7 | <para>The Kernel-based Virtual Machine (KVM) is a virtualization
| 8 | infrastructure for the Linux kernel which turns it into a hypervisor. KVM | 8 | infrastructure for the Linux kernel, which turns it into a hypervisor. KVM |
| 9 | requires a processor with a hardware virtualization extension.</para> | 9 | requires a processor with a hardware virtualization extension.</para> |
| 10 | 10 | ||
| 11 | <para>KVM uses QEMU, an open source machine emulator and virtualizer, to | 11 | <para>KVM uses QEMU, an open source machine emulator and virtualizer, to |
| @@ -18,15 +18,15 @@ | |||
| 18 | 18 | ||
| 19 | <para>QEMU can make use of KVM when running a target architecture that is | 19 | <para>QEMU can make use of KVM when running a target architecture that is |
| 20 | the same as the host architecture. For instance, when running | 20 | the same as the host architecture. For instance, when running |
| 21 | qemu-system-aarch64 on an aarch64 compatible processor (containing | 21 | <filename>qemu-system-aarch64</filename> on an <literal>aarch64</literal> |
| 22 | virtualization extensions AMD-V or Intel VT), you can take advantage of | 22 | compatible processor with Hardware Virtualization support enabled, you can |
| 23 | the KVM acceleration, giving you benefit for your host and your guest | 23 | take advantage of the KVM acceleration, an added benefit for your host and |
| 24 | system.</para> | 24 | guest system.</para> |
| 25 | 25 | ||
| 26 | <para>Enea NFV Access includes an optimized version of QEMU with KVM-only | 26 | <para>Enea NFV Access includes an optimized version of QEMU with KVM-only |
| 27 | support. To use KVM pass<command> --enable-kvm</command> to QEMU.</para> | 27 | support. To use KVM pass <command>--enable-kvm</command> to QEMU.</para> |
| 28 | 28 | ||
| 29 | <para>The following is an example of starting a guest:</para> | 29 | <para>The following is an example of starting a guest system:</para> |
| 30 | 30 | ||
| 31 | <programlisting>taskset -c 0,1 qemu-system-aarch64 \ | 31 | <programlisting>taskset -c 0,1 qemu-system-aarch64 \ |
| 32 | -cpu host -machine virt,gic_version=3 -smp cores=2,sockets=1 \ | 32 | -cpu host -machine virt,gic_version=3 -smp cores=2,sockets=1 \ |
| @@ -41,63 +41,64 @@ | |||
| 41 | </section> | 41 | </section> |
| 42 | 42 | ||
| 43 | <section id="qemu_boot"> | 43 | <section id="qemu_boot"> |
| 44 | <title>Main QEMU boot options</title> | 44 | <title>Primary QEMU boot options</title> |
| 45 | 45 | ||
| 46 | <para>Below are detailed all the pertinent boot options for the QEMU | 46 | <para>Below are detailed all the pertinent boot options for the QEMU |
| 47 | emulator:</para> | 47 | emulator:</para> |
| 48 | 48 | ||
| 49 | <itemizedlist> | 49 | <itemizedlist> |
| 50 | <listitem> | 50 | <listitem> |
| 51 | <para>SMP - at least 2 cores should be enabled in order to isolate | 51 | <para>SMP - at least 2 cores should be enabled in order to isolate the |
| 52 | application(s) running in virtual machine(s) on specific cores for | 52 | application(s) running in the virtual machine(s), on specific cores, |
| 53 | better performance.</para> | 53 | for better performance:</para> |
| 54 | 54 | ||
| 55 | <programlisting>-smp cores=2,threads=1,sockets=1 \</programlisting> | 55 | <programlisting>-smp cores=2,threads=1,sockets=1 \</programlisting> |
| 56 | </listitem> | 56 | </listitem> |
| 57 | 57 | ||
| 58 | <listitem> | 58 | <listitem> |
| 59 | <para>CPU affinity - associate virtual CPUs with physical CPUs and | 59 | <para>CPU affinity - associate virtual CPUs with physical CPUs, and |
| 60 | optionally assign a default real time priority to the virtual CPU | 60 | optionally assign a default realtime priority to the virtual CPU |
| 61 | process in the host kernel. This option allows you to start qemu vCPUs | 61 | process in the host kernel. This option allows you to start QEMU vCPUs |
| 62 | on isolated physical CPUs.</para> | 62 | on isolated physical CPUs:</para> |
| 63 | 63 | ||
| 64 | <programlisting>-vcpu 0,affinity=0 \</programlisting> | 64 | <programlisting>-vcpu 0,affinity=0 \</programlisting> |
| 65 | </listitem> | 65 | </listitem> |
| 66 | 66 | ||
| 67 | <listitem> | 67 | <listitem> |
| 68 | <para>Hugepages - KVM guests can be deployed with huge page memory | 68 | <para>Hugepages - KVM guests can be deployed with hugepage memory |
| 69 | support in order to reduce memory consumption and improve performance, | 69 | support to reduce memory consumption and improve performance, by |
| 70 | by reducing CPU cache usage. By using huge pages for a KVM guest, less | 70 | reducing CPU cache usage. By using hugepages for a KVM guest, less |
| 71 | memory is used for page tables and TLB (Translation Lookaside Buffer) | 71 | memory is used for page tables and TLB (Translation Lookaside Buffer) |
| 72 | misses are reduced, thereby significantly increasing performance, | 72 | misses are reduced, significantly increasing performance, especially |
| 73 | especially for memory-intensive situations.</para> | 73 | for memory-intensive situations.</para> |
| 74 | 74 | ||
| 75 | <programlisting>-object memory-backend-file,id=mem,size=4096M,mem-path=/dev/hugepages,share=on \</programlisting> | 75 | <programlisting>-object memory-backend-file,id=mem,size=4096M,mem-path=/dev/hugepages,share=on \</programlisting> |
| 76 | </listitem> | 76 | </listitem> |
| 77 | 77 | ||
| 78 | <listitem> | 78 | <listitem> |
| 79 | <para>Memory preallocation - preallocate huge pages at startup time | 79 | <para>Memory preallocation - preallocate hugepages at startup to |
| 80 | can improve performance but it may affect the qemu boot time.</para> | 80 | improve performance. This may affect QEMU boot time.</para> |
| 81 | 81 | ||
| 82 | <programlisting>-mem-prealloc \</programlisting> | 82 | <programlisting>-mem-prealloc \</programlisting> |
| 83 | </listitem> | 83 | </listitem> |
| 84 | 84 | ||
| 85 | <listitem> | 85 | <listitem> |
| 86 | <para>Enable realtime characteristics - run qemu with realtime | 86 | <para>Enable realtime characteristics - run QEMU with realtime |
| 87 | features. While that mildly implies that "-realtime" alone might do | 87 | features.</para> |
| 88 | something, it's just an identifier for options that are partially | 88 | |
| 89 | realtime. If you're running in a realtime or low latency environment, | 89 | <para>In this case, "realtime" is just an identifier for options that |
| 90 | you don't want your pages to be swapped out and mlock does that, thus | 90 | are partially realtime. If you're running in a realtime or low latency |
| 91 | mlock=on. If you want VM density, then you may want swappable VMs, | 91 | environment and you don't want your pages to be swapped out, this can
| 92 | thus mlock=off.</para> | 92 | be ensured by using <command>mlock=on</command>. If you want VM
| 93 | density, then you may want swappable VMs; this can be done with
| 94 | <command>mlock=off</command>. These options are combined in the sketch
| 95 | after this list.</para>
| 93 | 95 | ||
| 94 | <programlisting>-realtime mlock=on \</programlisting> | 96 | <programlisting>-realtime mlock=on \</programlisting> |
| 95 | </listitem> | 97 | </listitem> |
| 96 | </itemizedlist> | 98 | </itemizedlist> |
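Taken together, the options above give a command line of roughly this shape (a sketch; the memory size, affinity and disk image are illustrative, and -vcpu is the Enea-patched option described above):

    qemu-system-aarch64 -cpu host -machine virt,gic_version=3 \
      -smp cores=2,threads=1,sockets=1 -vcpu 0,affinity=0 \
      -object memory-backend-file,id=mem,size=4096M,mem-path=/dev/hugepages,share=on \
      -mem-prealloc -realtime mlock=on \
      -drive file=<guest image>,if=virtio,format=raw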
| 97 | 99 | ||
| 98 | <para>If the hardware does not have an IOMMU (known as "Intel VT-d" on | 100 | <para>If the hardware does not have an IOMMU, it will not be possible to |
| 99 | Intel-based machines and "AMD I/O Virtualization Technology" on AMD-based | 101 | assign devices in KVM.</para> |
| 100 | machines), it will not be possible to assign devices in KVM. </para> | ||
| 101 | </section> | 102 | </section> |
| 102 | 103 | ||
| 103 | <section id="net_in_guest"> | 104 | <section id="net_in_guest"> |
| @@ -106,15 +107,17 @@ | |||
| 106 | <section id="vhost-user-support"> | 107 | <section id="vhost-user-support"> |
| 107 | <title>Using vhost-user support</title> | 108 | <title>Using vhost-user support</title> |
| 108 | 109 | ||
| 109 | <para>The goal of vhost-user is to implement a Virtio transport, staying | 110 | <para>The goal of <literal>vhost-user</literal> is to implement a Virtio |
| 110 | as close as possible to the vhost paradigm of using shared memory, | 111 | transport, staying as close as possible to the <literal>vhost</literal> |
| 111 | ioeventfds and irqfds. A UNIX domain socket based mechanism allows the | 112 | paradigm of using shared memory, <literal>ioeventfds</literal> and
| 112 | set up of resources used by a number of Vrings shared between two | 113 | <literal>irqfds</literal>. A UNIX domain socket based mechanism allows
| 113 | userspace processes, which will be placed in shared memory.</para> | 114 | the setup of resources used by various <literal>Vrings</literal>
| 115 | shared between two userspace processes, which will be placed in shared
| 116 | memory.</para>
| 114 | 117 | ||
| 115 | <para>To run QEMU with the vhost-user backend, you have to provide the | 118 | <para>To run QEMU with the <literal>vhost-user</literal> backend, you |
| 116 | named UNIX domain socket which needs to be already opened by the | 119 | have to provide the named UNIX domain socket, which needs to already be |
| 117 | backend:</para> | 120 | opened by the backend:</para> |
| 118 | 121 | ||
| 119 | <programlisting>-object memory-backend-file,id=mem,size=4096M,mem-path=/dev/hugepages,share=on \ | 122 | <programlisting>-object memory-backend-file,id=mem,size=4096M,mem-path=/dev/hugepages,share=on \ |
| 120 | -chardev socket,id=char0,path=/var/run/openvswitch/vhost-user1 \ | 123 | -chardev socket,id=char0,path=/var/run/openvswitch/vhost-user1 \ |
| @@ -123,14 +126,15 @@ | |||
| 123 | 126 | ||
| 124 | <para>The vHost User standard uses a client-server model. The server | 127 | <para>The vHost User standard uses a client-server model. The server |
| 125 | creates and manages the vHost User sockets and the client connects to | 128 | creates and manages the vHost User sockets and the client connects to |
| 126 | the sockets created by the server. It is recommended to use QEMU as | 129 | the sockets created by the server. It is recommended to use QEMU as the |
| 127 | server so the vhost-user client can be restarted without affecting the | 130 | server, so that the <literal>vhost-user</literal> client can be |
| 128 | server, otherwise if the server side dies all clients need to be | 131 | restarted without affecting the server; otherwise, if the server side
| 129 | restarted.</para> | 132 | dies, all clients need to be restarted.</para>
| 130 | 133 | ||
| 131 | <para>Using vhost-user in QEMU as server will offer the flexibility to | 134 | <para>Using <literal>vhost-user</literal> in QEMU as the server will
| 132 | stop and start the virtual machine with no impact on virtual switch from | 135 | offer the flexibility to stop and start the virtual machine with no
| 133 | the host (vhost-user-client).</para> | 136 | impact on the virtual switch from the host
| 137 | (<literal>vhost-user-client</literal>).</para>
| 134 | 138 | ||
| 135 | <programlisting>-chardev socket,id=char0,path=/var/run/openvswitch/vhost-user1,server \</programlisting> | 139 | <programlisting>-chardev socket,id=char0,path=/var/run/openvswitch/vhost-user1,server \</programlisting> |
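With QEMU acting as the server, the switch side connects as a client; in OVS-DPDK that corresponds to a port of roughly this form (a sketch, assuming the bridge and socket path used elsewhere in this guide):

    ovs-vsctl add-port ovsbr0 vhost-client-1 -- set Interface vhost-client-1 \
    type=dpdkvhostuserclient options:vhost-server-path=/var/run/openvswitch/vhost-user1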
| 136 | </section> | 140 | </section> |
| @@ -154,17 +158,6 @@ | |||
| 154 | to appear and behave as if they were physically attached to the guest | 158 | to appear and behave as if they were physically attached to the guest |
| 155 | operating system.</para> | 159 | operating system.</para> |
| 156 | 160 | ||
| 157 | <para>Preparing a system for PCI passthrough:</para> | ||
| 158 | |||
| 159 | <itemizedlist> | ||
| 160 | <listitem> | ||
| 161 | <para>Allow unsafe interrupts in case the system doesn't support | ||
| 162 | interrupt remapping. This can be done using | ||
| 163 | <literal>vfio_iommu_type1.allow_unsafe_interrupts=1</literal> as a | ||
| 164 | boot kernel parameter.</para> | ||
| 165 | </listitem> | ||
| 166 | </itemizedlist> | ||
| 167 | |||
| 168 | <para>Create a guest with direct passthrough via the VFIO framework like | 161 | <para>Create a guest with direct passthrough via the VFIO framework like
| 169 | so:</para> | 162 | so:</para> |
| 170 | 163 | ||
| @@ -181,6 +174,13 @@ $ dpdk-devbind.py --bind=vfio-pci 0001:01:00.1</programlisting> | |||
| 181 | url="http://dpdk.org/doc/guides/nics/thunderx.html">http://dpdk.org/doc/guides/nics/thunderx.html</ulink>.</para> | 174 | url="http://dpdk.org/doc/guides/nics/thunderx.html">http://dpdk.org/doc/guides/nics/thunderx.html</ulink>.</para> |
| 182 | </section> | 175 | </section> |
| 183 | 176 | ||
| 177 | <section> | ||
| 178 | <title>Enable VFIO-NOIOMMU mode</title> | ||
| 179 | |||
| 180 | <para>In order to run a DPDK application in a VM, VFIO-NOIOMMU mode
| 181 | needs to be set: <programlisting>echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode</programlisting></para>
| 182 | </section> | ||
| 183 | |||
| 184 | <section id="multiqueue"> | 184 | <section id="multiqueue"> |
| 185 | <title>Multi-queue</title> | 185 | <title>Multi-queue</title> |
| 186 | 186 | ||
| @@ -188,7 +188,10 @@ $ dpdk-devbind.py --bind=vfio-pci 0001:01:00.1</programlisting> | |||
| 188 | of vCPUs increases, multi-queue support can be used in QEMU.</para> | 188 | of vCPUs increases, multi-queue support can be used in QEMU.</para> |
| 189 | 189 | ||
| 190 | <section id="qemu-multiqueue-support"> | 190 | <section id="qemu-multiqueue-support"> |
| 191 | <title>QEMU multi queue support configuration</title> | 191 | <title>QEMU multi-queue support configuration</title> |
| 192 | |||
| 193 | <para>Below is an example of how to set up the QEMU multi-queue | ||
| 194 | support configuration:</para> | ||
| 192 | 195 | ||
| 193 | <programlisting>-chardev socket,id=char0,path=/var/run/openvswitch/vhost-user1 \ | 196 | <programlisting>-chardev socket,id=char0,path=/var/run/openvswitch/vhost-user1 \ |
| 194 | -netdev type=vhost-user,id=net0,chardev=char0,queues=2 \ | 197 | -netdev type=vhost-user,id=net0,chardev=char0,queues=2 \ |
| @@ -199,15 +202,16 @@ where vectors is calculated as: 2 + 2 * queues number.</programlisting> | |||
| 199 | <section id="inside-guest"> | 202 | <section id="inside-guest"> |
| 200 | <title>Inside guest</title> | 203 | <title>Inside guest</title> |
| 201 | 204 | ||
| 202 | <para>Linux kernel virtio-net driver (one queue is enabled by | 205 | <para>The Linux kernel <filename>virtio-net</filename> driver, where |
| 203 | default):</para> | 206 | one queue is enabled by default:</para> |
| 204 | 207 | ||
| 205 | <programlisting>$ ethtool -L eth0 combined 2 | 208 | <programlisting>$ ethtool -L eth0 combined 2 |
| 206 | DPDK Virtio PMD | 209 | DPDK Virtio PMD |
| 207 | $ testpmd -c 0x7 -- -i --rxq=2 --txq=2 --nb-cores=2 ...</programlisting> | 210 | $ testpmd -c 0x7 -- -i --rxq=2 --txq=2 --nb-cores=2 ...</programlisting> |
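To verify the change took effect inside the guest, ethtool can report the current channel configuration (a sketch; eth0 assumed):

<programlisting>$ ethtool -l eth0    # "Combined" should report 2 after the change</programlisting>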
| 208 | 211 | ||
| 209 | <para>For QEMU documentation, see: <ulink | 212 | <para>For QEMU documentation, see: <ulink |
| 210 | url="https://qemu.weilnetz.de/doc/qemu-doc.html">https://qemu.weilnetz.de/doc/qemu-doc.html</ulink>.</para> | 213 | url="https://qemu.weilnetz.de/doc/qemu-doc.html">QEMU User |
| 214 | Documentation</ulink>.</para> | ||
| 211 | </section> | 215 | </section> |
| 212 | </section> | 216 | </section> |
| 213 | </section> | 217 | </section> |
| @@ -217,22 +221,24 @@ $ testpmd -c 0x7 -- -i --rxq=2 --txq=2 --nb-cores=2 ...</programlisting> | |||
| 217 | 221 | ||
| 218 | <para>One way to manage guests in Enea NFV Access is by using | 222 | <para>One way to manage guests in Enea NFV Access is by using |
| 219 | <literal>libvirt</literal>. Libvirt is used in conjunction with a daemon | 223 | <literal>libvirt</literal>. Libvirt is used in conjunction with a daemon |
| 220 | (<literal>libvirtd</literal>) and a command line utility (virsh) to manage | 224 | (<literal>libvirtd</literal>) and a command line utility |
| 221 | virtualized environments.</para> | 225 | (<literal>virsh</literal>) to manage virtualized environments.</para> |
| 222 | 226 | ||
| 223 | <para>The libvirt library is a hypervisor-independent virtualization API | 227 | <para>The <literal>libvirt</literal> library is a hypervisor-independent |
| 224 | and toolkit that is able to interact with the virtualization capabilities | 228 | virtualization API and toolkit that is able to interact with the |
| 225 | of a range of operating systems. Libvirt provides a common, generic and | 229 | virtualization capabilities of a range of operating systems. |
| 226 | stable layer to securely manage domains on a node. As nodes may be | 230 | <literal>Libvirt</literal> provides a common, generic and stable layer to |
| 227 | remotely located, libvirt provides all methods required to provision, | 231 | securely manage domains on a node. As nodes may be remotely located, it |
| 228 | create, modify, monitor, control, migrate and stop the domains, within the | 232 | provides all methods required to provision, create, modify, monitor, |
| 229 | limits of hypervisor support for these operations.</para> | 233 | control, migrate and stop the domains, within the limits of hypervisor |
| 230 | 234 | support for these operations.</para> | |
| 231 | <para>The libvirt daemon runs on the Enea NFV Access host. All tools built | 235 | |
| 232 | on libvirt API connect to the daemon to request the desired operation, and | 236 | <para>The <literal>libvirt</literal> daemon runs on the Enea NFV Access |
| 233 | to collect information about the configuration and resources of the host | 237 | host. All tools built upon the libvirt API connect to the daemon to |
| 234 | system and guests. <literal>virsh</literal> is a command line interface | 238 | request the desired operation, and to collect information about the |
| 235 | tool for managing guests and the hypervisor. The virsh tool is built on | 239 | configuration and resources of the host system and guests. |
| 240 | <literal>virsh</literal> is a command line interface tool for managing | ||
| 241 | guests and the hypervisor. The <literal>virsh</literal> tool is built upon | ||
| 236 | the libvirt management API.</para> | 242 | the libvirt management API.</para> |
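For orientation, a few everyday <literal>virsh</literal> operations look like the following (a sketch; the domain name vm1 is hypothetical):

<programlisting>virsh define vm1.xml   # register a guest from its XML definition
virsh start vm1        # boot the guest
virsh list --all       # list running and defined guests
virsh console vm1      # attach to the guest console
virsh shutdown vm1     # graceful stop; "virsh destroy vm1" forces it
virsh undefine vm1     # remove the definition</programlisting>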
| 237 | 243 | ||
| 238 | <para><emphasis role="bold">Major functionality provided by | 244 | <para><emphasis role="bold">Major functionality provided by |
| @@ -247,7 +253,7 @@ $ testpmd -c 0x7 -- -i --rxq=2 --txq=2 --nb-cores=2 ...</programlisting> | |||
| 247 | <para><emphasis role="bold">VM management:</emphasis> Various domain | 253 | <para><emphasis role="bold">VM management:</emphasis> Various domain |
| 248 | lifecycle operations such as start, stop, pause, save, restore, and | 254 | lifecycle operations such as start, stop, pause, save, restore, and |
| 249 | migrate. Hotplug operations for many device types including disk and | 255 | migrate. Hotplug operations for many device types including disk and |
| 250 | network interfaces, memory, and cpus.</para> | 256 | network interfaces, memory, and CPUs.</para> |
| 251 | </listitem> | 257 | </listitem> |
| 252 | 258 | ||
| 253 | <listitem> | 259 | <listitem> |
| @@ -391,7 +397,8 @@ $ testpmd -c 0x7 -- -i --rxq=2 --txq=2 --nb-cores=2 ...</programlisting> | |||
| 391 | <listitem> | 397 | <listitem> |
| 392 | <para>In the <literal><cputune></literal> tag it is further | 398 | <para>In the <literal><cputune></literal> tag it is further |
| 393 | possible to specify on which CPU the emulator shall run by adding | 399 | possible to specify on which CPU the emulator shall run by adding |
| 394 | the cpuset to the <literal><emulatorpin></literal> tag.</para> | 400 | the <literal>cpuset</literal> to the |
| 401 | <literal><emulatorpin></literal> tag.</para> | ||
| 395 | 402 | ||
| 396 | <programlisting><vcpu placement='static'>2</vcpu> | 403 | <programlisting><vcpu placement='static'>2</vcpu> |
| 397 | <cputune> | 404 | <cputune> |
| @@ -401,7 +408,7 @@ $ testpmd -c 0x7 -- -i --rxq=2 --txq=2 --nb-cores=2 ...</programlisting> | |||
| 401 | </cputune></programlisting> | 408 | </cputune></programlisting> |
| 402 | 409 | ||
| 403 | <para><literal>libvirt</literal> will group all threads belonging to | 410 | <para><literal>libvirt</literal> will group all threads belonging to |
| 404 | a qemu instance into cgroups that will be created for that purpose. | 411 | a QEMU instance into cgroups that will be created for that purpose. |
| 405 | It is possible to supply a base name for those cgroups using the | 412 | It is possible to supply a base name for those cgroups using the |
| 406 | <literal><resource></literal> tag:</para> | 413 | <literal><resource></literal> tag:</para> |
| 407 | 414 | ||
| @@ -418,8 +425,8 @@ $ testpmd -c 0x7 -- -i --rxq=2 --txq=2 --nb-cores=2 ...</programlisting> | |||
| 418 | <para>Command <command>virsh net-create</command> starts a network. If | 425 | <para>Command <command>virsh net-create</command> starts a network. If |
| 419 | any networks are listed in the guest XML file, those networks must be | 426 | any networks are listed in the guest XML file, those networks must be |
| 420 | started before the guest is started. As an example, if the network is | 427 | started before the guest is started. As an example, if the network is |
| 421 | defined in a file named example-net.xml, it is started as | 428 | defined in a file named <filename>example-net.xml</filename>, it will be |
| 422 | follows:</para> | 429 | started as follows:</para> |
| 423 | 430 | ||
| 424 | <programlisting>virsh net-create example-net.xml | 431 | <programlisting>virsh net-create example-net.xml |
| 425 | <network> | 432 | <network> |
| @@ -448,25 +455,26 @@ $ testpmd -c 0x7 -- -i --rxq=2 --txq=2 --nb-cores=2 ...</programlisting> | |||
| 448 | 455 | ||
| 449 | <programlisting>virsh net-autostart example-net</programlisting> | 456 | <programlisting>virsh net-autostart example-net</programlisting> |
| 450 | 457 | ||
| 451 | <para>Guest configuration file (xml) must be updated to access newly | 458 | <para>The guest configuration file (XML) must be updated to access the |
| 452 | created network like so:</para> | 459 | newly created network like so:</para> |
| 453 | 460 | ||
| 454 | <programlisting> <interface type='network'> | 461 | <programlisting> <interface type='network'> |
| 455 | <source network='sriov'/> | 462 | <source network='sriov'/> |
| 456 | </interface></programlisting> | 463 | </interface></programlisting> |
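The sriov network referenced above must itself be defined and started; a hedged sketch of such a definition using libvirt's hostdev forwarding mode (the PF name eth0 is an assumption):

<programlisting><network>
  <name>sriov</name>
  <forward mode='hostdev' managed='yes'>
    <pf dev='eth0'/>
  </forward>
</network></programlisting>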
| 457 | 464 | ||
| 458 | <para>The following presented here are a few modes of network access | 465 | <para>The following are a few ways to access a network from a guest |
| 459 | from guest using <command>virsh</command>:</para> | 466 | while using <command>virsh</command>:</para> |
| 460 | 467 | ||
| 461 | <itemizedlist> | 468 | <itemizedlist> |
| 462 | <listitem> | 469 | <listitem> |
| 463 | <para><emphasis role="bold">vhost-user interface</emphasis></para> | 470 | <para><emphasis role="bold">vhost-user interface</emphasis></para> |
| 464 | 471 | ||
| 465 | <para>See the Open vSwitch chapter on how to create vhost-user | 472 | <para>See the Open vSwitch chapter on how to create a |
| 466 | interface using Open vSwitch. Currently there is no Open vSwitch | 473 | <literal>vhost-user</literal> interface using Open vSwitch. |
| 467 | support for networks that are managed by libvirt (e.g. NAT). As of | 474 | Currently there is no Open vSwitch support for networks that are |
| 468 | now, only bridged networks are supported (those where the user has | 475 | managed by <literal>libvirt </literal>(e.g. NAT). Until further |
| 469 | to manually create the bridge).</para> | 476 | notice, only bridged networks are supported (those where the user |
| 477 | has to manually create the bridge).</para> | ||
| 470 | 478 | ||
| 471 | <programlisting> <qemu:commandline> | 479 | <programlisting> <qemu:commandline> |
| 472 | <qemu:arg value='-chardev'/> | 480 | <qemu:arg value='-chardev'/> |
| @@ -474,7 +482,8 @@ $ testpmd -c 0x7 -- -i --rxq=2 --txq=2 --nb-cores=2 ...</programlisting> | |||
| 474 | <qemu:arg value='-netdev'/> | 482 | <qemu:arg value='-netdev'/> |
| 475 | <qemu:arg value='type=vhost-user,id=hostnet0,chardev=charnet0'/> | 483 | <qemu:arg value='type=vhost-user,id=hostnet0,chardev=charnet0'/> |
| 476 | <qemu:arg value='-device'/> | 484 | <qemu:arg value='-device'/> |
| 477 | <qemu:arg value='virtio-net-pci,netdev=hostnet0,id=net0,mac=00:00:00:00:00:01,bus=pcie.0,addr=0x2'/> | 485 | <qemu:arg value='virtio-net-pci,netdev=hostnet0,id=net0,mac=00:00:00:00:00:01,\ |
| 486 | bus=pcie.0,addr=0x2'/> | ||
| 478 | </qemu:commandline></programlisting> | 487 | </qemu:commandline></programlisting> |
| 479 | </listitem> | 488 | </listitem> |
| 480 | 489 | ||
| @@ -492,15 +501,8 @@ $ testpmd -c 0x7 -- -i --rxq=2 --txq=2 --nb-cores=2 ...</programlisting> | |||
| 492 | 501 | ||
| 493 | <itemizedlist> | 502 | <itemizedlist> |
| 494 | <listitem> | 503 | <listitem> |
| 495 | <para>Allow unsafe interrupts in case the system doesn't support | ||
| 496 | interrupt remapping. This can be done using | ||
| 497 | <literal>vfio_iommu_type1.allow_unsafe_interrupts=1</literal> as | ||
| 498 | a boot kernel parameter.</para> | ||
| 499 | </listitem> | ||
| 500 | |||
| 501 | <listitem> | ||
| 502 | <para>Change the owner of the | 504 | <para>Change the owner of the |
| 503 | <literal>/dev/vfio/<group></literal> to qemu and edit | 505 | <literal>/dev/vfio/<group></literal> to the qemu user and edit |
| 504 | <literal>/etc/libvirt/qemu.conf</literal> to explicitly allow | 506 | <literal>/etc/libvirt/qemu.conf</literal> to explicitly allow |
| 505 | permission to it:</para> | 507 | permission to it:</para> |
| 506 | 508 | ||
| @@ -522,7 +524,7 @@ cgroup_device_acl = [ | |||
| 522 | 524 | ||
| 523 | <listitem> | 525 | <listitem> |
| 524 | <para>Increase the locked memory limits within the libvirtd | 526 | <para>Increase the locked memory limits within the libvirtd |
| 525 | service file :</para> | 527 | service file:</para> |
| 526 | 528 | ||
| 527 | <para><programlisting>$ cat /lib/systemd/system/libvirtd.service | 529 | <para><programlisting>$ cat /lib/systemd/system/libvirtd.service |
| 528 | ... | 530 | ... |
| @@ -539,17 +541,19 @@ Restart=on-failure | |||
| 539 | #LimitNOFILE=2048 | 541 | #LimitNOFILE=2048 |
| 540 | ...</programlisting></para> | 542 | ...</programlisting></para> |
| 541 | </listitem> | 543 | </listitem> |
| 542 | </itemizedlist> | ||
| 543 | 544 | ||
| 544 | <para>VFs must be created on the host before starting the | 545 | <listitem> |
| 545 | guest:</para> | 546 | <para>VFs must be created on the host before starting the |
| 547 | guest:</para> | ||
| 546 | 548 | ||
| 547 | <programlisting>$ modprobe vfio_pci | 549 | <programlisting>$ modprobe vfio_pci |
| 548 | $ dpdk-devbind.py --bind=vfio-pci 0001:01:00.1 | 550 | $ dpdk-devbind.py --bind=vfio-pci 0001:01:00.1 |
| 549 | <qemu:commandline> | 551 | <qemu:commandline> |
| 550 | <qemu:arg value='-device'/> | 552 | <qemu:arg value='-device'/> |
| 551 | <qemu:arg value='vfio-pci,host=0001:01:00.1'/> | 553 | <qemu:arg value='vfio-pci,host=0001:01:00.1'/> |
| 552 | </qemu:commandline></programlisting> | 554 | </qemu:commandline></programlisting> |
| 555 | </listitem> | ||
| 556 | </itemizedlist> | ||
| 553 | </listitem> | 557 | </listitem> |
| 554 | 558 | ||
| 555 | <listitem> | 559 | <listitem> |
| @@ -632,7 +636,8 @@ $ dpdk-devbind.py --bind=vfio-pci 0001:01:00.1 | |||
| 632 | <qemu:arg value='-netdev'/> | 636 | <qemu:arg value='-netdev'/> |
| 633 | <qemu:arg value='type=vhost-user,id=hostnet0,chardev=charnet0'/> | 637 | <qemu:arg value='type=vhost-user,id=hostnet0,chardev=charnet0'/> |
| 634 | <qemu:arg value='-device'/> | 638 | <qemu:arg value='-device'/> |
| 635 | <qemu:arg value='virtio-net-pci,netdev=hostnet0,id=net0,mac=00:00:00:00:00:01,bus=pcie.0,addr=0x2'/> | 639 | <qemu:arg value='virtio-net-pci,netdev=hostnet0,id=net0,mac=00:00:00:00:00:01,\ |
| 640 | bus=pcie.0,addr=0x2'/> | ||
| 636 | </qemu:commandline> | 641 | </qemu:commandline> |
| 637 | </domain></programlisting> | 642 | </domain></programlisting> |
| 638 | </section> | 643 | </section> |
| @@ -760,4 +765,4 @@ $ dpdk-devbind.py --bind=vfio-pci 0001:01:00.1 | |||
| 760 | </section> | 765 | </section> |
| 761 | </section> | 766 | </section> |
| 762 | </section> | 767 | </section> |
| 763 | </chapter> | 768 | </chapter> \ No newline at end of file |
diff --git a/doc/book-enea-nfv-access-guide/doc/overview.xml b/doc/book-enea-nfv-access-guide/doc/overview.xml index d81087a..238345a 100644 --- a/doc/book-enea-nfv-access-guide/doc/overview.xml +++ b/doc/book-enea-nfv-access-guide/doc/overview.xml | |||
| @@ -1,37 +1,36 @@ | |||
| 1 | <?xml version="1.0" encoding="UTF-8"?> | 1 | <?xml version="1.0" encoding="ISO-8859-1"?> |
| 2 | <!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN" | 2 | <!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN" |
| 3 | "http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd"> | 3 | "http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd"> |
| 4 | <chapter id="overview"> | 4 | <chapter id="overview"> |
| 5 | <title>Overview</title> | 5 | <title>Overview</title> |
| 6 | 6 | ||
| 7 | <para>The Enea NFV Access Guide available with this release of the Enea NFV | 7 | <para>The Enea NFV Access Guide available with this release seeks to provide |
| 8 | Access seeks to provide further information that will help all intended | 8 | further information that will help all intended users make the most out of |
| 9 | users make the most out of the virtualization features.</para> | 9 | the virtualization features.</para> |
| 10 | 10 | ||
| 11 | <section id="description"> | 11 | <section id="description"> |
| 12 | <title>Enea NFV Access Description</title> | 12 | <title>Enea NFV Access Description</title> |
| 13 | 13 | ||
| 14 | <para>Enea NFV Access is a lightweight virtualization software designed | 14 | <para>Enea NFV Access is a lightweight virtualization software designed |
| 15 | for deployment on edge devices at customer premises. Streamlined for high | 15 | for deployment on edge devices at customer premises. It has been |
| 16 | networking performance and minimal footprints for both host platform and | 16 | streamlined for high networking performance and low footprints for both |
| 17 | VNFs, it enables very high compute density.</para> | 17 | host platforms and VNFs, enabling high compute density. This software also |
| 18 | 18 | provides a foundation for vCPE agility and innovation, reducing cost and | |
| 19 | <para>Enea NFV Access also provides a foundation for vCPE agility and | 19 | complexity for computing at the network edge.</para> |
| 20 | innovation, reducing cost and complexity for computing at the network | 20 | |
| 21 | edge. It supports multiple architectures and scales from small white box | 21 | <para>Enea NFV Access supports multiple architectures and scales from |
| 22 | edge devices up to high-end network servers. Thanks to the streamlined | 22 | small white box edge devices up to high-end network servers. It can also |
| 23 | footprint, Enea NFV Access can be deployed on systems as small as single | 23 | be deployed on systems as small as single 2-core ARM devices. It scales up |
| 24 | 2-core ARM devices. It scales up to clustered 24 core OCTEON TX™ ARM | 24 | to clustered 24 core OCTEON TX™ ARM Micro-servers and beyond, |
| 25 | Micro-servers and beyond, allowing multiple VNFs on the same machine, and | 25 | allowing multiple VNFs on the same machine, and eliminating the need to |
| 26 | eliminating the need to use different virtualization software for | 26 | use different virtualization software for different hardware platforms, |
| 27 | different hardware platforms, saving costs through single source | 27 | saving costs through single source provisioning.</para> |
| 28 | provisioning.</para> | ||
| 29 | 28 | ||
| 30 | <para>Optimized virtual networking performance provides low virtualized | 29 | <para>Optimized virtual networking performance provides low virtualized |
| 31 | networking latency, high virtualized networking throughput (10 Gb wire | 30 | networking latency, high virtualized networking throughput (10 Gb wire |
| 32 | speed), and low processing overhead. It allows high compute density on | 31 | speed), and low processing overhead. It allows high compute density on |
| 33 | white box hardware, maintaining performance when moving functionality from | 32 | white box hardware, maintaining performance when moving functionality from |
| 34 | application specific appliances to software on standard hardware. The | 33 | application specific appliances to software on standard hardware. The |
| 35 | optimized boot speed minimizes the time from reboot to active services, | 34 | optimized boot speed minimizes the time from reboot to active services, |
| 36 | improving availability.</para> | 35 | improving availability.</para> |
| 37 | 36 | ||
| @@ -39,33 +38,29 @@ | |||
| 39 | virtual machines. Containers provide lightweight virtualization for a | 38 | virtual machines. Containers provide lightweight virtualization for a |
| 40 | smaller VNF footprint and a very short time interval from start to enabled | 39 | smaller VNF footprint and a very short time interval from start to enabled |
| 41 | network services. VMs provide virtualization with secure VNF sandboxing | 40 | network services. VMs provide virtualization with secure VNF sandboxing |
| 42 | and is the preferred virtualization method for OPNFV compliance. Enea NFV | 41 | and are the preferred virtualization method for OPNFV compliance. Enea NFV |
| 43 | Access allows combinations of containers and VMs for highest possible user | 42 | Access allows combinations of containers and VMs for the highest possible |
| 44 | adaptability.</para> | 43 | user adaptability.</para> |
| 45 | 44 | ||
| 46 | <para>Flexible interfaces for VNF lifecycle management and service | 45 | <para>Flexible interfaces for VNF lifecycle management and service |
| 47 | function chaining, are important to allow a smooth transition from | 46 | function chaining, are important to allow for a smooth transition from |
| 48 | traditional network appliances to virtualized network functions in | 47 | traditional network appliances to virtualized network functions in |
| 49 | existing networks, as they plug into a variety of interfaces. Enea NFV | 48 | existing networks. Enea NFV Access supports VNF lifecycle management and |
| 50 | Access supports VNF lifecycle management and service function chaining | 49 | service function chaining through NETCONF, REST, CLI and Docker. It |
| 51 | through NETCONF, REST, CLI and Docker. It integrates a powerful device | 50 | integrates a powerful device management framework that enables full FCAPS |
| 52 | management framework that enables full FCAPS functionality for powerful | 51 | functionality for powerful management of the platform.</para> |
| 53 | management of the platform.</para> | 51 | functionality for comprehensive management of the platform.</para> |
| 54 | 52 | ||
| 55 | <para>Building on open source, Enea NFV Access prevents vendor lock-in | 53 | <para>Building on open source, Enea NFV Access prevents vendor lock-in |
| 56 | thanks to its completely open standards and interfaces. Unlike proprietary | 54 | thanks to its completely open standards and interfaces. It includes |
| 57 | platforms that either do not allow decoupling of software from hardware, | 55 | optimized components with open interfaces to allow full portability and |
| 58 | or prevent NVF portability, Enea NFV Access includes optimized components | ||
| 59 | with open interfaces to allow full portability and | ||
| 60 | interoperability.</para> | 56 | interoperability.</para> |
| 61 | </section> | 57 | </section> |
| 62 | 58 | ||
| 63 | <section id="components"> | 59 | <section id="components"> |
| 64 | <title>Components</title> | 60 | <title>Components</title> |
| 65 | 61 | ||
| 66 | <para>Enea NFV Access is built on highly optimized open source and | 62 | <para>Enea NFV Access is built on highly optimized open source components |
| 67 | value-adding components that provide standard interfaces but with boosted | 63 | that provide standard interfaces with boosted performance.</para> |
| 68 | performance.</para> | ||
| 69 | 64 | ||
| 70 | <mediaobject> | 65 | <mediaobject> |
| 71 | <imageobject> | 66 | <imageobject> |
| @@ -88,9 +83,8 @@ | |||
| 88 | </listitem> | 83 | </listitem> |
| 89 | 84 | ||
| 90 | <listitem> | 85 | <listitem> |
| 91 | <para>Docker - Docker provides a lightweight configuration using | 86 | <para>Docker - Provides a lightweight configuration using containers. |
| 92 | containers. Docker is the standard platform for container | 87 | Docker is the standard platform for container virtualization.</para> |
| 93 | virtualization.</para> | ||
| 94 | </listitem> | 88 | </listitem> |
| 95 | 89 | ||
| 96 | <listitem> | 90 | <listitem> |
| @@ -99,9 +93,8 @@ | |||
| 99 | </listitem> | 93 | </listitem> |
| 100 | 94 | ||
| 101 | <listitem> | 95 | <listitem> |
| 102 | <para>Edge Link - Edge Link provides interfaces to orchestration for | 96 | <para>Edge Link - Provides interfaces to orchestration for centralized |
| 103 | centralized VNF lifecycle management and service function | 97 | VNF lifecycle management and service function chaining:</para> |
| 104 | chaining:</para> | ||
| 105 | 98 | ||
| 106 | <orderedlist> | 99 | <orderedlist> |
| 107 | <listitem> | 100 | <listitem> |
| @@ -129,19 +122,20 @@ | |||
| 129 | </listitem> | 122 | </listitem> |
| 130 | 123 | ||
| 131 | <listitem> | 124 | <listitem> |
| 132 | <para>CLI based VNF management - CLI access over virsh and | 125 | <para>CLI based VNF management - CLI access over |
| 133 | libvirt.</para> | 126 | <literal>virsh</literal> and |
| 127 | <literal>libvirt</literal>.</para> | ||
| 134 | </listitem> | 128 | </listitem> |
| 135 | 129 | ||
| 136 | <listitem> | 130 | <listitem> |
| 137 | <para>FCAPS framework - The device management framework for managing | 131 | <para>FCAPS framework - Device management framework for managing the |
| 138 | the platform is capable of providing full FCAPS functionality to | 132 | platform, capable of providing full FCAPS functionality to |
| 139 | orchestration or network management systems.</para> | 133 | orchestration or network management systems.</para> |
| 140 | </listitem> | 134 | </listitem> |
| 141 | 135 | ||
| 142 | <listitem> | 136 | <listitem> |
| 143 | <para>Data plane - High performance data plane that includes the | 137 | <para>Data plane - High performance data plane that includes the |
| 144 | following optimized data plane DPDK driver.</para> | 138 | optimized DPDK data plane driver.</para> |
| 145 | </listitem> | 139 | </listitem> |
| 146 | </itemizedlist> | 140 | </itemizedlist> |
| 147 | </section> | 141 | </section> |
diff --git a/doc/book-enea-nfv-access-guide/doc/ovs.xml b/doc/book-enea-nfv-access-guide/doc/ovs.xml index fdbd692..de14f76 100644 --- a/doc/book-enea-nfv-access-guide/doc/ovs.xml +++ b/doc/book-enea-nfv-access-guide/doc/ovs.xml | |||
| @@ -97,8 +97,9 @@ | |||
| 97 | <section id="setup-ovs-dpdk"> | 97 | <section id="setup-ovs-dpdk"> |
| 98 | <title>How to set up OVS-DPDK</title> | 98 | <title>How to set up OVS-DPDK</title> |
| 99 | 99 | ||
| 100 | <para>The DPDK must be configured prior to setting up OVS-DPDK. See | 100 | <para>The DPDK must be configured prior to setting up OVS-DPDK. See <xref |
| 101 | <xref linkend="dpdk-setup"/> for DPDK setup instructions, then follow these steps:</para> | 101 | linkend="dpdk-setup" /> for DPDK setup instructions, then follow these |
| 102 | steps:</para> | ||
| 102 | 103 | ||
| 103 | <orderedlist> | 104 | <orderedlist> |
| 104 | <listitem> | 105 | <listitem> |
| @@ -114,7 +115,7 @@ rm -f /etc/openvswitch/conf.db</programlisting> | |||
| 114 | 115 | ||
| 115 | <programlisting>export DB_SOCK=/var/run/openvswitch/db.sock | 116 | <programlisting>export DB_SOCK=/var/run/openvswitch/db.sock |
| 116 | ovsdb-tool create /etc/openvswitch/conf.db /usr/share/openvswitch/vswitch.ovsschema | 117 | ovsdb-tool create /etc/openvswitch/conf.db /usr/share/openvswitch/vswitch.ovsschema |
| 117 | ovsdb-server --remote=punix:$DB_SOCK / | 118 | ovsdb-server --remote=punix:$DB_SOCK \ |
| 118 | --remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach</programlisting> | 119 | --remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach</programlisting> |
| 119 | </listitem> | 120 | </listitem> |
| 120 | 121 | ||
| @@ -125,7 +126,7 @@ ovsdb-server --remote=punix:$DB_SOCK / | |||
| 125 | ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x1 | 126 | ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x1 |
| 126 | ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0xc | 127 | ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0xc |
| 127 | ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true | 128 | ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true |
| 128 | ovs-vswitchd unix:$DB_SOCK --pidfile --detach / | 129 | ovs-vswitchd unix:$DB_SOCK --pidfile --detach \ |
| 129 | --log-file=/var/log/openvswitch/ovs-vswitchd.log</programlisting> | 130 | --log-file=/var/log/openvswitch/ovs-vswitchd.log</programlisting> |
| 130 | </listitem> | 131 | </listitem> |
| 131 | 132 | ||
| @@ -133,7 +134,7 @@ ovs-vswitchd unix:$DB_SOCK --pidfile --detach / | |||
| 133 | <para>Create the OVS bridge and attach ports:</para> | 134 | <para>Create the OVS bridge and attach ports:</para> |
| 134 | 135 | ||
| 135 | <programlisting>ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev | 136 | <programlisting>ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev |
| 136 | ovs-vsctl add-port ovsbr0 dpdk0 -- set Interface dpdk0 type=dpdk / | 137 | ovs-vsctl add-port ovsbr0 dpdk0 -- set Interface dpdk0 type=dpdk \ |
| 137 | options:dpdk-devargs=<PCI device></programlisting> | 138 | options:dpdk-devargs=<PCI device></programlisting> |
| 138 | </listitem> | 139 | </listitem> |
| 139 | 140 | ||
| @@ -144,8 +145,8 @@ ovs-vsctl add-port ovsbr0 dpdk0 -- set Interface dpdk0 type=dpdk / | |||
| 144 | 145 | ||
| 145 | <para>This command creates a socket at | 146 | <para>This command creates a socket at |
| 146 | <literal>/var/run/openvswitch/vhost-user1</literal>, which can be | 147 | <literal>/var/run/openvswitch/vhost-user1</literal>, which can be |
| 147 | provided to the VM on the QEMU command line. See <xref linkend="net_in_guest"/> for | 148 | provided to the VM on the QEMU command line. See <xref |
| 148 | details.</para> | 149 | linkend="net_in_guest" /> for details.</para> |
| 149 | </listitem> | 150 | </listitem> |
| 150 | 151 | ||
| 151 | <listitem> | 152 | <listitem> |
diff --git a/doc/book-enea-nfv-access-guide/doc/using_nfv_access_sdks.xml b/doc/book-enea-nfv-access-guide/doc/using_nfv_access_sdks.xml index e550005..1f7ab8c 100644 --- a/doc/book-enea-nfv-access-guide/doc/using_nfv_access_sdks.xml +++ b/doc/book-enea-nfv-access-guide/doc/using_nfv_access_sdks.xml | |||
| @@ -1,4 +1,4 @@ | |||
| 1 | <?xml version="1.0" encoding="UTF-8"?> | 1 | <?xml version="1.0" encoding="ISO-8859-1"?> |
| 2 | <!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN" | 2 | <!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN" |
| 3 | "http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd"> | 3 | "http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd"> |
| 4 | <chapter id="workflow"> | 4 | <chapter id="workflow"> |
| @@ -184,10 +184,14 @@ MODULE_LICENSE("GPL");</programlisting> | |||
| 184 | target and install/remove it:</para> | 184 | target and install/remove it:</para> |
| 185 | 185 | ||
| 186 | <programlisting># insmod hello.ko | 186 | <programlisting># insmod hello.ko |
| 187 | # rmmod hello.ko | 187 | # rmmod hello.ko</programlisting> |
| 188 | </programlisting> | ||
| 189 | </listitem> | 188 | </listitem> |
| 190 | </orderedlist> | 189 | </orderedlist> |
| 190 | |||
| 191 | <para>If you build a module using the SDK for the development image and | ||
| 192 | insert it into the release image, you will get an error when running | ||
| 193 | <literal>rmmod</literal>: <programlisting>root@cn8304:~# rmmod hello.ko | ||
| 194 | rmmod: ERROR: could not remove module hello.ko: Device or resource busy</programlisting></para> | ||
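A quick hedged check for such a mismatch before inserting the module is to compare the module's vermagic string against the running kernel:

<programlisting># The two version strings should match
modinfo hello.ko | grep vermagic
uname -r</programlisting>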
| 191 | </section> | 195 | </section> |
| 192 | 196 | ||
| 193 | <section id="deploy-artifacts"> | 197 | <section id="deploy-artifacts"> |
| @@ -300,11 +304,11 @@ MODULE_LICENSE("GPL");</programlisting> | |||
| 300 | 304 | ||
| 301 | <itemizedlist> | 305 | <itemizedlist> |
| 302 | <listitem> | 306 | <listitem> |
| 303 | <para>Kgdb – for kernel cross-debugging</para> | 307 | <para>Kgdb - for kernel cross-debugging</para> |
| 304 | </listitem> | 308 | </listitem> |
| 305 | 309 | ||
| 306 | <listitem> | 310 | <listitem> |
| 307 | <para>GDBServer – for application cross-debugging</para> | 311 | <para>GDBServer - for application cross-debugging</para> |
| 308 | </listitem> | 312 | </listitem> |
| 309 | </itemizedlist> | 313 | </itemizedlist> |
| 310 | 314 | ||
| @@ -448,16 +452,16 @@ ip route add default via 192.168.122.1 dev enp0s2</programlisting></para> | |||
| 448 | <listitem> | 452 | <listitem> |
| 449 | <para>On your development machine, start cross-gdb using the vmlinux | 453 | <para>On your development machine, start cross-gdb using the vmlinux |
| 450 | kernel image as a parameter. The image is located in | 454 | kernel image as a parameter. The image is located in |
| 451 | <filename><sdkdir>/sysroots/corei7-64-enea-linux/boot/</filename> | 455 | <filename><sdkdir>/sysroots/aarch64-enea-linux/boot/</filename> |
| 452 | and should be the same as the image found in the | 456 | and should be the same as the image found in the |
| 453 | <literal>/boot</literal> directory from the target.<programlisting>$ aarch64-enea-linux-gdb / | 457 | <literal>/boot</literal> directory from the target:<programlisting>$ aarch64-enea-linux-gdb \ |
| 454 | ./sysroots/aarch64-enea-linux/boot/ \ | 458 | ./sysroots/aarch64-enea-linux/boot/ \ |
| 455 | vmlinux-4.9.0-octeontx.sdk.6.1.0.p3.build.22-cavium-tiny</programlisting></para> | 459 | vmlinux-4.9.0-octeontx.sdk.6.1.0.p3.build.22-cavium-tiny</programlisting></para> |
| 456 | </listitem> | 460 | </listitem> |
| 457 | 461 | ||
| 458 | <listitem> | 462 | <listitem> |
| 459 | <para>Connect GDB to the target machine using target command and the | 463 | <para>Connect GDB to the target machine using the target command and |
| 460 | serial device:<programlisting>(gdb) set remotebaud 115200 | 464 | the serial device:<programlisting>(gdb) set remotebaud 115200 |
| 461 | (gdb) target remote /dev/ttyS0</programlisting></para> | 465 | (gdb) target remote /dev/ttyS0</programlisting></para> |
| 462 | </listitem> | 466 | </listitem> |
| 463 | </itemizedlist> | 467 | </itemizedlist> |
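On the target side, kgdb over the serial line is typically armed through the kgdboc parameter before the host attaches (a hedged sketch, assuming ttyS0 and a kernel built with kgdb support):

<programlisting># Bind kgdb to the serial console, then break into the debugger
echo ttyS0,115200 > /sys/module/kgdboc/parameters/kgdboc
echo g > /proc/sysrq-trigger</programlisting>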
