<?xml version="1.0" encoding="ISO-8859-1"?>
<chapter id="post_deploy_config">
<title>Post-Deploy Configurations</title>
<para>When running DPDK applications, it is useful to partition the available
CPUs between the Linux kernel, ovs-dpdk and nova-compute.</para>
<para>All of the hardware nodes can be accessed through ssh from the Fuel
console. Simply create an ssh connection to Fuel (e.g. root@10.20.0.2,
password: r00tme) and run the following command to list the servers and the
IPs at which they can be reached.</para>
<programlisting>[root@fuel ~]# fuel node
id | status | name | cluster | ip | mac | roles /
| pending_roles | online | group_id
---+--------+------------------+---------+-----------+-------------------+----------/
-----------------+---------------+--------+---------
4 | ready | Untitled (8c:c2) | 1 | 10.20.0.6 | 68:05:ca:46:8c:c2 | ceph-osd,/
compute | | 1 | 1
2 | ready | Untitled (8c:45) | 1 | 10.20.0.5 | 68:05:ca:46:8c:45 | controller,/
mongo, tacker | | 1 | 1
1 | ready | Untitled (8c:d4) | 1 | 10.20.0.4 | 68:05:ca:46:8c:d4 | ceph-osd,/
controller | | 1 | 1
5 | ready | Untitled (8c:c9) | 1 | 10.20.0.7 | 68:05:ca:46:8c:c9 | ceph-osd,/
compute | | 1 | 1
3 | ready | Untitled (8b:64) | 1 | 10.20.0.3 | 68:05:ca:46:8b:64 | controller,/
vitrage | | 1 | 1
[root@fuel ~]# ssh node-3
Warning: Permanently added 'node-3' (ECDSA) to the list of known hosts.
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Thu Aug 24 19:40:06 2017 from 10.20.0.2
root@node-3:~#</programlisting>
<section id="cpu_isolation_config">
<title>CPU isolation configuration</title>
<para>It is a good idea to isolate the cores that will perform packet
processing and run QEMU. The example below shows how to set isolcpus on a
compute node with an Intel Xeon processor E5-2660 v4 (14 cores, 28
hyper-threaded cores).</para>
<programlisting>root@node-3:~# cat /etc/default/grub | head -n 10
# If you change this file, run 'update-grub' afterwards to update
# /boot/grub/grub.cfg.
# For full documentation of the options in this file, see:
# info -f grub -n 'Simple configuration'
GRUB_DEFAULT="Advanced options for Ubuntu>Ubuntu, with Linux 4.4.50-rt62nfv"
GRUB_TIMEOUT=10
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
GRUB_CMDLINE_LINUX=" console=tty0 net.ifnames=1 biosdevname=0 rootdelay=90 /
nomodeset hugepagesz=2M hugepages=1536 isolcpus=10-47,58-95"
root@node-3:~# update-grub
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-4.10.0-9924-generic
Found initrd image: /boot/initrd.img-4.10.0-9924-generic
done
root@node-3:~# reboot
Connection to node-3 closed by remote host.
Connection to node-3 closed.</programlisting>
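<para>Once the node is back up, it can be verified that the kernel picked up
the new command line and that the intended CPUs are isolated. The snippet
below is a sketch; the reported CPU list will match whatever isolcpus value
was set above.</para>
<programlisting>root@node-3:~# cat /proc/cmdline | grep -o 'isolcpus=[0-9,-]*'
isolcpus=10-47,58-95
root@node-3:~# cat /sys/devices/system/cpu/isolated
10-47,58-95</programlisting>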
</section>
<section id="nova_config">
<title>Nova Compute configurations</title>
<para>In order to pin the OpenStack instances to dedicated CPUs, nova
must be configured with vcpu_pin_set. Please refer to the Nova
configuration guide for more information.</para>
<para>The example below applies again to an Intel Xeon processor E5-2660
v4. Here vcpu_pin_set is configured so that pairs of thread siblings
are chosen.</para>
<programlisting>root@node-3:~# cat /etc/nova/nova.conf | grep vcpu_pin_set
vcpu_pin_set = "16-47,64-95"
root@node-3:~#</programlisting>
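<para>The sibling pairs themselves can be read from sysfs, which is helpful
when composing vcpu_pin_set. The output below is a sketch and depends on the
topology of the node; on the layout used here, CPU 16 is paired with CPU
64.</para>
<programlisting>root@node-3:~# cat /sys/devices/system/cpu/cpu16/topology/thread_siblings_list
16,64</programlisting>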
<para>After modifying nova configuration options on the compute nodes, it
is necessary to restart nova-compute for them to take effect.</para>
<programlisting>root@node-3:~# systemctl restart nova-compute
root@node-3:~#</programlisting>
</section>
<section id="ovs_dpdk">
<title>OpenvSwitch with DPDK configuration</title>
<para>OPNFV Danube 1.0 comes with OpenvSwitch as the virtual switch
option. In the selected scenario, OpenvSwitch also uses DPDK for passing
traffic to and from the VMs.</para>
<para>One of the features that comes with OpenvSwitch v2.7.0 is the
ability to set pmd-cpu-mask. This effectively isolates the userspace PMD
(poll mode driver) threads on the specified set of CPUs.</para>
<para>By default, the OpenvSwitch that comes installed on the compute
nodes has no pmd-cpu-mask. There is an option to set it from the Fuel menu
before deploy, but it can always be manually set post-deploy as
follows:</para>
<programlisting>root@node-3:~# ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=7e0
root@node-3:~# ovs-vsctl get Open_vSwitch . other_config:pmd-cpu-mask
"7e0"
root@node-3:~#</programlisting>
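<para>The pmd-cpu-mask is a hexadecimal bitmap with one bit per logical CPU,
so 7e0 (binary 11111100000) selects CPUs 5-10. A mask for a contiguous range
of cores can be computed in the shell, for example:</para>
<programlisting>root@node-3:~# printf '%x\n' $(( ((1 &lt;&lt; 6) - 1) &lt;&lt; 5 ))
7e0</programlisting>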
<para>No restart is required; OpenvSwitch automatically spawns new PMD
threads and sets their affinity as necessary.</para>
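<para>The newly created PMD threads and their core assignments can be
inspected with ovs-appctl; the exact output varies with the port and queue
configuration of the node:</para>
<programlisting>root@node-3:~# ovs-appctl dpif-netdev/pmd-rxq-show
pmd thread numa_id 0 core_id 5:
        isolated : false
        port: dpdk0     queue-id: 0
...</programlisting>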
</section>
</chapter>