path: root/meta-openstack/Documentation
author    Bruce Ashfield <bruce.ashfield@windriver.com>  2014-05-23 23:49:49 -0400
committer Bruce Ashfield <bruce.ashfield@windriver.com>  2014-05-23 23:49:49 -0400
commit    649327f80dc331943d448e87f73ecaadcc78a22a (patch)
tree      2d640deedbc19b925f5539a31da26f2f7a6249c8 /meta-openstack/Documentation
parent    fb1d6f23fa01c0217ed3f6778d8033dd0030db2a (diff)
download  meta-cloud-services-649327f80dc331943d448e87f73ecaadcc78a22a.tar.gz
docs: move more READMEs into Documentation
Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Diffstat (limited to 'meta-openstack/Documentation')
-rw-r--r-- meta-openstack/Documentation/README.networking           | 208
-rw-r--r-- meta-openstack/Documentation/README.networking_flat      | 249
-rw-r--r-- meta-openstack/Documentation/README.networking_l3_router | 450
-rw-r--r-- meta-openstack/Documentation/README.networking_vlan     | 382
-rw-r--r-- meta-openstack/Documentation/README.spice                |  82
-rw-r--r-- meta-openstack/Documentation/README.tempest              |  55
6 files changed, 1426 insertions, 0 deletions
diff --git a/meta-openstack/Documentation/README.networking b/meta-openstack/Documentation/README.networking
new file mode 100644
index 0000000..2299de3
--- /dev/null
+++ b/meta-openstack/Documentation/README.networking
@@ -0,0 +1,208 @@
Networking
==========

Description
-----------
OpenStack provides tools to set up many different network topologies
using tunnels, VLANs, GRE, and more. This document describes how to
set up three basic network configurations which can be used as
building blocks for a larger network deployment. Working through these
setups also verifies that the Open vSwitch plugin and the DHCP and L3
agents are operating correctly.


Assumptions
-----------
The following assumes you have built the controller and compute nodes
for the qemux86-64 machine as described in README.setup and have been
able to spin up an instance successfully.

Prerequisites
-------------

1. Following the instructions in README.setup to spin up your
controller and compute nodes in VMs will result in NATed tap
interfaces on the host. While this is fine for basic use, it will not
allow you to use things like GRE tunnels: a packet will appear to be
coming from the host when it arrives at the other end of the tunnel,
and will therefore be rejected (since the source IP will not match the
GRE's remote_ip). To get around this we must set up an Open vSwitch
bridge on the host and attach the taps to it. Open vSwitch must
therefore be installed and running on the host.

On Ubuntu systems this may be done via:
sudo apt-get install openvswitch-switch openvswitch-common

2. Also, since we will be using an Open vSwitch on the host, we need
to ensure the controller and compute network interfaces have different
MAC addresses. We therefore must modify the runqemu script as per the
following:

--- a/scripts/runqemu-internal
+++ b/scripts/runqemu-internal
@@ -252,7 +252,7 @@ else
        KERNEL_NETWORK_CMD="ip=192.168.7.$n2::192.168.7.$n1:255.255.255.0"
        QEMU_TAP_CMD="-net tap,vlan=0,ifname=$TAP,script=no,downscript=no"
        if [ "$KVM_ACTIVE" = "yes" ]; then
-            QEMU_NETWORK_CMD="-net nic,model=virtio $QEMU_TAP_CMD,vhost=on"
+            QEMU_NETWORK_CMD="-net nic,macaddr=52:54:00:12:34:$(printf '%x' $((RANDOM % 170))),model=virtio $QEMU_TAP_CMD,vhost=on"
        DROOT="/dev/vda"
        ROOTFS_OPTIONS="-drive file=$ROOTFS,if=virtio"
    else
---
This will not guarantee distinct MAC addresses, but they will be
distinct most of the time.

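The patch above draws the last MAC byte uniformly from the 170 values
of $((RANDOM % 170)). As a rough, illustrative check of how often two
nodes collide (a hypothetical helper, not part of the scripts), a
birthday-problem calculation:

```python
# Birthday-problem estimate: probability that at least two of `nodes`
# guests draw the same last MAC byte from a space of `space` values.
def mac_collision_probability(nodes: int, space: int = 170) -> float:
    p_distinct = 1.0
    for i in range(nodes):
        p_distinct *= (space - i) / space
    return 1.0 - p_distinct

# With one controller and one compute node the chance of a clash is
# only 1/170 (about 0.6%), which is why "most of the time" is enough.
print(round(mac_collision_probability(2), 4))
```

With more guests the collision probability grows quickly, so a larger
deployment would want deterministic MAC assignment instead.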

Host Open vSwitch bridge
------------------------
As per the prerequisites, we need to set up a bridge on the host to
avoid NATed tap interfaces. After you have used 'runqemu' to boot your
controller and compute nodes, perform the following instructions on
your host.

(This assumes tap0 - controller, tap1 - compute; use 'ip a s' or
'ifconfig' to identify the tap interfaces.)

sudo ovs-vsctl add-br br-int
sudo ovs-vsctl add-port br-int tap0
sudo ovs-vsctl add-port br-int tap1
sudo ip address del 192.168.7.1/24 dev tap0
sudo ip address del 192.168.7.3/24 dev tap1
sudo ip address add 192.168.7.1/24 broadcast 192.168.7.255 dev br-int
sudo route del 192.168.7.2 tap0
sudo route del 192.168.7.4 tap1


NOTE: Any time you reboot the controller or compute nodes you will
want to remove and re-add the port via:
# ovs-vsctl del-port br-int tapX
# ovs-vsctl add-port br-int tapX
# ip address del 192.168.7.Y/24 dev tapX
(where X and Y are substituted accordingly)
This will also ensure the ARP tables in the vSwitch are updated, since
chances are the MAC address will have changed on the reboot due to the
MAC randomizer of prerequisite 2.

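The re-add steps in the NOTE can be scripted. A small sketch (a
hypothetical helper, assuming the tap names and addresses used above)
that only prints the commands, so they can be reviewed before being
run with root privileges:

```shell
#!/bin/sh
# Print (not execute) the re-attach steps needed after a guest reboot.
# $1 = tap interface (e.g. tap0), $2 = its stale address (e.g. 192.168.7.2)
reattach_tap_cmds() {
    tap="$1"
    addr="$2"
    echo "ovs-vsctl del-port br-int $tap"
    echo "ovs-vsctl add-port br-int $tap"
    echo "ip address del $addr/24 dev $tap"
}

reattach_tap_cmds tap0 192.168.7.2
```

Piping the output through 'sudo sh' would then apply the steps in one
go.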


Controller/Compute network setup
--------------------------------
The neutron Open vSwitch plugin expects several bridges to exist on
the controller and compute nodes. When the controller and compute
nodes are first booted, however, these do not exist. Depending on how
you are setting up your network this layout is subject to change, and
as such it is not 'baked' in to our images. It would normally be set
up by cloud-init, chef, cobbler or some other deployment scripts; here
we will accomplish it by hand.

On first boot your network will look like this: (controller node)
---snip---
root@controller:~# ip a show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:12:34:a9 brd ff:ff:ff:ff:ff:ff
    inet 192.168.7.2/24 brd 192.168.7.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fe12:34a9/64 scope link
       valid_lft forever preferred_lft forever

root@controller:~# ovs-vsctl show
524a6c84-226d-427b-8efa-732ed7e7fa43
    Bridge br-int
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
    Bridge br-tun
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    ovs_version: "2.0.0"
---snip---

To complete the expected network configuration you must add a bridge
which will contain the physical interface as one of its ports, and
move the IP address from the interface to the bridge. The following
will accomplish this:

ovs-vsctl add-br br-eth0
ovs-vsctl add-port br-eth0 eth0
ip address del 192.168.7.2/24 dev eth0
ip address add 192.168.7.2/24 broadcast 192.168.7.255 dev br-eth0
route add default gw 192.168.7.1

And now your network will look like the following:
---snip---
root@controller:~# ip a s
...skip
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovs-system state UP group default qlen 1000
    link/ether 52:54:00:12:34:a9 brd ff:ff:ff:ff:ff:ff
...skip
7: br-eth0: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
    link/ether ae:f8:be:7c:78:42 brd ff:ff:ff:ff:ff:ff
    inet 192.168.7.2/24 scope global br-eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::e453:1fff:fec1:79ff/64 scope link
       valid_lft forever preferred_lft forever

root@controller:~# ovs-vsctl show
524a6c84-226d-427b-8efa-732ed7e7fa43
    Bridge "br-eth0"
        Port "eth0"
            Interface "eth0"
        Port "br-eth0"
            Interface "br-eth0"
                type: internal
    Bridge br-int
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
    Bridge br-tun
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    ovs_version: "2.0.0"

At this point you will want to restart the neutron network services:

(controller)
/etc/init.d/neutron-openvswitch-agent stop
/etc/init.d/neutron-dhcp-agent stop
/etc/init.d/neutron-server reload
/etc/init.d/neutron-dhcp-agent start
/etc/init.d/neutron-openvswitch-agent start

(compute)
/etc/init.d/neutron-openvswitch-agent stop
/etc/init.d/nova-compute reload
/etc/init.d/neutron-openvswitch-agent start


NOTE: On a reboot the Open vSwitch configuration will remain, but at
this point in time you will need to manually move the IP address from
the eth0 interface to the br-eth0 interface using:

ip address del 192.168.7.2/24 dev eth0
ip address add 192.168.7.2/24 broadcast 192.168.7.255 dev br-eth0

With this network configuration on the controller, and similar
configuration on the compute node (just replace 192.168.7.2 with
192.168.7.4), everything is ready to configure any of the 3 network
sample configurations.

Further reading
---------------

README.networking_flat
README.networking_vlan
README.networking_l3_router
\ No newline at end of file
diff --git a/meta-openstack/Documentation/README.networking_flat b/meta-openstack/Documentation/README.networking_flat
new file mode 100644
index 0000000..ab18f6f
--- /dev/null
+++ b/meta-openstack/Documentation/README.networking_flat
@@ -0,0 +1,249 @@
Networking - FLAT network
=========================

Description
-----------
The flat network has the VMs share the management network
(192.168.7.0/24). The dhcp-agent provides the VMs addresses within
the subnet and within its provisioned range. This type of network
would not typically be deployed, as everything is accessible by
everything else (VMs can access VMs as well as the compute and
controller nodes).


Assumptions
-----------
It is assumed you have completed the steps described in
README.networking and have provisioned the host vSwitch as well as
created the br-eth0 bridges on the controller and compute nodes.

At this point you should be able to ping 192.168.7.4 from 192.168.7.2
and vice versa.

You have built your controller image including the cirros image (for
which you have already added the image to glance as myFirstImage).

You have run 'source /etc/nova/openrc'.

Configuration updates
---------------------
On the controller and (all) compute nodes you must edit the file
/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini

In the [OVS] section set:
network_vlan_ranges = ph-eth0:1:1
bridge_mappings = ph-eth0:br-eth0

(*** On compute nodes edit local_ip as well [192.168.7.4]. ***)

Restart some services to allow these changes to take effect:
/etc/init.d/neutron-openvswitch-agent reload
(on controller)
/etc/init.d/neutron-server reload
/etc/init.d/neutron-dhcp-agent reload
(on compute)
/etc/init.d/nova-compute reload

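For reference, the network_vlan_ranges value maps a physical network
label to an allowed VLAN ID range. An illustrative parser (not the
plugin's actual code) showing how a value like 'ph-eth0:1:1' is
interpreted:

```python
def parse_network_vlan_ranges(value: str) -> dict:
    """Interpret a network_vlan_ranges setting, e.g. 'ph-eth0:1:1'.

    Each comma-separated entry is <physical_network>[:<vlan_min>:<vlan_max>].
    """
    ranges = {}
    for entry in value.split(","):
        parts = entry.strip().split(":")
        if len(parts) == 3:
            ranges[parts[0]] = (int(parts[1]), int(parts[2]))
        else:
            ranges[parts[0]] = None  # physical net with no VLAN range
    return ranges

print(parse_network_vlan_ranges("ph-eth0:1:1"))
```

Here the range 1:1 permits only VLAN ID 1 on the ph-eth0 physical
network, which is all the flat setup needs.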
Create the net and subnet
-------------------------
neutron net-create --provider:physical_network=ph-eth0 \
    --provider:network_type=flat \
    --shared MY_FLAT_NET
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 3263aa7f-b86c-4ad3-a28c-c78d4c711583 |
| name                      | MY_FLAT_NET                          |
| provider:network_type     | flat                                 |
| provider:physical_network | ph-eth0                              |
| provider:segmentation_id  |                                      |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | b5890ba3fb234347ae317ca2f8358663     |
+---------------------------+--------------------------------------+


neutron subnet-create MY_FLAT_NET 192.168.7.0/24 --name MY_FLAT_SUBNET \
    --no-gateway --host-route destination=0.0.0.0/0,nexthop=192.168.7.1 \
    --allocation-pool start=192.168.7.230,end=192.168.7.234
Created a new subnet:
+------------------+--------------------------------------------------------+
| Field            | Value                                                  |
+------------------+--------------------------------------------------------+
| allocation_pools | {"start": "192.168.7.230", "end": "192.168.7.234"}     |
| cidr             | 192.168.7.0/24                                         |
| dns_nameservers  |                                                        |
| enable_dhcp      | True                                                   |
| gateway_ip       |                                                        |
| host_routes      | {"destination": "0.0.0.0/0", "nexthop": "192.168.7.1"} |
| id               | bfa99d99-2ba5-47e9-b71e-0bd8a2961e08                   |
| ip_version       | 4                                                      |
| name             | MY_FLAT_SUBNET                                         |
| network_id       | 3263aa7f-b86c-4ad3-a28c-c78d4c711583                   |
| tenant_id        | b5890ba3fb234347ae317ca2f8358663                       |
+------------------+--------------------------------------------------------+

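Python's ipaddress module can sanity-check that an allocation pool like
the one above sits inside the management subnet and stays clear of the
addresses already used by the host, controller and compute nodes. A
quick illustrative check:

```python
import ipaddress

subnet = ipaddress.ip_network("192.168.7.0/24")
start = ipaddress.ip_address("192.168.7.230")
end = ipaddress.ip_address("192.168.7.234")

# Expand the pool into individual addresses.
pool = [ipaddress.ip_address(i) for i in range(int(start), int(end) + 1)]

# The pool must lie entirely within the subnet handed to subnet-create...
assert all(addr in subnet for addr in pool)
# ...and must not overlap the controller address (192.168.7.2).
assert ipaddress.ip_address("192.168.7.2") not in pool

print(len(pool))  # number of leases the dhcp-agent can hand out
```

With a start of .230 and an end of .234 the dhcp-agent has five
addresses to assign, which matches the single-VM test below.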
Boot the image and test connectivity
------------------------------------
nova boot --image myFirstImage --flavor m1.small \
    --nic net-id=3263aa7f-b86c-4ad3-a28c-c78d4c711583 myinstance
+--------------------------------------+-----------------------------------------------------+
| Property                             | Value                                               |
+--------------------------------------+-----------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                              |
| OS-EXT-AZ:availability_zone          | nova                                                |
| OS-EXT-SRV-ATTR:host                 | -                                                   |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                   |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000003                                   |
| OS-EXT-STS:power_state               | 0                                                   |
| OS-EXT-STS:task_state                | scheduling                                          |
| OS-EXT-STS:vm_state                  | building                                            |
| OS-SRV-USG:launched_at               | -                                                   |
| OS-SRV-USG:terminated_at             | -                                                   |
| accessIPv4                           |                                                     |
| accessIPv6                           |                                                     |
| adminPass                            | 7Qe9nFekCjYD                                        |
| config_drive                         |                                                     |
| created                              | 2014-04-10T04:13:38Z                                |
| flavor                               | m1.small (2)                                        |
| hostId                               |                                                     |
| id                                   | f85da1da-c318-49fb-8da9-c07644400d4c                |
| image                                | myFirstImage (1da089b1-164d-45d6-9b6c-002f3edb8a7b) |
| key_name                             | -                                                   |
| metadata                             | {}                                                  |
| name                                 | myinstance                                          |
| os-extended-volumes:volumes_attached | []                                                  |
| progress                             | 0                                                   |
| security_groups                      | default                                             |
| status                               | BUILD                                               |
| tenant_id                            | b5890ba3fb234347ae317ca2f8358663                    |
| updated                              | 2014-04-10T04:13:38Z                                |
| user_id                              | 1dfcb72ef6a7428d8dd7300bc7f303d9                    |
+--------------------------------------+-----------------------------------------------------+

nova list
+--------------------------------------+------------+--------+------------+-------------+---------------------------+
| ID                                   | Name       | Status | Task State | Power State | Networks                  |
+--------------------------------------+------------+--------+------------+-------------+---------------------------+
| f85da1da-c318-49fb-8da9-c07644400d4c | myinstance | ACTIVE | -          | Running     | MY_FLAT_NET=192.168.7.231 |
+--------------------------------------+------------+--------+------------+-------------+---------------------------+

nova console-log myinstance
---
...skip
Starting logging: OK
Initializing random number generator... done.
Starting network...
udhcpc (v1.18.5) started
Sending discover...
Sending select for 192.168.7.231...
Lease of 192.168.7.231 obtained, lease time 86400
deleting routers
...skip

ping
---
root@controller:~# ping -c 1 192.168.7.231
PING 192.168.7.231 (192.168.7.231) 56(84) bytes of data.
64 bytes from 192.168.7.231: icmp_seq=1 ttl=64 time=2.98 ms

--- 192.168.7.231 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 2.988/2.988/2.988/0.000 ms

You should also be able to ping the compute or controller or other VMs
(if you start them) from within a VM. Pinging targets outside the
subnet requires that you ensure the various interfaces, such as eth0,
have promiscuous mode on: 'ip link set eth0 promisc on'.

The final Open vSwitch configs
------------------------------

Controller
---
root@controller:~# ovs-vsctl show
524a6c84-226d-427b-8efa-732ed7e7fa43
    Bridge "br-eth0"
        Port "eth0"
            Interface "eth0"
        Port "br-eth0"
            Interface "br-eth0"
                type: internal
        Port "phy-br-eth0"
            Interface "phy-br-eth0"
    Bridge br-int
        Port "tap549fb0c7-1a"
            tag: 1
            Interface "tap549fb0c7-1a"
                type: internal
        Port "int-br-eth0"
            Interface "int-br-eth0"
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
    Bridge br-tun
        Port "gre-2"
            Interface "gre-2"
                type: gre
                options: {in_key=flow, local_ip="192.168.7.2", out_key=flow, remote_ip="192.168.7.4"}
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    ovs_version: "2.0.0"


Compute
---
root@compute:~# ovs-vsctl show
99d365d2-f74e-40a8-b9a0-5bb60353675d
    Bridge br-tun
        Port "gre-1"
            Interface "gre-1"
                type: gre
                options: {in_key=flow, local_ip="192.168.7.4", out_key=flow, remote_ip="192.168.7.2"}
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
        Port "int-br-eth0"
            Interface "int-br-eth0"
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "tap93a74250-ef"
            tag: 1
            Interface "tap93a74250-ef"
    Bridge "br-eth0"
        Port "phy-br-eth0"
            Interface "phy-br-eth0"
        Port "eth0"
            Interface "eth0"
        Port "br-eth0"
            Interface "br-eth0"
                type: internal
    ovs_version: "2.0.0"


References
----------
http://developer.rackspace.com/blog/neutron-networking-simple-flat-network.html
\ No newline at end of file
diff --git a/meta-openstack/Documentation/README.networking_l3_router b/meta-openstack/Documentation/README.networking_l3_router
new file mode 100644
index 0000000..a16f8c4
--- /dev/null
+++ b/meta-openstack/Documentation/README.networking_l3_router
@@ -0,0 +1,450 @@
Networking - l3 router
======================

Description
-----------
Using provider networks (as we did for the flat and vlan use cases)
does not scale to large deployments; their downsides quickly become
apparent. The l3-agent provides the ability to create routers that
can handle routing between directly connected LAN interfaces and a
single WAN interface.

Here we set up a virtual router with a connection to a provider
network (vlan) and 2 attached subnets. We don't use floating IPs for
this demo.


Assumptions
-----------
It is assumed you have completed the steps described in
README.networking and have provisioned the host vSwitch as well as
created the br-eth0 bridges on the controller and compute nodes.

At this point you should be able to ping 192.168.7.4 from 192.168.7.2
and vice versa.

You have built your controller image including the cirros image (for
which you have already added the image to glance as myFirstImage).

You have run 'source /etc/nova/openrc'.

Configuration updates
---------------------
On the host Open vSwitch add an IP for 192.168.100.1/22:
sudo ip address add 192.168.100.1/22 broadcast 192.168.103.255 dev br-int

On the controller and (all) compute nodes you must edit the file
/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini

In the [OVS] section set:
network_vlan_ranges = ph-eth0:1998:1998
bridge_mappings = ph-eth0:br-eth0

(*** On compute nodes edit local_ip as well [192.168.7.4]. ***)

Restart some services to allow these changes to take effect:
/etc/init.d/neutron-openvswitch-agent reload
(on controller)
/etc/init.d/neutron-server reload
/etc/init.d/neutron-dhcp-agent reload
(on compute)
/etc/init.d/nova-compute reload


Also edit /etc/neutron/l3-agent.ini and set:
use_namespaces = True
external_network_bridge =

Then restart the agent:
/etc/init.d/neutron-l3-agent restart


Create the provider network
---------------------------
neutron net-create --provider:physical_network=ph-eth0 \
    --provider:network_type=vlan --provider:segmentation_id=1998 \
    --shared --router:external=true GATEWAY_NET

neutron subnet-create GATEWAY_NET 192.168.100.0/22 \
    --name GATEWAY_SUBNET --gateway=192.168.100.1 \
    --allocation-pool start=192.168.101.1,end=192.168.103.254

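A /22 is four consecutive /24s, and the stdlib ipaddress module can
confirm the GATEWAY_SUBNET layout and that the allocation pool avoids
the gateway address 192.168.100.1 held by the host bridge (an
illustrative check):

```python
import ipaddress

net = ipaddress.ip_network("192.168.100.0/22")

# The /22 spans 192.168.100.0 through its broadcast, 192.168.103.255.
print(net.network_address, net.broadcast_address)

gateway = ipaddress.ip_address("192.168.100.1")
pool_start = ipaddress.ip_address("192.168.101.1")
pool_end = ipaddress.ip_address("192.168.103.254")

# The allocation pool lives inside the /22 but starts above the
# 192.168.100.x range, so it can never hand out the gateway address.
assert pool_start in net and pool_end in net
assert not (int(pool_start) <= int(gateway) <= int(pool_end))
```

This is why the pool starts at 192.168.101.1 rather than at the first
usable address of the /22.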
Create the router
-----------------
neutron router-create NEUTRON-ROUTER
Created a new router:
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| external_gateway_info |                                      |
| id                    | b27d1a20-8a31-46d5-bdef-32a5ccf4ec91 |
| name                  | NEUTRON-ROUTER                       |
| status                | ACTIVE                               |
| tenant_id             | b5890ba3fb234347ae317ca2f8358663     |
+-----------------------+--------------------------------------+

neutron router-gateway-set NEUTRON-ROUTER GATEWAY_NET
Set gateway for router NEUTRON-ROUTER

Inspect the created network namespaces
--------------------------------------
root@controller:~# ip netns
qrouter-b27d1a20-8a31-46d5-bdef-32a5ccf4ec91
qdhcp-498fa1f2-87de-4874-8ca9-f4ba3e394d2a

ip netns exec qrouter-b27d1a20-8a31-46d5-bdef-32a5ccf4ec91 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: sit0: <NOARP> mtu 1480 qdisc noop state DOWN group default
    link/sit 0.0.0.0 brd 0.0.0.0
20: qg-19f6d85f-a6: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
    link/ether fa:16:3e:b8:1e:9d brd ff:ff:ff:ff:ff:ff
    inet 192.168.101.1/22 brd 192.168.103.255 scope global qg-19f6d85f-a6
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:feb8:1e9d/64 scope link
       valid_lft forever preferred_lft forever


Attach tenant networks
----------------------
neutron net-create --provider:network_type=gre --provider:segmentation_id=10 \
    --shared APPS_NET
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 52f4549f-aeed-4fcf-997b-4349f591cd5f |
| name                      | APPS_NET                             |
| provider:network_type     | gre                                  |
| provider:physical_network |                                      |
| provider:segmentation_id  | 10                                   |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | b5890ba3fb234347ae317ca2f8358663     |
+---------------------------+--------------------------------------+

neutron net-create --provider:network_type=gre --provider:segmentation_id=20 \
    --shared DMZ_NET
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | eeb07b09-4b4a-4c2c-9060-0b8e414a9279 |
| name                      | DMZ_NET                              |
| provider:network_type     | gre                                  |
| provider:physical_network |                                      |
| provider:segmentation_id  | 20                                   |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | b5890ba3fb234347ae317ca2f8358663     |
+---------------------------+--------------------------------------+

neutron subnet-create APPS_NET 10.241.0.0/22 --name APPS_SUBNET
Created a new subnet:
+------------------+------------------------------------------------+
| Field            | Value                                          |
+------------------+------------------------------------------------+
| allocation_pools | {"start": "10.241.0.2", "end": "10.241.3.254"} |
| cidr             | 10.241.0.0/22                                  |
| dns_nameservers  |                                                |
| enable_dhcp      | True                                           |
| gateway_ip       | 10.241.0.1                                     |
| host_routes      |                                                |
| id               | 45e7d887-1c4c-485a-9247-2a2bec9e3714           |
| ip_version       | 4                                              |
| name             | APPS_SUBNET                                    |
| network_id       | 52f4549f-aeed-4fcf-997b-4349f591cd5f           |
| tenant_id        | b5890ba3fb234347ae317ca2f8358663               |
+------------------+------------------------------------------------+

neutron subnet-create DMZ_NET 10.242.0.0/22 --name DMZ_SUBNET
Created a new subnet:
+------------------+------------------------------------------------+
| Field            | Value                                          |
+------------------+------------------------------------------------+
| allocation_pools | {"start": "10.242.0.2", "end": "10.242.3.254"} |
| cidr             | 10.242.0.0/22                                  |
| dns_nameservers  |                                                |
| enable_dhcp      | True                                           |
| gateway_ip       | 10.242.0.1                                     |
| host_routes      |                                                |
| id               | 2deda040-be04-432b-baa6-3a2219d22f20           |
| ip_version       | 4                                              |
| name             | DMZ_SUBNET                                     |
| network_id       | eeb07b09-4b4a-4c2c-9060-0b8e414a9279           |
| tenant_id        | b5890ba3fb234347ae317ca2f8358663               |
+------------------+------------------------------------------------+

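Since no --allocation-pool was given, neutron defaulted to reserving
the first usable address as the gateway and pooling the rest. The
APPS_SUBNET values above can be reproduced with stdlib ipaddress (an
illustrative sketch that mirrors the output, not neutron's code):

```python
import ipaddress

def default_pool(cidr: str):
    """Reproduce the default gateway/pool for an IPv4 tenant subnet."""
    net = ipaddress.ip_network(cidr)
    hosts = list(net.hosts())   # usable addresses, excl. network/broadcast
    gateway = hosts[0]          # first usable address becomes the gateway
    return str(gateway), str(hosts[1]), str(hosts[-1])

# gateway_ip, pool start, pool end for the APPS_SUBNET /22
print(default_pool("10.241.0.0/22"))
```

The same calculation explains the DMZ_SUBNET values (10.242.0.1,
10.242.0.2, 10.242.3.254).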
neutron router-interface-add NEUTRON-ROUTER APPS_SUBNET
Added interface 58f3db35-f5df-4fd1-9735-4ff13dd342de to router NEUTRON-ROUTER.

neutron router-interface-add NEUTRON-ROUTER DMZ_SUBNET
Added interface 9252ec29-7aac-4550-821c-f910f10680cf to router NEUTRON-ROUTER.

ip netns exec qrouter-b27d1a20-8a31-46d5-bdef-32a5ccf4ec91 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: sit0: <NOARP> mtu 1480 qdisc noop state DOWN group default
    link/sit 0.0.0.0 brd 0.0.0.0
20: qg-19f6d85f-a6: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
    link/ether fa:16:3e:b8:1e:9d brd ff:ff:ff:ff:ff:ff
    inet 192.168.101.1/22 brd 192.168.103.255 scope global qg-19f6d85f-a6
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:feb8:1e9d/64 scope link
       valid_lft forever preferred_lft forever
21: qr-58f3db35-f5: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
    link/ether fa:16:3e:76:ec:23 brd ff:ff:ff:ff:ff:ff
    inet 10.241.0.1/22 brd 10.241.3.255 scope global qr-58f3db35-f5
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe76:ec23/64 scope link
       valid_lft forever preferred_lft forever
22: qr-9252ec29-7a: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
    link/ether fa:16:3e:fb:98:06 brd ff:ff:ff:ff:ff:ff
    inet 10.242.0.1/22 brd 10.242.3.255 scope global qr-9252ec29-7a
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fefb:9806/64 scope link
       valid_lft forever preferred_lft forever

Note the two new qr- interfaces. The router now has:
1 connection to the provider network (qg-19f6d85f-a6)
2 connections to the subnets (1 to APPS_SUBNET, 1 to DMZ_SUBNET)

Boot an instance
----------------
nova boot --flavor=m1.small --image=myFirstImage \
    --nic net-id=52f4549f-aeed-4fcf-997b-4349f591cd5f APPS_INSTANCE
+--------------------------------------+-----------------------------------------------------+
| Property                             | Value                                               |
+--------------------------------------+-----------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                              |
| OS-EXT-AZ:availability_zone          | nova                                                |
| OS-EXT-SRV-ATTR:host                 | -                                                   |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                   |
| OS-EXT-SRV-ATTR:instance_name        | instance-0000000e                                   |
| OS-EXT-STS:power_state               | 0                                                   |
| OS-EXT-STS:task_state                | scheduling                                          |
| OS-EXT-STS:vm_state                  | building                                            |
| OS-SRV-USG:launched_at               | -                                                   |
| OS-SRV-USG:terminated_at             | -                                                   |
| accessIPv4                           |                                                     |
| accessIPv6                           |                                                     |
| adminPass                            | jdLkr4i6ATvQ                                        |
| config_drive                         |                                                     |
| created                              | 2014-04-10T16:27:31Z                                |
| flavor                               | m1.small (2)                                        |
| hostId                               |                                                     |
| id                                   | fc849bb9-54d3-4a9a-99a4-6346a6eef404                |
| image                                | myFirstImage (f22d3ab8-96a5-46db-a029-7d59156c8e31) |
| key_name                             | -                                                   |
| metadata                             | {}                                                  |
| name                                 | APPS_INSTANCE                                       |
| os-extended-volumes:volumes_attached | []                                                  |
| progress                             | 0                                                   |
| security_groups                      | default                                             |
| status                               | BUILD                                               |
| tenant_id                            | b5890ba3fb234347ae317ca2f8358663                    |
| updated                              | 2014-04-10T16:27:31Z                                |
| user_id                              | 1dfcb72ef6a7428d8dd7300bc7f303d9                    |
+--------------------------------------+-----------------------------------------------------+

nova boot --flavor=m1.small --image=myFirstImage \
    --nic net-id=eeb07b09-4b4a-4c2c-9060-0b8e414a9279 DMZ_INSTANCE
+--------------------------------------+-----------------------------------------------------+
| Property                             | Value                                               |
+--------------------------------------+-----------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                              |
| OS-EXT-AZ:availability_zone          | nova                                                |
| OS-EXT-SRV-ATTR:host                 | -                                                   |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                   |
| OS-EXT-SRV-ATTR:instance_name        | instance-0000000f                                   |
| OS-EXT-STS:power_state               | 0                                                   |
| OS-EXT-STS:task_state                | scheduling                                          |
| OS-EXT-STS:vm_state                  | building                                            |
| OS-SRV-USG:launched_at               | -                                                   |
| OS-SRV-USG:terminated_at             | -                                                   |
| accessIPv4                           |                                                     |
| accessIPv6                           |                                                     |
| adminPass                            | 4d7UsUJhSpBd                                        |
| config_drive                         |                                                     |
| created                              | 2014-04-10T16:29:25Z                                |
| flavor                               | m1.small (2)                                        |
| hostId                               |                                                     |
| id                                   | f281c349-d49c-4d6c-bf56-74f04f2e8aec                |
| image                                | myFirstImage (f22d3ab8-96a5-46db-a029-7d59156c8e31) |
| key_name                             | -                                                   |
| metadata                             | {}                                                  |
| name                                 | DMZ_INSTANCE                                        |
| os-extended-volumes:volumes_attached | []                                                  |
| progress                             | 0                                                   |
| security_groups                      | default                                             |
| status                               | BUILD                                               |
| tenant_id                            | b5890ba3fb234347ae317ca2f8358663                    |
| updated                              | 2014-04-10T16:29:25Z                                |
| user_id                              | 1dfcb72ef6a7428d8dd7300bc7f303d9                    |
+--------------------------------------+-----------------------------------------------------+

Check connectivity
------------------
nova console-log APPS_INSTANCE
...skip
Starting network...
udhcpc (v1.18.5) started
Sending discover...
Sending select for 10.241.0.2...
Lease of 10.241.0.2 obtained, lease time 86400
...skip

nova console-log DMZ_INSTANCE
...skip
Starting network...
udhcpc (v1.18.5) started
Sending discover...
Sending select for 10.242.0.2...
Lease of 10.242.0.2 obtained, lease time 86400
...skip

root@controller:~# nova list
+--------------------------------------+---------------+--------+------------+-------------+---------------------+
| ID                                   | Name          | Status | Task State | Power State | Networks            |
+--------------------------------------+---------------+--------+------------+-------------+---------------------+
| fc849bb9-54d3-4a9a-99a4-6346a6eef404 | APPS_INSTANCE | ACTIVE | -          | Running     | APPS_NET=10.241.0.2 |
| f281c349-d49c-4d6c-bf56-74f04f2e8aec | DMZ_INSTANCE  | ACTIVE | -          | Running     | DMZ_NET=10.242.0.2  |
+--------------------------------------+---------------+--------+------------+-------------+---------------------+


ping
---
Since we are not using floating IPs, you will only be able to ping the
instances from inside the router namespace.

# ip netns exec qrouter-b27d1a20-8a31-46d5-bdef-32a5ccf4ec91 \
    ping 10.241.0.2 -c 1
PING 10.241.0.2 (10.241.0.2) 56(84) bytes of data.
64 bytes from 10.241.0.2: icmp_seq=1 ttl=64 time=6.32 ms

--- 10.241.0.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 6.328/6.328/6.328/0.000 ms

# ping 10.241.0.2 -c 1
connect: Network is unreachable

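The "Network is unreachable" result follows from plain route lookup:
outside the router namespace the controller only has a route to its
directly connected 192.168.7.0/24, and 10.241.0.2 matches no route at
all. An illustrative route-lookup check (route list assumed from this
document's setup):

```python
import ipaddress

# Route visible outside the router namespace in this setup: only the
# directly connected management subnet (no route covers the tenant nets).
root_routes = [ipaddress.ip_network("192.168.7.0/24")]

target = ipaddress.ip_address("10.241.0.2")
reachable = any(target in net for net in root_routes)
print(reachable)  # False -> ping reports "Network is unreachable"
```

Inside the qrouter namespace the lookup succeeds because the router
holds directly connected routes for 10.241.0.0/22 and 10.242.0.0/22.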
The final Open vSwitch configs
------------------------------

Controller
---
root@controller:~# ovs-vsctl show
524a6c84-226d-427b-8efa-732ed7e7fa43
    Bridge "br-eth0"
        Port "eth0"
            Interface "eth0"
        Port "br-eth0"
            Interface "br-eth0"
                type: internal
        Port "phy-br-eth0"
            Interface "phy-br-eth0"
    Bridge br-tun
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "gre-2"
            Interface "gre-2"
                type: gre
                options: {in_key=flow, local_ip="192.168.7.2", out_key=flow, remote_ip="192.168.7.4"}
        Port br-tun
            Interface br-tun
                type: internal
    Bridge br-int
        Port "qr-58f3db35-f5"
            tag: 2
            Interface "qr-58f3db35-f5"
                type: internal
        Port "tap6e65f2e5-39"
            tag: 3
            Interface "tap6e65f2e5-39"
                type: internal
        Port "qr-9252ec29-7a"
            tag: 3
            Interface "qr-9252ec29-7a"
                type: internal
        Port "int-br-eth0"
            Interface "int-br-eth0"
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "tapcf2a0e68-6b"
            tag: 2
            Interface "tapcf2a0e68-6b"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port "qg-19f6d85f-a6"
            tag: 1
            Interface "qg-19f6d85f-a6"
                type: internal
    ovs_version: "2.0.0"


Compute
---
root@compute:~# ovs-vsctl show
99d365d2-f74e-40a8-b9a0-5bb60353675d
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "tapc2db0bfa-ae"
            tag: 1
            Interface "tapc2db0bfa-ae"
        Port "tap57fae225-16"
            tag: 2
            Interface "tap57fae225-16"
        Port "int-br-eth0"
            Interface "int-br-eth0"
    Bridge "br-eth0"
        Port "eth0"
            Interface "eth0"
        Port "phy-br-eth0"
            Interface "phy-br-eth0"
        Port "br-eth0"
            Interface "br-eth0"
                type: internal
    Bridge br-tun
        Port br-tun
            Interface br-tun
                type: internal
        Port "gre-1"
            Interface "gre-1"
                type: gre
                options: {in_key=flow, local_ip="192.168.7.4", out_key=flow, remote_ip="192.168.7.2"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    ovs_version: "2.0.0"


References
----------
http://developer.rackspace.com/blog/neutron-networking-l3-agent.html
diff --git a/meta-openstack/Documentation/README.networking_vlan b/meta-openstack/Documentation/README.networking_vlan
new file mode 100644
index 0000000..6d48e2b
--- /dev/null
+++ b/meta-openstack/Documentation/README.networking_vlan
@@ -0,0 +1,382 @@
Networking - VLAN network
=========================

Description
-----------
The vlan network will have the VMs on one of two vlan networks
(DMZ_SUBNET - 172.16.0.0/24, INSIDE_SUBNET - 192.168.100.0/24). We
will continue to use the management network (192.168.7.0/24) for
controller/compute communications. The dhcp-agent will provide the VMs
addresses within each subnet and within its provisioned ranges. This
type of network is more typical of a deployed network, since network
traffic can be contained within the assigned vlan.


Assumptions
-----------
It is assumed you have completed the steps described in
README.networking and have provisioned the host vSwitch as well as
created the br-eth0 bridges on the controller and compute nodes.

At this point you should be able to ping 192.168.7.4 from 192.168.7.2
and vice versa.

You have built your controller image including the cirros image (for
which you have already added the image to glance as myFirstImage).

You have run 'source /etc/nova/openrc'

Configuration updates
---------------------
On the host Open vSwitch add an IP on br-int for each subnet:
sudo ip address add 192.168.100.1/24 broadcast 192.168.100.255 dev br-int
sudo ip address add 172.16.0.1/24 broadcast 172.16.0.255 dev br-int

On the controller and (all) compute nodes you must edit the file
/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini

In the [OVS] section set
network_vlan_ranges = ph-eth0:200:200,ph-eth0:300:300
bridge_mappings = ph-eth0:br-eth0

(*** on compute nodes edit local_ip as well [192.168.7.4] ***)

Restart some services to allow these changes to take effect:
/etc/init.d/neutron-openvswitch-agent reload
(on controller)
/etc/init.d/neutron-server reload
/etc/init.d/neutron-dhcp-agent reload
(on compute)
/etc/init.d/nova-compute reload


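If you would rather script the [OVS] edits than make them by hand,
Python's configparser can generate the stanza; a sketch (it builds the
section in memory instead of editing the real
ovs_neutron_plugin.ini, so adapt it before use):

```python
import configparser
import io

# Stand-in for reading the real file at
# /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
cfg = configparser.ConfigParser()
cfg.read_string("[OVS]\n")

# The two settings from the text above
cfg["OVS"]["network_vlan_ranges"] = "ph-eth0:200:200,ph-eth0:300:300"
cfg["OVS"]["bridge_mappings"] = "ph-eth0:br-eth0"

buf = io.StringIO()
cfg.write(buf)
print(buf.getvalue())
```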
Create the net and subnet
-------------------------
neutron net-create --provider:physical_network=ph-eth0 \
    --provider:network_type=vlan --provider:segmentation_id=200 \
    --shared INSIDE_NET
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 587e29d0-eb89-4c0d-948b-845009380097 |
| name                      | INSIDE_NET                           |
| provider:network_type     | vlan                                 |
| provider:physical_network | ph-eth0                              |
| provider:segmentation_id  | 200                                  |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | b5890ba3fb234347ae317ca2f8358663     |
+---------------------------+--------------------------------------+

neutron net-create --provider:physical_network=ph-eth0 \
    --provider:network_type=vlan --provider:segmentation_id=300 \
    --shared DMZ_NET
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 498fa1f2-87de-4874-8ca9-f4ba3e394d2a |
| name                      | DMZ_NET                              |
| provider:network_type     | vlan                                 |
| provider:physical_network | ph-eth0                              |
| provider:segmentation_id  | 300                                  |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | b5890ba3fb234347ae317ca2f8358663     |
+---------------------------+--------------------------------------+

neutron subnet-create INSIDE_NET 192.168.100.0/24 \
    --name INSIDE_SUBNET --no-gateway \
    --host-route destination=0.0.0.0/0,nexthop=192.168.100.1 \
    --allocation-pool start=192.168.100.100,end=192.168.100.199
Created a new subnet:
+------------------+----------------------------------------------------------+
| Field            | Value                                                    |
+------------------+----------------------------------------------------------+
| allocation_pools | {"start": "192.168.100.100", "end": "192.168.100.199"}   |
| cidr             | 192.168.100.0/24                                         |
| dns_nameservers  |                                                          |
| enable_dhcp      | True                                                     |
| gateway_ip       |                                                          |
| host_routes      | {"destination": "0.0.0.0/0", "nexthop": "192.168.100.1"} |
| id               | 2c1a77aa-614c-4a97-9855-a62bb4b4d899                     |
| ip_version       | 4                                                        |
| name             | INSIDE_SUBNET                                            |
| network_id       | 587e29d0-eb89-4c0d-948b-845009380097                     |
| tenant_id        | b5890ba3fb234347ae317ca2f8358663                         |
+------------------+----------------------------------------------------------+

neutron subnet-create DMZ_NET 172.16.0.0/24 --name DMZ_SUBNET \
    --no-gateway --host-route destination=0.0.0.0/0,nexthop=172.16.0.1 \
    --allocation-pool start=172.16.0.100,end=172.16.0.199
Created a new subnet:
+------------------+-------------------------------------------------------+
| Field            | Value                                                 |
+------------------+-------------------------------------------------------+
| allocation_pools | {"start": "172.16.0.100", "end": "172.16.0.199"}      |
| cidr             | 172.16.0.0/24                                         |
| dns_nameservers  |                                                       |
| enable_dhcp      | True                                                  |
| gateway_ip       |                                                       |
| host_routes      | {"destination": "0.0.0.0/0", "nexthop": "172.16.0.1"} |
| id               | bfae1a19-e15f-4e5e-94f2-018f24abbc2e                  |
| ip_version       | 4                                                     |
| name             | DMZ_SUBNET                                            |
| network_id       | 498fa1f2-87de-4874-8ca9-f4ba3e394d2a                  |
| tenant_id        | b5890ba3fb234347ae317ca2f8358663                      |
+------------------+-------------------------------------------------------+

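As a quick sanity check on the two subnets just created, the allocation
pools and the default-route nexthops should all fall inside their
subnet's CIDR. A small Python sketch using the values from the tables
above:

```python
import ipaddress

# name: (cidr, allocation pool, default-route nexthop), as created above
subnets = {
    "INSIDE_SUBNET": ("192.168.100.0/24",
                      ("192.168.100.100", "192.168.100.199"),
                      "192.168.100.1"),
    "DMZ_SUBNET":    ("172.16.0.0/24",
                      ("172.16.0.100", "172.16.0.199"),
                      "172.16.0.1"),
}

for name, (cidr, (start, end), nexthop) in subnets.items():
    net = ipaddress.ip_network(cidr)
    # pool boundaries must be inside the subnet
    assert ipaddress.ip_address(start) in net
    assert ipaddress.ip_address(end) in net
    # nexthop must be reachable on-link for the host route to work
    assert ipaddress.ip_address(nexthop) in net
    print(name, "ok")
```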

Boot the image and test connectivity
------------------------------------
(note: with our current config you might only be able to run 2 instances
 at any one time, so you will end up juggling them to test connectivity)

nova boot --flavor=m1.small --image=myFirstImage \
    --nic net-id=587e29d0-eb89-4c0d-948b-845009380097 INSIDE_INSTANCE
+--------------------------------------+-----------------------------------------------------+
| Property                             | Value                                               |
+--------------------------------------+-----------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                              |
| OS-EXT-AZ:availability_zone          | nova                                                |
| OS-EXT-SRV-ATTR:host                 | -                                                   |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                   |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000009                                   |
| OS-EXT-STS:power_state               | 0                                                   |
| OS-EXT-STS:task_state                | scheduling                                          |
| OS-EXT-STS:vm_state                  | building                                            |
| OS-SRV-USG:launched_at               | -                                                   |
| OS-SRV-USG:terminated_at             | -                                                   |
| accessIPv4                           |                                                     |
| accessIPv6                           |                                                     |
| adminPass                            | 7itgDwsdY8d4                                        |
| config_drive                         |                                                     |
| created                              | 2014-04-10T14:31:21Z                                |
| flavor                               | m1.small (2)                                        |
| hostId                               |                                                     |
| id                                   | 630affe0-d497-4211-87bb-383254d60428                |
| image                                | myFirstImage (f22d3ab8-96a5-46db-a029-7d59156c8e31) |
| key_name                             | -                                                   |
| metadata                             | {}                                                  |
| name                                 | INSIDE_INSTANCE                                     |
| os-extended-volumes:volumes_attached | []                                                  |
| progress                             | 0                                                   |
| security_groups                      | default                                             |
| status                               | BUILD                                               |
| tenant_id                            | b5890ba3fb234347ae317ca2f8358663                    |
| updated                              | 2014-04-10T14:31:21Z                                |
| user_id                              | 1dfcb72ef6a7428d8dd7300bc7f303d9                    |
+--------------------------------------+-----------------------------------------------------+

nova boot --flavor=m1.small --image=myFirstImage \
    --nic net-id=587e29d0-eb89-4c0d-948b-845009380097 INSIDE_INSTANCE2
+--------------------------------------+-----------------------------------------------------+
| Property                             | Value                                               |
+--------------------------------------+-----------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                              |
| OS-EXT-AZ:availability_zone          | nova                                                |
| OS-EXT-SRV-ATTR:host                 | -                                                   |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                   |
| OS-EXT-SRV-ATTR:instance_name        | instance-0000000a                                   |
| OS-EXT-STS:power_state               | 0                                                   |
| OS-EXT-STS:task_state                | scheduling                                          |
| OS-EXT-STS:vm_state                  | building                                            |
| OS-SRV-USG:launched_at               | -                                                   |
| OS-SRV-USG:terminated_at             | -                                                   |
| accessIPv4                           |                                                     |
| accessIPv6                           |                                                     |
| adminPass                            | BF9p6tftS2xJ                                        |
| config_drive                         |                                                     |
| created                              | 2014-04-10T14:32:07Z                                |
| flavor                               | m1.small (2)                                        |
| hostId                               |                                                     |
| id                                   | ff94ee07-ae24-4785-9d51-26de2c23da60                |
| image                                | myFirstImage (f22d3ab8-96a5-46db-a029-7d59156c8e31) |
| key_name                             | -                                                   |
| metadata                             | {}                                                  |
| name                                 | INSIDE_INSTANCE2                                    |
| os-extended-volumes:volumes_attached | []                                                  |
| progress                             | 0                                                   |
| security_groups                      | default                                             |
| status                               | BUILD                                               |
| tenant_id                            | b5890ba3fb234347ae317ca2f8358663                    |
| updated                              | 2014-04-10T14:32:08Z                                |
| user_id                              | 1dfcb72ef6a7428d8dd7300bc7f303d9                    |
+--------------------------------------+-----------------------------------------------------+

root@controller:~# nova list
+--------------------------------------+------------------+--------+------------+-------------+----------------------------+
| ID                                   | Name             | Status | Task State | Power State | Networks                   |
+--------------------------------------+------------------+--------+------------+-------------+----------------------------+
| 630affe0-d497-4211-87bb-383254d60428 | INSIDE_INSTANCE  | ACTIVE | -          | Running     | INSIDE_NET=192.168.100.100 |
| ff94ee07-ae24-4785-9d51-26de2c23da60 | INSIDE_INSTANCE2 | ACTIVE | -          | Running     | INSIDE_NET=192.168.100.102 |
+--------------------------------------+------------------+--------+------------+-------------+----------------------------+

nova boot --flavor=m1.small --image=myFirstImage \
    --nic net-id=498fa1f2-87de-4874-8ca9-f4ba3e394d2a DMZ_INSTANCE
+--------------------------------------+-----------------------------------------------------+
| Property                             | Value                                               |
+--------------------------------------+-----------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                              |
| OS-EXT-AZ:availability_zone          | nova                                                |
| OS-EXT-SRV-ATTR:host                 | -                                                   |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                   |
| OS-EXT-SRV-ATTR:instance_name        | instance-0000000d                                   |
| OS-EXT-STS:power_state               | 0                                                   |
| OS-EXT-STS:task_state                | scheduling                                          |
| OS-EXT-STS:vm_state                  | building                                            |
| OS-SRV-USG:launched_at               | -                                                   |
| OS-SRV-USG:terminated_at             | -                                                   |
| accessIPv4                           |                                                     |
| accessIPv6                           |                                                     |
| adminPass                            | SvzSpnmB6mXJ                                        |
| config_drive                         |                                                     |
| created                              | 2014-04-10T14:42:53Z                                |
| flavor                               | m1.small (2)                                        |
| hostId                               |                                                     |
| id                                   | 0dab2712-5f1d-4559-bfa4-d09c6304418c                |
| image                                | myFirstImage (f22d3ab8-96a5-46db-a029-7d59156c8e31) |
| key_name                             | -                                                   |
| metadata                             | {}                                                  |
| name                                 | DMZ_INSTANCE                                        |
| os-extended-volumes:volumes_attached | []                                                  |
| progress                             | 0                                                   |
| security_groups                      | default                                             |
| status                               | BUILD                                               |
| tenant_id                            | b5890ba3fb234347ae317ca2f8358663                    |
| updated                              | 2014-04-10T14:42:54Z                                |
| user_id                              | 1dfcb72ef6a7428d8dd7300bc7f303d9                    |
+--------------------------------------+-----------------------------------------------------+

nova boot --flavor=m1.small --image=myFirstImage \
    --nic net-id=498fa1f2-87de-4874-8ca9-f4ba3e394d2a DMZ_INSTANCE2
...

nova console-log INSIDE_INSTANCE2
---
...skip
Starting network...
udhcpc (v1.18.5) started
Sending discover...
Sending select for 192.168.100.102...
...skip

ping
---

You should also be able to ping instances on the same subnet, but not
those on the other subnet. The controller and compute nodes can not
ping the instances on either network (with metadata implemented the
controller should be able to, but currently the metadata agent is not
available).

dump-flows
----------
(note the 'vlan' tags)
root@compute:~# ovs-ofctl dump-flows br-int
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=1640.378s, table=0, n_packets=3, n_bytes=788, idle_age=1628, priority=3,in_port=6,dl_vlan=300 actions=mod_vlan_vid:2,NORMAL
 cookie=0x0, duration=2332.714s, table=0, n_packets=6, n_bytes=1588, idle_age=2274, priority=3,in_port=6,dl_vlan=200 actions=mod_vlan_vid:1,NORMAL
 cookie=0x0, duration=2837.737s, table=0, n_packets=22, n_bytes=1772, idle_age=1663, priority=2,in_port=6 actions=drop
 cookie=0x0, duration=2837.976s, table=0, n_packets=53, n_bytes=5038, idle_age=1535, priority=1 actions=NORMAL

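The two priority=3 rules above are what rewrite the provider vlan IDs
(200/300) to the local br-int tags (1/2). A small Python sketch that
extracts that mapping from the flow lines (copied from the dump above):

```python
import re

# The vlan-rewrite rules from `ovs-ofctl dump-flows br-int` above,
# truncated to the match/action part
flows = [
    "priority=3,in_port=6,dl_vlan=300 actions=mod_vlan_vid:2,NORMAL",
    "priority=3,in_port=6,dl_vlan=200 actions=mod_vlan_vid:1,NORMAL",
]

mapping = {}
for flow in flows:
    # provider vlan id -> local br-int tag
    m = re.search(r"dl_vlan=(\d+) actions=mod_vlan_vid:(\d+)", flow)
    if m:
        mapping[int(m.group(1))] = int(m.group(2))

print(mapping)  # provider vlan -> local tag
```

The mapping matches the 'tag:' values shown on the tap ports in the
ovs-vsctl output below.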


The final Open vSwitch configs
------------------------------

Controller
---
root@controller:~# ovs-vsctl show
524a6c84-226d-427b-8efa-732ed7e7fa43
    Bridge br-tun
        Port "gre-2"
            Interface "gre-2"
                type: gre
                options: {in_key=flow, local_ip="192.168.7.2", out_key=flow, remote_ip="192.168.7.4"}
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge "br-eth0"
        Port "eth0"
            Interface "eth0"
        Port "br-eth0"
            Interface "br-eth0"
                type: internal
        Port "phy-br-eth0"
            Interface "phy-br-eth0"
    Bridge br-int
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "tapafbbdd15-e7"
            tag: 1
            Interface "tapafbbdd15-e7"
                type: internal
        Port "int-br-eth0"
            Interface "int-br-eth0"
        Port "tapa50c1a18-34"
            tag: 2
            Interface "tapa50c1a18-34"
                type: internal
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "2.0.0"


Compute
---
root@compute:~# ovs-vsctl show
99d365d2-f74e-40a8-b9a0-5bb60353675d
    Bridge br-tun
        Port br-tun
            Interface br-tun
                type: internal
        Port "gre-1"
            Interface "gre-1"
                type: gre
                options: {in_key=flow, local_ip="192.168.7.4", out_key=flow, remote_ip="192.168.7.2"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
        Port "int-br-eth0"
            Interface "int-br-eth0"
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "tap78e1ac37-6c"
            tag: 2
            Interface "tap78e1ac37-6c"
        Port "tap315398a4-cd"
            tag: 1
            Interface "tap315398a4-cd"
    Bridge "br-eth0"
        Port "phy-br-eth0"
            Interface "phy-br-eth0"
        Port "eth0"
            Interface "eth0"
        Port "br-eth0"
            Interface "br-eth0"
                type: internal
    ovs_version: "2.0.0"


References
----------
http://developer.rackspace.com/blog/neutron-networking-vlan-provider-networks.html
diff --git a/meta-openstack/Documentation/README.spice b/meta-openstack/Documentation/README.spice
new file mode 100644
index 0000000..a6b93b2
--- /dev/null
+++ b/meta-openstack/Documentation/README.spice
@@ -0,0 +1,82 @@
OpenStack offers two types of console support: VNC and SPICE. The VNC
protocol is fairly limited, lacking support for multiple monitors,
bi-directional audio, reliable cut+paste, video streaming and more.
SPICE is a newer protocol which aims to address the limitations of VNC
and to provide good remote desktop support.

The controller will run both the VNC proxy and the SPICE HTML5 proxy.
The nova-spicehtml5proxy service communicates directly with the
hypervisor process using SPICE.

OpenStack's Dashboard uses a SPICE HTML5 widget in its console tab
to communicate with the nova-spicehtml5proxy service. Since both proxies
are running, the Dashboard will automatically connect to whichever
console is provided by the compute node.

Another way to access the SPICE console is to run the following
command from the controller:

    nova get-spice-console myinstance spice-html5

This prints a URL that gives you direct access to the console
(instead of going through Horizon).

To enable or disable VNC/SPICE, modify /etc/nova/nova.conf on the
compute node.

The options for configuring SPICE as the console for OpenStack Compute
can be found below.

---------------------------------------------------------------------------------
Configuration option=Default value         (Type) Description

agent_enabled=True                         (BoolOpt) enable spice guest agent
                                           support

enabled=False                              (BoolOpt) enable spice related
                                           features

html5proxy_base_url=                       (StrOpt) location of spice html5
  http://127.0.0.1:6080/spice_auto.html    console proxy, in the form
                                           "http://127.0.0.1:6080/spice_auto.html"

keymap=en-us                               (StrOpt) keymap for spice

server_listen=127.0.0.1                    (StrOpt) IP address on which instance
                                           spice server should listen

server_proxyclient_address=127.0.0.1       (StrOpt) the address to which proxy
                                           clients (like nova-spicehtml5proxy)
                                           should connect
---------------------------------------------------------------------------------

Combinations/behaviour from Compute:

1. VNC will be provided

vnc_enabled=True
enabled=True
agent_enabled=True

2. SPICE will be provided

vnc_enabled=False
enabled=True
agent_enabled=True

3. VNC will be provided

vnc_enabled=True
enabled=False
agent_enabled=False

4. No console will be provided

vnc_enabled=False
enabled=False
agent_enabled=False

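The four combinations above reduce to a simple rule: VNC wins whenever
it is enabled, and SPICE is used only when VNC is off. A small Python
sketch of the documented behaviour (not nova's actual selection code):

```python
def console_type(vnc_enabled, spice_enabled):
    """Console chosen by the compute node, per the combinations above."""
    # VNC takes precedence whenever it is enabled (combinations 1 and 3)
    if vnc_enabled:
        return "vnc"
    # SPICE only when VNC is off and SPICE is on (combination 2)
    if spice_enabled:
        return "spice"
    # neither enabled: no console at all (combination 4)
    return None

for vnc, spice in [(True, True), (False, True), (True, False), (False, False)]:
    print(vnc, spice, "->", console_type(vnc, spice))
```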
After nova.conf is changed on the compute node, restart the
nova-compute service. If an instance was running beforehand, it will
need to be restarted (rebooted, soft or hard) to pick up the new
console.

diff --git a/meta-openstack/Documentation/README.tempest b/meta-openstack/Documentation/README.tempest
new file mode 100644
index 0000000..884a28a
--- /dev/null
+++ b/meta-openstack/Documentation/README.tempest
@@ -0,0 +1,55 @@
# enable in local.conf via:
OPENSTACK_CONTROLLER_EXTRA_INSTALL += "tempest keystone-tests glance-tests cinder-tests \
                                       horizon-tests heat-tests neutron-tests nova-tests ceilometer-tests"

# For the tempest built-in tests:
---------------------------------
  # edit /etc/tempest/tempest.conf to suit details of the system
  % cd /usr/lib/python2.7/site-packages
  % nosetests --verbose tempest/api

OR (less reliable)

  % cd /usr/lib/python2.7/site-packages
  % cp /etc/tempest/.testr.conf .
  % testr init
  % testr run --parallel tempest

# For individual package tests
------------------------------
# typical:
  % cd /usr/lib/python2.7/site-packages/<project>
  % /etc/<project>/run_tests.sh --verbose -N

# Cinder:
# Notes: tries to run setup.py; --debug works around part of the issue
  % cd /usr/lib/python2.7/site-packages/
  % nosetests --verbose cinder/tests

# Neutron:
# Notes: use nosetests directly
  % cd /usr/lib/python2.7/site-packages/
  % nosetests --verbose neutron/tests

# Nova:
# Notes: vi /usr/lib/python2.7/site-packages/nova/tests/conf_fixture.py
# and modify the api-paste.ini reference to be /etc/nova/api-paste.ini;
# the conf file isn't being read properly, so some tests will fail to run
  % cd /
  % nosetests --verbose /usr/lib/python2.7/site-packages/nova/tests

# keystone:
#

# Other Notes:
--------------

  1) testr: not so good; it can be missing, as some tools are. Use
     nosetests directly instead.
  2) all run_tests.sh scripts are provided, even though they are similar