From 649327f80dc331943d448e87f73ecaadcc78a22a Mon Sep 17 00:00:00 2001 From: Bruce Ashfield Date: Fri, 23 May 2014 23:49:49 -0400 Subject: docs: move more READMEs into Documentation Signed-off-by: Bruce Ashfield --- meta-openstack/Documentation/README.networking | 208 ++++++++++ .../Documentation/README.networking_flat | 249 ++++++++++++ .../Documentation/README.networking_l3_router | 450 +++++++++++++++++++++ .../Documentation/README.networking_vlan | 382 +++++++++++++++++ meta-openstack/Documentation/README.spice | 82 ++++ meta-openstack/Documentation/README.tempest | 55 +++ meta-openstack/README.networking | 208 ---------- meta-openstack/README.networking_flat | 249 ------------ meta-openstack/README.networking_l3_router | 450 --------------------- meta-openstack/README.networking_vlan | 382 ----------------- meta-openstack/README.spice | 82 ---- meta-openstack/README.swift | 447 -------------------- meta-openstack/README.tempest | 55 --- 13 files changed, 1426 insertions(+), 1873 deletions(-) create mode 100644 meta-openstack/Documentation/README.networking create mode 100644 meta-openstack/Documentation/README.networking_flat create mode 100644 meta-openstack/Documentation/README.networking_l3_router create mode 100644 meta-openstack/Documentation/README.networking_vlan create mode 100644 meta-openstack/Documentation/README.spice create mode 100644 meta-openstack/Documentation/README.tempest delete mode 100644 meta-openstack/README.networking delete mode 100644 meta-openstack/README.networking_flat delete mode 100644 meta-openstack/README.networking_l3_router delete mode 100644 meta-openstack/README.networking_vlan delete mode 100644 meta-openstack/README.spice delete mode 100644 meta-openstack/README.swift delete mode 100644 meta-openstack/README.tempest (limited to 'meta-openstack') diff --git a/meta-openstack/Documentation/README.networking b/meta-openstack/Documentation/README.networking new file mode 100644 index 0000000..2299de3 --- /dev/null +++ b/meta-openstack/Documentation/README.networking @@ -0,0 +1,208 @@ +Networking +============== + +Description +----------- +OpenStack provides tools to setup many different network topologies using +tunnels, Vlans, GREs... the list goes on. In this document we describe how to +setup 3 basic network configurations which can be used as building blocks for a +larger network deployment. Going through these setups also tests that the +Open vSwitch plugin and DHCP and l3 agents are operating correctly. + + +Assumptions +----------- +The following assumes you have built the controller and compute nodes for the +qemux86-64 machine as described in README.setup and have been able to spin-up an +instance successfully. + + +Prerequisites +------------- + +1. Following the instructions in README.setup to spin-up your controller and +compute nodes in VMs will result in NATed tap interfaces on the host. While this +is fine for basic use it will not allow you to use things like GRE tunnels as +the packet will appear to be coming from the host when it arrives at the other +end of the tunnel and will therefore be rejected (since the src IP will not +match the GRE's remote_ip). To get around this we must setup an Open vSwitch +bridge on the host and attach the taps. Open vSwitch must therefore be installed +and running on the host. + +On Ubuntu systems this may be done via: +sudo apt-get install openvswitch-switch openvswitch-common + +2. 
Also since we will be using an Open vSwitch on the host we need to ensure the +controller and compute network interfaces have different MAC addresses. We +therefor must modify the runqemu script as per the following: + +--- a/scripts/runqemu-internal ++++ b/scripts/runqemu-internal +@@ -252,7 +252,7 @@ else + KERNEL_NETWORK_CMD="ip=192.168.7.$n2::192.168.7.$n1:255.255.255.0" + QEMU_TAP_CMD="-net tap,vlan=0,ifname=$TAP,script=no,downscript=no" + if [ "$KVM_ACTIVE" = "yes" ]; then +- QEMU_NETWORK_CMD="-net nic,model=virtio $QEMU_TAP_CMD,vhost=on" ++ QEMU_NETWORK_CMD="-net nic,macaddr=52:54:00:12:34:$(printf '%x' $((RANDOM % 170))),model=virtio $QEMU_TAP_CMD,vhost=on" + DROOT="/dev/vda" + ROOTFS_OPTIONS="-drive file=$ROOTFS,if=virtio" + else +--- +this will not guarantee distinct MAC addresses but most of the time they will be. + + +Host Open vSwitch bridge +------------------------ +As per the prerequisites we need to setup a bridge on the host to avoid NATed +tap interfaces. After you have used 'runqemu' to boot your controller and +compute nodes perform the following instructions on your host + +(I will assume tap0 - controller, tap1 - compute, use 'ip a s' or 'ifconfig' to +identify the tap interfaces) + +sudo ovs-vsctl add-br br-int +sudo ovs-vsctl add-port br-int tap0 +sudo ovs-vsctl add-port br-int tap1 +sudo ip address del 192.168.7.1/24 dev tap0 +sudo ip address del 192.168.7.3/24 dev tap1 +sudo ip address add 192.168.7.1/24 broadcast 192.168.7.255 dev br-int +sudo route del 192.168.7.2 tap0 +sudo route del 192.168.7.4 tap1 + + +NOTE: Any time you reboot the controller or compute nodes you will +want to remove and re-add the port via: +# ovs-vsctl del-port br-int tapX +# ovs-vsctl add-port br-int tapX +# ip address del 192.168.7.Y/24 dev tapX +(where X and Y are substituted accordingly) +This will also ensure the ARP tables in the vSwitch are updated since +chances are the MAC address will have changed on a reboot due to the +MAC randomizer of prerequisite 2. + + +Controller/Compute network setup +-------------------------------- +The neutron Open vSwitch plugin expects several bridges to exist on +the controller and compute nodes. When the controller and compute +nodes are first booted however these do not exist and depending on how +you are setting up your network this is subject to change and as such +is not 'baked' in to our images. This would normally be setup by +cloud-init, chef, cobbler or some other deployment scripts. Here we +will accomplish it by hand. + +On first boot your network will look like this: (controller node) +---snip--- +root@controller:~# ip a show eth0 +2: eth0: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 + link/ether 52:54:00:12:34:a9 brd ff:ff:ff:ff:ff:ff + inet 192.168.7.2/24 brd 192.168.7.255 scope global eth0 + valid_lft forever preferred_lft forever + inet6 fe80::5054:ff:fe12:34a9/64 scope link + valid_lft forever preferred_lft forever + +root@controller:~# ovs-vsctl show +524a6c84-226d-427b-8efa-732ed7e7fa43 + Bridge br-int + Port patch-tun + Interface patch-tun + type: patch + options: {peer=patch-int} + Port br-int + Interface br-int + type: internal + Bridge br-tun + Port br-tun + Interface br-tun + type: internal + Port patch-int + Interface patch-int + type: patch + options: {peer=patch-tun} + ovs_version: "2.0.0" +---snip--- + +To complete the expected network configuration you must add a bridge +which will contain the physical interface as one of its ports and move +the IP address from the interface to the bridge. 
The following will +accomplish this: + +ovs-vsctl add-br br-eth0 +ovs-vsctl add-port br-eth0 eth0 +ip address del 192.168.7.2/24 dev eth0 +ip address add 192.168.7.2/24 broadcast 192.168.7.255 dev br-eth0 +route add default gw 192.168.7.1 + +And now you network will look like the following: +---snip--- +root@controller:~# ip a s +...skip +2: eth0: mtu 1500 qdisc pfifo_fast master ovs-system state UP group default qlen 1000 + link/ether 52:54:00:12:34:a9 brd ff:ff:ff:ff:ff:ff +...skip +7: br-eth0: mtu 1500 qdisc noqueue state UNKNOWN group default + link/ether ae:f8:be:7c:78:42 brd ff:ff:ff:ff:ff:ff + inet 192.168.7.2/24 scope global br-eth0 + valid_lft forever preferred_lft forever + inet6 fe80::e453:1fff:fec1:79ff/64 scope link + valid_lft forever preferred_lft forever + +root@controller:~# ovs-vsctl show +524a6c84-226d-427b-8efa-732ed7e7fa43 + Bridge "br-eth0" + Port "eth0" + Interface "eth0" + Port "br-eth0" + Interface "br-eth0" + type: internal + Bridge br-int + Port patch-tun + Interface patch-tun + type: patch + options: {peer=patch-int} + Port br-int + Interface br-int + type: internal + Bridge br-tun + Port br-tun + Interface br-tun + type: internal + Port patch-int + Interface patch-int + type: patch + options: {peer=patch-tun} + ovs_version: "2.0.0" + +At this point you will want to restart the neutron network services + +(controller) +/etc/init.d/neutron-openvswitch-agent stop +/etc/init.d/neutron-dhcp-agent stop +/etc/init.d/neutron-server reload +/etc/init.d/neutron-dhcp-agent start +/etc/init.d/neutron-openvswitch-agent start + +(Compute) +/etc/init.d/neutron-openvswitch-agent stop +/etc/init.d/nova-compute reload +/etc/init.d/neutron-openvswitch-agent start + + +NOTE: on a reboot the Open vSwitch configuration will remain but at +this point in time you will need to manually move the IP address from +the eth0 interface to the br-eth0 interface using + +ip address del 192.168.7.2/24 dev eth0 +ip address add 192.168.7.2/24 broadcast 192.168.7.255 dev br-eth0 + +With this network configuration on the controller and similar +configuration on the compute node (just replace 192.168.7.2 with +192.168.7.4) everything is ready to configure any of the 3 network +sample configurations. + +Further reading +--------------- + +README.networking_flat +README.networking_vlan +README.networking_l3_router \ No newline at end of file diff --git a/meta-openstack/Documentation/README.networking_flat b/meta-openstack/Documentation/README.networking_flat new file mode 100644 index 0000000..ab18f6f --- /dev/null +++ b/meta-openstack/Documentation/README.networking_flat @@ -0,0 +1,249 @@ +Networking - FLAT network +========================= + +Description +----------- +The flat network will have the VMs share the management network +(192.168.7.0/24). The dhcp-agent will provide the VMs addresses +within the subnet and within its provisioned range. This type of +network will not typically be deployed as everything is accessible by +everything else (VMs can access VMs and the compute and controller +nodes) + + +Assumptions +----------- +It is assumed you have completed the steps described in +README.networking and have provisioned the host vSwitch as well as +created the br-eth0 bridges on the controller and compute nodes. + +At this point you should be able to ping 192.168.7.4 from 192.168.7.4 +and vise versa. + +You have built your controller image including the cirros image (for +which you have already added the image to glance as myFirstImage). 
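+If the image is not yet registered, a minimal sketch of the upload
+(assuming a locally downloaded cirros qcow2 file; the filename and the
+exact client options may differ with your glance client version) is:
+
+glance image-create --name myFirstImage --is-public true \
+    --disk-format qcow2 --container-format bare \
+    --file cirros-0.3.1-x86_64-disk.img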
+ +You have run 'source /etc/nova/openrc' + +Configuration updates +--------------------- +On the controller and (all) compute nodes you must edit the file +/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini + +In the [OVS] section set +network_vlan_ranges = ph-eth0:1:1 +bridge_mappings = ph-eth0:br-eth0 + +(*** on compute nodes edit local_ip as well [192.168.7.4]***) + +Restart some services to allow these changes to take effect: +/etc/init.d/neutron-openvswitch-agent reload +(on controller) +/etc/init.d/neutron-server reload +/etc/init.d/neutron-dhcp-agent reload +(on compute) +/etc/init.d/nova-compute reload + + +Create the net and subnet +------------------------- +neutron net-create --provider:physical_network=ph-eth0 \ + --provider:network_type=flat \ + --shared MY_FLAT_NET +Created a new network: ++---------------------------+--------------------------------------+ +| Field | Value | ++---------------------------+--------------------------------------+ +| admin_state_up | True | +| id | 3263aa7f-b86c-4ad3-a28c-c78d4c711583 | +| name | MY_FLAT_NET | +| provider:network_type | flat | +| provider:physical_network | ph-eth0 | +| provider:segmentation_id | | +| shared | True | +| status | ACTIVE | +| subnets | | +| tenant_id | b5890ba3fb234347ae317ca2f8358663 | ++---------------------------+--------------------------------------+ + + +neutron subnet-create MY_FLAT_NET 192.168.7.0/24 --name MY_FLAT_SUBNET \ + --no-gateway --host-route destination=0.0.0.0/0,nexthop=192.168.7.1 \ + --allocation-pool start=192.168.7.230,end=192.168.7.234 +Created a new subnet: ++------------------+--------------------------------------------------------+ +| Field | Value | ++------------------+--------------------------------------------------------+ +| allocation_pools | {"start": "192.168.7.230", "end": "192.168.7.234"} | +| cidr | 192.168.7.0/24 | +| dns_nameservers | | +| enable_dhcp | True | +| gateway_ip | | +| host_routes | {"destination": "0.0.0.0/0", "nexthop": "192.168.7.1"} | +| id | bfa99d99-2ba5-47e9-b71e-0bd8a2961e08 | +| ip_version | 4 | +| name | MY_FLAT_SUBNET | +| network_id | 3263aa7f-b86c-4ad3-a28c-c78d4c711583 | +| tenant_id | b5890ba3fb234347ae317ca2f8358663 | ++------------------+--------------------------------------------------------+ + +Boot the image and test connectivity +------------------------------------ +nova boot --image myFirstImage --flavor m1.small \ + --nic net-id=3263aa7f-b86c-4ad3-a28c-c78d4c711583 myinstance ++--------------------------------------+-----------------------------------------------------+ +| Property | Value | ++--------------------------------------+-----------------------------------------------------+ +| OS-DCF:diskConfig | MANUAL | +| OS-EXT-AZ:availability_zone | nova | +| OS-EXT-SRV-ATTR:host | - | +| OS-EXT-SRV-ATTR:hypervisor_hostname | - | +| OS-EXT-SRV-ATTR:instance_name | instance-00000003 | +| OS-EXT-STS:power_state | 0 | +| OS-EXT-STS:task_state | scheduling | +| OS-EXT-STS:vm_state | building | +| OS-SRV-USG:launched_at | - | +| OS-SRV-USG:terminated_at | - | +| accessIPv4 | | +| accessIPv6 | | +| adminPass | 7Qe9nFekCjYD | +| config_drive | | +| created | 2014-04-10T04:13:38Z | +| flavor | m1.small (2) | +| hostId | | +| id | f85da1da-c318-49fb-8da9-c07644400d4c | +| image | myFirstImage (1da089b1-164d-45d6-9b6c-002f3edb8a7b) | +| key_name | - | +| metadata | {} | +| name | myinstance | +| os-extended-volumes:volumes_attached | [] | +| progress | 0 | +| security_groups | default | +| status | BUILD | +| tenant_id | 
b5890ba3fb234347ae317ca2f8358663 | +| updated | 2014-04-10T04:13:38Z | +| user_id | 1dfcb72ef6a7428d8dd7300bc7f303d9 | ++--------------------------------------+-----------------------------------------------------+ + +nova list ++--------------------------------------+------------+--------+------------+-------------+---------------------------+ +| ID | Name | Status | Task State | Power State | Networks | ++--------------------------------------+------------+--------+------------+-------------+---------------------------+ +| f85da1da-c318-49fb-8da9-c07644400d4c | myinstance | ACTIVE | - | Running | MY_FLAT_NET=192.168.7.231 | ++--------------------------------------+------------+--------+------------+-------------+---------------------------+ + +nova console-log myinstance +--- +...skip +Starting logging: OK +Initializing random number generator... done. +Starting network... +udhcpc (v1.18.5) started +Sending discover... +Sending select for 192.168.7.231... +Lease of 192.168.7.231 obtained, lease time 86400 +deleting routers +...skip + +ping +--- +root@controller:~# ping -c 1 192.168.7.231 +PING 192.168.7.231 (192.168.7.231) 56(84) bytes of data. +64 bytes from 192.168.7.231: icmp_seq=1 ttl=64 time=2.98 ms + +--- 192.168.7.231 ping statistics --- +1 packets transmitted, 1 received, 0% packet loss, time 0ms +rtt min/avg/max/mdev = 2.988/2.988/2.988/0.000 ms + +You should also be able to ping the compute or controller or other VMs +(if you start them) from within a VM. Pinging targets outside the +subnet requires that you ensure the various interfaces, such as eth0 +have promisc on 'ip link set eth0 promisc on' + +The final Open vSwitch configs +------------------------------ + +Controller +--- +root@controller:~# ovs-vsctl show +524a6c84-226d-427b-8efa-732ed7e7fa43 + Bridge "br-eth0" + Port "eth0" + Interface "eth0" + Port "br-eth0" + Interface "br-eth0" + type: internal + Port "phy-br-eth0" + Interface "phy-br-eth0" + Bridge br-int + Port "tap549fb0c7-1a" + tag: 1 + Interface "tap549fb0c7-1a" + type: internal + Port "int-br-eth0" + Interface "int-br-eth0" + Port patch-tun + Interface patch-tun + type: patch + options: {peer=patch-int} + Port br-int + Interface br-int + type: internal + Bridge br-tun + Port "gre-2" + Interface "gre-2" + type: gre + options: {in_key=flow, local_ip="192.168.7.2", out_key=flow, remote_ip="192.168.7.4"} + Port br-tun + Interface br-tun + type: internal + Port patch-int + Interface patch-int + type: patch + options: {peer=patch-tun} + ovs_version: "2.0.0" + + +Compute +--- +root@compute:~# ovs-vsctl show +99d365d2-f74e-40a8-b9a0-5bb60353675d + Bridge br-tun + Port "gre-1" + Interface "gre-1" + type: gre + options: {in_key=flow, local_ip="192.168.7.4", out_key=flow, remote_ip="192.168.7.2"} + Port br-tun + Interface br-tun + type: internal + Port patch-int + Interface patch-int + type: patch + options: {peer=patch-tun} + Bridge br-int + Port br-int + Interface br-int + type: internal + Port "int-br-eth0" + Interface "int-br-eth0" + Port patch-tun + Interface patch-tun + type: patch + options: {peer=patch-int} + Port "tap93a74250-ef" + tag: 1 + Interface "tap93a74250-ef" + Bridge "br-eth0" + Port "phy-br-eth0" + Interface "phy-br-eth0" + Port "eth0" + Interface "eth0" + Port "br-eth0" + Interface "br-eth0" + type: internal + ovs_version: "2.0.0" + + +References +---------- +http://developer.rackspace.com/blog/neutron-networking-simple-flat-network.html \ No newline at end of file diff --git a/meta-openstack/Documentation/README.networking_l3_router 
b/meta-openstack/Documentation/README.networking_l3_router new file mode 100644 index 0000000..a16f8c4 --- /dev/null +++ b/meta-openstack/Documentation/README.networking_l3_router @@ -0,0 +1,450 @@ +Networking - l3 router +========================= + +Description +----------- +Using provider networks (such as we did for flat and vlan usecases) +does not scale to large deployments, their downsides become quickly +apparent. The l3-agent provides the ability to create routers that can +handle routing between directly connected LAN interfaces and a single +WAN interface. + +Here we setup a virtual router with a connection to a provider network +(vlan) and 2 attached subnets. We don't use floating IPs for this +demo. + + +Assumptions +----------- +It is assumed you have completed the steps described in +README.networking and have provisioned the host vSwitch as well as +created the br-eth0 bridges on the controller and compute nodes. + +At this point you should be able to ping 192.168.7.4 from 192.168.7.4 +and vise versa. + +You have built your controller image including the cirros image (for +which you have already added the image to glance as myFirstImage). + +You have run 'source /etc/nova/openrc' + +Configuration updates +--------------------- +On the host Open vSwitch add an IP for 192.168.100.1/22 +sudo ip address add 192.168.100.1/22 broadcast 192.168.255.255 dev br-int + +On the controller and (all) compute nodes you must edit the file +/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini + +In the [OVS] section set +network_vlan_ranges = ph-eth0:1998:1998 +bridge_mappings = ph-eth0:br-eth0 + +(*** on compute nodes edit local_ip as well [192.168.7.4]***) + +Restart some services to allow these changes to take effect: +/etc/init.d/neutron-openvswitch-agent reload +(on controller) +/etc/init.d/neutron-server reload +/etc/init.d/neutron-dhcp-agent reload +(on compute) +/etc/init.d/nova-compute reload + + +** edit /etc/neutron/l3-agent.ini +use_namespaces = True +external_network_bridge = + +/etc/init.d/neutron-l3-agent restart + + +Create the provider network +--------------------------- +neutron net-create --provider:physical_network=ph-eth0 \ + --provider:network_type=vlan --provider:segmentation_id=1998 \ + --shared --router:external=true GATEWAY_NET + +neutron subnet-create GATEWAY_NET 192.168.100.0/22 \ + --name GATEWAY_SUBNET --gateway=192.168.100.1 \ + --allocation-pool start=192.168.101.1,end=192.168.103.254 + + +Create the router +----------------- +neutron router-create NEUTRON-ROUTER +Created a new router: ++-----------------------+--------------------------------------+ +| Field | Value | ++-----------------------+--------------------------------------+ +| admin_state_up | True | +| external_gateway_info | | +| id | b27d1a20-8a31-46d5-bdef-32a5ccf4ec91 | +| name | NEUTRON-ROUTER | +| status | ACTIVE | +| tenant_id | b5890ba3fb234347ae317ca2f8358663 | ++-----------------------+--------------------------------------+ + +neutron router-gateway-set NEUTRON-ROUTER GATEWAY_NET +Set gateway for router NEUTRON-ROUTER + +Inspect the created network namespaces +-------------------------------------- +root@controller:~# ip netns +qrouter-b27d1a20-8a31-46d5-bdef-32a5ccf4ec91 +qdhcp-498fa1f2-87de-4874-8ca9-f4ba3e394d2a + +ip netns exec qrouter-b27d1a20-8a31-46d5-bdef-32a5ccf4ec91 ip a +1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default + link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 + inet 127.0.0.1/8 scope host lo + valid_lft forever preferred_lft forever + inet6 ::1/128 
scope host + valid_lft forever preferred_lft forever +2: sit0: mtu 1480 qdisc noop state DOWN group default + link/sit 0.0.0.0 brd 0.0.0.0 +20: qg-19f6d85f-a6: mtu 1500 qdisc noqueue state UNKNOWN group default + link/ether fa:16:3e:b8:1e:9d brd ff:ff:ff:ff:ff:ff + inet 192.168.101.1/22 brd 192.168.103.255 scope global qg-19f6d85f-a6 + valid_lft forever preferred_lft forever + inet6 fe80::f816:3eff:feb8:1e9d/64 scope link + valid_lft forever preferred_lft forever + + +Attach tenant networks +---------------------- +neutron net-create --provider:network_type=gre --provider:segmentation_id=10 \ + --shared APPS_NET +Created a new network: ++---------------------------+--------------------------------------+ +| Field | Value | ++---------------------------+--------------------------------------+ +| admin_state_up | True | +| id | 52f4549f-aeed-4fcf-997b-4349f591cd5f | +| name | APPS_NET | +| provider:network_type | gre | +| provider:physical_network | | +| provider:segmentation_id | 10 | +| shared | True | +| status | ACTIVE | +| subnets | | +| tenant_id | b5890ba3fb234347ae317ca2f8358663 | ++---------------------------+--------------------------------------+ + +neutron net-create --provider:network_type=gre --provider:segmentation_id=20 \ + --shared DMZ_NET +Created a new network: ++---------------------------+--------------------------------------+ +| Field | Value | ++---------------------------+--------------------------------------+ +| admin_state_up | True | +| id | eeb07b09-4b4a-4c2c-9060-0b8e414a9279 | +| name | DMZ_NET | +| provider:network_type | gre | +| provider:physical_network | | +| provider:segmentation_id | 20 | +| shared | True | +| status | ACTIVE | +| subnets | | +| tenant_id | b5890ba3fb234347ae317ca2f8358663 | ++---------------------------+--------------------------------------+ + +neutron subnet-create APPS_NET 10.241.0.0/22 --name APPS_SUBNET +Created a new subnet: ++------------------+------------------------------------------------+ +| Field | Value | ++------------------+------------------------------------------------+ +| allocation_pools | {"start": "10.241.0.2", "end": "10.241.3.254"} | +| cidr | 10.241.0.0/22 | +| dns_nameservers | | +| enable_dhcp | True | +| gateway_ip | 10.241.0.1 | +| host_routes | | +| id | 45e7d887-1c4c-485a-9247-2a2bec9e3714 | +| ip_version | 4 | +| name | APPS_SUBNET | +| network_id | 52f4549f-aeed-4fcf-997b-4349f591cd5f | +| tenant_id | b5890ba3fb234347ae317ca2f8358663 | ++------------------+------------------------------------------------+ + +neutron subnet-create DMZ_NET 10.242.0.0/22 --name DMZ_SUBNET +Created a new subnet: ++------------------+------------------------------------------------+ +| Field | Value | ++------------------+------------------------------------------------+ +| allocation_pools | {"start": "10.242.0.2", "end": "10.242.3.254"} | +| cidr | 10.242.0.0/22 | +| dns_nameservers | | +| enable_dhcp | True | +| gateway_ip | 10.242.0.1 | +| host_routes | | +| id | 2deda040-be04-432b-baa6-3a2219d22f20 | +| ip_version | 4 | +| name | DMZ_SUBNET | +| network_id | eeb07b09-4b4a-4c2c-9060-0b8e414a9279 | +| tenant_id | b5890ba3fb234347ae317ca2f8358663 | ++------------------+------------------------------------------------+ + +neutron router-interface-add NEUTRON-ROUTER APPS_SUBNET +Added interface 58f3db35-f5df-4fd1-9735-4ff13dd342de to router NEUTRON-ROUTER. + +neutron router-interface-add NEUTRON-ROUTER DMZ_SUBNET +Added interface 9252ec29-7aac-4550-821c-f910f10680cf to router NEUTRON-ROUTER. 
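+A quick way to confirm the attachments from neutron's side (output not
+shown here) is:
+
+neutron router-port-list NEUTRON-ROUTER
+
+which at this point should list three ports: the gateway port on
+GATEWAY_NET plus one interface per attached subnet.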
+ +ip netns exec qrouter-b27d1a20-8a31-46d5-bdef-32a5ccf4ec91 ip a +1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default + link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 + inet 127.0.0.1/8 scope host lo + valid_lft forever preferred_lft forever + inet6 ::1/128 scope host + valid_lft forever preferred_lft forever +2: sit0: mtu 1480 qdisc noop state DOWN group default + link/sit 0.0.0.0 brd 0.0.0.0 +20: qg-19f6d85f-a6: mtu 1500 qdisc noqueue state UNKNOWN group default + link/ether fa:16:3e:b8:1e:9d brd ff:ff:ff:ff:ff:ff + inet 192.168.101.1/22 brd 192.168.103.255 scope global qg-19f6d85f-a6 + valid_lft forever preferred_lft forever + inet6 fe80::f816:3eff:feb8:1e9d/64 scope link + valid_lft forever preferred_lft forever +21: qr-58f3db35-f5: mtu 1500 qdisc noqueue state UNKNOWN group default + link/ether fa:16:3e:76:ec:23 brd ff:ff:ff:ff:ff:ff + inet 10.241.0.1/22 brd 10.241.3.255 scope global qr-58f3db35-f5 + valid_lft forever preferred_lft forever + inet6 fe80::f816:3eff:fe76:ec23/64 scope link + valid_lft forever preferred_lft forever +22: qr-9252ec29-7a: mtu 1500 qdisc noqueue state UNKNOWN group default + link/ether fa:16:3e:fb:98:06 brd ff:ff:ff:ff:ff:ff + inet 10.242.0.1/22 brd 10.242.3.255 scope global qr-9252ec29-7a + valid_lft forever preferred_lft forever + inet6 fe80::f816:3eff:fefb:9806/64 scope link + valid_lft forever preferred_lft forever + +Note the two new interfaces. +1 connection to the provider network +2 connections to the subnets (1 to APPS_SUBNET, 1 to DMZ_SUBNET) + +Boot an instance +--------------- +nova boot --flavor=m1.small --image=myFirstImage \ + --nic net-id=52f4549f-aeed-4fcf-997b-4349f591cd5f APPS_INSTANCE ++--------------------------------------+-----------------------------------------------------+ +| Property | Value | ++--------------------------------------+-----------------------------------------------------+ +| OS-DCF:diskConfig | MANUAL | +| OS-EXT-AZ:availability_zone | nova | +| OS-EXT-SRV-ATTR:host | - | +| OS-EXT-SRV-ATTR:hypervisor_hostname | - | +| OS-EXT-SRV-ATTR:instance_name | instance-0000000e | +| OS-EXT-STS:power_state | 0 | +| OS-EXT-STS:task_state | scheduling | +| OS-EXT-STS:vm_state | building | +| OS-SRV-USG:launched_at | - | +| OS-SRV-USG:terminated_at | - | +| accessIPv4 | | +| accessIPv6 | | +| adminPass | jdLkr4i6ATvQ | +| config_drive | | +| created | 2014-04-10T16:27:31Z | +| flavor | m1.small (2) | +| hostId | | +| id | fc849bb9-54d3-4a9a-99a4-6346a6eef404 | +| image | myFirstImage (f22d3ab8-96a5-46db-a029-7d59156c8e31) | +| key_name | - | +| metadata | {} | +| name | APPS_INSTANCE | +| os-extended-volumes:volumes_attached | [] | +| progress | 0 | +| security_groups | default | +| status | BUILD | +| tenant_id | b5890ba3fb234347ae317ca2f8358663 | +| updated | 2014-04-10T16:27:31Z | +| user_id | 1dfcb72ef6a7428d8dd7300bc7f303d9 | ++--------------------------------------+-----------------------------------------------------+ + +nova boot --flavor=m1.small --image=myFirstImage \ + --nic net-id=eeb07b09-4b4a-4c2c-9060-0b8e414a9279 DMZ_INSTANCE ++--------------------------------------+-----------------------------------------------------+ +| Property | Value | ++--------------------------------------+-----------------------------------------------------+ +| OS-DCF:diskConfig | MANUAL | +| OS-EXT-AZ:availability_zone | nova | +| OS-EXT-SRV-ATTR:host | - | +| OS-EXT-SRV-ATTR:hypervisor_hostname | - | +| OS-EXT-SRV-ATTR:instance_name | instance-0000000f | +| OS-EXT-STS:power_state | 0 | +| OS-EXT-STS:task_state | 
scheduling | +| OS-EXT-STS:vm_state | building | +| OS-SRV-USG:launched_at | - | +| OS-SRV-USG:terminated_at | - | +| accessIPv4 | | +| accessIPv6 | | +| adminPass | 4d7UsUJhSpBd | +| config_drive | | +| created | 2014-04-10T16:29:25Z | +| flavor | m1.small (2) | +| hostId | | +| id | f281c349-d49c-4d6c-bf56-74f04f2e8aec | +| image | myFirstImage (f22d3ab8-96a5-46db-a029-7d59156c8e31) | +| key_name | - | +| metadata | {} | +| name | DMZ_INSTANCE | +| os-extended-volumes:volumes_attached | [] | +| progress | 0 | +| security_groups | default | +| status | BUILD | +| tenant_id | b5890ba3fb234347ae317ca2f8358663 | +| updated | 2014-04-10T16:29:25Z | +| user_id | 1dfcb72ef6a7428d8dd7300bc7f303d9 | ++--------------------------------------+-----------------------------------------------------+ + +Check connectivity +------------------ +nova console-log APPS_INSTANCE +...skip +Starting network... +udhcpc (v1.18.5) started +Sending discover... +Sending select for 10.241.0.2... +Lease of 10.241.0.2 obtained, lease time 86400 +..skip + +nova console-log DMZ_INSTANCE +...skip +Starting network... +udhcpc (v1.18.5) started +Sending discover... +Sending select for 10.242.0.2... +Lease of 10.242.0.2 obtained, lease time 86400 +...skip + +root@controller:~# nova list ++--------------------------------------+---------------+--------+------------+-------------+---------------------+ +| ID | Name | Status | Task State | Power State | Networks | ++--------------------------------------+---------------+--------+------------+-------------+---------------------+ +| fc849bb9-54d3-4a9a-99a4-6346a6eef404 | APPS_INSTANCE | ACTIVE | - | Running | APPS_NET=10.241.0.2 | +| f281c349-d49c-4d6c-bf56-74f04f2e8aec | DMZ_INSTANCE | ACTIVE | - | Running | DMZ_NET=10.242.0.2 | ++--------------------------------------+---------------+--------+------------+-------------+---------------------+ + + +ping +--- +Since we are not using floating IPs you will only be able ping from inside the route namespace + +# ip netns exec qrouter-b27d1a20-8a31-46d5-bdef-32a5ccf4ec91 \ + ping 10.241.0.2 -c 1 +PING 10.241.0.2 (10.241.0.2) 56(84) bytes of data. 
+64 bytes from 10.241.0.2: icmp_seq=1 ttl=64 time=6.32 ms + +--- 10.241.0.2 ping statistics --- +1 packets transmitted, 1 received, 0% packet loss, time 0ms +rtt min/avg/max/mdev = 6.328/6.328/6.328/0.000 ms + +# ping 10.241.0.2 -c 1 +connect: Network is unreachable + + +The final Open vSwitch configs +------------------------------ + +Controller +--- +root@controller:~# ovs-vsctl show +524a6c84-226d-427b-8efa-732ed7e7fa43 + Bridge "br-eth0" + Port "eth0" + Interface "eth0" + Port "br-eth0" + Interface "br-eth0" + type: internal + Port "phy-br-eth0" + Interface "phy-br-eth0" + Bridge br-tun + Port patch-int + Interface patch-int + type: patch + options: {peer=patch-tun} + Port "gre-2" + Interface "gre-2" + type: gre + options: {in_key=flow, local_ip="192.168.7.2", out_key=flow, remote_ip="192.168.7.4"} + Port br-tun + Interface br-tun + type: internal + Bridge br-int + Port "qr-58f3db35-f5" + tag: 2 + Interface "qr-58f3db35-f5" + type: internal + Port "tap6e65f2e5-39" + tag: 3 + Interface "tap6e65f2e5-39" + type: internal + Port "qr-9252ec29-7a" + tag: 3 + Interface "qr-9252ec29-7a" + type: internal + Port "int-br-eth0" + Interface "int-br-eth0" + Port patch-tun + Interface patch-tun + type: patch + options: {peer=patch-int} + Port "tapcf2a0e68-6b" + tag: 2 + Interface "tapcf2a0e68-6b" + type: internal + Port br-int + Interface br-int + type: internal + Port "qg-19f6d85f-a6" + tag: 1 + Interface "qg-19f6d85f-a6" + type: internal + ovs_version: "2.0.0" + + +Compute +--- +root@compute:~# ovs-vsctl show +99d365d2-f74e-40a8-b9a0-5bb60353675d + Bridge br-int + Port br-int + Interface br-int + type: internal + Port patch-tun + Interface patch-tun + type: patch + options: {peer=patch-int} + Port "tapc2db0bfa-ae" + tag: 1 + Interface "tapc2db0bfa-ae" + Port "tap57fae225-16" + tag: 2 + Interface "tap57fae225-16" + Port "int-br-eth0" + Interface "int-br-eth0" + Bridge "br-eth0" + Port "eth0" + Interface "eth0" + Port "phy-br-eth0" + Interface "phy-br-eth0" + Port "br-eth0" + Interface "br-eth0" + type: internal + Bridge br-tun + Port br-tun + Interface br-tun + type: internal + Port "gre-1" + Interface "gre-1" + type: gre + options: {in_key=flow, local_ip="192.168.7.4", out_key=flow, remote_ip="192.168.7.2"} + Port patch-int + Interface patch-int + type: patch + options: {peer=patch-tun} + ovs_version: "2.0.0" + + +References +---------- +http:// developer.rackspace.com/blog/neutron-networking-l3-agent.html \ No newline at end of file diff --git a/meta-openstack/Documentation/README.networking_vlan b/meta-openstack/Documentation/README.networking_vlan new file mode 100644 index 0000000..6d48e2b --- /dev/null +++ b/meta-openstack/Documentation/README.networking_vlan @@ -0,0 +1,382 @@ +Networking - VLAN network +========================= + +Description +----------- +The vlan network will have the VMs on one of two vlan networks +(DMZ_SUBNET - 172.16.0.0/24, INSIDE_SUBNET - 192.168.100.0/241). We +will continue to use the management network (192.168.7.0/24) for +controller/compute communications. The dhcp-agent will provide the VMs +addresses within each subnet and within its provisioned ranges. This +type of network is more typical of a deployed network since network +traffic can be contained to within the assigned vlan. + + +Assumptions +----------- +It is assumed you have completed the steps described in +README.networking and have provisioned the host vSwitch as well as +created the br-eth0 bridges on the controller and compute nodes. 
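+As an optional sanity check, the bridge wiring on each node can be
+verified with:
+
+ovs-vsctl list-ports br-eth0
+
+which should list eth0 as one of the ports.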
+ +At this point you should be able to ping 192.168.7.4 from 192.168.7.4 +and vise versa. + +You have built your controller image including the cirros image (for +which you have already added the image to glance as myFirstImage). + +You have run 'source /etc/nova/openrc' + +Configuration updates +--------------------- +On the host Open vSwitch add an IP for 192.168.100.1/22 +sudo ip address add 192.168.100.1/24 broadcast 192.168.100.255 dev br-int +sudo ip address add 172.16.0.1/24 broadcast 172.16.0.255 dev br-int + +On the controller and (all) compute nodes you must edit the file +/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini + +In the [OVS] section set +network_vlan_ranges = ph-eth0:200:200,ph-eth0:300:300 +bridge_mappings = ph-eth0:br-eth0 + +(*** on compute nodes edit local_ip as well [192.168.7.4]***) + +Restart some services to allow these changes to take effect: +/etc/init.d/neutron-openvswitch-agent reload +(on controller) +/etc/init.d/neutron-server reload +/etc/init.d/neutron-dhcp-agent reload +(on compute) +/etc/init.d/nova-compute reload + + +Create the net and subnet +------------------------- +neutron net-create --provider:physical_network=ph-eth0 \ + --provider:network_type=vlan --provider:segmentation_id=200 \ + --shared INSIDE_NET +Created a new network: ++---------------------------+--------------------------------------+ +| Field | Value | ++---------------------------+--------------------------------------+ +| admin_state_up | True | +| id | 587e29d0-eb89-4c0d-948b-845009380097 | +| name | INSIDE_NET | +| provider:network_type | vlan | +| provider:physical_network | ph-eth0 | +| provider:segmentation_id | 200 | +| shared | True | +| status | ACTIVE | +| subnets | | +| tenant_id | b5890ba3fb234347ae317ca2f8358663 | ++---------------------------+--------------------------------------+ + +neutron net-create --provider:physical_network=ph-eth0 \ + --provider:network_type=vlan --provider:segmentation_id=300 \ + --shared DMZ_NET +Created a new network: ++---------------------------+--------------------------------------+ +| Field | Value | ++---------------------------+--------------------------------------+ +| admin_state_up | True | +| id | 498fa1f2-87de-4874-8ca9-f4ba3e394d2a | +| name | DMZ_NET | +| provider:network_type | vlan | +| provider:physical_network | ph-eth0 | +| provider:segmentation_id | 300 | +| shared | True | +| status | ACTIVE | +| subnets | | +| tenant_id | b5890ba3fb234347ae317ca2f8358663 | ++---------------------------+--------------------------------------+ + +neutron subnet-create INSIDE_NET 192.168.100.0/24 \ + --name INSIDE_SUBNET --no-gateway \ + --host-route destination=0.0.0.0/0,nexthop=192.168.100.1 \ + --allocation-pool start=192.168.100.100,end=192.168.100.199 +Created a new subnet: ++------------------+----------------------------------------------------------+ +| Field | Value | ++------------------+----------------------------------------------------------+ +| allocation_pools | {"start": "192.168.100.100", "end": "192.168.100.199"} | +| cidr | 192.168.100.0/24 | +| dns_nameservers | | +| enable_dhcp | True | +| gateway_ip | | +| host_routes | {"destination": "0.0.0.0/0", "nexthop": "192.168.100.1"} | +| id | 2c1a77aa-614c-4a97-9855-a62bb4b4d899 | +| ip_version | 4 | +| name | INSIDE_SUBNET | +| network_id | 587e29d0-eb89-4c0d-948b-845009380097 | +| tenant_id | b5890ba3fb234347ae317ca2f8358663 | ++------------------+----------------------------------------------------------+ + +neutron subnet-create DMZ_NET 172.16.0.0/24 --name 
DMZ_SUBNET \ + --no-gateway --host-route destination=0.0.0.0/0,nexthop=172.16.0.1 \ + --allocation-pool start=172.16.0.100,end=172.16.0.199 +Created a new subnet: ++------------------+-------------------------------------------------------+ +| Field | Value | ++------------------+-------------------------------------------------------+ +| allocation_pools | {"start": "172.16.0.100", "end": "172.16.0.199"} | +| cidr | 172.16.0.0/24 | +| dns_nameservers | | +| enable_dhcp | True | +| gateway_ip | | +| host_routes | {"destination": "0.0.0.0/0", "nexthop": "172.16.0.1"} | +| id | bfae1a19-e15f-4e5e-94f2-018f24abbc2e | +| ip_version | 4 | +| name | DMZ_SUBNET | +| network_id | 498fa1f2-87de-4874-8ca9-f4ba3e394d2a | +| tenant_id | b5890ba3fb234347ae317ca2f8358663 | ++------------------+-------------------------------------------------------+ + + +Boot the image and test connectivity +------------------------------------ +(note with our current config you might only be able to run 2 instances at + any one time, so you will end up juggling them to test connectivity) + +nova boot --flavor=m1.small --image=myFirstImage \ + --nic net-id=587e29d0-eb89-4c0d-948b-845009380097 INSIDE_INSTANCE ++--------------------------------------+-----------------------------------------------------+ +| Property | Value | ++--------------------------------------+-----------------------------------------------------+ +| OS-DCF:diskConfig | MANUAL | +| OS-EXT-AZ:availability_zone | nova | +| OS-EXT-SRV-ATTR:host | - | +| OS-EXT-SRV-ATTR:hypervisor_hostname | - | +| OS-EXT-SRV-ATTR:instance_name | instance-00000009 | +| OS-EXT-STS:power_state | 0 | +| OS-EXT-STS:task_state | scheduling | +| OS-EXT-STS:vm_state | building | +| OS-SRV-USG:launched_at | - | +| OS-SRV-USG:terminated_at | - | +| accessIPv4 | | +| accessIPv6 | | +| adminPass | 7itgDwsdY8d4 | +| config_drive | | +| created | 2014-04-10T14:31:21Z | +| flavor | m1.small (2) | +| hostId | | +| id | 630affe0-d497-4211-87bb-383254d60428 | +| image | myFirstImage (f22d3ab8-96a5-46db-a029-7d59156c8e31) | +| key_name | - | +| metadata | {} | +| name | INSIDE_INSTANCE | +| os-extended-volumes:volumes_attached | [] | +| progress | 0 | +| security_groups | default | +| status | BUILD | +| tenant_id | b5890ba3fb234347ae317ca2f8358663 | +| updated | 2014-04-10T14:31:21Z | +| user_id | 1dfcb72ef6a7428d8dd7300bc7f303d9 | ++--------------------------------------+-----------------------------------------------------+ + +nova boot --flavor=m1.small --image=myFirstImage \ + --nic net-id=587e29d0-eb89-4c0d-948b-845009380097 INSIDE_INSTANCE2 ++--------------------------------------+-----------------------------------------------------+ +| Property | Value | ++--------------------------------------+-----------------------------------------------------+ +| OS-DCF:diskConfig | MANUAL | +| OS-EXT-AZ:availability_zone | nova | +| OS-EXT-SRV-ATTR:host | - | +| OS-EXT-SRV-ATTR:hypervisor_hostname | - | +| OS-EXT-SRV-ATTR:instance_name | instance-0000000a | +| OS-EXT-STS:power_state | 0 | +| OS-EXT-STS:task_state | scheduling | +| OS-EXT-STS:vm_state | building | +| OS-SRV-USG:launched_at | - | +| OS-SRV-USG:terminated_at | - | +| accessIPv4 | | +| accessIPv6 | | +| adminPass | BF9p6tftS2xJ | +| config_drive | | +| created | 2014-04-10T14:32:07Z | +| flavor | m1.small (2) | +| hostId | | +| id | ff94ee07-ae24-4785-9d51-26de2c23da60 | +| image | myFirstImage (f22d3ab8-96a5-46db-a029-7d59156c8e31) | +| key_name | - | +| metadata | {} | +| name | INSIDE_INSTANCE2 | +| 
os-extended-volumes:volumes_attached | [] | +| progress | 0 | +| security_groups | default | +| status | BUILD | +| tenant_id | b5890ba3fb234347ae317ca2f8358663 | +| updated | 2014-04-10T14:32:08Z | +| user_id | 1dfcb72ef6a7428d8dd7300bc7f303d9 | ++--------------------------------------+-----------------------------------------------------+ + +root@controller:~# nova list ++--------------------------------------+------------------+--------+------------+-------------+----------------------------+ +| ID | Name | Status | Task State | Power State | Networks | ++--------------------------------------+------------------+--------+------------+-------------+----------------------------+ +| 630affe0-d497-4211-87bb-383254d60428 | INSIDE_INSTANCE | ACTIVE | - | Running | INSIDE_NET=192.168.100.100 | +| ff94ee07-ae24-4785-9d51-26de2c23da60 | INSIDE_INSTANCE2 | ACTIVE | - | Running | INSIDE_NET=192.168.100.102 | ++--------------------------------------+------------------+--------+------------+-------------+----------------------------+ + +nova boot --flavor=m1.small --image=myFirstImage \ + --nic net-id=498fa1f2-87de-4874-8ca9-f4ba3e394d2a DMZ_INSTANCE ++--------------------------------------+-----------------------------------------------------+ +| Property | Value | ++--------------------------------------+-----------------------------------------------------+ +| OS-DCF:diskConfig | MANUAL | +| OS-EXT-AZ:availability_zone | nova | +| OS-EXT-SRV-ATTR:host | - | +| OS-EXT-SRV-ATTR:hypervisor_hostname | - | +| OS-EXT-SRV-ATTR:instance_name | instance-0000000d | +| OS-EXT-STS:power_state | 0 | +| OS-EXT-STS:task_state | scheduling | +| OS-EXT-STS:vm_state | building | +| OS-SRV-USG:launched_at | - | +| OS-SRV-USG:terminated_at | - | +| accessIPv4 | | +| accessIPv6 | | +| adminPass | SvzSpnmB6mXJ | +| config_drive | | +| created | 2014-04-10T14:42:53Z | +| flavor | m1.small (2) | +| hostId | | +| id | 0dab2712-5f1d-4559-bfa4-d09c6304418c | +| image | myFirstImage (f22d3ab8-96a5-46db-a029-7d59156c8e31) | +| key_name | - | +| metadata | {} | +| name | DMZ_INSTANCE | +| os-extended-volumes:volumes_attached | [] | +| progress | 0 | +| security_groups | default | +| status | BUILD | +| tenant_id | b5890ba3fb234347ae317ca2f8358663 | +| updated | 2014-04-10T14:42:54Z | +| user_id | 1dfcb72ef6a7428d8dd7300bc7f303d9 | ++--------------------------------------+-----------------------------------------------------+ + +nova boot --flavor=m1.small --image=myFirstImage \ + --nic net-id=498fa1f2-87de-4874-8ca9-f4ba3e394d2a DMZ_INSTANCE2 +... + +nova console-log INSIDE_INSTANCE2 +--- +...skip +Starting network... +udhcpc (v1.18.5) started +Sending discover... +Sending select for 192.168.100.102... +...skip + +ping +--- + +You should also be able to ping instances on the same subnet but not +those on the other subnet. The controller and compute can not ping +instances on either network (with metadata implemented the controller +should be able to, but currently the metadata agent is not available.) 
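+For example, from the console of INSIDE_INSTANCE (using the addresses
+assigned above; the DMZ address shown is only an assumption based on
+its allocation pool):
+
+$ ping -c 1 192.168.100.102   # INSIDE_INSTANCE2, same vlan: replies
+$ ping -c 1 172.16.0.100      # DMZ_INSTANCE, other vlan: no reply expected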
+ +dump-flows +---------- +(note the 'vlan' tags) +root@compute:~# ovs-ofctl dump-flows br-int +NXST_FLOW reply (xid=0x4): + cookie=0x0, duration=1640.378s, table=0, n_packets=3, n_bytes=788, idle_age=1628, priority=3,in_port=6,dl_vlan=300 actions=mod_vlan_vid:2,NORMAL + cookie=0x0, duration=2332.714s, table=0, n_packets=6, n_bytes=1588, idle_age=2274, priority=3,in_port=6,dl_vlan=200 actions=mod_vlan_vid:1,NORMAL + cookie=0x0, duration=2837.737s, table=0, n_packets=22, n_bytes=1772, idle_age=1663, priority=2,in_port=6 actions=drop + cookie=0x0, duration=2837.976s, table=0, n_packets=53, n_bytes=5038, idle_age=1535, priority=1 actions=NORMAL + + + +The final Open vSwitch configs +------------------------------ + +Controller +--- +root@controller:~# ovs-vsctl show +524a6c84-226d-427b-8efa-732ed7e7fa43 + Bridge br-tun + Port "gre-2" + Interface "gre-2" + type: gre + options: {in_key=flow, local_ip="192.168.7.2", out_key=flow, remote_ip="192.168.7.4"} + Port br-tun + Interface br-tun + type: internal + Port patch-int + Interface patch-int + type: patch + options: {peer=patch-tun} + Bridge "br-eth0" + Port "eth0" + Interface "eth0" + Port "br-eth0" + Interface "br-eth0" + type: internal + Port "phy-br-eth0" + Interface "phy-br-eth0" + Bridge br-int + Port patch-tun + Interface patch-tun + type: patch + options: {peer=patch-int} + Port "tapafbbdd15-e7" + tag: 1 + Interface "tapafbbdd15-e7" + type: internal + Port "int-br-eth0" + Interface "int-br-eth0" + Port "tapa50c1a18-34" + tag: 2 + Interface "tapa50c1a18-34" + type: internal + Port br-int + Interface br-int + type: internal + ovs_version: "2.0.0" + + +Compute +--- +root@compute:~# ovs-vsctl show +99d365d2-f74e-40a8-b9a0-5bb60353675d + Bridge br-tun + Port br-tun + Interface br-tun + type: internal + Port "gre-1" + Interface "gre-1" + type: gre + options: {in_key=flow, local_ip="192.168.7.4", out_key=flow, remote_ip="192.168.7.2"} + Port patch-int + Interface patch-int + type: patch + options: {peer=patch-tun} + Bridge br-int + Port br-int + Interface br-int + type: internal + Port "int-br-eth0" + Interface "int-br-eth0" + Port patch-tun + Interface patch-tun + type: patch + options: {peer=patch-int} + Port "tap78e1ac37-6c" + tag: 2 + Interface "tap78e1ac37-6c" + Port "tap315398a4-cd" + tag: 1 + Interface "tap315398a4-cd" + Bridge "br-eth0" + Port "phy-br-eth0" + Interface "phy-br-eth0" + Port "eth0" + Interface "eth0" + Port "br-eth0" + Interface "br-eth0" + type: internal + ovs_version: "2.0.0" + + +References +---------- +http://developer.rackspace.com/blog/neutron-networking-vlan-provider-networks.html \ No newline at end of file diff --git a/meta-openstack/Documentation/README.spice b/meta-openstack/Documentation/README.spice new file mode 100644 index 0000000..a6b93b2 --- /dev/null +++ b/meta-openstack/Documentation/README.spice @@ -0,0 +1,82 @@ +OpenStack offers two types of console support, VNC support and SPICE. +The VNC protocol is fairly limited, lacking support for multiple monitors, +bi-directional audio, reliable cut+paste, video streaming and more. +SPICE is a new protocol which aims to address all the limitations in VNC, +to provide good remote desktop support. + +The Controller will have both the proxy for vnc and for spice html5 +running. The nova-spicehtml5proxy service communicates directly with +the hypervisor process using SPICE. + +OpenStack's Dashboard uses a SPICE HTML5 widget in its console tab +to communicate with the nova-spicehtml5proxy service. 
Since both proxies +are running, the Dashboard will automatically attempt to connect to +whichever console is provided by the compute node. + +Another way to access the spice console is from the controller, +run the following command: + + nova get-spice-console myinstance spice-html5 + +This will give you an URL which will directly give you access to the console +(instead of from Horizon). + +The enable or disable VNC/SPICE, on the compute node, modify +/etc/nova/nova.conf. + +Options for configuring SPICE as the console for OpenStack Compute can be + found below. + +--------------------------------------------------------------------------------- + Configuration option=Default value (Type) Description + + agent_enabled=True (BoolOpt)enable spice guest agent support + + enabled=False (BoolOpt)enable spice related features + + html5proxy_base_url=http://127.0.0.1:6080/spice_auto.html + (StrOpt)location of spice html5 + console proxy, in the form + "http://127.0.0.1:6080/spice_auto.html" + + keymap=en-us (StrOpt)keymap for spice + + server_listen=127.0.0.1 (StrOpt)IP address on which instance + spice + server should listen + + server_proxyclient_address=127.0.0.1 (StrOpt)the address to which proxy + clients (like nova-spicehtml5proxy) + should connect +--------------------------------------------------------------------------------- + +Combinations/behaviour from Compute: + +1. VNC will be provided + +vnc_enabled=True +enabled=True +agent_enabled=True + +2. SPICE will be provided + +vnc_enabled=False +enabled=True +agent_enabled=True + +3. VNC will be provided + +vnc_enabled=True +enabled=False +agent_enabled=False + +4. No console will be provided + +vnc_enabled=False +enabled=False +agent_enabled=False + +After nova.conf is changed on the compute node, restart nova-compute +service. If an instance was running beforehand, it will be necessary to +restart (reboot, soft or hard) the instance to get the new console. + diff --git a/meta-openstack/Documentation/README.tempest b/meta-openstack/Documentation/README.tempest new file mode 100644 index 0000000..884a28a --- /dev/null +++ b/meta-openstack/Documentation/README.tempest @@ -0,0 +1,55 @@ +# enable in local.conf via: +OPENSTACK_CONTROLLER_EXTRA_INSTALL += "tempest keystone-tests glance-tests cinder-tests \ + horizon-tests heat-tests neutron-tests nova-tests ceilometer-tests" + +# For the tempest built-in tests: +--------------------------------- + # edit /etc/tempest/tempest.conf to suit details of the system + % cd /usr/lib/python2.7/site-packages + % nosetests --verbose tempest/api + +OR (less reliable) + + % cd /usr/lib/python2.7/site-packages + % cp /etc/tempest/.testr.conf . 
+ % testr init + % testr run --parallel tempest + +# For individual package tests +------------------------------ +# typical: + % cd /usr/lib/python2.7/site-packages/ + % /etc//run_tests.sh --verbose -N + +# Cinder: +# Notes: tries to run setup.py, --debug works around part of the issue + % cd /usr/lib/python2.7/site-packages/ + % nosetests --verbose cinder/tests + +# Neutron: +# Notes: use nosetests directly + % cd /usr/lib/python2.7/site-packages/ + % nosetests --verbose neutron/tests + +# Nova: +# Notes: vi /usr/lib/python2.7/site-packages/nova/tests/conf_fixture.py +# modify api-paste.ini reference to be /etc/nova/api-paste.ini, the conf +# file isn't being read properly, so some tests will fail to run + % cd / + % nosetests --verbose /usr/lib/python2.7/site-packages/nova/tests + +# keystone: +# + +# Other Notes: +-------------- + + 1) testr: not so good, can be missing, some tools are. use nostests directly + instead. + 2) all run_tests.sh are provided, even though they are similar + + + + + + diff --git a/meta-openstack/README.networking b/meta-openstack/README.networking deleted file mode 100644 index 2299de3..0000000 --- a/meta-openstack/README.networking +++ /dev/null @@ -1,208 +0,0 @@ -Networking -============== - -Description ------------ -OpenStack provides tools to setup many different network topologies using -tunnels, Vlans, GREs... the list goes on. In this document we describe how to -setup 3 basic network configurations which can be used as building blocks for a -larger network deployment. Going through these setups also tests that the -Open vSwitch plugin and DHCP and l3 agents are operating correctly. - - -Assumptions ------------ -The following assumes you have built the controller and compute nodes for the -qemux86-64 machine as described in README.setup and have been able to spin-up an -instance successfully. - - -Prerequisites -------------- - -1. Following the instructions in README.setup to spin-up your controller and -compute nodes in VMs will result in NATed tap interfaces on the host. While this -is fine for basic use it will not allow you to use things like GRE tunnels as -the packet will appear to be coming from the host when it arrives at the other -end of the tunnel and will therefore be rejected (since the src IP will not -match the GRE's remote_ip). To get around this we must setup an Open vSwitch -bridge on the host and attach the taps. Open vSwitch must therefore be installed -and running on the host. - -On Ubuntu systems this may be done via: -sudo apt-get install openvswitch-switch openvswitch-common - -2. Also since we will be using an Open vSwitch on the host we need to ensure the -controller and compute network interfaces have different MAC addresses. We -therefor must modify the runqemu script as per the following: - ---- a/scripts/runqemu-internal -+++ b/scripts/runqemu-internal -@@ -252,7 +252,7 @@ else - KERNEL_NETWORK_CMD="ip=192.168.7.$n2::192.168.7.$n1:255.255.255.0" - QEMU_TAP_CMD="-net tap,vlan=0,ifname=$TAP,script=no,downscript=no" - if [ "$KVM_ACTIVE" = "yes" ]; then -- QEMU_NETWORK_CMD="-net nic,model=virtio $QEMU_TAP_CMD,vhost=on" -+ QEMU_NETWORK_CMD="-net nic,macaddr=52:54:00:12:34:$(printf '%x' $((RANDOM % 170))),model=virtio $QEMU_TAP_CMD,vhost=on" - DROOT="/dev/vda" - ROOTFS_OPTIONS="-drive file=$ROOTFS,if=virtio" - else ---- -this will not guarantee distinct MAC addresses but most of the time they will be. 
- - -Host Open vSwitch bridge ------------------------- -As per the prerequisites we need to setup a bridge on the host to avoid NATed -tap interfaces. After you have used 'runqemu' to boot your controller and -compute nodes perform the following instructions on your host - -(I will assume tap0 - controller, tap1 - compute, use 'ip a s' or 'ifconfig' to -identify the tap interfaces) - -sudo ovs-vsctl add-br br-int -sudo ovs-vsctl add-port br-int tap0 -sudo ovs-vsctl add-port br-int tap1 -sudo ip address del 192.168.7.1/24 dev tap0 -sudo ip address del 192.168.7.3/24 dev tap1 -sudo ip address add 192.168.7.1/24 broadcast 192.168.7.255 dev br-int -sudo route del 192.168.7.2 tap0 -sudo route del 192.168.7.4 tap1 - - -NOTE: Any time you reboot the controller or compute nodes you will -want to remove and re-add the port via: -# ovs-vsctl del-port br-int tapX -# ovs-vsctl add-port br-int tapX -# ip address del 192.168.7.Y/24 dev tapX -(where X and Y are substituted accordingly) -This will also ensure the ARP tables in the vSwitch are updated since -chances are the MAC address will have changed on a reboot due to the -MAC randomizer of prerequisite 2. - - -Controller/Compute network setup --------------------------------- -The neutron Open vSwitch plugin expects several bridges to exist on -the controller and compute nodes. When the controller and compute -nodes are first booted however these do not exist and depending on how -you are setting up your network this is subject to change and as such -is not 'baked' in to our images. This would normally be setup by -cloud-init, chef, cobbler or some other deployment scripts. Here we -will accomplish it by hand. - -On first boot your network will look like this: (controller node) ----snip--- -root@controller:~# ip a show eth0 -2: eth0: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 - link/ether 52:54:00:12:34:a9 brd ff:ff:ff:ff:ff:ff - inet 192.168.7.2/24 brd 192.168.7.255 scope global eth0 - valid_lft forever preferred_lft forever - inet6 fe80::5054:ff:fe12:34a9/64 scope link - valid_lft forever preferred_lft forever - -root@controller:~# ovs-vsctl show -524a6c84-226d-427b-8efa-732ed7e7fa43 - Bridge br-int - Port patch-tun - Interface patch-tun - type: patch - options: {peer=patch-int} - Port br-int - Interface br-int - type: internal - Bridge br-tun - Port br-tun - Interface br-tun - type: internal - Port patch-int - Interface patch-int - type: patch - options: {peer=patch-tun} - ovs_version: "2.0.0" ----snip--- - -To complete the expected network configuration you must add a bridge -which will contain the physical interface as one of its ports and move -the IP address from the interface to the bridge. 
The following will -accomplish this: - -ovs-vsctl add-br br-eth0 -ovs-vsctl add-port br-eth0 eth0 -ip address del 192.168.7.2/24 dev eth0 -ip address add 192.168.7.2/24 broadcast 192.168.7.255 dev br-eth0 -route add default gw 192.168.7.1 - -And now you network will look like the following: ----snip--- -root@controller:~# ip a s -...skip -2: eth0: mtu 1500 qdisc pfifo_fast master ovs-system state UP group default qlen 1000 - link/ether 52:54:00:12:34:a9 brd ff:ff:ff:ff:ff:ff -...skip -7: br-eth0: mtu 1500 qdisc noqueue state UNKNOWN group default - link/ether ae:f8:be:7c:78:42 brd ff:ff:ff:ff:ff:ff - inet 192.168.7.2/24 scope global br-eth0 - valid_lft forever preferred_lft forever - inet6 fe80::e453:1fff:fec1:79ff/64 scope link - valid_lft forever preferred_lft forever - -root@controller:~# ovs-vsctl show -524a6c84-226d-427b-8efa-732ed7e7fa43 - Bridge "br-eth0" - Port "eth0" - Interface "eth0" - Port "br-eth0" - Interface "br-eth0" - type: internal - Bridge br-int - Port patch-tun - Interface patch-tun - type: patch - options: {peer=patch-int} - Port br-int - Interface br-int - type: internal - Bridge br-tun - Port br-tun - Interface br-tun - type: internal - Port patch-int - Interface patch-int - type: patch - options: {peer=patch-tun} - ovs_version: "2.0.0" - -At this point you will want to restart the neutron network services - -(controller) -/etc/init.d/neutron-openvswitch-agent stop -/etc/init.d/neutron-dhcp-agent stop -/etc/init.d/neutron-server reload -/etc/init.d/neutron-dhcp-agent start -/etc/init.d/neutron-openvswitch-agent start - -(Compute) -/etc/init.d/neutron-openvswitch-agent stop -/etc/init.d/nova-compute reload -/etc/init.d/neutron-openvswitch-agent start - - -NOTE: on a reboot the Open vSwitch configuration will remain but at -this point in time you will need to manually move the IP address from -the eth0 interface to the br-eth0 interface using - -ip address del 192.168.7.2/24 dev eth0 -ip address add 192.168.7.2/24 broadcast 192.168.7.255 dev br-eth0 - -With this network configuration on the controller and similar -configuration on the compute node (just replace 192.168.7.2 with -192.168.7.4) everything is ready to configure any of the 3 network -sample configurations. - -Further reading ---------------- - -README.networking_flat -README.networking_vlan -README.networking_l3_router \ No newline at end of file diff --git a/meta-openstack/README.networking_flat b/meta-openstack/README.networking_flat deleted file mode 100644 index ab18f6f..0000000 --- a/meta-openstack/README.networking_flat +++ /dev/null @@ -1,249 +0,0 @@ -Networking - FLAT network -========================= - -Description ------------ -The flat network will have the VMs share the management network -(192.168.7.0/24). The dhcp-agent will provide the VMs addresses -within the subnet and within its provisioned range. This type of -network will not typically be deployed as everything is accessible by -everything else (VMs can access VMs and the compute and controller -nodes) - - -Assumptions ------------ -It is assumed you have completed the steps described in -README.networking and have provisioned the host vSwitch as well as -created the br-eth0 bridges on the controller and compute nodes. - -At this point you should be able to ping 192.168.7.4 from 192.168.7.4 -and vise versa. - -You have built your controller image including the cirros image (for -which you have already added the image to glance as myFirstImage). 
- -You have run 'source /etc/nova/openrc' - -Configuration updates ---------------------- -On the controller and (all) compute nodes you must edit the file -/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini - -In the [OVS] section set -network_vlan_ranges = ph-eth0:1:1 -bridge_mappings = ph-eth0:br-eth0 - -(*** on compute nodes edit local_ip as well [192.168.7.4]***) - -Restart some services to allow these changes to take effect: -/etc/init.d/neutron-openvswitch-agent reload -(on controller) -/etc/init.d/neutron-server reload -/etc/init.d/neutron-dhcp-agent reload -(on compute) -/etc/init.d/nova-compute reload - - -Create the net and subnet -------------------------- -neutron net-create --provider:physical_network=ph-eth0 \ - --provider:network_type=flat \ - --shared MY_FLAT_NET -Created a new network: -+---------------------------+--------------------------------------+ -| Field | Value | -+---------------------------+--------------------------------------+ -| admin_state_up | True | -| id | 3263aa7f-b86c-4ad3-a28c-c78d4c711583 | -| name | MY_FLAT_NET | -| provider:network_type | flat | -| provider:physical_network | ph-eth0 | -| provider:segmentation_id | | -| shared | True | -| status | ACTIVE | -| subnets | | -| tenant_id | b5890ba3fb234347ae317ca2f8358663 | -+---------------------------+--------------------------------------+ - - -neutron subnet-create MY_FLAT_NET 192.168.7.0/24 --name MY_FLAT_SUBNET \ - --no-gateway --host-route destination=0.0.0.0/0,nexthop=192.168.7.1 \ - --allocation-pool start=192.168.7.230,end=192.168.7.234 -Created a new subnet: -+------------------+--------------------------------------------------------+ -| Field | Value | -+------------------+--------------------------------------------------------+ -| allocation_pools | {"start": "192.168.7.230", "end": "192.168.7.234"} | -| cidr | 192.168.7.0/24 | -| dns_nameservers | | -| enable_dhcp | True | -| gateway_ip | | -| host_routes | {"destination": "0.0.0.0/0", "nexthop": "192.168.7.1"} | -| id | bfa99d99-2ba5-47e9-b71e-0bd8a2961e08 | -| ip_version | 4 | -| name | MY_FLAT_SUBNET | -| network_id | 3263aa7f-b86c-4ad3-a28c-c78d4c711583 | -| tenant_id | b5890ba3fb234347ae317ca2f8358663 | -+------------------+--------------------------------------------------------+ - -Boot the image and test connectivity ------------------------------------- -nova boot --image myFirstImage --flavor m1.small \ - --nic net-id=3263aa7f-b86c-4ad3-a28c-c78d4c711583 myinstance -+--------------------------------------+-----------------------------------------------------+ -| Property | Value | -+--------------------------------------+-----------------------------------------------------+ -| OS-DCF:diskConfig | MANUAL | -| OS-EXT-AZ:availability_zone | nova | -| OS-EXT-SRV-ATTR:host | - | -| OS-EXT-SRV-ATTR:hypervisor_hostname | - | -| OS-EXT-SRV-ATTR:instance_name | instance-00000003 | -| OS-EXT-STS:power_state | 0 | -| OS-EXT-STS:task_state | scheduling | -| OS-EXT-STS:vm_state | building | -| OS-SRV-USG:launched_at | - | -| OS-SRV-USG:terminated_at | - | -| accessIPv4 | | -| accessIPv6 | | -| adminPass | 7Qe9nFekCjYD | -| config_drive | | -| created | 2014-04-10T04:13:38Z | -| flavor | m1.small (2) | -| hostId | | -| id | f85da1da-c318-49fb-8da9-c07644400d4c | -| image | myFirstImage (1da089b1-164d-45d6-9b6c-002f3edb8a7b) | -| key_name | - | -| metadata | {} | -| name | myinstance | -| os-extended-volumes:volumes_attached | [] | -| progress | 0 | -| security_groups | default | -| status | BUILD | -| tenant_id | 
b5890ba3fb234347ae317ca2f8358663 | -| updated | 2014-04-10T04:13:38Z | -| user_id | 1dfcb72ef6a7428d8dd7300bc7f303d9 | -+--------------------------------------+-----------------------------------------------------+ - -nova list -+--------------------------------------+------------+--------+------------+-------------+---------------------------+ -| ID | Name | Status | Task State | Power State | Networks | -+--------------------------------------+------------+--------+------------+-------------+---------------------------+ -| f85da1da-c318-49fb-8da9-c07644400d4c | myinstance | ACTIVE | - | Running | MY_FLAT_NET=192.168.7.231 | -+--------------------------------------+------------+--------+------------+-------------+---------------------------+ - -nova console-log myinstance ---- -...skip -Starting logging: OK -Initializing random number generator... done. -Starting network... -udhcpc (v1.18.5) started -Sending discover... -Sending select for 192.168.7.231... -Lease of 192.168.7.231 obtained, lease time 86400 -deleting routers -...skip - -ping ---- -root@controller:~# ping -c 1 192.168.7.231 -PING 192.168.7.231 (192.168.7.231) 56(84) bytes of data. -64 bytes from 192.168.7.231: icmp_seq=1 ttl=64 time=2.98 ms - ---- 192.168.7.231 ping statistics --- -1 packets transmitted, 1 received, 0% packet loss, time 0ms -rtt min/avg/max/mdev = 2.988/2.988/2.988/0.000 ms - -You should also be able to ping the compute or controller or other VMs -(if you start them) from within a VM. Pinging targets outside the -subnet requires that you ensure the various interfaces, such as eth0 -have promisc on 'ip link set eth0 promisc on' - -The final Open vSwitch configs ------------------------------- - -Controller ---- -root@controller:~# ovs-vsctl show -524a6c84-226d-427b-8efa-732ed7e7fa43 - Bridge "br-eth0" - Port "eth0" - Interface "eth0" - Port "br-eth0" - Interface "br-eth0" - type: internal - Port "phy-br-eth0" - Interface "phy-br-eth0" - Bridge br-int - Port "tap549fb0c7-1a" - tag: 1 - Interface "tap549fb0c7-1a" - type: internal - Port "int-br-eth0" - Interface "int-br-eth0" - Port patch-tun - Interface patch-tun - type: patch - options: {peer=patch-int} - Port br-int - Interface br-int - type: internal - Bridge br-tun - Port "gre-2" - Interface "gre-2" - type: gre - options: {in_key=flow, local_ip="192.168.7.2", out_key=flow, remote_ip="192.168.7.4"} - Port br-tun - Interface br-tun - type: internal - Port patch-int - Interface patch-int - type: patch - options: {peer=patch-tun} - ovs_version: "2.0.0" - - -Compute ---- -root@compute:~# ovs-vsctl show -99d365d2-f74e-40a8-b9a0-5bb60353675d - Bridge br-tun - Port "gre-1" - Interface "gre-1" - type: gre - options: {in_key=flow, local_ip="192.168.7.4", out_key=flow, remote_ip="192.168.7.2"} - Port br-tun - Interface br-tun - type: internal - Port patch-int - Interface patch-int - type: patch - options: {peer=patch-tun} - Bridge br-int - Port br-int - Interface br-int - type: internal - Port "int-br-eth0" - Interface "int-br-eth0" - Port patch-tun - Interface patch-tun - type: patch - options: {peer=patch-int} - Port "tap93a74250-ef" - tag: 1 - Interface "tap93a74250-ef" - Bridge "br-eth0" - Port "phy-br-eth0" - Interface "phy-br-eth0" - Port "eth0" - Interface "eth0" - Port "br-eth0" - Interface "br-eth0" - type: internal - ovs_version: "2.0.0" - - -References ----------- -http://developer.rackspace.com/blog/neutron-networking-simple-flat-network.html \ No newline at end of file diff --git a/meta-openstack/README.networking_l3_router 
b/meta-openstack/README.networking_l3_router deleted file mode 100644 index a16f8c4..0000000 --- a/meta-openstack/README.networking_l3_router +++ /dev/null @@ -1,450 +0,0 @@ -Networking - l3 router -========================= - -Description ------------ -Using provider networks (such as we did for flat and vlan usecases) -does not scale to large deployments, their downsides become quickly -apparent. The l3-agent provides the ability to create routers that can -handle routing between directly connected LAN interfaces and a single -WAN interface. - -Here we setup a virtual router with a connection to a provider network -(vlan) and 2 attached subnets. We don't use floating IPs for this -demo. - - -Assumptions ------------ -It is assumed you have completed the steps described in -README.networking and have provisioned the host vSwitch as well as -created the br-eth0 bridges on the controller and compute nodes. - -At this point you should be able to ping 192.168.7.4 from 192.168.7.4 -and vise versa. - -You have built your controller image including the cirros image (for -which you have already added the image to glance as myFirstImage). - -You have run 'source /etc/nova/openrc' - -Configuration updates ---------------------- -On the host Open vSwitch add an IP for 192.168.100.1/22 -sudo ip address add 192.168.100.1/22 broadcast 192.168.255.255 dev br-int - -On the controller and (all) compute nodes you must edit the file -/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini - -In the [OVS] section set -network_vlan_ranges = ph-eth0:1998:1998 -bridge_mappings = ph-eth0:br-eth0 - -(*** on compute nodes edit local_ip as well [192.168.7.4]***) - -Restart some services to allow these changes to take effect: -/etc/init.d/neutron-openvswitch-agent reload -(on controller) -/etc/init.d/neutron-server reload -/etc/init.d/neutron-dhcp-agent reload -(on compute) -/etc/init.d/nova-compute reload - - -** edit /etc/neutron/l3-agent.ini -use_namespaces = True -external_network_bridge = - -/etc/init.d/neutron-l3-agent restart - - -Create the provider network ---------------------------- -neutron net-create --provider:physical_network=ph-eth0 \ - --provider:network_type=vlan --provider:segmentation_id=1998 \ - --shared --router:external=true GATEWAY_NET - -neutron subnet-create GATEWAY_NET 192.168.100.0/22 \ - --name GATEWAY_SUBNET --gateway=192.168.100.1 \ - --allocation-pool start=192.168.101.1,end=192.168.103.254 - - -Create the router ------------------ -neutron router-create NEUTRON-ROUTER -Created a new router: -+-----------------------+--------------------------------------+ -| Field | Value | -+-----------------------+--------------------------------------+ -| admin_state_up | True | -| external_gateway_info | | -| id | b27d1a20-8a31-46d5-bdef-32a5ccf4ec91 | -| name | NEUTRON-ROUTER | -| status | ACTIVE | -| tenant_id | b5890ba3fb234347ae317ca2f8358663 | -+-----------------------+--------------------------------------+ - -neutron router-gateway-set NEUTRON-ROUTER GATEWAY_NET -Set gateway for router NEUTRON-ROUTER - -Inspect the created network namespaces --------------------------------------- -root@controller:~# ip netns -qrouter-b27d1a20-8a31-46d5-bdef-32a5ccf4ec91 -qdhcp-498fa1f2-87de-4874-8ca9-f4ba3e394d2a - -ip netns exec qrouter-b27d1a20-8a31-46d5-bdef-32a5ccf4ec91 ip a -1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default - link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 - inet 127.0.0.1/8 scope host lo - valid_lft forever preferred_lft forever - inet6 ::1/128 scope host - valid_lft 
forever preferred_lft forever -2: sit0: mtu 1480 qdisc noop state DOWN group default - link/sit 0.0.0.0 brd 0.0.0.0 -20: qg-19f6d85f-a6: mtu 1500 qdisc noqueue state UNKNOWN group default - link/ether fa:16:3e:b8:1e:9d brd ff:ff:ff:ff:ff:ff - inet 192.168.101.1/22 brd 192.168.103.255 scope global qg-19f6d85f-a6 - valid_lft forever preferred_lft forever - inet6 fe80::f816:3eff:feb8:1e9d/64 scope link - valid_lft forever preferred_lft forever - - -Attach tenant networks ----------------------- -neutron net-create --provider:network_type=gre --provider:segmentation_id=10 \ - --shared APPS_NET -Created a new network: -+---------------------------+--------------------------------------+ -| Field | Value | -+---------------------------+--------------------------------------+ -| admin_state_up | True | -| id | 52f4549f-aeed-4fcf-997b-4349f591cd5f | -| name | APPS_NET | -| provider:network_type | gre | -| provider:physical_network | | -| provider:segmentation_id | 10 | -| shared | True | -| status | ACTIVE | -| subnets | | -| tenant_id | b5890ba3fb234347ae317ca2f8358663 | -+---------------------------+--------------------------------------+ - -neutron net-create --provider:network_type=gre --provider:segmentation_id=20 \ - --shared DMZ_NET -Created a new network: -+---------------------------+--------------------------------------+ -| Field | Value | -+---------------------------+--------------------------------------+ -| admin_state_up | True | -| id | eeb07b09-4b4a-4c2c-9060-0b8e414a9279 | -| name | DMZ_NET | -| provider:network_type | gre | -| provider:physical_network | | -| provider:segmentation_id | 20 | -| shared | True | -| status | ACTIVE | -| subnets | | -| tenant_id | b5890ba3fb234347ae317ca2f8358663 | -+---------------------------+--------------------------------------+ - -neutron subnet-create APPS_NET 10.241.0.0/22 --name APPS_SUBNET -Created a new subnet: -+------------------+------------------------------------------------+ -| Field | Value | -+------------------+------------------------------------------------+ -| allocation_pools | {"start": "10.241.0.2", "end": "10.241.3.254"} | -| cidr | 10.241.0.0/22 | -| dns_nameservers | | -| enable_dhcp | True | -| gateway_ip | 10.241.0.1 | -| host_routes | | -| id | 45e7d887-1c4c-485a-9247-2a2bec9e3714 | -| ip_version | 4 | -| name | APPS_SUBNET | -| network_id | 52f4549f-aeed-4fcf-997b-4349f591cd5f | -| tenant_id | b5890ba3fb234347ae317ca2f8358663 | -+------------------+------------------------------------------------+ - -neutron subnet-create DMZ_NET 10.242.0.0/22 --name DMZ_SUBNET -Created a new subnet: -+------------------+------------------------------------------------+ -| Field | Value | -+------------------+------------------------------------------------+ -| allocation_pools | {"start": "10.242.0.2", "end": "10.242.3.254"} | -| cidr | 10.242.0.0/22 | -| dns_nameservers | | -| enable_dhcp | True | -| gateway_ip | 10.242.0.1 | -| host_routes | | -| id | 2deda040-be04-432b-baa6-3a2219d22f20 | -| ip_version | 4 | -| name | DMZ_SUBNET | -| network_id | eeb07b09-4b4a-4c2c-9060-0b8e414a9279 | -| tenant_id | b5890ba3fb234347ae317ca2f8358663 | -+------------------+------------------------------------------------+ - -neutron router-interface-add NEUTRON-ROUTER APPS_SUBNET -Added interface 58f3db35-f5df-4fd1-9735-4ff13dd342de to router NEUTRON-ROUTER. - -neutron router-interface-add NEUTRON-ROUTER DMZ_SUBNET -Added interface 9252ec29-7aac-4550-821c-f910f10680cf to router NEUTRON-ROUTER. 
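Optionally, confirm the attachments from the neutron CLI before re-checking
the namespace (the port and subnet IDs will differ in your environment):

neutron router-port-list NEUTRON-ROUTER
neutron router-show NEUTRON-ROUTER
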
- -ip netns exec qrouter-b27d1a20-8a31-46d5-bdef-32a5ccf4ec91 ip a -1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default - link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 - inet 127.0.0.1/8 scope host lo - valid_lft forever preferred_lft forever - inet6 ::1/128 scope host - valid_lft forever preferred_lft forever -2: sit0: mtu 1480 qdisc noop state DOWN group default - link/sit 0.0.0.0 brd 0.0.0.0 -20: qg-19f6d85f-a6: mtu 1500 qdisc noqueue state UNKNOWN group default - link/ether fa:16:3e:b8:1e:9d brd ff:ff:ff:ff:ff:ff - inet 192.168.101.1/22 brd 192.168.103.255 scope global qg-19f6d85f-a6 - valid_lft forever preferred_lft forever - inet6 fe80::f816:3eff:feb8:1e9d/64 scope link - valid_lft forever preferred_lft forever -21: qr-58f3db35-f5: mtu 1500 qdisc noqueue state UNKNOWN group default - link/ether fa:16:3e:76:ec:23 brd ff:ff:ff:ff:ff:ff - inet 10.241.0.1/22 brd 10.241.3.255 scope global qr-58f3db35-f5 - valid_lft forever preferred_lft forever - inet6 fe80::f816:3eff:fe76:ec23/64 scope link - valid_lft forever preferred_lft forever -22: qr-9252ec29-7a: mtu 1500 qdisc noqueue state UNKNOWN group default - link/ether fa:16:3e:fb:98:06 brd ff:ff:ff:ff:ff:ff - inet 10.242.0.1/22 brd 10.242.3.255 scope global qr-9252ec29-7a - valid_lft forever preferred_lft forever - inet6 fe80::f816:3eff:fefb:9806/64 scope link - valid_lft forever preferred_lft forever - -Note the two new interfaces. -1 connection to the provider network -2 connections to the subnets (1 to APPS_SUBNET, 1 to DMZ_SUBNET) - -Boot an instance ---------------- -nova boot --flavor=m1.small --image=myFirstImage \ - --nic net-id=52f4549f-aeed-4fcf-997b-4349f591cd5f APPS_INSTANCE -+--------------------------------------+-----------------------------------------------------+ -| Property | Value | -+--------------------------------------+-----------------------------------------------------+ -| OS-DCF:diskConfig | MANUAL | -| OS-EXT-AZ:availability_zone | nova | -| OS-EXT-SRV-ATTR:host | - | -| OS-EXT-SRV-ATTR:hypervisor_hostname | - | -| OS-EXT-SRV-ATTR:instance_name | instance-0000000e | -| OS-EXT-STS:power_state | 0 | -| OS-EXT-STS:task_state | scheduling | -| OS-EXT-STS:vm_state | building | -| OS-SRV-USG:launched_at | - | -| OS-SRV-USG:terminated_at | - | -| accessIPv4 | | -| accessIPv6 | | -| adminPass | jdLkr4i6ATvQ | -| config_drive | | -| created | 2014-04-10T16:27:31Z | -| flavor | m1.small (2) | -| hostId | | -| id | fc849bb9-54d3-4a9a-99a4-6346a6eef404 | -| image | myFirstImage (f22d3ab8-96a5-46db-a029-7d59156c8e31) | -| key_name | - | -| metadata | {} | -| name | APPS_INSTANCE | -| os-extended-volumes:volumes_attached | [] | -| progress | 0 | -| security_groups | default | -| status | BUILD | -| tenant_id | b5890ba3fb234347ae317ca2f8358663 | -| updated | 2014-04-10T16:27:31Z | -| user_id | 1dfcb72ef6a7428d8dd7300bc7f303d9 | -+--------------------------------------+-----------------------------------------------------+ - -nova boot --flavor=m1.small --image=myFirstImage \ - --nic net-id=eeb07b09-4b4a-4c2c-9060-0b8e414a9279 DMZ_INSTANCE -+--------------------------------------+-----------------------------------------------------+ -| Property | Value | -+--------------------------------------+-----------------------------------------------------+ -| OS-DCF:diskConfig | MANUAL | -| OS-EXT-AZ:availability_zone | nova | -| OS-EXT-SRV-ATTR:host | - | -| OS-EXT-SRV-ATTR:hypervisor_hostname | - | -| OS-EXT-SRV-ATTR:instance_name | instance-0000000f | -| OS-EXT-STS:power_state | 0 | -| OS-EXT-STS:task_state | 
scheduling | -| OS-EXT-STS:vm_state | building | -| OS-SRV-USG:launched_at | - | -| OS-SRV-USG:terminated_at | - | -| accessIPv4 | | -| accessIPv6 | | -| adminPass | 4d7UsUJhSpBd | -| config_drive | | -| created | 2014-04-10T16:29:25Z | -| flavor | m1.small (2) | -| hostId | | -| id | f281c349-d49c-4d6c-bf56-74f04f2e8aec | -| image | myFirstImage (f22d3ab8-96a5-46db-a029-7d59156c8e31) | -| key_name | - | -| metadata | {} | -| name | DMZ_INSTANCE | -| os-extended-volumes:volumes_attached | [] | -| progress | 0 | -| security_groups | default | -| status | BUILD | -| tenant_id | b5890ba3fb234347ae317ca2f8358663 | -| updated | 2014-04-10T16:29:25Z | -| user_id | 1dfcb72ef6a7428d8dd7300bc7f303d9 | -+--------------------------------------+-----------------------------------------------------+ - -Check connectivity ------------------- -nova console-log APPS_INSTANCE -...skip -Starting network... -udhcpc (v1.18.5) started -Sending discover... -Sending select for 10.241.0.2... -Lease of 10.241.0.2 obtained, lease time 86400 -..skip - -nova console-log DMZ_INSTANCE -...skip -Starting network... -udhcpc (v1.18.5) started -Sending discover... -Sending select for 10.242.0.2... -Lease of 10.242.0.2 obtained, lease time 86400 -...skip - -root@controller:~# nova list -+--------------------------------------+---------------+--------+------------+-------------+---------------------+ -| ID | Name | Status | Task State | Power State | Networks | -+--------------------------------------+---------------+--------+------------+-------------+---------------------+ -| fc849bb9-54d3-4a9a-99a4-6346a6eef404 | APPS_INSTANCE | ACTIVE | - | Running | APPS_NET=10.241.0.2 | -| f281c349-d49c-4d6c-bf56-74f04f2e8aec | DMZ_INSTANCE | ACTIVE | - | Running | DMZ_NET=10.242.0.2 | -+--------------------------------------+---------------+--------+------------+-------------+---------------------+ - - -ping ---- -Since we are not using floating IPs you will only be able ping from inside the route namespace - -# ip netns exec qrouter-b27d1a20-8a31-46d5-bdef-32a5ccf4ec91 \ - ping 10.241.0.2 -c 1 -PING 10.241.0.2 (10.241.0.2) 56(84) bytes of data. 
-64 bytes from 10.241.0.2: icmp_seq=1 ttl=64 time=6.32 ms - ---- 10.241.0.2 ping statistics --- -1 packets transmitted, 1 received, 0% packet loss, time 0ms -rtt min/avg/max/mdev = 6.328/6.328/6.328/0.000 ms - -# ping 10.241.0.2 -c 1 -connect: Network is unreachable - - -The final Open vSwitch configs ------------------------------- - -Controller ---- -root@controller:~# ovs-vsctl show -524a6c84-226d-427b-8efa-732ed7e7fa43 - Bridge "br-eth0" - Port "eth0" - Interface "eth0" - Port "br-eth0" - Interface "br-eth0" - type: internal - Port "phy-br-eth0" - Interface "phy-br-eth0" - Bridge br-tun - Port patch-int - Interface patch-int - type: patch - options: {peer=patch-tun} - Port "gre-2" - Interface "gre-2" - type: gre - options: {in_key=flow, local_ip="192.168.7.2", out_key=flow, remote_ip="192.168.7.4"} - Port br-tun - Interface br-tun - type: internal - Bridge br-int - Port "qr-58f3db35-f5" - tag: 2 - Interface "qr-58f3db35-f5" - type: internal - Port "tap6e65f2e5-39" - tag: 3 - Interface "tap6e65f2e5-39" - type: internal - Port "qr-9252ec29-7a" - tag: 3 - Interface "qr-9252ec29-7a" - type: internal - Port "int-br-eth0" - Interface "int-br-eth0" - Port patch-tun - Interface patch-tun - type: patch - options: {peer=patch-int} - Port "tapcf2a0e68-6b" - tag: 2 - Interface "tapcf2a0e68-6b" - type: internal - Port br-int - Interface br-int - type: internal - Port "qg-19f6d85f-a6" - tag: 1 - Interface "qg-19f6d85f-a6" - type: internal - ovs_version: "2.0.0" - - -Compute ---- -root@compute:~# ovs-vsctl show -99d365d2-f74e-40a8-b9a0-5bb60353675d - Bridge br-int - Port br-int - Interface br-int - type: internal - Port patch-tun - Interface patch-tun - type: patch - options: {peer=patch-int} - Port "tapc2db0bfa-ae" - tag: 1 - Interface "tapc2db0bfa-ae" - Port "tap57fae225-16" - tag: 2 - Interface "tap57fae225-16" - Port "int-br-eth0" - Interface "int-br-eth0" - Bridge "br-eth0" - Port "eth0" - Interface "eth0" - Port "phy-br-eth0" - Interface "phy-br-eth0" - Port "br-eth0" - Interface "br-eth0" - type: internal - Bridge br-tun - Port br-tun - Interface br-tun - type: internal - Port "gre-1" - Interface "gre-1" - type: gre - options: {in_key=flow, local_ip="192.168.7.4", out_key=flow, remote_ip="192.168.7.2"} - Port patch-int - Interface patch-int - type: patch - options: {peer=patch-tun} - ovs_version: "2.0.0" - - -References ----------- -http:// developer.rackspace.com/blog/neutron-networking-l3-agent.html \ No newline at end of file diff --git a/meta-openstack/README.networking_vlan b/meta-openstack/README.networking_vlan deleted file mode 100644 index 6d48e2b..0000000 --- a/meta-openstack/README.networking_vlan +++ /dev/null @@ -1,382 +0,0 @@ -Networking - VLAN network -========================= - -Description ------------ -The vlan network will have the VMs on one of two vlan networks -(DMZ_SUBNET - 172.16.0.0/24, INSIDE_SUBNET - 192.168.100.0/241). We -will continue to use the management network (192.168.7.0/24) for -controller/compute communications. The dhcp-agent will provide the VMs -addresses within each subnet and within its provisioned ranges. This -type of network is more typical of a deployed network since network -traffic can be contained to within the assigned vlan. - - -Assumptions ------------ -It is assumed you have completed the steps described in -README.networking and have provisioned the host vSwitch as well as -created the br-eth0 bridges on the controller and compute nodes. - -At this point you should be able to ping 192.168.7.4 from 192.168.7.4 -and vise versa. 
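A quick connectivity check, assuming 192.168.7.2 is the controller and
192.168.7.4 the compute node:

(on controller)
ping -c 1 192.168.7.4
(on compute)
ping -c 1 192.168.7.2
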
- -You have built your controller image including the cirros image (for -which you have already added the image to glance as myFirstImage). - -You have run 'source /etc/nova/openrc' - -Configuration updates ---------------------- -On the host Open vSwitch add an IP for 192.168.100.1/22 -sudo ip address add 192.168.100.1/24 broadcast 192.168.100.255 dev br-int -sudo ip address add 172.16.0.1/24 broadcast 172.16.0.255 dev br-int - -On the controller and (all) compute nodes you must edit the file -/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini - -In the [OVS] section set -network_vlan_ranges = ph-eth0:200:200,ph-eth0:300:300 -bridge_mappings = ph-eth0:br-eth0 - -(*** on compute nodes edit local_ip as well [192.168.7.4]***) - -Restart some services to allow these changes to take effect: -/etc/init.d/neutron-openvswitch-agent reload -(on controller) -/etc/init.d/neutron-server reload -/etc/init.d/neutron-dhcp-agent reload -(on compute) -/etc/init.d/nova-compute reload - - -Create the net and subnet -------------------------- -neutron net-create --provider:physical_network=ph-eth0 \ - --provider:network_type=vlan --provider:segmentation_id=200 \ - --shared INSIDE_NET -Created a new network: -+---------------------------+--------------------------------------+ -| Field | Value | -+---------------------------+--------------------------------------+ -| admin_state_up | True | -| id | 587e29d0-eb89-4c0d-948b-845009380097 | -| name | INSIDE_NET | -| provider:network_type | vlan | -| provider:physical_network | ph-eth0 | -| provider:segmentation_id | 200 | -| shared | True | -| status | ACTIVE | -| subnets | | -| tenant_id | b5890ba3fb234347ae317ca2f8358663 | -+---------------------------+--------------------------------------+ - -neutron net-create --provider:physical_network=ph-eth0 \ - --provider:network_type=vlan --provider:segmentation_id=300 \ - --shared DMZ_NET -Created a new network: -+---------------------------+--------------------------------------+ -| Field | Value | -+---------------------------+--------------------------------------+ -| admin_state_up | True | -| id | 498fa1f2-87de-4874-8ca9-f4ba3e394d2a | -| name | DMZ_NET | -| provider:network_type | vlan | -| provider:physical_network | ph-eth0 | -| provider:segmentation_id | 300 | -| shared | True | -| status | ACTIVE | -| subnets | | -| tenant_id | b5890ba3fb234347ae317ca2f8358663 | -+---------------------------+--------------------------------------+ - -neutron subnet-create INSIDE_NET 192.168.100.0/24 \ - --name INSIDE_SUBNET --no-gateway \ - --host-route destination=0.0.0.0/0,nexthop=192.168.100.1 \ - --allocation-pool start=192.168.100.100,end=192.168.100.199 -Created a new subnet: -+------------------+----------------------------------------------------------+ -| Field | Value | -+------------------+----------------------------------------------------------+ -| allocation_pools | {"start": "192.168.100.100", "end": "192.168.100.199"} | -| cidr | 192.168.100.0/24 | -| dns_nameservers | | -| enable_dhcp | True | -| gateway_ip | | -| host_routes | {"destination": "0.0.0.0/0", "nexthop": "192.168.100.1"} | -| id | 2c1a77aa-614c-4a97-9855-a62bb4b4d899 | -| ip_version | 4 | -| name | INSIDE_SUBNET | -| network_id | 587e29d0-eb89-4c0d-948b-845009380097 | -| tenant_id | b5890ba3fb234347ae317ca2f8358663 | -+------------------+----------------------------------------------------------+ - -neutron subnet-create DMZ_NET 172.16.0.0/24 --name DMZ_SUBNET \ - --no-gateway --host-route destination=0.0.0.0/0,nexthop=172.16.0.1 \ - 
--allocation-pool start=172.16.0.100,end=172.16.0.199 -Created a new subnet: -+------------------+-------------------------------------------------------+ -| Field | Value | -+------------------+-------------------------------------------------------+ -| allocation_pools | {"start": "172.16.0.100", "end": "172.16.0.199"} | -| cidr | 172.16.0.0/24 | -| dns_nameservers | | -| enable_dhcp | True | -| gateway_ip | | -| host_routes | {"destination": "0.0.0.0/0", "nexthop": "172.16.0.1"} | -| id | bfae1a19-e15f-4e5e-94f2-018f24abbc2e | -| ip_version | 4 | -| name | DMZ_SUBNET | -| network_id | 498fa1f2-87de-4874-8ca9-f4ba3e394d2a | -| tenant_id | b5890ba3fb234347ae317ca2f8358663 | -+------------------+-------------------------------------------------------+ - - -Boot the image and test connectivity ------------------------------------- -(note with our current config you might only be able to run 2 instances at - any one time, so you will end up juggling them to test connectivity) - -nova boot --flavor=m1.small --image=myFirstImage \ - --nic net-id=587e29d0-eb89-4c0d-948b-845009380097 INSIDE_INSTANCE -+--------------------------------------+-----------------------------------------------------+ -| Property | Value | -+--------------------------------------+-----------------------------------------------------+ -| OS-DCF:diskConfig | MANUAL | -| OS-EXT-AZ:availability_zone | nova | -| OS-EXT-SRV-ATTR:host | - | -| OS-EXT-SRV-ATTR:hypervisor_hostname | - | -| OS-EXT-SRV-ATTR:instance_name | instance-00000009 | -| OS-EXT-STS:power_state | 0 | -| OS-EXT-STS:task_state | scheduling | -| OS-EXT-STS:vm_state | building | -| OS-SRV-USG:launched_at | - | -| OS-SRV-USG:terminated_at | - | -| accessIPv4 | | -| accessIPv6 | | -| adminPass | 7itgDwsdY8d4 | -| config_drive | | -| created | 2014-04-10T14:31:21Z | -| flavor | m1.small (2) | -| hostId | | -| id | 630affe0-d497-4211-87bb-383254d60428 | -| image | myFirstImage (f22d3ab8-96a5-46db-a029-7d59156c8e31) | -| key_name | - | -| metadata | {} | -| name | INSIDE_INSTANCE | -| os-extended-volumes:volumes_attached | [] | -| progress | 0 | -| security_groups | default | -| status | BUILD | -| tenant_id | b5890ba3fb234347ae317ca2f8358663 | -| updated | 2014-04-10T14:31:21Z | -| user_id | 1dfcb72ef6a7428d8dd7300bc7f303d9 | -+--------------------------------------+-----------------------------------------------------+ - -nova boot --flavor=m1.small --image=myFirstImage \ - --nic net-id=587e29d0-eb89-4c0d-948b-845009380097 INSIDE_INSTANCE2 -+--------------------------------------+-----------------------------------------------------+ -| Property | Value | -+--------------------------------------+-----------------------------------------------------+ -| OS-DCF:diskConfig | MANUAL | -| OS-EXT-AZ:availability_zone | nova | -| OS-EXT-SRV-ATTR:host | - | -| OS-EXT-SRV-ATTR:hypervisor_hostname | - | -| OS-EXT-SRV-ATTR:instance_name | instance-0000000a | -| OS-EXT-STS:power_state | 0 | -| OS-EXT-STS:task_state | scheduling | -| OS-EXT-STS:vm_state | building | -| OS-SRV-USG:launched_at | - | -| OS-SRV-USG:terminated_at | - | -| accessIPv4 | | -| accessIPv6 | | -| adminPass | BF9p6tftS2xJ | -| config_drive | | -| created | 2014-04-10T14:32:07Z | -| flavor | m1.small (2) | -| hostId | | -| id | ff94ee07-ae24-4785-9d51-26de2c23da60 | -| image | myFirstImage (f22d3ab8-96a5-46db-a029-7d59156c8e31) | -| key_name | - | -| metadata | {} | -| name | INSIDE_INSTANCE2 | -| os-extended-volumes:volumes_attached | [] | -| progress | 0 | -| security_groups | default | -| status | BUILD 
| -| tenant_id | b5890ba3fb234347ae317ca2f8358663 | -| updated | 2014-04-10T14:32:08Z | -| user_id | 1dfcb72ef6a7428d8dd7300bc7f303d9 | -+--------------------------------------+-----------------------------------------------------+ - -root@controller:~# nova list -+--------------------------------------+------------------+--------+------------+-------------+----------------------------+ -| ID | Name | Status | Task State | Power State | Networks | -+--------------------------------------+------------------+--------+------------+-------------+----------------------------+ -| 630affe0-d497-4211-87bb-383254d60428 | INSIDE_INSTANCE | ACTIVE | - | Running | INSIDE_NET=192.168.100.100 | -| ff94ee07-ae24-4785-9d51-26de2c23da60 | INSIDE_INSTANCE2 | ACTIVE | - | Running | INSIDE_NET=192.168.100.102 | -+--------------------------------------+------------------+--------+------------+-------------+----------------------------+ - -nova boot --flavor=m1.small --image=myFirstImage \ - --nic net-id=498fa1f2-87de-4874-8ca9-f4ba3e394d2a DMZ_INSTANCE -+--------------------------------------+-----------------------------------------------------+ -| Property | Value | -+--------------------------------------+-----------------------------------------------------+ -| OS-DCF:diskConfig | MANUAL | -| OS-EXT-AZ:availability_zone | nova | -| OS-EXT-SRV-ATTR:host | - | -| OS-EXT-SRV-ATTR:hypervisor_hostname | - | -| OS-EXT-SRV-ATTR:instance_name | instance-0000000d | -| OS-EXT-STS:power_state | 0 | -| OS-EXT-STS:task_state | scheduling | -| OS-EXT-STS:vm_state | building | -| OS-SRV-USG:launched_at | - | -| OS-SRV-USG:terminated_at | - | -| accessIPv4 | | -| accessIPv6 | | -| adminPass | SvzSpnmB6mXJ | -| config_drive | | -| created | 2014-04-10T14:42:53Z | -| flavor | m1.small (2) | -| hostId | | -| id | 0dab2712-5f1d-4559-bfa4-d09c6304418c | -| image | myFirstImage (f22d3ab8-96a5-46db-a029-7d59156c8e31) | -| key_name | - | -| metadata | {} | -| name | DMZ_INSTANCE | -| os-extended-volumes:volumes_attached | [] | -| progress | 0 | -| security_groups | default | -| status | BUILD | -| tenant_id | b5890ba3fb234347ae317ca2f8358663 | -| updated | 2014-04-10T14:42:54Z | -| user_id | 1dfcb72ef6a7428d8dd7300bc7f303d9 | -+--------------------------------------+-----------------------------------------------------+ - -nova boot --flavor=m1.small --image=myFirstImage \ - --nic net-id=498fa1f2-87de-4874-8ca9-f4ba3e394d2a DMZ_INSTANCE2 -... - -nova console-log INSIDE_INSTANCE2 ---- -...skip -Starting network... -udhcpc (v1.18.5) started -Sending discover... -Sending select for 192.168.100.102... -...skip - -ping ---- - -You should also be able to ping instances on the same subnet but not -those on the other subnet. The controller and compute can not ping -instances on either network (with metadata implemented the controller -should be able to, but currently the metadata agent is not available.) 
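To run these pings from inside an instance, attach to its console. One way,
assuming the SPICE HTML5 console described in README.spice is enabled, is:

nova get-spice-console INSIDE_INSTANCE spice-html5

The returned URL can be opened in a browser; the console tab in the Horizon
dashboard works as well.
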
- -dump-flows ----------- -(note the 'vlan' tags) -root@compute:~# ovs-ofctl dump-flows br-int -NXST_FLOW reply (xid=0x4): - cookie=0x0, duration=1640.378s, table=0, n_packets=3, n_bytes=788, idle_age=1628, priority=3,in_port=6,dl_vlan=300 actions=mod_vlan_vid:2,NORMAL - cookie=0x0, duration=2332.714s, table=0, n_packets=6, n_bytes=1588, idle_age=2274, priority=3,in_port=6,dl_vlan=200 actions=mod_vlan_vid:1,NORMAL - cookie=0x0, duration=2837.737s, table=0, n_packets=22, n_bytes=1772, idle_age=1663, priority=2,in_port=6 actions=drop - cookie=0x0, duration=2837.976s, table=0, n_packets=53, n_bytes=5038, idle_age=1535, priority=1 actions=NORMAL - - - -The final Open vSwitch configs ------------------------------- - -Controller ---- -root@controller:~# ovs-vsctl show -524a6c84-226d-427b-8efa-732ed7e7fa43 - Bridge br-tun - Port "gre-2" - Interface "gre-2" - type: gre - options: {in_key=flow, local_ip="192.168.7.2", out_key=flow, remote_ip="192.168.7.4"} - Port br-tun - Interface br-tun - type: internal - Port patch-int - Interface patch-int - type: patch - options: {peer=patch-tun} - Bridge "br-eth0" - Port "eth0" - Interface "eth0" - Port "br-eth0" - Interface "br-eth0" - type: internal - Port "phy-br-eth0" - Interface "phy-br-eth0" - Bridge br-int - Port patch-tun - Interface patch-tun - type: patch - options: {peer=patch-int} - Port "tapafbbdd15-e7" - tag: 1 - Interface "tapafbbdd15-e7" - type: internal - Port "int-br-eth0" - Interface "int-br-eth0" - Port "tapa50c1a18-34" - tag: 2 - Interface "tapa50c1a18-34" - type: internal - Port br-int - Interface br-int - type: internal - ovs_version: "2.0.0" - - -Compute ---- -root@compute:~# ovs-vsctl show -99d365d2-f74e-40a8-b9a0-5bb60353675d - Bridge br-tun - Port br-tun - Interface br-tun - type: internal - Port "gre-1" - Interface "gre-1" - type: gre - options: {in_key=flow, local_ip="192.168.7.4", out_key=flow, remote_ip="192.168.7.2"} - Port patch-int - Interface patch-int - type: patch - options: {peer=patch-tun} - Bridge br-int - Port br-int - Interface br-int - type: internal - Port "int-br-eth0" - Interface "int-br-eth0" - Port patch-tun - Interface patch-tun - type: patch - options: {peer=patch-int} - Port "tap78e1ac37-6c" - tag: 2 - Interface "tap78e1ac37-6c" - Port "tap315398a4-cd" - tag: 1 - Interface "tap315398a4-cd" - Bridge "br-eth0" - Port "phy-br-eth0" - Interface "phy-br-eth0" - Port "eth0" - Interface "eth0" - Port "br-eth0" - Interface "br-eth0" - type: internal - ovs_version: "2.0.0" - - -References ----------- -http://developer.rackspace.com/blog/neutron-networking-vlan-provider-networks.html \ No newline at end of file diff --git a/meta-openstack/README.spice b/meta-openstack/README.spice deleted file mode 100644 index a6b93b2..0000000 --- a/meta-openstack/README.spice +++ /dev/null @@ -1,82 +0,0 @@ -OpenStack offers two types of console support, VNC support and SPICE. -The VNC protocol is fairly limited, lacking support for multiple monitors, -bi-directional audio, reliable cut+paste, video streaming and more. -SPICE is a new protocol which aims to address all the limitations in VNC, -to provide good remote desktop support. - -The Controller will have both the proxy for vnc and for spice html5 -running. The nova-spicehtml5proxy service communicates directly with -the hypervisor process using SPICE. - -OpenStack's Dashboard uses a SPICE HTML5 widget in its console tab -to communicate with the nova-spicehtml5proxy service. 
Since both proxies -are running, the Dashboard will automatically attempt to connect to -whichever console is provided by the compute node. - -Another way to access the spice console is from the controller, -run the following command: - - nova get-spice-console myinstance spice-html5 - -This will give you an URL which will directly give you access to the console -(instead of from Horizon). - -The enable or disable VNC/SPICE, on the compute node, modify -/etc/nova/nova.conf. - -Options for configuring SPICE as the console for OpenStack Compute can be - found below. - ---------------------------------------------------------------------------------- - Configuration option=Default value (Type) Description - - agent_enabled=True (BoolOpt)enable spice guest agent support - - enabled=False (BoolOpt)enable spice related features - - html5proxy_base_url=http://127.0.0.1:6080/spice_auto.html - (StrOpt)location of spice html5 - console proxy, in the form - "http://127.0.0.1:6080/spice_auto.html" - - keymap=en-us (StrOpt)keymap for spice - - server_listen=127.0.0.1 (StrOpt)IP address on which instance - spice - server should listen - - server_proxyclient_address=127.0.0.1 (StrOpt)the address to which proxy - clients (like nova-spicehtml5proxy) - should connect ---------------------------------------------------------------------------------- - -Combinations/behaviour from Compute: - -1. VNC will be provided - -vnc_enabled=True -enabled=True -agent_enabled=True - -2. SPICE will be provided - -vnc_enabled=False -enabled=True -agent_enabled=True - -3. VNC will be provided - -vnc_enabled=True -enabled=False -agent_enabled=False - -4. No console will be provided - -vnc_enabled=False -enabled=False -agent_enabled=False - -After nova.conf is changed on the compute node, restart nova-compute -service. If an instance was running beforehand, it will be necessary to -restart (reboot, soft or hard) the instance to get the new console. - diff --git a/meta-openstack/README.swift b/meta-openstack/README.swift deleted file mode 100644 index d7d63bf..0000000 --- a/meta-openstack/README.swift +++ /dev/null @@ -1,447 +0,0 @@ -Summary -======= - -This document is not intended to provide detail of how Swift in general -works, but rather it highlights the details of how Swift cluster is -setup and OpenStack is configured to allow various Openstack components -interact with Swift. - - -Swift Overview -============== - -Openstack Swift is an object storage service. Clients can access swift -objects through RESTful APIs. Swift objects can be grouped into a -"container" in which containers are grouped into "account". Each account -or container in Swift cluster is represented by a SQLite database which -contains information related to that account or container. Each -Swift object is just user data input. - -Swift cluster can include a massive number of storage devices. Any Swift -storage device can be configured to store container databases and/or -account databases and/or objects. Each Swift account database can be -identified by tuple (account name). Each Swift container database can -be identified by tuple (account name, container name). Each swift object -can be identified by tuple (account name, container name, object name). - -Swift uses "ring" static mapping algorithm to identify what storage device -hosting account database, container database, or object contain (similar -to Ceph uses Crush algorithm to identify what OSD hosting Ceph object). 
-A Swift cluster has 3 rings (account ring, container ring, and object ring) -used for finding location of account database, container database, or -object file respectively. - -Swift service includes the following core services: proxy-server which -provides the RESTful APIs for Swift clients to access; account-server -which manages accounts; container-server which manages containers; -and object-server which manages objects. - - -Swift Cluster Setup -=================== - -The Swift default cluster is setup to have the followings: - -* All Swift main process services including proxy-server, account-server - container-server, object-server run on Controller node. -* 3 zones in which each zone has only 1 storage device. - The underneath block devices for these 3 storage devices are loopback - devices. The size of the backing up loopback files is 2Gbytes by default - and can be changed at compile time through variable SWIFT_BACKING_FILE_SIZE. - If SWIFT_BACKING_FILE_SIZE="0G" then is for disabling loopback devices - and using local filesystem as Swift storage device. -* All 3 Swift rings have 2^12 partitions and 2 replicas. - -The Swift default cluster is mainly for demonstration purpose. One might -wants to have a different Swift cluster setup than this setup (e.g. using -real hardware block device instead of loopback devices). - -The script /etc/swift/swift_setup.sh is provided to ease the task of setting -up a complicated Swift cluster. It reads a cluster config file, which describes -what storage devices are included in what rings, and constructs the cluster. - -For details of how to use swift_setup.sh and the format of Swift cluster -config file please refer to the script's help: - - $ swift_setup.sh - - -Glance and Swift -================ - -Glance can store images into Swift cluster when "default_store = swift" -is set in /etc/glance/glance-api.conf. - -By default "default_store" has value of "file" which tells Glance to -store images into local filesystem. "default_store" value can be set -during compile time through variable GLANCE_DEFAULT_STORE. - -The following configuration options in /etc/glance/glance-api.conf affect -on how glance interacts with Swift cluster: - - swift_store_auth_version = 2 - swift_store_auth_address = http://127.0.0.1:5000/v2.0/ - swift_store_user = service:glance - swift_store_key = password - swift_store_container = glance - swift_store_create_container_on_put = True - swift_store_large_object_size = 5120 - swift_store_large_object_chunk_size = 200 - swift_enable_snet = False - -With these default settings, the images will be stored under -Swift account: "service" tenant ID and Swift cluster container: -"glance". - - -Cinder Backup and Swift -======================= - -Cinder-backup has ability to store volume backup into Swift -cluster with the following command: - - $ cinder backup-create - -where is ID of an existing Cinder volume if -the configure option "backup_driver" in /etc/cinder/cinder.conf -is set to "cinder.backup.drivers.ceph". - -Cinder-backup is not be able to create a backup for any cinder -volume which backed by NFS or Glusterfs. This is because NFS -and Gluster cinder-volume backend drivers do not support the -backup functionality. In other words, only cinder volume -backed by lvm-iscsi and ceph-rbd are able to be backed-up -by cinder-backup. 
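The backup driver itself is selected in the [DEFAULT] section of
/etc/cinder/cinder.conf. A minimal sketch for sending backups to Swift,
assuming the stock Swift backup driver name for this Cinder release (verify
the module path against your installation):

backup_driver = cinder.backup.drivers.swift
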
- -The following configuration options in /etc/cinder/cinder.conf affect -on how cinder-backup interacts with Swift cluster: - - backup_swift_url=http://controller:8888/v1/AUTH_ - backup_swift_auth=per_user - #backup_swift_user= - #backup_swift_key= - backup_swift_container=cinder-backups - backup_swift_object_size=52428800 - backup_swift_retry_attempts=3 - backup_swift_retry_backoff=2 - backup_compression_algorithm=zlib - -With these defaults settings, the tenant ID of the keystone user that -runs "cinder backup-create" command will be used as Swift cluster -account name, along with "cinder-backups" Swift cluster container name -in which the volume backups will be saved into. - - -Build Configuration Options -=========================== - -* Controller build config options: - - --enable-board=intel-xeon-core \ - --enable-rootfs=ovp-openstack-controller \ - --enable-kernel=preempt-rt \ - --enable-addons=wr-ovp-openstack,wr-ovp \ - --with-template=feature/openstack-tests - -* Compute build config options: - - --enable-board=intel-xeon-core \ - --enable-rootfs=ovp-openstack-compute \ - --enable-kernel=preempt-rt \ - --enable-addons=wr-ovp-openstack,wr-ovp - - -Test Steps -========== - -This section describes test steps and expected results to demonstrate that -Swift is integrated properly into OpenStack. - -Please note: the following commands are carried on Controller node, unless -otherwise explicitly indicated. - - $ Start Controller and Compute node - $ . /etc/nova/openrc - $ dd if=/dev/urandom of=50M_c1.org bs=1M count=50 - $ dd if=/dev/urandom of=50M_c2.org bs=1M count=50 - $ dd if=/dev/urandom of=100M_c2.org bs=1M count=100 - $ swift upload c1 50M_c1.org && swift upload c2 50M_c2.org && swift upload c2 100M_c2.org - $ swift list - -c1 -c2 - - $ swift stat c1 - - Account: AUTH_4ebc0e00338f405c9267866c6b984e71 - Container: c1 - Objects: 1 - Bytes: 52428800 - Read ACL: - Write ACL: - Sync To: - Sync Key: - Accept-Ranges: bytes - X-Timestamp: 1396457818.76909 - X-Trans-Id: tx0564472425ad47128b378-00533c41bb - Content-Type: text/plain; charset=utf-8 - -(Should see there is 1 object) - - $ swift stat c2 - -root@controller:~# swift stat c2 - Account: AUTH_4ebc0e00338f405c9267866c6b984e71 - Container: c2 - Objects: 2 - Bytes: 157286400 - Read ACL: - Write ACL: - Sync To: - Sync Key: - Accept-Ranges: bytes - X-Timestamp: 1396457826.26262 - X-Trans-Id: tx312934d494a44bbe96a00-00533c41cd - Content-Type: text/plain; charset=utf-8 - -(Should see there are 2 objects) - - $ swift stat c3 - -Container 'c3' not found - - $ mv 50M_c1.org 50M_c1.save && mv 50M_c2.org 50M_c2.save && mv 100M_c2.org 100M_c2.save - $ swift download c1 50M_c1.org && swift download c2 50M_c2.org && swift download c2 100M_c2.org - $ md5sum 50M_c1.save 50M_c1.org && md5sum 50M_c2.save 50M_c2.org && md5sum 100M_c2.save 100M_c2.org - -a8f7d671e35fcf20b87425fb39bdaf05 50M_c1.save -a8f7d671e35fcf20b87425fb39bdaf05 50M_c1.org -353233ed20418dbdeeb2fad91ba4c86a 50M_c2.save -353233ed20418dbdeeb2fad91ba4c86a 50M_c2.org -3b7cbb444c2ba93819db69ab3584f4bd 100M_c2.save -3b7cbb444c2ba93819db69ab3584f4bd 100M_c2.org - -(The md5sums of each pair "zzz.save" and "zzz.org" files must be the same) - - $ swift delete c1 50M_c1.org && swift delete c2 50M_c2.org - $ swift stat c1 - - Account: AUTH_4ebc0e00338f405c9267866c6b984e71 - Container: c1 - Objects: 0 - Bytes: 0 - Read ACL: - Write ACL: - Sync To: - Sync Key: - Accept-Ranges: bytes - X-Timestamp: 1396457818.77284 - X-Trans-Id: tx58e4bb6d06b84276b8d7f-00533c424c - Content-Type: text/plain; 
charset=utf-8 - -(Should see there is no object) - - $ swift stat c2 - - Account: AUTH_4ebc0e00338f405c9267866c6b984e71 - Container: c2 - Objects: 1 - Bytes: 104857600 - Read ACL: - Write ACL: - Sync To: - Sync Key: - Accept-Ranges: bytes - X-Timestamp: 1396457826.25872 - X-Trans-Id: txdae8ab2adf4f47a4931ba-00533c425b - Content-Type: text/plain; charset=utf-8 - -(Should see there is 1 object) - - $ swift upload c1 50M_c1.org && swift upload c2 50M_c2.org - $ rm *.org - $ swift download c1 50M_c1.org && swift download c2 50M_c2.org && swift download c2 100M_c2.org - $ md5sum 50M_c1.save 50M_c1.org && md5sum 50M_c2.save 50M_c2.org && md5sum 100M_c2.save 100M_c2.org - -31147c186e7dd2a4305026d3d6282861 50M_c1.save -31147c186e7dd2a4305026d3d6282861 50M_c1.org -b9043aacef436dfbb96c39499d54b850 50M_c2.save -b9043aacef436dfbb96c39499d54b850 50M_c2.org -b1a1b6b34a6852cdd51cd487a01192cc 100M_c2.save -b1a1b6b34a6852cdd51cd487a01192cc 100M_c2.org - -(The md5sums of each pair "zzz.save" and "zzz.org" files must be the same) - - $ neutron net-create mynetwork - $ Modify "/etc/glance/glance-api.conf" to have "default_store = swift" - $ /etc/init.d/glance-api restart - $ glance image-create --name myfirstimage --is-public true --container-format bare --disk-format qcow2 --file /root/images/cirros-0.3.0-x86_64-disk.img - $ glance image-list - -+--------------------------------------+--------------+-------------+------------------+---------+--------+ -| ID | Name | Disk Format | Container Format | Size | Status | -+--------------------------------------+--------------+-------------+------------------+---------+--------+ -| 79f52103-5b22-4aa5-8159-2d146b82b0b2 | myfirstimage | qcow2 | bare | 9761280 | active | -+--------------------------------------+--------------+-------------+------------------+---------+--------+ - - $ export OS_TENANT_NAME=service && export OS_USERNAME=glance - $ swift list glance - -79f52103-5b22-4aa5-8159-2d146b82b0b2 - -(The object name in the "glance" container must be the same as glance image id just created) - - $ swift download glance 79f52103-5b22-4aa5-8159-2d146b82b0b2 - $ md5sum 79f52103-5b22-4aa5-8159-2d146b82b0b2 /root/images/cirros-0.3.0-x86_64-disk.img - -50bdc35edb03a38d91b1b071afb20a3c 79f52103-5b22-4aa5-8159-2d146b82b0b2 -50bdc35edb03a38d91b1b071afb20a3c /root/images/cirros-0.3.0-x86_64-disk.img - -(The md5sum of these 2 files must be the same) - - $ ls /etc/glance/images/ -(This should be empty) - - $ . /etc/nova/openrc - $ nova boot --image myfirstimage --flavor 1 myinstance - $ nova list - -+--------------------------------------+------------+--------+------------+-------------+----------+ -| ID | Name | Status | Task State | Power State | Networks | -+--------------------------------------+------------+--------+------------+-------------+----------+ -| bc9662a0-0dac-4bff-a7fb-b820957c55a4 | myinstance | ACTIVE | - | Running | | -+--------------------------------------+------------+--------+------------+-------------+----------+ - - $ From dashboard, log into VM console to make sure the VM is really running - $ nova delete bc9662a0-0dac-4bff-a7fb-b820957c55a4 - $ glance image-delete 79f52103-5b22-4aa5-8159-2d146b82b0b2 - $ export OS_TENANT_NAME=service && export OS_USERNAME=glance - $ swift list glance - -(Should be empty) - - $ . /etc/nova/openrc && . 
/etc/cinder/add-cinder-volume-types.sh - $ cinder create --volume_type lvm_iscsi --display_name=lvm_vol_1 1 - $ cinder list - -+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+ -| ID | Status | Display Name | Size | Volume Type | Bootable | Attached to | -+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+ -| 3e388ae0-2e20-42a2-80da-3f9f366cbaed | available | lvm_vol_1 | 1 | lvm_iscsi | false | | -+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+ - - $ cinder backup-create 3e388ae0-2e20-42a2-80da-3f9f366cbaed - $ cinder backup-list - -+--------------------------------------+--------------------------------------+-----------+------+------+--------------+----------------+ -| ID | Volume ID | Status | Name | Size | Object Count | Container | -+--------------------------------------+--------------------------------------+-----------+------+------+--------------+----------------+ -| 1444f5d0-3a87-40bc-a7a7-f3c672768b6a | 3e388ae0-2e20-42a2-80da-3f9f366cbaed | available | None | 1 | 22 | cinder-backups | -+--------------------------------------+--------------------------------------+-----------+------+------+--------------+----------------+ - - $ swift list - -c1 -c2 -cinder-backups - -(Should see new Swift container "cinder-backup") - - $ swift list cinder-backups - -volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00001 -volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00002 -volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00003 -volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00004 -volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00005 -volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00006 -volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00007 -volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00008 -volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00009 -volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00010 -volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00011 -volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00012 -volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00013 -volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00014 -volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00015 -volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00016 -volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00017 
-volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00018 -volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00019 -volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00020 -volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00021 -volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a_metadata - - $ reboot - $ . /etc/nova/openrc && swift list cinder-backups - -volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00001 -volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00002 -volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00003 -volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00004 -volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00005 -volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00006 -volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00007 -volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00008 -volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00009 -volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00010 -volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00011 -volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00012 -volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00013 -volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00014 -volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00015 -volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00016 -volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00017 -volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00018 -volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00019 -volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00020 -volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00021 -volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a_metadata - - $ cinder backup-delete 1444f5d0-3a87-40bc-a7a7-f3c672768b6a - $ swift list cinder-backups - -(Should be empty) - - -Swift Built-In Unit Tests -========================= - -This section describes how to run Swift and Swift client built-in unit -tests which are located at: - - 
/usr/lib64/python2.7/site-packages/swift/test - /usr/lib64/python2.7/site-packages/swiftclient/tests - -with nosetests test-runner. Please make sure that the test accounts -setting in /etc/swift/test.conf reflects the keystone user accounts -setting. - -To run swift built-in unit test with nosetests: - - $ To accommodate the small space of loop dev, - modify /etc/swift/swift.conf to have "max_file_size = 5242880" - $ /etc/init.d/swift restart - $ cd /usr/lib64/python2.7/site-packages/swift - $ nosetests -v test - -Ran 1633 tests in 272.930s - -FAILED (errors=5, failures=4) - -To run swiftclient built-in unit test with nosetests: - - $ cd /usr/lib64/python2.7/site-packages/swiftclient - $ nosetests -v tests - -Ran 108 tests in 2.277s - -FAILED (failures=1) - - -References -========== - -* http://docs.openstack.org/developer/swift/deployment_guide.html -* http://docs.openstack.org/grizzly/openstack-compute/install/yum/content/ch_installing-openstack-object-storage.html -* https://swiftstack.com/openstack-swift/architecture/ diff --git a/meta-openstack/README.tempest b/meta-openstack/README.tempest deleted file mode 100644 index 884a28a..0000000 --- a/meta-openstack/README.tempest +++ /dev/null @@ -1,55 +0,0 @@ -# enable in local.conf via: -OPENSTACK_CONTROLLER_EXTRA_INSTALL += "tempest keystone-tests glance-tests cinder-tests \ - horizon-tests heat-tests neutron-tests nova-tests ceilometer-tests" - -# For the tempest built-in tests: ---------------------------------- - # edit /etc/tempest/tempest.conf to suit details of the system - % cd /usr/lib/python2.7/site-packages - % nosetests --verbose tempest/api - -OR (less reliable) - - % cd /usr/lib/python2.7/site-packages - % cp /etc/tempest/.testr.conf . - % testr init - % testr run --parallel tempest - -# For individual package tests ------------------------------- -# typical: - % cd /usr/lib/python2.7/site-packages/ - % /etc//run_tests.sh --verbose -N - -# Cinder: -# Notes: tries to run setup.py, --debug works around part of the issue - % cd /usr/lib/python2.7/site-packages/ - % nosetests --verbose cinder/tests - -# Neutron: -# Notes: use nosetests directly - % cd /usr/lib/python2.7/site-packages/ - % nosetests --verbose neutron/tests - -# Nova: -# Notes: vi /usr/lib/python2.7/site-packages/nova/tests/conf_fixture.py -# modify api-paste.ini reference to be /etc/nova/api-paste.ini, the conf -# file isn't being read properly, so some tests will fail to run - % cd / - % nosetests --verbose /usr/lib/python2.7/site-packages/nova/tests - -# keystone: -# - -# Other Notes: --------------- - - 1) testr: not so good, can be missing, some tools are. use nostests directly - instead. - 2) all run_tests.sh are provided, even though they are similar - - - - - - -- cgit v1.2.3-54-g00ecf