author     Andy Ning <andy.ning@windriver.com>            2014-06-13 11:21:16 -0400
committer  Bruce Ashfield <bruce.ashfield@windriver.com>  2014-06-18 15:08:11 -0400
commit     9b966a64a3bbf50f4661d4d8adac2a56794db5cb (patch)
tree       e664e1f8f29bae43f32cc50857c727005eb12198 /meta-openstack/Documentation
parent     b53f039deee13fe869aaceca27d4e30cd40efb48 (diff)
download   meta-cloud-services-9b966a64a3bbf50f4661d4d8adac2a56794db5cb.tar.gz
Add metadata service support to controller node
The metadata service works as follows:
- metadata is served by nova-api on the controller at port 8775.
- a VM instance requests metadata via 169.254.169.254 (e.g. curl http://169.254.169.254/latest/meta-data).
- the metadata request reaches neutron-ns-metadata-proxy on the controller in the dhcp network namespace.
- neutron-ns-metadata-proxy forwards the request to neutron-metadata-agent through a unix domain socket (/var/lib/neutron/metadata_proxy).
- neutron-metadata-agent sends the request to nova-api on port 8775 to be serviced.

To support the metadata service, neutron-ns-metadata-proxy is baked into the controller image. The neutron-metadata-agent startup script (/etc/init.d/neutron-metadata-agent) and config file (/etc/neutron/metadata_agent.ini) are also added so that the metadata agent starts at system initialization. dhcp_agent.ini and nova.conf are updated as well. A README.metadata is added in the Documentation/ directory.

Signed-off-by: Andy Ning <andy.ning@windriver.com>
Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Diffstat (limited to 'meta-openstack/Documentation')
-rw-r--r--  meta-openstack/Documentation/README.metadata  117
1 file changed, 117 insertions, 0 deletions
diff --git a/meta-openstack/Documentation/README.metadata b/meta-openstack/Documentation/README.metadata
new file mode 100644
index 0000000..46a7c2a
--- /dev/null
+++ b/meta-openstack/Documentation/README.metadata
@@ -0,0 +1,117 @@
Summary
=======

This document provides an overview of what the metadata service is, how it
works, and how it is tested to ensure that the metadata service works correctly.

Metadata Service Introduction
=============================

OpenStack Compute service uses a special metadata service to enable VM instances to retrieve instance-specific data (metadata). Instances access the metadata service at http://169.254.169.254. The metadata service supports two sets of APIs: an OpenStack metadata API and an EC2-compatible API. Each of the APIs is versioned by date.

To retrieve a list of supported versions for the OpenStack metadata API, make a GET request to http://169.254.169.254/openstack.

For example:
$ curl http://169.254.169.254/openstack
2012-08-10
latest

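The OpenStack API also serves a JSON metadata document for the instance. An illustrative check (the values below are placeholders; the exact fields depend on the instance and OpenStack release):
$ curl http://169.254.169.254/openstack/latest/meta_data.json
{"uuid": "<instance-uuid>", "hostname": "<name>.novalocal", "name": "<name>",
 "launch_index": 0, "availability_zone": null, ...}
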
To list supported versions for the EC2-compatible metadata API, make a GET request to http://169.254.169.254.

For example:
$ curl http://169.254.169.254
1.0
2007-01-19
2007-03-01
2007-08-29
2007-10-10
2007-12-15
2008-02-01
2008-09-01
2009-04-04
latest

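Individual metadata items can then be fetched from the EC2-compatible tree. An illustrative query (key list abbreviated, values are placeholders):
$ curl http://169.254.169.254/latest/meta-data/
ami-id
hostname
instance-id
instance-type
local-ipv4
public-keys/
...
$ curl http://169.254.169.254/latest/meta-data/instance-id
i-0000000a
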
If cloud-init is supported by the VM image, cloud-init can retrieve metadata from the metadata service at instance initialization.

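User data passed with "nova boot --user-data" is served by the same service and consumed by cloud-init at boot. A minimal illustrative #cloud-config file (the hostname and command below are arbitrary examples, not part of this layer):
#cloud-config
hostname: metadata-test
runcmd:
  - echo "configured from metadata service" > /tmp/metadata-test
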
Metadata Service Implementation
===============================

The metadata service is provided by nova-api on the controller at port 8775. A VM instance requests metadata via 169.254.169.254
(e.g. curl http://169.254.169.254/latest/meta-data). Requests from the VM reach neutron-ns-metadata-proxy on the controller
in the dhcp network namespace; neutron-ns-metadata-proxy forwards them to neutron-metadata-agent through a unix domain
socket (/var/lib/neutron/metadata_proxy), and neutron-metadata-agent sends the request to nova-api on port 8775 to be serviced.

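These components are tied together by a few options in metadata_agent.ini, dhcp_agent.ini and nova.conf. The snippet below is only an illustrative sketch of the relevant options for this era of OpenStack; the actual files installed by this layer may differ, and the shared secret and addresses are placeholders:

/etc/neutron/metadata_agent.ini:
    [DEFAULT]
    nova_metadata_ip = 127.0.0.1
    nova_metadata_port = 8775
    metadata_proxy_socket = /var/lib/neutron/metadata_proxy
    metadata_proxy_shared_secret = <secret>

/etc/neutron/dhcp_agent.ini:
    [DEFAULT]
    enable_isolated_metadata = True

/etc/nova/nova.conf:
    [DEFAULT]
    service_neutron_metadata_proxy = True
    neutron_metadata_proxy_shared_secret = <secret>
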
Test Steps
==========
1. build the controller and compute images as normal.
2. set up a cloud with one controller and one compute node on real hardware with a flat network.
   - make sure the controller and compute node can reach each other by ping.
3. on the controller:
   - check that the metadata agent is running:
     # ps -ef | grep neutron-metadata-agent
   - create a network
     example:
     # neutron net-create --provider:physical_network=ph-eth0 --provider:network_type=flat --router:external=True MY_NET
   - create a subnet on the network just created
     example:
     # neutron subnet-create MY_NET 128.224.149.0/24 --name MY_SUBNET --no-gateway --host-route destination=0.0.0.0/0,nexthop=128.224.149.1 --allocation-pool start=128.224.149.200,end=128.224.149.210
   - create an image from cirros 0.3.2 (0.3.0 does not work properly due to a bug in it)
     example:
     # glance image-create --name cirros-0.3.2 --is-public true --container-format bare --disk-format qcow2 --file cirros-0.3.2-x86_64-disk.img
   - boot an instance from the cirros-0.3.2 image
     example:
     # nova boot --image cirros-0.3.2 --flavor 1 OpenStack_1
   - check that the dhcp namespace has been created:
     # ip netns list
     example output:
qdhcp-229dd93f-a3da-4a21-be22-49c3f3a5dbbd

     # ip netns exec qdhcp-229dd93f-a3da-4a21-be22-49c3f3a5dbbd ip addr
     example output:
16: tap5dfe0d76-c5: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:c5:d9:65 brd ff:ff:ff:ff:ff:ff
    inet 128.224.149.201/24 brd 128.224.149.255 scope global tap5dfe0d76-c5
    inet 169.254.169.254/16 brd 169.254.255.255 scope global tap5dfe0d76-c5
    inet6 fe80::f816:3eff:fec5:d965/64 scope link
       valid_lft forever preferred_lft forever
17: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
18: sit0: <NOARP> mtu 1480 qdisc noop state DOWN
    link/sit 0.0.0.0 brd 0.0.0.0

     # ip netns exec qdhcp-229dd93f-a3da-4a21-be22-49c3f3a5dbbd netstat -anpe
   - ensure 0.0.0.0:80 is in there
     example output:
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address            Foreign Address    State    User    Inode    PID/Program name
tcp        0      0 128.224.149.201:53       0.0.0.0:*          LISTEN   0       159928   8508/dnsmasq
tcp        0      0 169.254.169.254:53       0.0.0.0:*          LISTEN   0       159926   8508/dnsmasq
tcp        0      0 0.0.0.0:80               0.0.0.0:*          LISTEN   0       164930   8522/python
tcp6       0      0 fe80::f816:3eff:fec5:53  :::*               LISTEN   65534   161016   8508/dnsmasq
udp        0      0 128.224.149.201:53       0.0.0.0:*                   0       159927   8508/dnsmasq
udp        0      0 169.254.169.254:53       0.0.0.0:*                   0       159925   8508/dnsmasq
udp        0      0 0.0.0.0:67               0.0.0.0:*                   0       159918   8508/dnsmasq
udp6       0      0 fe80::f816:3eff:fec5:53  :::*                        0       161015   8508/dnsmasq
Active UNIX domain sockets (servers and established)
Proto RefCnt Flags       Type       State       I-Node   PID/Program name   Path
unix  2      [ ]         DGRAM                  37016    8508/dnsmasq

4. on the VM instance:
   - check the instance console log; ensure the instance gets a dhcp IP address and a static route, as well as the instance-id
   - log in to the instance and run the following tests:
     $ hostname
       the host name should be the name specified in "nova boot" when the instance was created.
     $ ifconfig
       it should show a valid IP on eth0 in the range specified in "neutron subnet-create" when the subnet was created.
     $ route
       there should be an entry for "169.254.169.254 x.x.x.x 255.255.255.255 eth0"
     $ curl http://169.254.169.254/latest/meta-data
       it should return a list of metadata (hostname, instance-id, etc).
     $ nova reboot <instance>, nova stop <instance>, nova start <instance>, nova rebuild <instance> <image>
       metadata should still be working after each operation.
     $ nova boot --user-data <userdata.txt> --image <image> --flavor 1 <instance>
       curl http://169.254.169.254/latest/user-data should retrieve the contents of userdata.txt (see the sketch below).

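A minimal sketch of the user-data round trip, assuming an illustrative userdata.txt, image and instance name (all placeholders, not part of the original test setup):

on the controller:
    $ cat > userdata.txt << 'EOF'
    #!/bin/sh
    echo "hello from user-data" > /tmp/hello.txt
    EOF
    $ nova boot --user-data userdata.txt --image cirros-0.3.2 --flavor 1 OpenStack_2

inside the instance, the same bytes should come back from the metadata service:
    $ curl http://169.254.169.254/latest/user-data
    #!/bin/sh
    echo "hello from user-data" > /tmp/hello.txt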