| author | Amy Fong <amy.fong@windriver.com> | 2014-05-21 14:35:15 -0400 |
|---|---|---|
| committer | Bruce Ashfield <bruce.ashfield@windriver.com> | 2014-05-23 23:42:55 -0400 |
| commit | fb1d6f23fa01c0217ed3f6778d8033dd0030db2a (patch) | |
| tree | 36dc89d6b66050a56cbca2f2f7c90229ebcb8854 /meta-openstack/Documentation | |
| parent | 6350b155270f7f086624db36ecc6e6008ebcd378 (diff) | |
| download | meta-cloud-services-fb1d6f23fa01c0217ed3f6778d8033dd0030db2a.tar.gz | |
Testing documentation
Add documentation for testing swift, ceph, heat.
Create a script, and instructions for using it, that launches a controller
and a specified number of compute nodes.
Signed-off-by: Amy Fong <amy.fong@windriver.com>
Diffstat (limited to 'meta-openstack/Documentation')
| -rw-r--r-- | meta-openstack/Documentation/README.ceph-openstack | 592 | ||||
| -rw-r--r-- | meta-openstack/Documentation/README.heat | 549 | ||||
| -rw-r--r-- | meta-openstack/Documentation/README.swift | 444 | ||||
| -rw-r--r-- | meta-openstack/Documentation/testsystem/README | 116 | ||||
| -rw-r--r-- | meta-openstack/Documentation/testsystem/README.multi-compute | 150 | ||||
| -rw-r--r-- | meta-openstack/Documentation/testsystem/README.tests | 9 | ||||
| -rwxr-xr-x | meta-openstack/Documentation/testsystem/launch.py | 304 | ||||
| -rw-r--r-- | meta-openstack/Documentation/testsystem/sample.cfg | 15 |
8 files changed, 2179 insertions, 0 deletions
diff --git a/meta-openstack/Documentation/README.ceph-openstack b/meta-openstack/Documentation/README.ceph-openstack new file mode 100644 index 0000000..8f11f2d --- /dev/null +++ b/meta-openstack/Documentation/README.ceph-openstack | |||
| @@ -0,0 +1,592 @@ | |||
| 1 | Summary | ||
| 2 | ======= | ||
| 3 | |||
| 4 | This document is not intended to provide details of how Ceph works in general | ||
| 5 | (please refer to the addons/wr-ovp/layers/ovp/Documentation/README_ceph.pdf | ||
| 6 | document for such details), but rather highlights how the Ceph cluster is | ||
| 7 | set up and how OpenStack is configured to allow the various OpenStack | ||
| 8 | components to interact with Ceph. | ||
| 9 | |||
| 10 | |||
| 11 | Ceph Cluster Setup | ||
| 12 | ================== | ||
| 13 | |||
| 14 | By default the Ceph cluster is set up as follows: | ||
| 15 | |||
| 16 | * Ceph monitor daemon running on the Controller node | ||
| 17 | * Two Ceph OSD daemons (osd.0 and osd.1) running on the Controller node. | ||
| 18 | The underlying block devices for these 2 OSDs are loopback block devices. | ||
| 19 | The size of the backing loopback files is 10 GB by default and can | ||
| 20 | be changed at compile time through the variable CEPH_BACKING_FILE_SIZE. | ||
| 21 | * No Ceph MDS support | ||
| 22 | * Cephx authentication is enabled | ||
| 23 | |||
| 24 | This is done through the script /etc/init.d/ceph-setup, which is run | ||
| 25 | when the system boots. Therefore, the Ceph cluster should be ready | ||
| 26 | for use after booting, and no additional manual steps are required. | ||
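|  | |||
|  | After boot, the cluster state can be verified with the standard Ceph status | ||
|  | command (the same command is used in the test section below): | ||
|  | |||
|  | $ ceph -s | ||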
| 27 | |||
| 28 | With the current Ceph setup, only the Controller node is able to run Ceph | ||
| 29 | commands which require the Ceph admin keyring (the file /etc/ceph/ceph.client.admin.keyring | ||
| 30 | exists there). If a node other than the Controller (e.g. a Compute node) needs | ||
| 31 | to be able to run Ceph commands, then a keyring for a particular Ceph client | ||
| 32 | must be created and transferred from the Controller node to that node. There is a | ||
| 33 | convenient tool for doing so in a secure manner. On the Controller node, run: | ||
| 34 | |||
| 35 | $ /etc/ceph/ceph_xfer_keyring.sh -h | ||
| 36 | $ /etc/ceph/ceph_xfer_keyring.sh <key name> <remote login> [remote location] | ||
| 37 | |||
| 38 | The way the Ceph cluster is set up is mainly for demonstration purposes. One | ||
| 39 | might want a different Ceph cluster setup than this one (e.g. using real | ||
| 40 | hardware block devices instead of loopback devices). | ||
| 41 | |||
| 42 | |||
| 43 | Setup Ceph's Pool and Client Users To Be Used By OpenStack | ||
| 44 | ========================================================== | ||
| 45 | |||
| 46 | After the Ceph cluster is up and running, some specific Ceph pools and | ||
| 47 | Ceph client users must be created in order for the OpenStack components | ||
| 48 | to be able to use Ceph. | ||
| 49 | |||
| 50 | * The OpenStack cinder-volume component requires that the "cinder-volumes" pool | ||
| 51 | and the "cinder-volume" client exist. | ||
| 52 | * The OpenStack cinder-backup component requires that the "cinder-backups" pool | ||
| 53 | and the "cinder-backup" client exist. | ||
| 54 | * The OpenStack Glance component requires that the "images" pool and the "glance" | ||
| 55 | client exist. | ||
| 56 | * The OpenStack nova-compute component requires that the "cinder-volumes" pool | ||
| 57 | and the "cinder-volume" client exist. | ||
| 58 | |||
| 59 | After the system boots, all of these required pools and clients | ||
| 60 | are created automatically by the script /etc/ceph/ceph-openstack-setup.sh. | ||
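|  | |||
|  | To verify that the pools and client users were created, the standard Ceph | ||
|  | listing commands can be run on the Controller node (the expected pool list | ||
|  | is shown in the test section below): | ||
|  | |||
|  | $ ceph osd lspools | ||
|  | $ ceph auth list | ||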
| 61 | |||
| 62 | |||
| 63 | Cinder-volume and Ceph | ||
| 64 | ====================== | ||
| 65 | |||
| 66 | Cinder-volume supports multiple backends, including Ceph RBD. When a volume | ||
| 67 | is created with "--volume_type cephrbd" | ||
| 68 | |||
| 69 | $ cinder create --volume_type cephrbd --display_name glusterfs_vol_1 1 | ||
| 70 | |||
| 71 | where the "cephrbd" type can be created as follows: | ||
| 72 | |||
| 73 | $ cinder type-create cephrbd | ||
| 74 | $ cinder type-key cephrbd set volume_backend_name=RBD_CEPH | ||
| 75 | |||
| 76 | then the cinder-volume Ceph backend driver will store the volume in the Ceph pool | ||
| 77 | named "cinder-volumes". | ||
| 78 | |||
| 79 | On the Controller node, to list what is in the "cinder-volumes" pool: | ||
| 80 | |||
| 81 | $ rbd -p cinder-volumes ls | ||
| 82 | volume-b5294a0b-5c92-4b2f-807e-f49c5bc1896b | ||
| 83 | |||
| 84 | The following configuration options in /etc/cinder/cinder.conf | ||
| 85 | affect how cinder-volume interacts with the Ceph cluster through the | ||
| 86 | cinder-volume Ceph backend: | ||
| 87 | |||
| 88 | volume_driver=cinder.volume.drivers.rbd.RBDDriver | ||
| 89 | rbd_pool=cinder-volumes | ||
| 90 | rbd_ceph_conf=/etc/ceph/ceph.conf | ||
| 91 | rbd_flatten_volume_from_snapshot=false | ||
| 92 | rbd_max_clone_depth=5 | ||
| 93 | rbd_user=cinder-volume | ||
| 94 | volume_backend_name=RBD_CEPH | ||
| 95 | |||
| 96 | |||
| 97 | Cinder-backup and Ceph | ||
| 98 | ====================== | ||
| 99 | |||
| 100 | Cinder-backup has the ability to store volume backups in the Ceph | ||
| 101 | "cinder-backups" pool with the following command: | ||
| 102 | |||
| 103 | $ cinder backup-create <cinder volume ID> | ||
| 104 | |||
| 105 | where <cinder volume ID> is the ID of an existing Cinder volume. | ||
| 106 | |||
| 107 | Cinder-backup is not able to create a backup for any Cinder | ||
| 108 | volume which is backed by NFS or GlusterFS. This is because the NFS | ||
| 109 | and GlusterFS cinder-volume backend drivers do not support the | ||
| 110 | backup functionality. In other words, only Cinder volumes | ||
| 111 | backed by lvm-iscsi and ceph-rbd are able to be backed up | ||
| 112 | by cinder-backup. | ||
| 113 | |||
| 114 | On the Controller node, to list what is in the "cinder-backups" pool: | ||
| 115 | |||
| 116 | $ rbd -p "cinder-backups" ls | ||
| 117 | |||
| 118 | The following configuration options in /etc/cinder/cinder.conf affect | ||
| 119 | how cinder-backup interacts with the Ceph cluster: | ||
| 120 | |||
| 121 | backup_driver=cinder.backup.drivers.ceph | ||
| 122 | backup_ceph_conf=/etc/ceph/ceph.conf | ||
| 123 | backup_ceph_user=cinder-backup | ||
| 124 | backup_ceph_chunk_size=134217728 | ||
| 125 | backup_ceph_pool=cinder-backups | ||
| 126 | backup_ceph_stripe_unit=0 | ||
| 127 | backup_ceph_stripe_count=0 | ||
| 128 | restore_discard_excess_bytes=true | ||
| 129 | |||
| 130 | |||
| 131 | Glance and Ceph | ||
| 132 | =============== | ||
| 133 | |||
| 134 | Glance can store images in the Ceph pool "images" when "default_store = rbd" | ||
| 135 | is set in /etc/glance/glance-api.conf. | ||
| 136 | |||
| 137 | By default "default_store" has the value "file", which tells Glance to | ||
| 138 | store images in the local filesystem. The "default_store" value can be set | ||
| 139 | at compile time through the variable GLANCE_DEFAULT_STORE. | ||
| 140 | |||
| 141 | The following configuration options in /etc/glance/glance-api.conf affect | ||
| 142 | how Glance interacts with the Ceph cluster: | ||
| 143 | |||
| 144 | default_store = rbd | ||
| 145 | rbd_store_ceph_conf = /etc/ceph/ceph.conf | ||
| 146 | rbd_store_user = glance | ||
| 147 | rbd_store_pool = images | ||
| 148 | rbd_store_chunk_size = 8 | ||
| 149 | |||
| 150 | |||
| 151 | Nova-compute and Ceph | ||
| 152 | ===================== | ||
| 153 | |||
| 154 | On the Controller node, when a VM is booted with the command: | ||
| 155 | |||
| 156 | $ nova boot --image <glance image ID> ... | ||
| 157 | |||
| 158 | then on the Compute node, if "libvirt_images_type = default" (in /etc/nova/nova.conf), | ||
| 159 | nova-compute will download the specified Glance image from the Controller node and | ||
| 160 | store it locally (on the Compute node). If "libvirt_images_type = rbd" then | ||
| 161 | nova-compute will import the specified Glance image into the "cinder-volumes" Ceph pool. | ||
| 162 | |||
| 163 | By default, "libvirt_images_type" has the value "default", and it can be changed at | ||
| 164 | compile time through the variable LIBVIRT_IMAGES_TYPE. | ||
| 165 | |||
| 166 | Underneath, nova-compute uses libvirt to spawn VMs. If a Ceph Cinder volume is provided | ||
| 167 | while booting a VM with the option "--block-device <options>", then a libvirt secret must be | ||
| 168 | provided to nova-compute to allow libvirt to authenticate with Cephx before libvirt is able | ||
| 169 | to mount the Ceph block device. This libvirt secret is provided through the "rbd_secret_uuid" | ||
| 170 | option in /etc/nova/nova.conf. | ||
| 171 | |||
| 172 | Therefore, on the Compute node, if "libvirt_images_type = rbd" then the following | ||
| 173 | are required: | ||
| 174 | |||
| 175 | * /etc/ceph/ceph.client.cinder-volume.keyring exists. This file contains | ||
| 176 | the Ceph client.cinder-volume key, so that nova-compute can run the | ||
| 177 | restricted Ceph commands allowed for the cinder-volume Ceph client. For example: | ||
| 178 | |||
| 179 | $ rbd -p cinder-backups ls --id cinder-volume | ||
| 180 | |||
| 181 | should fail, as the "cinder-volume" Ceph client has no permission to touch the | ||
| 182 | "cinder-backups" Ceph pool, while the following should work: | ||
| 183 | |||
| 184 | $ rbd -p cinder-volumes ls --id cinder-volume | ||
| 185 | |||
| 186 | * Also, there must be an existing libvirt secret which stores the Ceph | ||
| 187 | client.cinder-volume key. | ||
| 188 | |||
| 189 | Right now, due to security considerations and the boot order of the Controller and Compute nodes, | ||
| 190 | these 2 requirements are not automatically satisfied at boot time. | ||
| 191 | |||
| 192 | A script (/etc/ceph/set_nova_compute_cephx.sh) is provided to ease the task of | ||
| 193 | transferring ceph.client.cinder-volume.keyring from the Controller node to the Compute | ||
| 194 | node, and to create the libvirt secret. On the Controller node, manually run (just once): | ||
| 195 | |||
| 196 | $ /etc/ceph/set_nova_compute_cephx.sh cinder-volume root@compute | ||
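|  | |||
|  | For reference, the libvirt secret that this script creates corresponds roughly | ||
|  | to the following manual steps on the Compute node (a sketch only; the actual | ||
|  | script may differ in details such as the secret XML and UUID handling): | ||
|  | |||
|  | $ cat > /tmp/ceph-secret.xml << EOF | ||
|  | <secret ephemeral='no' private='no'> | ||
|  | <usage type='ceph'> | ||
|  | <name>client.cinder-volume secret</name> | ||
|  | </usage> | ||
|  | </secret> | ||
|  | EOF | ||
|  | $ virsh secret-define --file /tmp/ceph-secret.xml | ||
|  | $ virsh secret-set-value --secret <libvirt secret UUID> --base64 <client.cinder-volume key> | ||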
| 197 | |||
| 198 | The following configuration options in /etc/nova/nova.conf affect | ||
| 199 | how nova-compute interacts with the Ceph cluster: | ||
| 200 | |||
| 201 | libvirt_images_type = rbd | ||
| 202 | libvirt_images_rbd_pool=cinder-volumes | ||
| 203 | libvirt_images_rbd_ceph_conf=/etc/ceph/ceph.conf | ||
| 204 | rbd_user=cinder-volume | ||
| 205 | rbd_secret_uuid=<libvirt secret UUID> | ||
| 206 | |||
| 207 | |||
| 208 | Ceph High Availability | ||
| 209 | ====================== | ||
| 210 | |||
| 211 | Ceph, by design, has strong high availability features. Each Ceph object | ||
| 212 | can be replicated and stored on multiple independent physical disk | ||
| 213 | storage devices (controlled by Ceph OSD daemons) which are either in the same | ||
| 214 | machine or in separate machines. | ||
| 215 | |||
| 216 | The number of replicas is configurable. In general, the higher the | ||
| 217 | number of replicas, the higher the Ceph availability; however, the down | ||
| 218 | side is that more physical disk storage space is required. | ||
| 219 | |||
| 220 | Also, in general, each replica of a Ceph object should be stored on a | ||
| 221 | different machine so that if 1 machine goes down, the other replicas | ||
| 222 | are still available. | ||
| 223 | |||
| 224 | The default OpenStack Ceph cluster is configured to have 2 replicas. | ||
| 225 | However, these 2 replicas are stored on the same machine (which | ||
| 226 | is the Controller node). | ||
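|  | |||
|  | For reference, the replica count of a pool can be inspected or changed with | ||
|  | the standard Ceph pool commands (a sketch; pool names as used in this setup): | ||
|  | |||
|  | $ ceph osd pool get cinder-volumes size | ||
|  | $ ceph osd pool set cinder-volumes size 2 | ||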
| 227 | |||
| 228 | |||
| 229 | Build Configuration Options | ||
| 230 | =========================== | ||
| 231 | |||
| 232 | * Controller build config options: | ||
| 233 | |||
| 234 | --enable-board=intel-xeon-core \ | ||
| 235 | --enable-rootfs=ovp-openstack-controller \ | ||
| 236 | --enable-kernel=preempt-rt \ | ||
| 237 | --enable-addons=wr-ovp-openstack,wr-ovp \ | ||
| 238 | --with-template=feature/openstack-tests \ | ||
| 239 | --enable-unsupported-config=yes | ||
| 240 | |||
| 241 | * Compute build config options: | ||
| 242 | |||
| 243 | --enable-board=intel-xeon-core \ | ||
| 244 | --enable-rootfs=ovp-openstack-compute \ | ||
| 245 | --enable-kernel=preempt-rt \ | ||
| 246 | --enable-addons=wr-ovp-openstack,wr-ovp \ | ||
| 247 | --enable-unsupported-config=yes | ||
| 248 | |||
| 249 | |||
| 250 | Testing Commands and Expected Results | ||
| 251 | ===================================== | ||
| 252 | |||
| 253 | This section describes test steps and expected results to demonstrate that | ||
| 254 | Ceph is integrated properly into OpenStack. | ||
| 255 | |||
| 256 | Please note: the following commands are carried out on the Controller node, unless | ||
| 257 | otherwise explicitly indicated. | ||
| 258 | |||
| 259 | $ Start the Controller and Compute nodes on the hardware targets | ||
| 260 | |||
| 261 | $ ps aux | grep ceph | ||
| 262 | |||
| 263 | root 2986 0.0 0.0 1059856 22320 ? Sl 02:50 0:08 /usr/bin/ceph-mon -i controller --pid-file /var/run/ceph/mon.controller.pid -c /etc/ceph/ceph.conf | ||
| 264 | root 3410 0.0 0.2 3578292 153144 ? Ssl 02:50 0:36 /usr/bin/ceph-osd -i 0 --pid-file /var/run/ceph/osd.0.pid -c /etc/ceph/ceph.conf | ||
| 265 | root 3808 0.0 0.0 3289468 34428 ? Ssl 02:50 0:36 /usr/bin/ceph-osd -i 1 --pid-file /var/run/ceph/osd.1.pid -c /etc/ceph/ceph.conf | ||
| 266 | |||
| 267 | $ ceph osd lspools | ||
| 268 | |||
| 269 | 0 data,1 metadata,2 rbd,3 cinder-volumes,4 cinder-backups,5 images, | ||
| 270 | |||
| 271 | $ neutron net-create mynetwork | ||
| 272 | $ neutron net-list | ||
| 273 | |||
| 274 | +--------------------------------------+-----------+---------+ | ||
| 275 | | id | name | subnets | | ||
| 276 | +--------------------------------------+-----------+---------+ | ||
| 277 | | 15157fda-0940-4eba-853d-52338ace3362 | mynetwork | | | ||
| 278 | +--------------------------------------+-----------+---------+ | ||
| 279 | |||
| 280 | $ glance image-create --name myfirstimage --is-public true --container-format bare --disk-format qcow2 --file /root/images/cirros-0.3.0-x86_64-disk.img | ||
| 281 | $ nova boot --image myfirstimage --flavor 1 myinstance | ||
| 282 | $ nova list | ||
| 283 | |||
| 284 | +--------------------------------------+------------+--------+------------+-------------+----------+ | ||
| 285 | | ID | Name | Status | Task State | Power State | Networks | | ||
| 286 | +--------------------------------------+------------+--------+------------+-------------+----------+ | ||
| 287 | | 26c2af98-dc78-465b-a6c2-bb52188d2b42 | myinstance | ACTIVE | - | Running | | | ||
| 288 | +--------------------------------------+------------+--------+------------+-------------+----------+ | ||
| 289 | |||
| 290 | $ nova delete 26c2af98-dc78-465b-a6c2-bb52188d2b42 | ||
| 291 | |||
| 292 | +----+------+--------+------------+-------------+----------+ | ||
| 293 | | ID | Name | Status | Task State | Power State | Networks | | ||
| 294 | +----+------+--------+------------+-------------+----------+ | ||
| 295 | +----+------+--------+------------+-------------+----------+ | ||
| 296 | |||
| 297 | $ Modify /etc/glance/glance-api.conf, | ||
| 298 | to change "default_store = file" to "default_store = rbd", | ||
| 299 | $ /etc/init.d/glance-api restart | ||
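|  | |||
|  | (The same edit can be made non-interactively, assuming the line appears | ||
|  | exactly as "default_store = file" in glance-api.conf:) | ||
|  | |||
|  | $ sed -i 's/^default_store = file/default_store = rbd/' /etc/glance/glance-api.conf | ||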
| 300 | |||
| 301 | $ /etc/cinder/add-cinder-volume-types.sh | ||
| 302 | $ cinder extra-specs-list | ||
| 303 | |||
| 304 | +--------------------------------------+-----------+------------------------------------------+ | ||
| 305 | | ID | Name | extra_specs | | ||
| 306 | +--------------------------------------+-----------+------------------------------------------+ | ||
| 307 | | 4cb4ae4a-600a-45fb-9332-aa72371c5985 | lvm_iscsi | {u'volume_backend_name': u'LVM_iSCSI'} | | ||
| 308 | | 83b3ee5f-a6f6-4fea-aeef-815169ee83b9 | glusterfs | {u'volume_backend_name': u'GlusterFS'} | | ||
| 309 | | c1570914-a53a-44e4-8654-fbd960130b8e | cephrbd | {u'volume_backend_name': u'RBD_CEPH'} | | ||
| 310 | | d38811d4-741a-4a68-afe3-fb5892160d7c | nfs | {u'volume_backend_name': u'Generic_NFS'} | | ||
| 311 | +--------------------------------------+-----------+------------------------------------------+ | ||
| 312 | |||
| 313 | $ glance image-create --name mysecondimage --is-public true --container-format bare --disk-format qcow2 --file /root/images/cirros-0.3.0-x86_64-disk.img | ||
| 314 | $ glance image-list | ||
| 315 | |||
| 316 | +--------------------------------------+---------------+-------------+------------------+---------+--------+ | ||
| 317 | | ID | Name | Disk Format | Container Format | Size | Status | | ||
| 318 | +--------------------------------------+---------------+-------------+------------------+---------+--------+ | ||
| 319 | | bec1580e-2475-4d1d-8d02-cca53732d17b | myfirstimage | qcow2 | bare | 9761280 | active | | ||
| 320 | | a223e5f7-a4b5-4239-96ed-a242db2a150a | mysecondimage | qcow2 | bare | 9761280 | active | | ||
| 321 | +--------------------------------------+---------------+-------------+------------------+---------+--------+ | ||
| 322 | |||
| 323 | $ rbd -p images ls | ||
| 324 | |||
| 325 | a223e5f7-a4b5-4239-96ed-a242db2a150a | ||
| 326 | |||
| 327 | $ cinder create --volume_type lvm_iscsi --image-id a223e5f7-a4b5-4239-96ed-a242db2a150a --display_name=lvm_vol_2 1 | ||
| 328 | $ cinder create --volume_type lvm_iscsi --display_name=lvm_vol_1 1 | ||
| 329 | $ cinder create --volume_type nfs --display_name nfs_vol_1 1 | ||
| 330 | $ cinder create --volume_type glusterfs --display_name glusterfs_vol_1 1 | ||
| 331 | $ cinder create --volume_type cephrbd --display_name cephrbd_vol_1 1 | ||
| 332 | $ cinder list | ||
| 333 | |||
| 334 | +--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+ | ||
| 335 | | ID | Status | Display Name | Size | Volume Type | Bootable | Attached to | | ||
| 336 | +--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+ | ||
| 337 | | 4a32a4fb-b670-4ed7-8dc8-f4e6f9b52db3 | available | lvm_vol_2 | 1 | lvm_iscsi | true | | | ||
| 338 | | b0805546-be7a-4908-b1d5-21202fe6ea79 | available | glusterfs_vol_1 | 1 | glusterfs | false | | | ||
| 339 | | c905b9b1-10cb-413b-a949-c86ff3c1c4c6 | available | cephrbd_vol_1 | 1 | cephrbd | false | | | ||
| 340 | | cea76733-b4ce-4e9a-9bfb-24cc3066070f | available | nfs_vol_1 | 1 | nfs | false | | | ||
| 341 | | e85a0e2c-6dc4-4182-ab7d-f3950f225ee4 | available | lvm_vol_1 | 1 | lvm_iscsi | false | | | ||
| 342 | +--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+ | ||
| 343 | |||
| 344 | $ rbd -p cinder-volumes ls | ||
| 345 | |||
| 346 | volume-c905b9b1-10cb-413b-a949-c86ff3c1c4c6 | ||
| 347 | |||
| 348 | (This UUID matches the one in the cinder list above) | ||
| 349 | |||
| 350 | $ cinder backup-create e85a0e2c-6dc4-4182-ab7d-f3950f225ee4 | ||
| 351 | (create a backup for lvm-iscsi volume) | ||
| 352 | |||
| 353 | $ cinder backup-create cea76733-b4ce-4e9a-9bfb-24cc3066070f | ||
| 354 | (create a backup for the nfs volume; this should fail, as the nfs volume | ||
| 355 | does not support volume backup) | ||
| 356 | |||
| 357 | $ cinder backup-create c905b9b1-10cb-413b-a949-c86ff3c1c4c6 | ||
| 358 | (create a backup for ceph volume) | ||
| 359 | |||
| 360 | $ cinder backup-create b0805546-be7a-4908-b1d5-21202fe6ea79 | ||
| 361 | (create a backup for the glusterfs volume; this should fail, as the glusterfs volume | ||
| 362 | does not support volume backup) | ||
| 363 | |||
| 364 | $ cinder backup-list | ||
| 365 | |||
| 366 | +--------------------------------------+--------------------------------------+-----------+------+------+--------------+----------------+ | ||
| 367 | | ID | Volume ID | Status | Name | Size | Object Count | Container | | ||
| 368 | +--------------------------------------+--------------------------------------+-----------+------+------+--------------+----------------+ | ||
| 369 | | 287502a0-aa4d-4065-93e0-f72fd5c239f5 | cea76733-b4ce-4e9a-9bfb-24cc3066070f | error | None | 1 | None | None | | ||
| 370 | | 2b0ca8a7-a827-4f1c-99d5-4fb7d9f25b5c | e85a0e2c-6dc4-4182-ab7d-f3950f225ee4 | available | None | 1 | None | cinder-backups | | ||
| 371 | | 32d10c06-a742-45d6-9e13-777767ff5545 | c905b9b1-10cb-413b-a949-c86ff3c1c4c6 | available | None | 1 | None | cinder-backups | | ||
| 372 | | e2bdf21c-d378-49b3-b5e3-b398964b925c | b0805546-be7a-4908-b1d5-21202fe6ea79 | error | None | 1 | None | None | | ||
| 373 | +--------------------------------------+--------------------------------------+-----------+------+------+--------------+----------------+ | ||
| 374 | |||
| 375 | $ rbd -p cinder-backups ls | ||
| 376 | |||
| 377 | volume-0c3f82ea-b3df-414e-b054-7a4977b7e354.backup.94358fed-6bd9-48f1-b67a-4d2332311a1f | ||
| 378 | volume-219a3250-50b4-4db0-9a6c-55e53041b65e.backup.base | ||
| 379 | |||
| 380 | (There should be only 2 backup volumes in the ceph cinder-backups pool) | ||
| 381 | |||
| 382 | $ On Compute node: rbd -p cinder-volumes --id cinder-volume ls | ||
| 383 | |||
| 384 | 2014-03-17 13:03:54.617373 7f8673602780 -1 monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication | ||
| 385 | 2014-03-17 13:03:54.617378 7f8673602780 0 librados: client.admin initialization error (2) No such file or directory | ||
| 386 | rbd: couldn't connect to the cluster! | ||
| 387 | |||
| 388 | (This should fail, as the Compute node does not have the Ceph cinder-volume keyring yet) | ||
| 389 | |||
| 390 | $ /bin/bash /etc/ceph/set_nova_compute_cephx.sh cinder-volume root@compute | ||
| 391 | |||
| 392 | The authenticity of host 'compute (128.224.149.169)' can't be established. | ||
| 393 | ECDSA key fingerprint is 6a:79:95:fa:d6:56:0d:72:bf:5e:cb:59:e0:64:f6:7a. | ||
| 394 | Are you sure you want to continue connecting (yes/no)? yes | ||
| 395 | Warning: Permanently added 'compute,128.224.149.169' (ECDSA) to the list of known hosts. | ||
| 396 | root@compute's password: | ||
| 397 | Run virsh secret-define: | ||
| 398 | Secret 96dfc68f-3528-4bd0-a226-17a0848b05da created | ||
| 399 | |||
| 400 | Run virsh secret-set-value: | ||
| 401 | Secret value set | ||
| 402 | |||
| 403 | $ On Compute node: rbd -p cinder-volumes --id cinder-volume ls | ||
| 404 | |||
| 405 | volume-c905b9b1-10cb-413b-a949-c86ff3c1c4c6 | ||
| 406 | |||
| 407 | $ On Compute node: to allow nova-compute to save Glance images into | ||
| 408 | Ceph (by default it saves to the local filesystem at /etc/nova/instances), | ||
| 409 | |||
| 410 | modify /etc/nova/nova.conf to change: | ||
| 411 | |||
| 412 | libvirt_images_type = default | ||
| 413 | |||
| 414 | into | ||
| 415 | |||
| 416 | libvirt_images_type = rbd | ||
| 417 | |||
| 418 | $ On Compute node: /etc/init.d/nova-compute restart | ||
| 419 | |||
| 420 | $ nova boot --flavor 1 \ | ||
| 421 | --image mysecondimage \ | ||
| 422 | --block-device source=volume,id=b0805546-be7a-4908-b1d5-21202fe6ea79,dest=volume,shutdown=preserve \ | ||
| 423 | --block-device source=volume,id=c905b9b1-10cb-413b-a949-c86ff3c1c4c6,dest=volume,shutdown=preserve \ | ||
| 424 | --block-device source=volume,id=cea76733-b4ce-4e9a-9bfb-24cc3066070f,dest=volume,shutdown=preserve \ | ||
| 425 | --block-device source=volume,id=e85a0e2c-6dc4-4182-ab7d-f3950f225ee4,dest=volume,shutdown=preserve \ | ||
| 426 | myinstance | ||
| 427 | |||
| 428 | $ rbd -p cinder-volumes ls | ||
| 429 | |||
| 430 | instance-00000002_disk | ||
| 431 | volume-219a3250-50b4-4db0-9a6c-55e53041b65e | ||
| 432 | |||
| 433 | (We should see instance-000000xx_disk ceph object) | ||
| 434 | |||
| 435 | $ nova list | ||
| 436 | |||
| 437 | +--------------------------------------+------------+--------+------------+-------------+----------+ | ||
| 438 | | ID | Name | Status | Task State | Power State | Networks | | ||
| 439 | +--------------------------------------+------------+--------+------------+-------------+----------+ | ||
| 440 | | 2a6aeff9-5a35-45a1-b8c4-0730df2a767a | myinstance | ACTIVE | - | Running | | | ||
| 441 | +--------------------------------------+------------+--------+------------+-------------+----------+ | ||
| 442 | |||
| 443 | $ From the dashboard, log into the VM console and run "cat /proc/partitions" | ||
| 444 | |||
| 445 | Should be able to log in and see vdb, vdc, vdd, vde 1G block devices | ||
| 446 | |||
| 447 | $ nova delete 2a6aeff9-5a35-45a1-b8c4-0730df2a767a | ||
| 448 | |||
| 449 | $ nova list | ||
| 450 | |||
| 451 | +----+------+--------+------------+-------------+----------+ | ||
| 452 | | ID | Name | Status | Task State | Power State | Networks | | ||
| 453 | +----+------+--------+------------+-------------+----------+ | ||
| 454 | +----+------+--------+------------+-------------+----------+ | ||
| 455 | |||
| 456 | $ rbd -p cinder-volumes ls | ||
| 457 | |||
| 458 | volume-c905b9b1-10cb-413b-a949-c86ff3c1c4c6 | ||
| 459 | |||
| 460 | (The instance instance-00000010_disk should be gone) | ||
| 461 | |||
| 462 | $ nova boot --flavor 1 \ | ||
| 463 | --image mysecondimage \ | ||
| 464 | --block-device source=volume,id=b0805546-be7a-4908-b1d5-21202fe6ea79,dest=volume,shutdown=preserve \ | ||
| 465 | --block-device source=volume,id=c905b9b1-10cb-413b-a949-c86ff3c1c4c6,dest=volume,shutdown=preserve \ | ||
| 466 | --block-device source=volume,id=cea76733-b4ce-4e9a-9bfb-24cc3066070f,dest=volume,shutdown=preserve \ | ||
| 467 | --block-device source=volume,id=e85a0e2c-6dc4-4182-ab7d-f3950f225ee4,dest=volume,shutdown=preserve \ | ||
| 468 | myinstance | ||
| 469 | |||
| 470 | $ nova list | ||
| 471 | |||
| 472 | +--------------------------------------+------------+--------+------------+-------------+----------+ | ||
| 473 | | ID | Name | Status | Task State | Power State | Networks | | ||
| 474 | +--------------------------------------+------------+--------+------------+-------------+----------+ | ||
| 475 | | c1866b5f-f731-4d9c-b855-7f82f3fb314f | myinstance | ACTIVE | - | Running | | | ||
| 476 | +--------------------------------------+------------+--------+------------+-------------+----------+ | ||
| 477 | |||
| 478 | $ From the dashboard, log into the VM console and run "cat /proc/partitions" | ||
| 479 | |||
| 480 | Should be able to log in and see vdb, vdc, vdd, vde 1G block devices | ||
| 481 | |||
| 482 | $ nova delete c1866b5f-f731-4d9c-b855-7f82f3fb314f | ||
| 483 | $ nova list | ||
| 484 | |||
| 485 | +----+------+--------+------------+-------------+----------+ | ||
| 486 | | ID | Name | Status | Task State | Power State | Networks | | ||
| 487 | +----+------+--------+------------+-------------+----------+ | ||
| 488 | +----+------+--------+------------+-------------+----------+ | ||
| 489 | |||
| 490 | $ cinder list | ||
| 491 | |||
| 492 | +--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+ | ||
| 493 | | ID | Status | Display Name | Size | Volume Type | Bootable | Attached to | | ||
| 494 | +--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+ | ||
| 495 | | 4a32a4fb-b670-4ed7-8dc8-f4e6f9b52db3 | available | lvm_vol_2 | 1 | lvm_iscsi | true | | | ||
| 496 | | b0805546-be7a-4908-b1d5-21202fe6ea79 | available | glusterfs_vol_1 | 1 | glusterfs | false | | | ||
| 497 | | c905b9b1-10cb-413b-a949-c86ff3c1c4c6 | available | cephrbd_vol_1 | 1 | cephrbd | false | | | ||
| 498 | | cea76733-b4ce-4e9a-9bfb-24cc3066070f | available | nfs_vol_1 | 1 | nfs | false | | | ||
| 499 | | e85a0e2c-6dc4-4182-ab7d-f3950f225ee4 | available | lvm_vol_1 | 1 | lvm_iscsi | false | | | ||
| 500 | +--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+ | ||
| 501 | |||
| 502 | (All the volumes should be available) | ||
| 503 | |||
| 504 | $ ceph -s | ||
| 505 | |||
| 506 | cluster 9afd3ca8-50e0-4f71-9fc0-e9034d14adf0 | ||
| 507 | health HEALTH_OK | ||
| 508 | monmap e1: 1 mons at {controller=128.224.149.168:6789/0}, election epoch 2, quorum 0 controller | ||
| 509 | osdmap e22: 2 osds: 2 up, 2 in | ||
| 510 | pgmap v92: 342 pgs, 6 pools, 9532 kB data, 8 objects | ||
| 511 | 2143 MB used, 18316 MB / 20460 MB avail | ||
| 512 | 342 active+clean | ||
| 513 | |||
| 514 | (Should see "health HEALTH_OK", which indicates the Ceph cluster is healthy) | ||
| 515 | |||
| 516 | $ nova boot --flavor 1 \ | ||
| 517 | --image myfirstimage \ | ||
| 518 | --block-device source=volume,id=c905b9b1-10cb-413b-a949-c86ff3c1c4c6,dest=volume,shutdown=preserve \ | ||
| 519 | myinstance | ||
| 520 | |||
| 521 | (Booting VM with only existing CephRBD Cinder volume as block device) | ||
| 522 | |||
| 523 | $ nova list | ||
| 524 | |||
| 525 | +--------------------------------------+------------+--------+------------+-------------+----------+ | ||
| 526 | | ID | Name | Status | Task State | Power State | Networks | | ||
| 527 | +--------------------------------------+------------+--------+------------+-------------+----------+ | ||
| 528 | | 4e984fd0-a0af-435f-84a1-ecd6b24b7256 | myinstance | ACTIVE | - | Running | | | ||
| 529 | +--------------------------------------+------------+--------+------------+-------------+----------+ | ||
| 530 | |||
| 531 | $ From the dashboard, log into the VM console. Assume that the second block device (Ceph RBD) | ||
| 532 | is /dev/vdb | ||
| 533 | $ On VM, run: "sudo mkfs.ext4 /dev/vdb && sudo mkdir ceph && sudo mount /dev/vdb ceph && sudo chmod 777 -R ceph" | ||
| 534 | $ On VM, run: "echo "Hello World" > ceph/test.log && dd if=/dev/urandom of=ceph/512M bs=1M count=512 && sync" | ||
| 535 | $ On VM, run: "cat ceph/test.log && sudo umount ceph" | ||
| 536 | |||
| 537 | Hello World | ||
| 538 | |||
| 539 | $ /etc/init.d/ceph stop osd.0 | ||
| 540 | $ On VM, run: "sudo mount /dev/vdb ceph && echo "Hello World" > ceph/test_1.log" | ||
| 541 | $ On VM, run: "cat ceph/test*.log && sudo umount ceph" | ||
| 542 | |||
| 543 | Hello World | ||
| 544 | Hello World | ||
| 545 | |||
| 546 | $ /etc/init.d/ceph start osd.0 | ||
| 547 | $ Wait until "ceph -s" shows "health HEALTH_OK" | ||
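|  | |||
|  | (One way to wait for this condition from the shell, as a sketch:) | ||
|  | |||
|  | $ while ! ceph -s | grep -q HEALTH_OK; do sleep 5; done | ||
|  | |||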
| 548 | $ /etc/init.d/ceph stop osd.1 | ||
| 549 | $ On VM, run: "sudo mount /dev/vdb ceph && echo "Hello World" > ceph/test_2.log" | ||
| 550 | $ On VM, run: "cat ceph/test*.log && sudo umount ceph" | ||
| 551 | |||
| 552 | Hello World | ||
| 553 | Hello World | ||
| 554 | Hello World | ||
| 555 | |||
| 556 | $ /etc/init.d/ceph stop osd.0 | ||
| 557 | (Both Ceph OSD daemons are down, so no Ceph Cinder volume available) | ||
| 558 | |||
| 559 | $ On VM, run "sudo mount /dev/vdb ceph" | ||
| 560 | (The mount hangs indefinitely, as the Ceph Cinder volume is not available) | ||
| 561 | |||
| 562 | $ /etc/init.d/ceph start osd.0 | ||
| 563 | $ /etc/init.d/ceph start osd.1 | ||
| 564 | $ On VM, the previous mount should now complete | ||
| 565 | $ On VM, run: "sudo mount /dev/vdb ceph && echo "Hello World" > ceph/test_3.log" | ||
| 566 | $ On VM, run: "cat ceph/test*.log && sudo umount ceph" | ||
| 567 | |||
| 568 | Hello World | ||
| 569 | Hello World | ||
| 570 | Hello World | ||
| 571 | Hello World | ||
| 572 | |||
| 573 | $ nova delete 4e984fd0-a0af-435f-84a1-ecd6b24b7256 | ||
| 574 | $ cinder list | ||
| 575 | |||
| 576 | +--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+ | ||
| 577 | | ID | Status | Display Name | Size | Volume Type | Bootable | Attached to | | ||
| 578 | +--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+ | ||
| 579 | | 4a32a4fb-b670-4ed7-8dc8-f4e6f9b52db3 | available | lvm_vol_2 | 1 | lvm_iscsi | true | | | ||
| 580 | | b0805546-be7a-4908-b1d5-21202fe6ea79 | available | glusterfs_vol_1 | 1 | glusterfs | false | | | ||
| 581 | | c905b9b1-10cb-413b-a949-c86ff3c1c4c6 | available | cephrbd_vol_1 | 1 | cephrbd | false | | | ||
| 582 | | cea76733-b4ce-4e9a-9bfb-24cc3066070f | available | nfs_vol_1 | 1 | nfs | false | | | ||
| 583 | | e85a0e2c-6dc4-4182-ab7d-f3950f225ee4 | available | lvm_vol_1 | 1 | lvm_iscsi | false | | | ||
| 584 | +--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+ | ||
| 585 | |||
| 586 | (All the volumes should be available) | ||
| 587 | |||
| 588 | |||
| 589 | Additional References | ||
| 590 | ===================== | ||
| 591 | |||
| 592 | * https://ceph.com/docs/master/rbd/rbd-openstack/ | ||
diff --git a/meta-openstack/Documentation/README.heat b/meta-openstack/Documentation/README.heat new file mode 100644 index 0000000..276ad23 --- /dev/null +++ b/meta-openstack/Documentation/README.heat | |||
| @@ -0,0 +1,549 @@ | |||
| 1 | Summary | ||
| 2 | ======= | ||
| 3 | |||
| 4 | This document is not intended to provide details of how Heat works in | ||
| 5 | general, but rather describes how Heat is tested to ensure that Heat | ||
| 6 | is working correctly. | ||
| 7 | |||
| 8 | |||
| 9 | Heat Overview | ||
| 10 | ============= | ||
| 11 | |||
| 12 | Heat is a template-driven orchestration engine which enables you to orchestrate | ||
| 13 | multiple composite cloud applications. A Heat stack is a set of resources | ||
| 14 | (including Nova compute VMs, Cinder volumes, Neutron IPs...) which are described | ||
| 15 | by a Heat Orchestration Template (HOT) file. | ||
| 16 | |||
| 17 | Heat interacts with Ceilometer to provide an autoscaling feature in which, when | ||
| 18 | resources within a stack reach a certain watermark (e.g. CPU utilization hits 70%), | ||
| 19 | Heat can add resources to, or remove resources from, that stack to handle | ||
| 20 | the changing demand. | ||
| 21 | |||
| 22 | |||
| 23 | Build Configuration Options | ||
| 24 | =========================== | ||
| 25 | |||
| 26 | * Controller build config options: | ||
| 27 | |||
| 28 | --enable-board=intel-xeon-core \ | ||
| 29 | --enable-rootfs=ovp-openstack-controller \ | ||
| 30 | --enable-addons=wr-ovp-openstack,wr-ovp \ | ||
| 31 | --with-template=feature/openstack-tests | ||
| 32 | |||
| 33 | * Compute build config options: | ||
| 34 | |||
| 35 | --enable-board=intel-xeon-core \ | ||
| 36 | --enable-rootfs=ovp-openstack-compute \ | ||
| 37 | --enable-addons=wr-ovp-openstack,wr-ovp | ||
| 38 | |||
| 39 | |||
| 40 | Test Steps | ||
| 41 | ========== | ||
| 42 | |||
| 43 | Please note: the following commands/steps are carried out on the Controller | ||
| 44 | node, unless otherwise explicitly indicated. | ||
| 45 | |||
| 46 | $ Start Controller and Compute node | ||
| 47 | $ . /etc/nova/openrc | ||
| 48 | $ glance image-create --name myfirstimage --is-public true --container-format bare --disk-format qcow2 --file /root/images/cirros-0.3.0-x86_64-disk.img | ||
| 49 | $ glance image-list | ||
| 50 | +--------------------------------------+--------------+-------------+------------------+---------+--------+ | ||
| 51 | | ID | Name | Disk Format | Container Format | Size | Status | | ||
| 52 | +--------------------------------------+--------------+-------------+------------------+---------+--------+ | ||
| 53 | | a71b84d3-e656-43f5-afdc-ac67cf4f398a | myfirstimage | qcow2 | bare | 9761280 | active | | ||
| 54 | +--------------------------------------+--------------+-------------+------------------+---------+--------+ | ||
| 55 | |||
| 56 | $ neutron net-create mynetwork | ||
| 57 | $ neutron net-list | ||
| 58 | +--------------------------------------+-----------+---------+ | ||
| 59 | | id | name | subnets | | ||
| 60 | +--------------------------------------+-----------+---------+ | ||
| 61 | | 8c9c8a6f-d90f-479f-82b0-7f6e963b39d7 | mynetwork | | | ||
| 62 | +--------------------------------------+-----------+---------+ | ||
| 63 | |||
| 64 | $ /etc/cinder/add-cinder-volume-types.sh | ||
| 65 | $ cinder create --volume_type nfs --display_name nfs_vol_1 3 | ||
| 66 | $ cinder create --volume_type glusterfs --display_name glusterfs_vol_1 2 | ||
| 67 | $ cinder list | ||
| 68 | +--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+ | ||
| 69 | | ID | Status | Display Name | Size | Volume Type | Bootable | Attached to | | ||
| 70 | +--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+ | ||
| 71 | | 1d283cf7-2258-4038-a31b-5cb5fba995f3 | available | nfs_vol_1 | 3 | nfs | false | | | ||
| 72 | | c46276fb-e3df-4093-b759-2ce03c4cefd5 | available | glusterfs_vol_1 | 2 | glusterfs | false | | | ||
| 73 | +--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+ | ||
| 74 | |||
| 75 | *** The below test steps are for testing lifecycle management *** | ||
| 76 | |||
| 77 | $ heat stack-create -f /etc/heat/templates/two_vms_example.template -P "vol_2_id=c46276fb-e3df-4093-b759-2ce03c4cefd5" mystack | ||
| 78 | $ Keep running "heat stack-list" until seeing | ||
| 79 | +--------------------------------------+------------+-----------------+----------------------+ | ||
| 80 | | id | stack_name | stack_status | creation_time | | ||
| 81 | +--------------------------------------+------------+-----------------+----------------------+ | ||
| 82 | | 8d79a777-2e09-4d06-a04e-8430df341514 | mystack | CREATE_COMPLETE | 2014-05-08T16:36:22Z | | ||
| 83 | +--------------------------------------+------------+-----------------+----------------------+ | ||
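|  | |||
|  | (If the watch utility is available on the Controller, the repeated polling in | ||
|  | steps like this can be done with, for example:) | ||
|  | |||
|  | $ watch -n 10 heat stack-list | ||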
| 84 | |||
| 85 | $ nova list | ||
| 86 | +--------------------------------------+---------------------------+--------+------------+-------------+----------+ | ||
| 87 | | ID | Name | Status | Task State | Power State | Networks | | ||
| 88 | +--------------------------------------+---------------------------+--------+------------+-------------+----------+ | ||
| 89 | | e58da67a-9a22-4db2-9d1e-0a1810df2f2e | mystack-vm_1-j5tq5m2ppk3z | ACTIVE | - | Running | | | ||
| 90 | | fe1b693e-db8c-4d74-b840-7dae8acef0c1 | mystack-vm_2-oiu6mzg3jjv2 | ACTIVE | - | Running | | | ||
| 91 | +--------------------------------------+---------------------------+--------+------------+-------------+----------+ | ||
| 92 | |||
| 93 | (Should see 2 VMs: vm_1 and vm_2 as defined in two_vms_example.template) | ||
| 94 | |||
| 95 | $ cinder list | ||
| 96 | +--------------------------------------+-----------+----------------------------+------+-------------+----------+--------------------------------------+ | ||
| 97 | | ID | Status | Display Name | Size | Volume Type | Bootable | Attached to | | ||
| 98 | +--------------------------------------+-----------+----------------------------+------+-------------+----------+--------------------------------------+ | ||
| 99 | | 1d283cf7-2258-4038-a31b-5cb5fba995f3 | available | nfs_vol_1 | 3 | nfs | false | | | ||
| 100 | | 58b55eb9-e619-4c2e-bfd8-ebc349aff02b | in-use | mystack-vol_1-5xk2zotoxidr | 1 | lvm_iscsi | false | e58da67a-9a22-4db2-9d1e-0a1810df2f2e | | ||
| 101 | | c46276fb-e3df-4093-b759-2ce03c4cefd5 | in-use | glusterfs_vol_1 | 2 | glusterfs | false | e58da67a-9a22-4db2-9d1e-0a1810df2f2e | | ||
| 102 | +--------------------------------------+-----------+----------------------------+------+-------------+----------+--------------------------------------+ | ||
| 103 | |||
| 104 | (2 Cinder volumes are attached to vm_1) | ||
| 105 | |||
| 106 | $ From Dashboard, log into vm_1, and run "cat /proc/partitions" | ||
| 107 | (Should see partitions vdb of 1G and vdc of 2G) | ||
| 108 | |||
| 109 | $ heat stack-show 8d79a777-2e09-4d06-a04e-8430df341514 | ||
| 110 | |||
| 111 | (The command should return a lot of info) | ||
| 112 | |||
| 113 | $ heat resource-list mystack | ||
| 114 | +------------------+------------------------------+-----------------+----------------------+ | ||
| 115 | | resource_name | resource_type | resource_status | updated_time | | ||
| 116 | +------------------+------------------------------+-----------------+----------------------+ | ||
| 117 | | vol_1 | OS::Cinder::Volume | CREATE_COMPLETE | 2014-05-08T16:36:24Z | | ||
| 118 | | vm_2 | OS::Nova::Server | CREATE_COMPLETE | 2014-05-08T16:36:56Z | | ||
| 119 | | vm_1 | OS::Nova::Server | CREATE_COMPLETE | 2014-05-08T16:37:26Z | | ||
| 120 | | vol_2_attachment | OS::Cinder::VolumeAttachment | CREATE_COMPLETE | 2014-05-08T16:37:28Z | | ||
| 121 | | vol_1_attachment | OS::Cinder::VolumeAttachment | CREATE_COMPLETE | 2014-05-08T16:37:31Z | | ||
| 122 | +------------------+------------------------------+-----------------+----------------------+ | ||
| 123 | |||
| 124 | (Should see all 5 resources defined in two_vms_example.template) | ||
| 125 | |||
| 126 | $ heat resource-show mystack vm_1 | ||
| 127 | +------------------------+------------------------------------------------------------------------------------------------------------------------------------+ | ||
| 128 | | Property | Value | | ||
| 129 | +------------------------+------------------------------------------------------------------------------------------------------------------------------------+ | ||
| 130 | | description | | | ||
| 131 | | links | http://128.224.149.168:8004/v1/088d10de18a84442a7a03497e834e2af/stacks/mystack/8d79a777-2e09-4d06-a04e-8430df341514/resources/vm_1 | | ||
| 132 | | | http://128.224.149.168:8004/v1/088d10de18a84442a7a03497e834e2af/stacks/mystack/8d79a777-2e09-4d06-a04e-8430df341514 | | ||
| 133 | | logical_resource_id | vm_1 | | ||
| 134 | | physical_resource_id | e58da67a-9a22-4db2-9d1e-0a1810df2f2e | | ||
| 135 | | required_by | vol_1_attachment | | ||
| 136 | | | vol_2_attachment | | ||
| 137 | | resource_name | vm_1 | | ||
| 138 | | resource_status | CREATE_COMPLETE | | ||
| 139 | | resource_status_reason | state changed | | ||
| 140 | | resource_type | OS::Nova::Server | | ||
| 141 | | updated_time | 2014-05-08T16:37:26Z | | ||
| 142 | +------------------------+------------------------------------------------------------------------------------------------------------------------------------+ | ||
| 143 | |||
| 144 | $ heat template-show mystack | ||
| 145 | |||
| 146 | (The content of this command should be the same as content of two_vms_example.template) | ||
| 147 | |||
| 148 | $ heat event-list mystack | ||
| 149 | +------------------+-----+------------------------+--------------------+----------------------+ | ||
| 150 | | resource_name | id | resource_status_reason | resource_status | event_time | | ||
| 151 | +------------------+-----+------------------------+--------------------+----------------------+ | ||
| 152 | | vm_1 | 700 | state changed | CREATE_IN_PROGRESS | 2014-05-08T16:36:22Z | | ||
| 153 | | vm_1 | 704 | state changed | CREATE_COMPLETE | 2014-05-08T16:37:26Z | | ||
| 154 | | vm_2 | 699 | state changed | CREATE_IN_PROGRESS | 2014-05-08T16:36:22Z | | ||
| 155 | | vm_2 | 703 | state changed | CREATE_COMPLETE | 2014-05-08T16:36:56Z | | ||
| 156 | | vol_1 | 701 | state changed | CREATE_IN_PROGRESS | 2014-05-08T16:36:22Z | | ||
| 157 | | vol_1 | 702 | state changed | CREATE_COMPLETE | 2014-05-08T16:36:24Z | | ||
| 158 | | vol_1_attachment | 706 | state changed | CREATE_IN_PROGRESS | 2014-05-08T16:37:27Z | | ||
| 159 | | vol_1_attachment | 708 | state changed | CREATE_COMPLETE | 2014-05-08T16:37:31Z | | ||
| 160 | | vol_2_attachment | 705 | state changed | CREATE_IN_PROGRESS | 2014-05-08T16:37:26Z | | ||
| 161 | | vol_2_attachment | 707 | state changed | CREATE_COMPLETE | 2014-05-08T16:37:28Z | | ||
| 162 | +------------------+-----+------------------------+--------------------+----------------------+ | ||
| 163 | |||
| 164 | $ heat action-suspend mystack | ||
| 165 | $ Keep running "heat stack-list" until seeing | ||
| 166 | +--------------------------------------+------------+------------------+----------------------+ | ||
| 167 | | id | stack_name | stack_status | creation_time | | ||
| 168 | +--------------------------------------+------------+------------------+----------------------+ | ||
| 169 | | 8d79a777-2e09-4d06-a04e-8430df341514 | mystack | SUSPEND_COMPLETE | 2014-05-08T16:36:22Z | | ||
| 170 | +--------------------------------------+------------+------------------+----------------------+ | ||
| 171 | |||
| 172 | $ nova list | ||
| 173 | +--------------------------------------+---------------------------+-----------+------------+-------------+----------+ | ||
| 174 | | ID | Name | Status | Task State | Power State | Networks | | ||
| 175 | +--------------------------------------+---------------------------+-----------+------------+-------------+----------+ | ||
| 176 | | e58da67a-9a22-4db2-9d1e-0a1810df2f2e | mystack-vm_1-j5tq5m2ppk3z | SUSPENDED | - | Shutdown | | | ||
| 177 | | fe1b693e-db8c-4d74-b840-7dae8acef0c1 | mystack-vm_2-oiu6mzg3jjv2 | SUSPENDED | - | Shutdown | | | ||
| 178 | +--------------------------------------+---------------------------+-----------+------------+-------------+----------+ | ||
| 179 | |||
| 180 | (2 VMs are in suspended mode) | ||
| 181 | |||
| 182 | $ Wait for 2 minutes | ||
| 183 | $ heat action-resume mystack | ||
| 184 | $ Keep running "heat stack-list" until seeing | ||
| 185 | +--------------------------------------+------------+-----------------+----------------------+ | ||
| 186 | | id | stack_name | stack_status | creation_time | | ||
| 187 | +--------------------------------------+------------+-----------------+----------------------+ | ||
| 188 | | 8d79a777-2e09-4d06-a04e-8430df341514 | mystack | RESUME_COMPLETE | 2014-05-08T16:36:22Z | | ||
| 189 | +--------------------------------------+------------+-----------------+----------------------+ | ||
| 190 | |||
| 191 | $ nova list | ||
| 192 | +--------------------------------------+---------------------------+--------+------------+-------------+----------+ | ||
| 193 | | ID | Name | Status | Task State | Power State | Networks | | ||
| 194 | +--------------------------------------+---------------------------+--------+------------+-------------+----------+ | ||
| 195 | | e58da67a-9a22-4db2-9d1e-0a1810df2f2e | mystack-vm_1-j5tq5m2ppk3z | ACTIVE | - | Running | | | ||
| 196 | | fe1b693e-db8c-4d74-b840-7dae8acef0c1 | mystack-vm_2-oiu6mzg3jjv2 | ACTIVE | - | Running | | | ||
| 197 | +--------------------------------------+---------------------------+--------+------------+-------------+----------+ | ||
| 198 | |||
| 199 | $ nova show e58da67a-9a22-4db2-9d1e-0a1810df2f2e | grep flavor | ||
| 200 | | flavor | m1.tiny (1) | ||
| 201 | |||
| 202 | (Should see m1.tiny flavour for vm_2 by default) | ||
| 203 | |||
| 204 | $ heat stack-update -f /etc/heat/templates/two_vms_example.template -P "vol_2_id=c46276fb-e3df-4093-b759-2ce03c4cefd5;vm_type=m1.small" mystack | ||
| 205 | (Update vm_2 flavour from m1.tiny to m1.small) | ||
| 206 | |||
| 207 | $ Keep running "heat stack-list" until seeing | ||
| 208 | +--------------------------------------+------------+-----------------+----------------------+ | ||
| 209 | | id | stack_name | stack_status | creation_time | | ||
| 210 | +--------------------------------------+------------+-----------------+----------------------+ | ||
| 211 | | 8d79a777-2e09-4d06-a04e-8430df341514 | mystack | UPDATE_COMPLETE | 2014-05-08T16:36:22Z | | ||
| 212 | +--------------------------------------+------------+-----------------+----------------------+ | ||
| 213 | |||
| 214 | $ nova list | ||
| 215 | +--------------------------------------+---------------------------+--------+------------+-------------+----------+ | ||
| 216 | | ID | Name | Status | Task State | Power State | Networks | | ||
| 217 | +--------------------------------------+---------------------------+--------+------------+-------------+----------+ | ||
| 218 | | e58da67a-9a22-4db2-9d1e-0a1810df2f2e | mystack-vm_1-j5tq5m2ppk3z | ACTIVE | - | Running | | | ||
| 219 | | fe1b693e-db8c-4d74-b840-7dae8acef0c1 | mystack-vm_2-oiu6mzg3jjv2 | ACTIVE | - | Running | | | ||
| 220 | +--------------------------------------+---------------------------+--------+------------+-------------+----------+ | ||
| 221 | |||
| 222 | $ nova show e58da67a-9a22-4db2-9d1e-0a1810df2f2e | grep flavor | ||
| 223 | | flavor | m1.small (2) | | ||
| 224 | |||
| 225 | (Should see m1.small flavour for vm_2. This demonstrates that the stack is able to update) | ||
| 226 | |||
| 227 | $ heat stack-create -f /etc/heat/templates/two_vms_example.template -P "vol_2_id=1d283cf7-2258-4038-a31b-5cb5fba995f3" mystack2 | ||
| 228 | $ Keep running "heat stack-list" until seeing | ||
| 229 | +--------------------------------------+------------+-----------------+----------------------+ | ||
| 230 | | id | stack_name | stack_status | creation_time | | ||
| 231 | +--------------------------------------+------------+-----------------+----------------------+ | ||
| 232 | | 8d79a777-2e09-4d06-a04e-8430df341514 | mystack | UPDATE_COMPLETE | 2014-05-08T16:36:22Z | | ||
| 233 | | 258e05fd-370b-4017-818a-4bb573ac3982 | mystack2 | CREATE_COMPLETE | 2014-05-08T16:46:12Z | | ||
| 234 | +--------------------------------------+------------+-----------------+----------------------+ | ||
| 235 | |||
| 236 | $ nova list | ||
| 237 | +--------------------------------------+----------------------------+--------+------------+-------------+----------+ | ||
| 238 | | ID | Name | Status | Task State | Power State | Networks | | ||
| 239 | +--------------------------------------+----------------------------+--------+------------+-------------+----------+ | ||
| 240 | | e58da67a-9a22-4db2-9d1e-0a1810df2f2e | mystack-vm_1-j5tq5m2ppk3z | ACTIVE | - | Running | | | ||
| 241 | | fe1b693e-db8c-4d74-b840-7dae8acef0c1 | mystack-vm_2-oiu6mzg3jjv2 | ACTIVE | - | Running | | | ||
| 242 | | 181931ae-4c3b-488d-a7c6-4c4b08f6a35b | mystack2-vm_1-hmy5feakes2q | ACTIVE | - | Running | | | ||
| 243 | | c80aaea9-276f-4bdb-94d5-9e0c5434f269 | mystack2-vm_2-6b4nzxecda3j | ACTIVE | - | Running | | | ||
| 244 | +--------------------------------------+----------------------------+--------+------------+-------------+----------+ | ||
| 245 | |||
| 246 | $ cinder list | ||
| 247 | +--------------------------------------+--------+-----------------------------+------+-------------+----------+--------------------------------------+ | ||
| 248 | | ID | Status | Display Name | Size | Volume Type | Bootable | Attached to | | ||
| 249 | +--------------------------------------+--------+-----------------------------+------+-------------+----------+--------------------------------------+ | ||
| 250 | | 1d283cf7-2258-4038-a31b-5cb5fba995f3 | in-use | nfs_vol_1 | 3 | nfs | false | 181931ae-4c3b-488d-a7c6-4c4b08f6a35b | | ||
| 251 | | 2175db16-5569-45eb-af8b-2b967e2b808c | in-use | mystack2-vol_1-ey6udylsdqbo | 1 | lvm_iscsi | false | 181931ae-4c3b-488d-a7c6-4c4b08f6a35b | | ||
| 252 | | 58b55eb9-e619-4c2e-bfd8-ebc349aff02b | in-use | mystack-vol_1-5xk2zotoxidr | 1 | lvm_iscsi | false | e58da67a-9a22-4db2-9d1e-0a1810df2f2e | | ||
| 253 | | c46276fb-e3df-4093-b759-2ce03c4cefd5 | in-use | glusterfs_vol_1 | 2 | glusterfs | false | e58da67a-9a22-4db2-9d1e-0a1810df2f2e | | ||
| 254 | +--------------------------------------+--------+-----------------------------+------+-------------+----------+--------------------------------------+ | ||
| 255 | |||
| 256 | $ Wait for 5 minutes | ||
| 257 | $ heat stack-delete mystack2 | ||
| 258 | $ heat stack-delete mystack | ||
| 259 | $ Keep running "heat stack-list" until seeing | ||
| 260 | +----+------------+--------------+---------------+ | ||
| 261 | | id | stack_name | stack_status | creation_time | | ||
| 262 | +----+------------+--------------+---------------+ | ||
| 263 | +----+------------+--------------+---------------+ | ||
| 264 | |||
| 265 | $ nova list | ||
| 266 | +----+------+--------+------------+-------------+----------+ | ||
| 267 | | ID | Name | Status | Task State | Power State | Networks | | ||
| 268 | +----+------+--------+------------+-------------+----------+ | ||
| 269 | +----+------+--------+------------+-------------+----------+ | ||
| 270 | |||
| 271 | $ cinder list | ||
| 272 | +--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+ | ||
| 273 | | ID | Status | Display Name | Size | Volume Type | Bootable | Attached to | | ||
| 274 | +--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+ | ||
| 275 | | 1d283cf7-2258-4038-a31b-5cb5fba995f3 | available | nfs_vol_1 | 3 | nfs | false | | | ||
| 276 | | c46276fb-e3df-4093-b759-2ce03c4cefd5 | available | glusterfs_vol_1 | 2 | glusterfs | false | | | ||
| 277 | +--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+ | ||
| 278 | |||
| 279 | *** The below test steps are for testing autoscaling *** | ||
| 280 | |||
| 281 | $ By default, Ceilometer data samples are collected every 10 minutes; therefore, | ||
| 282 | to speed up the test process, change the data sample polling rate to every | ||
| 283 | 1 minute. On the Controller node modify /etc/ceilometer/pipeline.yaml to change | ||
| 284 | all values of "600" to "60". | ||
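|  | |||
|  | (One way to apply this change on either node, assuming the polling interval | ||
|  | appears in pipeline.yaml as "interval: 600":) | ||
|  | |||
|  | $ sed -i 's/interval: 600/interval: 60/g' /etc/ceilometer/pipeline.yaml | ||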
| 285 | |||
| 286 | *** The following commands are for resetting the Ceilometer database *** | ||
| 287 | |||
| 288 | $ /etc/init.d/ceilometer-agent-notification stop | ||
| 289 | $ /etc/init.d/ceilometer-agent-central stop | ||
| 290 | $ /etc/init.d/ceilometer-alarm-evaluator stop | ||
| 291 | $ /etc/init.d/ceilometer-alarm-notifier stop | ||
| 292 | $ /etc/init.d/ceilometer-api stop | ||
| 293 | $ /etc/init.d/ceilometer-collector stop | ||
| 294 | $ sudo -u postgres psql -c "DROP DATABASE ceilometer" | ||
| 295 | $ sudo -u postgres psql -c "CREATE DATABASE ceilometer" | ||
| 296 | $ ceilometer-dbsync | ||
| 297 | $ /etc/init.d/ceilometer-agent-central restart | ||
| 298 | $ /etc/init.d/ceilometer-agent-notification restart | ||
| 299 | $ /etc/init.d/ceilometer-alarm-evaluator restart | ||
| 300 | $ /etc/init.d/ceilometer-alarm-notifier restart | ||
| 301 | $ /etc/init.d/ceilometer-api restart | ||
| 302 | $ /etc/init.d/ceilometer-collector restart | ||
| 303 | |||
| 304 | $ On the Compute node, modify /etc/ceilometer/pipeline.yaml to change all | ||
| 305 | values of "600" to "60", and run: | ||
| 306 | |||
| 307 | /etc/init.d/ceilometer-agent-compute restart | ||
| 308 | |||
| 309 | $ heat stack-create -f /etc/heat/templates/autoscaling_example.template mystack | ||
| 310 | $ Keep running "heat stack-list" until seeing | ||
| 311 | +--------------------------------------+------------+-----------------+----------------------+ | ||
| 312 | | id | stack_name | stack_status | creation_time | | ||
| 313 | +--------------------------------------+------------+-----------------+----------------------+ | ||
| 314 | | 1905a541-edfa-4a1f-b6f9-eacdf5f62d85 | mystack | CREATE_COMPLETE | 2014-05-06T22:46:08Z | | ||
| 315 | +--------------------------------------+------------+-----------------+----------------------+ | ||
| 316 | |||
| 317 | $ ceilometer alarm-list | ||
| 318 | +--------------------------------------+-------------------------------------+-------------------+---------+------------+---------------------------------+------------------+ | ||
| 319 | | Alarm ID | Name | State | Enabled | Continuous | Alarm condition | Time constraints | | ||
| 320 | +--------------------------------------+-------------------------------------+-------------------+---------+------------+---------------------------------+------------------+ | ||
| 321 | | 14a1c510-667c-4717-a49d-eba375e48e01 | mystack-cpu_alarm_low-3jmjeyqrwd3k | insufficient data | True | False | cpu_util < 45.0 during 1 x 120s | None | | ||
| 322 | | edcc3c0c-84b7-4694-8312-72677bc50efd | mystack-cpu_alarm_high-jiziioj5zt6r | insufficient data | True | False | cpu_util > 80.0 during 1 x 300s | None | | ||
| 323 | +--------------------------------------+-------------------------------------+-------------------+---------+------------+---------------------------------+------------------+ | ||
| 324 | |||
| 325 | $ nova list | ||
| 326 | +--------------------------------------+---------------------------------------------------------------+--------+------------+-------------+----------+ | ||
| 327 | | ID | Name | Status | Task State | Power State | Networks | | ||
| 328 | +--------------------------------------+---------------------------------------------------------------+--------+------------+-------------+----------+ | ||
| 329 | | 49201b33-1cd9-4e2f-b182-224f29c2bb7c | mystack-server_group-etzglbdncptt-server_group-0-moxmbfm4pynk | ACTIVE | - | Running | | | ||
| 330 | +--------------------------------------+---------------------------------------------------------------+--------+------------+-------------+----------+ | ||
| 331 | |||
| 332 | $ Keep running "ceilometer alarm-list" until you see: | ||
| 333 | +--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ | ||
| 334 | | Alarm ID | Name | State | Enabled | Continuous | Alarm condition | Time constraints | | ||
| 335 | +--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ | ||
| 336 | | 39a9cb57-46f0-4f8a-885d-9a645aa467b1 | mystack-cpu_alarm_low-jrjmwxbhzdd3 | alarm | True | True | cpu_util < 15.0 during 1 x 120s | None | | ||
| 337 | | 5dd5747e-a7a7-4ccd-994b-b0b8733728de | mystack-cpu_alarm_high-tpgsvsbcskjt | ok | True | True | cpu_util > 80.0 during 1 x 180s | None | | ||
| 338 | +--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ | ||
| 339 | |||
| 340 | $ Wait for 5 minutes | ||
| 341 | $ ceilometer alarm-list | ||
| 342 | +--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ | ||
| 343 | | Alarm ID | Name | State | Enabled | Continuous | Alarm condition | Time constraints | | ||
| 344 | +--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ | ||
| 345 | | 39a9cb57-46f0-4f8a-885d-9a645aa467b1 | mystack-cpu_alarm_low-jrjmwxbhzdd3 | alarm | True | True | cpu_util < 15.0 during 1 x 120s | None | | ||
| 346 | | 5dd5747e-a7a7-4ccd-994b-b0b8733728de | mystack-cpu_alarm_high-tpgsvsbcskjt | ok | True | True | cpu_util > 80.0 during 1 x 180s | None | | ||
| 347 | +--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ | ||
| 348 | |||
| 349 | $ On Dashboard, log into the 49201b33-1cd9-4e2f-b182-224f29c2bb7c VM console and run | ||
| 350 | |||
| 351 | while [ true ]; do echo "hello world"; done | ||
| 352 | |||
| 353 | $ Keep running "ceilometer alarm-list" until you see: | ||
| 354 | +--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ | ||
| 355 | | Alarm ID | Name | State | Enabled | Continuous | Alarm condition | Time constraints | | ||
| 356 | +--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ | ||
| 357 | | 14a1c510-667c-4717-a49d-eba375e48e01 | mystack-cpu_alarm_low-3jmjeyqrwd3k | ok | True | False | cpu_util < 45.0 during 1 x 120s | None | | ||
| 358 | | edcc3c0c-84b7-4694-8312-72677bc50efd | mystack-cpu_alarm_high-jiziioj5zt6r | alarm | True | False | cpu_util > 80.0 during 1 x 300s | None | | ||
| 359 | +--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ | ||
| 360 | |||
| 361 | $ nova list | ||
| 362 | +--------------------------------------+---------------------------------------------------------------+--------+------------+-------------+----------+ | ||
| 363 | | ID | Name | Status | Task State | Power State | Networks | | ||
| 364 | +--------------------------------------+---------------------------------------------------------------+--------+------------+-------------+----------+ | ||
| 365 | | 49201b33-1cd9-4e2f-b182-224f29c2bb7c | mystack-server_group-etzglbdncptt-server_group-0-moxmbfm4pynk | ACTIVE | - | Running | | | ||
| 366 | | a1a21353-3400-46be-b75c-8c6a0a74a1de | mystack-server_group-etzglbdncptt-server_group-1-wzspkkrb4qay | BUILD | spawning | NOSTATE | | | ||
| 367 | +--------------------------------------+---------------------------------------------------------------+--------+------------+-------------+----------+ | ||
| 368 | |||
| 369 | (Should see that Heat now auto-scales up by creating another VM) | ||
| 370 | |||
| 371 | $ nova list | ||
| 372 | +--------------------------------------+---------------------------------------------------------------+--------+------------+-------------+----------+ | ||
| 373 | | ID | Name | Status | Task State | Power State | Networks | | ||
| 374 | +--------------------------------------+---------------------------------------------------------------+--------+------------+-------------+----------+ | ||
| 375 | | 49201b33-1cd9-4e2f-b182-224f29c2bb7c | mystack-server_group-etzglbdncptt-server_group-0-moxmbfm4pynk | ACTIVE | - | Running | | | ||
| 376 | | a1a21353-3400-46be-b75c-8c6a0a74a1de | mystack-server_group-etzglbdncptt-server_group-1-wzspkkrb4qay | ACTIVE | - | Running | | | ||
| 377 | +--------------------------------------+---------------------------------------------------------------+--------+------------+-------------+----------+ | ||
| 378 | |||
| 379 | $ Keep running "ceilometer alarm-list" until you see: | ||
| 380 | +--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ | ||
| 381 | | Alarm ID | Name | State | Enabled | Continuous | Alarm condition | Time constraints | | ||
| 382 | +--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ | ||
| 383 | | 14a1c510-667c-4717-a49d-eba375e48e01 | mystack-cpu_alarm_low-3jmjeyqrwd3k | ok | True | False | cpu_util < 45.0 during 1 x 120s | None | | ||
| 384 | | edcc3c0c-84b7-4694-8312-72677bc50efd | mystack-cpu_alarm_high-jiziioj5zt6r | ok | True | False | cpu_util > 80.0 during 1 x 300s | None | | ||
| 385 | +--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ | ||
| 386 | |||
| 387 | (Both alarms should be in the "ok" state, as the average cpu_util for the 2 VMs should be around 50%, | ||
| 388 | which is within the range) | ||
| 389 | |||
| 390 | $ On Dashboard, log into the a1a21353-3400-46be-b75c-8c6a0a74a1de VM console and run | ||
| 391 | |||
| 392 | while [ true ]; do echo "hello world"; done | ||
| 393 | |||
| 394 | $ Keep running "ceilometer alarm-list" until you see: | ||
| 395 | +--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ | ||
| 396 | | Alarm ID | Name | State | Enabled | Continuous | Alarm condition | Time constraints | | ||
| 397 | +--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ | ||
| 398 | | 14a1c510-667c-4717-a49d-eba375e48e01 | mystack-cpu_alarm_low-3jmjeyqrwd3k | ok | True | False | cpu_util < 45.0 during 1 x 120s | None | | ||
| 399 | | edcc3c0c-84b7-4694-8312-72677bc50efd | mystack-cpu_alarm_high-jiziioj5zt6r | alarm | True | False | cpu_util > 80.0 during 1 x 300s | None | | ||
| 400 | +--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ | ||
| 401 | |||
| 402 | $ nova list | ||
| 403 | +--------------------------------------+---------------------------------------------------------------+--------+------------+-------------+----------+ | ||
| 404 | | ID | Name | Status | Task State | Power State | Networks | | ||
| 405 | +--------------------------------------+---------------------------------------------------------------+--------+------------+-------------+----------+ | ||
| 406 | | 49201b33-1cd9-4e2f-b182-224f29c2bb7c | mystack-server_group-etzglbdncptt-server_group-0-moxmbfm4pynk | ACTIVE | - | Running | | | ||
| 407 | | a1a21353-3400-46be-b75c-8c6a0a74a1de | mystack-server_group-etzglbdncptt-server_group-1-wzspkkrb4qay | ACTIVE | - | Running | | | ||
| 408 | | 88e88660-acde-4998-a2e3-312efbc72447 | mystack-server_group-etzglbdncptt-server_group-2-jaotqnr5x7xb | BUILD | spawning | NOSTATE | | | ||
| 409 | +--------------------------------------+---------------------------------------------------------------+--------+------------+-------------+----------+ | ||
| 410 | |||
| 411 | (Should see that Heat now auto-scales up by creating another VM) | ||
| 412 | |||
| 413 | $ nova list | ||
| 414 | +--------------------------------------+---------------------------------------------------------------+--------+------------+-------------+----------+ | ||
| 415 | | ID | Name | Status | Task State | Power State | Networks | | ||
| 416 | +--------------------------------------+---------------------------------------------------------------+--------+------------+-------------+----------+ | ||
| 417 | | 49201b33-1cd9-4e2f-b182-224f29c2bb7c | mystack-server_group-etzglbdncptt-server_group-0-moxmbfm4pynk | ACTIVE | - | Running | | | ||
| 418 | | a1a21353-3400-46be-b75c-8c6a0a74a1de | mystack-server_group-etzglbdncptt-server_group-1-wzspkkrb4qay | ACTIVE | - | Running | | | ||
| 419 | | 88e88660-acde-4998-a2e3-312efbc72447 | mystack-server_group-etzglbdncptt-server_group-2-jaotqnr5x7xb | ACTIVE | - | Running | | | ||
| 420 | +--------------------------------------+---------------------------------------------------------------+--------+------------+-------------+----------+ | ||
| 421 | |||
| 422 | $ Keep running "ceilometer alarm-list" until you see: | ||
| 423 | +--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ | ||
| 424 | | Alarm ID | Name | State | Enabled | Continuous | Alarm condition | Time constraints | | ||
| 425 | +--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ | ||
| 426 | | 14a1c510-667c-4717-a49d-eba375e48e01 | mystack-cpu_alarm_low-3jmjeyqrwd3k | ok | True | False | cpu_util < 45.0 during 1 x 120s | None | | ||
| 427 | | edcc3c0c-84b7-4694-8312-72677bc50efd | mystack-cpu_alarm_high-jiziioj5zt6r | ok | True | False | cpu_util > 80.0 during 1 x 300s | None | | ||
| 428 | +--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ | ||
| 429 | |||
| 430 | $ Wait for 5 minutes | ||
| 431 | $ ceilometer alarm-list | ||
| 432 | +--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ | ||
| 433 | | Alarm ID | Name | State | Enabled | Continuous | Alarm condition | Time constraints | | ||
| 434 | +--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ | ||
| 435 | | 14a1c510-667c-4717-a49d-eba375e48e01 | mystack-cpu_alarm_low-3jmjeyqrwd3k | ok | True | False | cpu_util < 45.0 during 1 x 120s | None | | ||
| 436 | | edcc3c0c-84b7-4694-8312-72677bc50efd | mystack-cpu_alarm_high-jiziioj5zt6r | ok | True | False | cpu_util > 80.0 during 1 x 300s | None | | ||
| 437 | +--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ | ||
| 438 | |||
| 439 | $ On Dashboard, log into the a1a21353-3400-46be-b75c-8c6a0a74a1de VM console and | ||
| 440 | stop the while loop | ||
| 441 | |||
| 442 | $ Keep running "ceilometer alarm-list" until you see: | ||
| 443 | +--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ | ||
| 444 | | Alarm ID | Name | State | Enabled | Continuous | Alarm condition | Time constraints | | ||
| 445 | +--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ | ||
| 446 | | 14a1c510-667c-4717-a49d-eba375e48e01 | mystack-cpu_alarm_low-3jmjeyqrwd3k | alarm | True | False | cpu_util < 45.0 during 1 x 120s | None | | ||
| 447 | | edcc3c0c-84b7-4694-8312-72677bc50efd | mystack-cpu_alarm_high-jiziioj5zt6r | ok | True | False | cpu_util > 80.0 during 1 x 300s | None | | ||
| 448 | +--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ | ||
| 449 | |||
| 450 | $ nova list | ||
| 451 | +--------------------------------------+---------------------------------------------------------------+--------+------------+-------------+----------+ | ||
| 452 | | ID | Name | Status | Task State | Power State | Networks | | ||
| 453 | +--------------------------------------+---------------------------------------------------------------+--------+------------+-------------+----------+ | ||
| 454 | | 49201b33-1cd9-4e2f-b182-224f29c2bb7c | mystack-server_group-etzglbdncptt-server_group-0-moxmbfm4pynk | ACTIVE | - | Running | | | ||
| 455 | | a1a21353-3400-46be-b75c-8c6a0a74a1de | mystack-server_group-etzglbdncptt-server_group-1-wzspkkrb4qay | ACTIVE | - | Running | | | ||
| 456 | +--------------------------------------+---------------------------------------------------------------+--------+------------+-------------+----------+ | ||
| 457 | |||
| 458 | (Heat scales down the VMs by one as cpu_alarm_low is triggered) | ||
| 459 | |||
| 460 | $ Wait for 5 minutes | ||
| 461 | $ ceilometer alarm-list | ||
| 462 | +--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ | ||
| 463 | | Alarm ID | Name | State | Enabled | Continuous | Alarm condition | Time constraints | | ||
| 464 | +--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ | ||
| 465 | | 14a1c510-667c-4717-a49d-eba375e48e01 | mystack-cpu_alarm_low-3jmjeyqrwd3k | ok | True | False | cpu_util < 45.0 during 1 x 120s | None | | ||
| 466 | | edcc3c0c-84b7-4694-8312-72677bc50efd | mystack-cpu_alarm_high-jiziioj5zt6r | ok | True | False | cpu_util > 80.0 during 1 x 300s | None | | ||
| 467 | +--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ | ||
| 468 | |||
| 469 | $ On Dashboard, log into the 49201b33-1cd9-4e2f-b182-224f29c2bb7c VM console and | ||
| 470 | stop the while loop | ||
| 471 | |||
| 472 | $ Keep running "ceilometer alarm-list" until you see: | ||
| 473 | +--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ | ||
| 474 | | Alarm ID | Name | State | Enabled | Continuous | Alarm condition | Time constraints | | ||
| 475 | +--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ | ||
| 476 | | 14a1c510-667c-4717-a49d-eba375e48e01 | mystack-cpu_alarm_low-3jmjeyqrwd3k | alarm | True | False | cpu_util < 45.0 during 1 x 120s | None | | ||
| 477 | | edcc3c0c-84b7-4694-8312-72677bc50efd | mystack-cpu_alarm_high-jiziioj5zt6r | ok | True | False | cpu_util > 80.0 during 1 x 300s | None | | ||
| 478 | +--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ | ||
| 479 | |||
| 480 | $ nova list | ||
| 481 | +--------------------------------------+---------------------------------------------------------------+--------+------------+-------------+----------+ | ||
| 482 | | ID | Name | Status | Task State | Power State | Networks | | ||
| 483 | +--------------------------------------+---------------------------------------------------------------+--------+------------+-------------+----------+ | ||
| 484 | | 49201b33-1cd9-4e2f-b182-224f29c2bb7c | mystack-server_group-etzglbdncptt-server_group-0-moxmbfm4pynk | ACTIVE | - | Running | | | ||
| 485 | +--------------------------------------+---------------------------------------------------------------+--------+------------+-------------+----------+ | ||
| 486 | |||
| 487 | (Heat scales down the VMs by one as cpu_alarm_low is triggered) | ||
| 488 | |||
| 489 | $ Wait for 5 minutes | ||
| 490 | $ ceilometer alarm-list | ||
| 491 | +--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ | ||
| 492 | | Alarm ID | Name | State | Enabled | Continuous | Alarm condition | Time constraints | | ||
| 493 | +--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ | ||
| 494 | | 14a1c510-667c-4717-a49d-eba375e48e01 | mystack-cpu_alarm_low-3jmjeyqrwd3k | alarm | True | False | cpu_util < 45.0 during 1 x 120s | None | | ||
| 495 | | edcc3c0c-84b7-4694-8312-72677bc50efd | mystack-cpu_alarm_high-jiziioj5zt6r | ok | True | False | cpu_util > 80.0 during 1 x 300s | None | | ||
| 496 | +--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ | ||
| 497 | |||
| 498 | $ heat stack-delete 1905a541-edfa-4a1f-b6f9-eacdf5f62d85 | ||
| 499 | $ heat stack-list | ||
| 500 | +----+------------+--------------+---------------+ | ||
| 501 | | id | stack_name | stack_status | creation_time | | ||
| 502 | +----+------------+--------------+---------------+ | ||
| 503 | +----+------------+--------------+---------------+ | ||
| 504 | |||
| 505 | $ nova list | ||
| 506 | +----+------+--------+------------+-------------+----------+ | ||
| 507 | | ID | Name | Status | Task State | Power State | Networks | | ||
| 508 | +----+------+--------+------------+-------------+----------+ | ||
| 509 | +----+------+--------+------------+-------------+----------+ | ||
| 510 | |||
| 511 | $ ceilometer alarm-list | ||
| 512 | +----------+------+-------+---------+------------+-----------------+------------------+ | ||
| 513 | | Alarm ID | Name | State | Enabled | Continuous | Alarm condition | Time constraints | | ||
| 514 | +----------+------+-------+---------+------------+-----------------+------------------+ | ||
| 515 | +----------+------+-------+---------+------------+-----------------+------------------+ | ||
| 516 | |||
| 517 | |||
| 518 | Heat Built-In Unit Tests | ||
| 519 | ========================= | ||
| 520 | |||
| 521 | This section describes how to run Heat and Heat client built-in unit | ||
| 522 | tests which are located at: | ||
| 523 | |||
| 524 | /usr/lib64/python2.7/site-packages/heat/tests | ||
| 525 | /usr/lib64/python2.7/site-packages/heatclient/tests | ||
| 526 | |||
| 527 | To run the Heat built-in unit tests with nosetests: | ||
| 528 | |||
| 529 | $ cd /usr/lib64/python2.7/site-packages/heat | ||
| 530 | $ nosetests -v tests | ||
| 531 | |||
| 532 | ---------------------------------------------------------------------- | ||
| 533 | Ran 1550 tests in 45.770s | ||
| 534 | |||
| 535 | OK | ||
| 536 | |||
| 537 | To run the heatclient built-in unit tests with the subunit.run test runner: | ||
| 538 | |||
| 539 | $ cd /usr/lib64/python2.7/site-packages/heatclient/tests | ||
| 540 | $ python -v -m subunit.run discover . | subunit2pyunit | ||
| 541 | |||
| 542 | ---------------------------------------------------------------------- | ||
| 543 | Ran 272 tests in 3.368s | ||
| 544 | |||
| 545 | OK | ||
| 546 | |||
| 547 | Please note that the Python test runner subunit.run is used instead of | ||
| 548 | nosetests, as nosetests is not compatible with the testscenarios test | ||
| 549 | framework used by some of the heatclient unit tests. | ||
diff --git a/meta-openstack/Documentation/README.swift b/meta-openstack/Documentation/README.swift new file mode 100644 index 0000000..8429a2e --- /dev/null +++ b/meta-openstack/Documentation/README.swift | |||
| @@ -0,0 +1,444 @@ | |||
| 1 | Summary | ||
| 2 | ======= | ||
| 3 | |||
| 4 | This document is not intended to describe how Swift works in general, | ||
| 5 | but rather it highlights the details of how the Swift cluster is | ||
| 6 | set up and how OpenStack is configured to allow various OpenStack | ||
| 7 | components to interact with Swift. | ||
| 8 | |||
| 9 | |||
| 10 | Swift Overview | ||
| 11 | ============== | ||
| 12 | |||
| 13 | OpenStack Swift is an object storage service. Clients can access Swift | ||
| 14 | objects through RESTful APIs. Swift objects are grouped into | ||
| 15 | "containers", and containers are grouped into "accounts". Each account | ||
| 16 | or container in a Swift cluster is represented by a SQLite database which | ||
| 17 | contains information related to that account or container. Each | ||
| 18 | Swift object is simply the user data that was uploaded. | ||
| 19 | |||
| 20 | A Swift cluster can include a massive number of storage devices. Any Swift | ||
| 21 | storage device can be configured to store container databases and/or | ||
| 22 | account databases and/or objects. Each Swift account database is | ||
| 23 | identified by the tuple (account name). Each Swift container database is | ||
| 24 | identified by the tuple (account name, container name). Each Swift object | ||
| 25 | is identified by the tuple (account name, container name, object name). | ||
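
   This hierarchy maps directly onto the REST API paths: an object is addressed
   as follows (the host, port and AUTH_ prefix depend on the proxy-server and
   keystone configuration):

       http://<proxy-host>:<port>/v1/AUTH_<tenant_id>/<container>/<object>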
| 26 | |||
| 27 | Swift uses a static "ring" mapping algorithm to identify which storage device | ||
| 28 | hosts a given account database, container database, or object (similar to the | ||
| 29 | way Ceph uses the CRUSH algorithm to identify which OSD hosts a Ceph object). | ||
| 30 | A Swift cluster has 3 rings (account ring, container ring, and object ring) | ||
| 31 | used for finding the location of an account database, container database, or | ||
| 32 | object file respectively. | ||
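
   On a node that has the ring files (e.g. the Controller), the ring lookup can
   be inspected with the swift-get-nodes tool, which prints the partition and the
   devices that would host a given item, for example (names are placeholders):

       swift-get-nodes /etc/swift/object.ring.gz AUTH_<tenant_id> <container> <object>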
| 33 | |||
| 34 | Swift service includes the following core services: proxy-server which | ||
| 35 | provides the RESTful APIs for Swift clients to access; account-server | ||
| 36 | which manages accounts; container-server which manages containers; | ||
| 37 | and object-server which manages objects. | ||
| 38 | |||
| 39 | |||
| 40 | Swift Cluster Setup | ||
| 41 | =================== | ||
| 42 | |||
| 43 | The default Swift cluster is set up to have the following: | ||
| 44 | |||
| 45 | * All main Swift services, including proxy-server, account-server, | ||
| 46 | container-server, and object-server, run on the Controller node. | ||
| 47 | * 3 zones, in which each zone has only 1 storage device. | ||
| 48 | The underlying block devices for these 3 storage devices are loopback | ||
| 49 | devices. The size of the backing loopback files is 2Gbytes by default | ||
| 50 | and can be changed at compile time through the variable SWIFT_BACKING_FILE_SIZE. | ||
| 51 | Setting SWIFT_BACKING_FILE_SIZE="0G" disables the loopback devices | ||
| 52 | and uses the local filesystem as the Swift storage device. | ||
| 53 | * All 3 Swift rings have 2^12 partitions and 2 replicas (see the example ring-builder commands below). | ||
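
   For reference, rings with those parameters would be built with commands along
   the following lines (illustrative only; the actual ring construction is done
   by the setup scripts, and device names, IPs, ports and weights are placeholders):

       # part_power=12 (2^12 partitions), replicas=2, min_part_hours=1
       swift-ring-builder object.builder create 12 2 1
       swift-ring-builder object.builder add z1-127.0.0.1:6000/loop0 100
       swift-ring-builder object.builder add z2-127.0.0.1:6000/loop1 100
       swift-ring-builder object.builder add z3-127.0.0.1:6000/loop2 100
       swift-ring-builder object.builder rebalance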
| 54 | |||
| 55 | The default Swift cluster is mainly for demonstration purposes. One might | ||
| 56 | want a different Swift cluster setup than this one (e.g. using | ||
| 57 | real hardware block devices instead of loopback devices). | ||
| 58 | |||
| 59 | The script /etc/swift/swift_setup.sh is provided to ease the task of setting | ||
| 60 | up a complicated Swift cluster. It reads a cluster config file, which describes | ||
| 61 | what storage devices are included in what rings, and constructs the cluster. | ||
| 62 | |||
| 63 | For details of how to use swift_setup.sh and the format of the Swift cluster | ||
| 64 | config file, please refer to the script's help: | ||
| 65 | |||
| 66 | $ swift_setup.sh | ||
| 67 | |||
| 68 | |||
| 69 | Glance and Swift | ||
| 70 | ================ | ||
| 71 | |||
| 72 | Glance can store images into the Swift cluster when "default_store = swift" | ||
| 73 | is set in /etc/glance/glance-api.conf. | ||
| 74 | |||
| 75 | By default "default_store" has the value "file", which tells Glance to | ||
| 76 | store images into the local filesystem. The "default_store" value can be set | ||
| 77 | at compile time through the variable GLANCE_DEFAULT_STORE. | ||
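
   For example, switching a running system from the local file store over to
   Swift is a matter of editing that one option (followed by a restart of the
   glance-api service):

       # /etc/glance/glance-api.conf
       default_store = swift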
| 78 | |||
| 79 | The following configuration options in /etc/glance/glance-api.conf affect | ||
| 80 | how Glance interacts with the Swift cluster: | ||
| 81 | |||
| 82 | swift_store_auth_version = 2 | ||
| 83 | swift_store_auth_address = http://127.0.0.1:5000/v2.0/ | ||
| 84 | swift_store_user = service:glance | ||
| 85 | swift_store_key = password | ||
| 86 | swift_store_container = glance | ||
| 87 | swift_store_create_container_on_put = True | ||
| 88 | swift_store_large_object_size = 5120 | ||
| 89 | swift_store_large_object_chunk_size = 200 | ||
| 90 | swift_enable_snet = False | ||
| 91 | |||
| 92 | With these default settings, the images will be stored under the | ||
| 93 | Swift account corresponding to the "service" tenant ID, in the | ||
| 94 | Swift container "glance". | ||
| 95 | |||
| 96 | |||
| 97 | Cinder Backup and Swift | ||
| 98 | ======================= | ||
| 99 | |||
| 100 | Cinder-backup is able to store volume backups into the Swift | ||
| 101 | cluster with the following command: | ||
| 102 | |||
| 103 | $ cinder backup-create <cinder volume ID> | ||
| 104 | |||
| 105 | where <cinder volume ID> is the ID of an existing Cinder volume, | ||
| 106 | provided the configuration option "backup_driver" in /etc/cinder/cinder.conf | ||
| 107 | is set to "cinder.backup.drivers.swift". | ||
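
   In other words, for the backups to land in Swift, /etc/cinder/cinder.conf
   should contain a line like the following (shown for illustration):

       backup_driver = cinder.backup.drivers.swift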
| 108 | |||
| 109 | Cinder-backup is not able to create a backup for any cinder | ||
| 110 | volume which is backed by NFS or GlusterFS. This is because the NFS | ||
| 111 | and Gluster cinder-volume backend drivers do not support the | ||
| 112 | backup functionality. In other words, only cinder volumes | ||
| 113 | backed by lvm-iscsi and ceph-rbd can be backed up | ||
| 114 | by cinder-backup. | ||
| 115 | |||
| 116 | The following configuration options in /etc/cinder/cinder.conf affect | ||
| 117 | how cinder-backup interacts with the Swift cluster: | ||
| 118 | |||
| 119 | backup_swift_url=http://controller:8888/v1/AUTH_ | ||
| 120 | backup_swift_auth=per_user | ||
| 121 | #backup_swift_user=<None> | ||
| 122 | #backup_swift_key=<None> | ||
| 123 | backup_swift_container=cinder-backups | ||
| 124 | backup_swift_object_size=52428800 | ||
| 125 | backup_swift_retry_attempts=3 | ||
| 126 | backup_swift_retry_backoff=2 | ||
| 127 | backup_compression_algorithm=zlib | ||
| 128 | |||
| 129 | With these default settings, the tenant ID of the keystone user that | ||
| 130 | runs the "cinder backup-create" command will be used as the Swift | ||
| 131 | account name, and the volume backups will be saved into the Swift | ||
| 132 | container named "cinder-backups". | ||
| 133 | |||
| 134 | |||
| 135 | Build Configuration Options | ||
| 136 | =========================== | ||
| 137 | |||
| 138 | * Controller build config options: | ||
| 139 | |||
| 140 | --enable-board=intel-xeon-core \ | ||
| 141 | --enable-rootfs=ovp-openstack-controller \ | ||
| 142 | --enable-addons=wr-ovp-openstack,wr-ovp \ | ||
| 143 | --with-template=feature/openstack-tests \ | ||
| 144 | --with-layer=meta-cloud-services/meta-openstack-swift-deploy | ||
| 145 | |||
| 146 | * Compute build config options: | ||
| 147 | |||
| 148 | --enable-board=intel-xeon-core \ | ||
| 149 | --enable-rootfs=ovp-openstack-compute \ | ||
| 150 | --enable-addons=wr-ovp-openstack,wr-ovp | ||
| 151 | |||
| 152 | |||
| 153 | Test Steps | ||
| 154 | ========== | ||
| 155 | |||
| 156 | This section describes test steps and expected results to demonstrate that | ||
| 157 | Swift is integrated properly into OpenStack. | ||
| 158 | |||
| 159 | Please note: the following commands are carried out on the Controller node, unless | ||
| 160 | otherwise explicitly indicated. | ||
| 161 | |||
| 162 | $ Start Controller and Compute node | ||
| 163 | $ . /etc/nova/openrc | ||
| 164 | $ dd if=/dev/urandom of=50M_c1.org bs=1M count=50 | ||
| 165 | $ dd if=/dev/urandom of=50M_c2.org bs=1M count=50 | ||
| 166 | $ dd if=/dev/urandom of=100M_c2.org bs=1M count=100 | ||
| 167 | $ swift upload c1 50M_c1.org && swift upload c2 50M_c2.org && swift upload c2 100M_c2.org | ||
| 168 | $ swift list | ||
| 169 | |||
| 170 | c1 | ||
| 171 | c2 | ||
| 172 | |||
| 173 | $ swift stat c1 | ||
| 174 | |||
| 175 | Account: AUTH_4ebc0e00338f405c9267866c6b984e71 | ||
| 176 | Container: c1 | ||
| 177 | Objects: 1 | ||
| 178 | Bytes: 52428800 | ||
| 179 | Read ACL: | ||
| 180 | Write ACL: | ||
| 181 | Sync To: | ||
| 182 | Sync Key: | ||
| 183 | Accept-Ranges: bytes | ||
| 184 | X-Timestamp: 1396457818.76909 | ||
| 185 | X-Trans-Id: tx0564472425ad47128b378-00533c41bb | ||
| 186 | Content-Type: text/plain; charset=utf-8 | ||
| 187 | |||
| 188 | (Should see there is 1 object) | ||
| 189 | |||
| 190 | $ swift stat c2 | ||
| 191 | |||
| 193 | Account: AUTH_4ebc0e00338f405c9267866c6b984e71 | ||
| 194 | Container: c2 | ||
| 195 | Objects: 2 | ||
| 196 | Bytes: 157286400 | ||
| 197 | Read ACL: | ||
| 198 | Write ACL: | ||
| 199 | Sync To: | ||
| 200 | Sync Key: | ||
| 201 | Accept-Ranges: bytes | ||
| 202 | X-Timestamp: 1396457826.26262 | ||
| 203 | X-Trans-Id: tx312934d494a44bbe96a00-00533c41cd | ||
| 204 | Content-Type: text/plain; charset=utf-8 | ||
| 205 | |||
| 206 | (Should see there are 2 objects) | ||
| 207 | |||
| 208 | $ swift stat c3 | ||
| 209 | |||
| 210 | Container 'c3' not found | ||
| 211 | |||
| 212 | $ mv 50M_c1.org 50M_c1.save && mv 50M_c2.org 50M_c2.save && mv 100M_c2.org 100M_c2.save | ||
| 213 | $ swift download c1 50M_c1.org && swift download c2 50M_c2.org && swift download c2 100M_c2.org | ||
| 214 | $ md5sum 50M_c1.save 50M_c1.org && md5sum 50M_c2.save 50M_c2.org && md5sum 100M_c2.save 100M_c2.org | ||
| 215 | |||
| 216 | a8f7d671e35fcf20b87425fb39bdaf05 50M_c1.save | ||
| 217 | a8f7d671e35fcf20b87425fb39bdaf05 50M_c1.org | ||
| 218 | 353233ed20418dbdeeb2fad91ba4c86a 50M_c2.save | ||
| 219 | 353233ed20418dbdeeb2fad91ba4c86a 50M_c2.org | ||
| 220 | 3b7cbb444c2ba93819db69ab3584f4bd 100M_c2.save | ||
| 221 | 3b7cbb444c2ba93819db69ab3584f4bd 100M_c2.org | ||
| 222 | |||
| 223 | (The md5sums of each pair "zzz.save" and "zzz.org" files must be the same) | ||
| 224 | |||
| 225 | $ swift delete c1 50M_c1.org && swift delete c2 50M_c2.org | ||
| 226 | $ swift stat c1 | ||
| 227 | |||
| 228 | Account: AUTH_4ebc0e00338f405c9267866c6b984e71 | ||
| 229 | Container: c1 | ||
| 230 | Objects: 0 | ||
| 231 | Bytes: 0 | ||
| 232 | Read ACL: | ||
| 233 | Write ACL: | ||
| 234 | Sync To: | ||
| 235 | Sync Key: | ||
| 236 | Accept-Ranges: bytes | ||
| 237 | X-Timestamp: 1396457818.77284 | ||
| 238 | X-Trans-Id: tx58e4bb6d06b84276b8d7f-00533c424c | ||
| 239 | Content-Type: text/plain; charset=utf-8 | ||
| 240 | |||
| 241 | (Should see there is no object) | ||
| 242 | |||
| 243 | $ swift stat c2 | ||
| 244 | |||
| 245 | Account: AUTH_4ebc0e00338f405c9267866c6b984e71 | ||
| 246 | Container: c2 | ||
| 247 | Objects: 1 | ||
| 248 | Bytes: 104857600 | ||
| 249 | Read ACL: | ||
| 250 | Write ACL: | ||
| 251 | Sync To: | ||
| 252 | Sync Key: | ||
| 253 | Accept-Ranges: bytes | ||
| 254 | X-Timestamp: 1396457826.25872 | ||
| 255 | X-Trans-Id: txdae8ab2adf4f47a4931ba-00533c425b | ||
| 256 | Content-Type: text/plain; charset=utf-8 | ||
| 257 | |||
| 258 | (Should see there is 1 object) | ||
| 259 | |||
| 260 | $ swift upload c1 50M_c1.org && swift upload c2 50M_c2.org | ||
| 261 | $ rm *.org | ||
| 262 | $ swift download c1 50M_c1.org && swift download c2 50M_c2.org && swift download c2 100M_c2.org | ||
| 263 | $ md5sum 50M_c1.save 50M_c1.org && md5sum 50M_c2.save 50M_c2.org && md5sum 100M_c2.save 100M_c2.org | ||
| 264 | |||
| 265 | 31147c186e7dd2a4305026d3d6282861 50M_c1.save | ||
| 266 | 31147c186e7dd2a4305026d3d6282861 50M_c1.org | ||
| 267 | b9043aacef436dfbb96c39499d54b850 50M_c2.save | ||
| 268 | b9043aacef436dfbb96c39499d54b850 50M_c2.org | ||
| 269 | b1a1b6b34a6852cdd51cd487a01192cc 100M_c2.save | ||
| 270 | b1a1b6b34a6852cdd51cd487a01192cc 100M_c2.org | ||
| 271 | |||
| 272 | (The md5sums of each pair "zzz.save" and "zzz.org" files must be the same) | ||
| 273 | |||
| 274 | $ neutron net-create mynetwork | ||
| 275 | $ glance image-create --name myfirstimage --is-public true --container-format bare --disk-format qcow2 --file /root/images/cirros-0.3.0-x86_64-disk.img | ||
| 276 | $ glance image-list | ||
| 277 | |||
| 278 | +--------------------------------------+--------------+-------------+------------------+---------+--------+ | ||
| 279 | | ID | Name | Disk Format | Container Format | Size | Status | | ||
| 280 | +--------------------------------------+--------------+-------------+------------------+---------+--------+ | ||
| 281 | | 79f52103-5b22-4aa5-8159-2d146b82b0b2 | myfirstimage | qcow2 | bare | 9761280 | active | | ||
| 282 | +--------------------------------------+--------------+-------------+------------------+---------+--------+ | ||
| 283 | |||
| 284 | $ export OS_TENANT_NAME=service && export OS_USERNAME=glance | ||
| 285 | $ swift list glance | ||
| 286 | |||
| 287 | 79f52103-5b22-4aa5-8159-2d146b82b0b2 | ||
| 288 | |||
| 289 | (The object name in the "glance" container must be the same as glance image id just created) | ||
| 290 | |||
| 291 | $ swift download glance 79f52103-5b22-4aa5-8159-2d146b82b0b2 | ||
| 292 | $ md5sum 79f52103-5b22-4aa5-8159-2d146b82b0b2 /root/images/cirros-0.3.0-x86_64-disk.img | ||
| 293 | |||
| 294 | 50bdc35edb03a38d91b1b071afb20a3c 79f52103-5b22-4aa5-8159-2d146b82b0b2 | ||
| 295 | 50bdc35edb03a38d91b1b071afb20a3c /root/images/cirros-0.3.0-x86_64-disk.img | ||
| 296 | |||
| 297 | (The md5sum of these 2 files must be the same) | ||
| 298 | |||
| 299 | $ ls /etc/glance/images/ | ||
| 300 | (This should be empty) | ||
| 301 | |||
| 302 | $ . /etc/nova/openrc | ||
| 303 | $ nova boot --image myfirstimage --flavor 1 myinstance | ||
| 304 | $ nova list | ||
| 305 | |||
| 306 | +--------------------------------------+------------+--------+------------+-------------+----------+ | ||
| 307 | | ID | Name | Status | Task State | Power State | Networks | | ||
| 308 | +--------------------------------------+------------+--------+------------+-------------+----------+ | ||
| 309 | | bc9662a0-0dac-4bff-a7fb-b820957c55a4 | myinstance | ACTIVE | - | Running | | | ||
| 310 | +--------------------------------------+------------+--------+------------+-------------+----------+ | ||
| 311 | |||
| 312 | $ From the dashboard, log into the VM console to make sure the VM is really running | ||
| 313 | $ nova delete bc9662a0-0dac-4bff-a7fb-b820957c55a4 | ||
| 314 | $ glance image-delete 79f52103-5b22-4aa5-8159-2d146b82b0b2 | ||
| 315 | $ export OS_TENANT_NAME=service && export OS_USERNAME=glance | ||
| 316 | $ swift list glance | ||
| 317 | |||
| 318 | (Should be empty) | ||
| 319 | |||
| 320 | $ . /etc/nova/openrc && . /etc/cinder/add-cinder-volume-types.sh | ||
| 321 | $ cinder create --volume_type lvm_iscsi --display_name=lvm_vol_1 1 | ||
| 322 | $ cinder list | ||
| 323 | |||
| 324 | +--------------------------------------+-----------+--------------+------+-------------+----------+-------------+ | ||
| 325 | | ID | Status | Display Name | Size | Volume Type | Bootable | Attached to | | ||
| 326 | +--------------------------------------+-----------+--------------+------+-------------+----------+-------------+ | ||
| 327 | | 3e388ae0-2e20-42a2-80da-3f9f366cbaed | available | lvm_vol_1 | 1 | lvm_iscsi | false | | | ||
| 328 | +--------------------------------------+-----------+--------------+------+-------------+----------+-------------+ | ||
| 329 | |||
| 330 | $ cinder backup-create 3e388ae0-2e20-42a2-80da-3f9f366cbaed | ||
| 331 | $ cinder backup-list | ||
| 332 | |||
| 333 | +--------------------------------------+--------------------------------------+-----------+------+------+--------------+----------------+ | ||
| 334 | | ID | Volume ID | Status | Name | Size | Object Count | Container | | ||
| 335 | +--------------------------------------+--------------------------------------+-----------+------+------+--------------+----------------+ | ||
| 336 | | 1444f5d0-3a87-40bc-a7a7-f3c672768b6a | 3e388ae0-2e20-42a2-80da-3f9f366cbaed | available | None | 1 | 22 | cinder-backups | | ||
| 337 | +--------------------------------------+--------------------------------------+-----------+------+------+--------------+----------------+ | ||
| 338 | |||
| 339 | $ swift list | ||
| 340 | |||
| 341 | c1 | ||
| 342 | c2 | ||
| 343 | cinder-backups | ||
| 344 | |||
| 345 | (Should see the new Swift container "cinder-backups") | ||
| 346 | |||
| 347 | $ swift list cinder-backups | ||
| 348 | |||
| 349 | volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00001 | ||
| 350 | volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00002 | ||
| 351 | volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00003 | ||
| 352 | volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00004 | ||
| 353 | volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00005 | ||
| 354 | volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00006 | ||
| 355 | volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00007 | ||
| 356 | volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00008 | ||
| 357 | volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00009 | ||
| 358 | volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00010 | ||
| 359 | volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00011 | ||
| 360 | volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00012 | ||
| 361 | volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00013 | ||
| 362 | volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00014 | ||
| 363 | volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00015 | ||
| 364 | volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00016 | ||
| 365 | volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00017 | ||
| 366 | volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00018 | ||
| 367 | volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00019 | ||
| 368 | volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00020 | ||
| 369 | volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00021 | ||
| 370 | volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a_metadata | ||
| 371 | |||
| 372 | $ reboot | ||
| 373 | $ . /etc/nova/openrc && swift list cinder-backups | ||
| 374 | |||
| 375 | volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00001 | ||
| 376 | volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00002 | ||
| 377 | volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00003 | ||
| 378 | volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00004 | ||
| 379 | volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00005 | ||
| 380 | volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00006 | ||
| 381 | volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00007 | ||
| 382 | volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00008 | ||
| 383 | volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00009 | ||
| 384 | volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00010 | ||
| 385 | volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00011 | ||
| 386 | volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00012 | ||
| 387 | volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00013 | ||
| 388 | volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00014 | ||
| 389 | volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00015 | ||
| 390 | volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00016 | ||
| 391 | volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00017 | ||
| 392 | volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00018 | ||
| 393 | volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00019 | ||
| 394 | volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00020 | ||
| 395 | volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00021 | ||
| 396 | volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a_metadata | ||
| 397 | |||
| 398 | $ cinder backup-delete 1444f5d0-3a87-40bc-a7a7-f3c672768b6a | ||
| 399 | $ swift list cinder-backups | ||
| 400 | |||
| 401 | (Should be empty) | ||
| 402 | |||
| 403 | |||
| 404 | Swift Built-In Unit Tests | ||
| 405 | ========================= | ||
| 406 | |||
| 407 | This section describes how to run Swift and Swift client built-in unit | ||
| 408 | tests which are located at: | ||
| 409 | |||
| 410 | /usr/lib64/python2.7/site-packages/swift/test | ||
| 411 | /usr/lib64/python2.7/site-packages/swiftclient/tests | ||
| 412 | |||
| 413 | with the nosetests test runner. Please make sure that the test account | ||
| 414 | settings in /etc/swift/test.conf reflect the keystone user account | ||
| 415 | settings. | ||
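
   As a rough illustration only (check the shipped /etc/swift/test.conf for the
   exact option names; the values below are placeholders), the functional test
   section typically points at the keystone endpoint and a test account:

       [func_test]
       auth_version = 2
       auth_host = 127.0.0.1
       auth_port = 5000
       auth_prefix = /v2.0/
       account = service
       username = glance
       password = password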
| 416 | |||
| 417 | To run the Swift built-in unit tests with nosetests: | ||
| 418 | |||
| 419 | $ To accommodate the small size of the loopback devices, | ||
| 420 | modify /etc/swift/swift.conf to have "max_file_size = 5242880" | ||
| 421 | $ /etc/init.d/swift restart | ||
| 422 | $ cd /usr/lib64/python2.7/site-packages/swift | ||
| 423 | $ nosetests -v test | ||
| 424 | |||
| 425 | Ran 1633 tests in 272.930s | ||
| 426 | |||
| 427 | FAILED (errors=5, failures=4) | ||
| 428 | |||
| 429 | To run the swiftclient built-in unit tests with nosetests: | ||
| 430 | |||
| 431 | $ cd /usr/lib64/python2.7/site-packages/swiftclient | ||
| 432 | $ nosetests -v tests | ||
| 433 | |||
| 434 | Ran 108 tests in 2.277s | ||
| 435 | |||
| 436 | FAILED (failures=1) | ||
| 437 | |||
| 438 | |||
| 439 | References | ||
| 440 | ========== | ||
| 441 | |||
| 442 | * http://docs.openstack.org/developer/swift/deployment_guide.html | ||
| 443 | * http://docs.openstack.org/grizzly/openstack-compute/install/yum/content/ch_installing-openstack-object-storage.html | ||
| 444 | * https://swiftstack.com/openstack-swift/architecture/ | ||
diff --git a/meta-openstack/Documentation/testsystem/README b/meta-openstack/Documentation/testsystem/README new file mode 100644 index 0000000..ddbc51d --- /dev/null +++ b/meta-openstack/Documentation/testsystem/README | |||
| @@ -0,0 +1,116 @@ | |||
| 1 | OpenStack: Minimally Viable Test System | ||
| 2 | |||
| 3 | Usage: | ||
| 4 | <script> [config file] [start|stop|restart] | ||
| 5 | |||
| 6 | This test harness creates a virtual network and the specified virtual | ||
| 7 | domains, enabling the user to create a test system for OpenStack. | ||
| 8 | |||
| 9 | Arguments: | ||
| 10 | config file: this configuration file specifies the test system | ||
| 11 | to create, see below for details | ||
| 12 | start|stop|restart: | ||
| 13 | start - starts the specified test system | ||
| 14 | stop - stops the specified test system | ||
| 15 | restart - reboots the specified test system | ||
| 16 | |||
| 17 | Note: On some systems, there may be issues with restart; to work around this, use start with | ||
| 18 | auto_destroy enabled. | ||
| 19 | |||
| 20 | Virtual Network | ||
| 21 | --------------- | ||
| 22 | |||
| 23 | This test harness creates a virtual network (ops_default) using the | ||
| 24 | network specified in the configuration file. | ||
| 25 | e.g. | ||
| 26 | [main] | ||
| 27 | network: 192.168.122.1 | ||
| 28 | |||
| 29 | The script tries to create the virtual network using virbr0, but if this is | ||
| 30 | in use, then it will retry with virbr1, virbr2, etc. until it finds a free | ||
| 31 | bridge or gives up after a number of attempts. | ||
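
   To see which bridge the ops_default network ended up on (and which bridges
   were already taken), the standard libvirt and bridge tools can be used, e.g.:

       virsh net-list --all            # shows the ops_default network and its state
       virsh net-dumpxml ops_default   # shows the bridge name and IP range in use
       brctl show                      # lists the existing bridges (virbr0, virbr1, ...)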
| 32 | |||
| 33 | |||
| 34 | Virtual Domains | ||
| 35 | --------------- | ||
| 36 | |||
| 37 | The script then creates a controller using the specified kernel and disk image | ||
| 38 | e.g. | ||
| 39 | [controller] | ||
| 40 | kernel: /root/images/bzImage | ||
| 41 | disk: /root/images/controller.ext3 | ||
| 42 | |||
| 43 | The script then creates compute nodes by using a section header starting with | ||
| 44 | the string "compute" along with kernel(s)/disk image(s). | ||
| 45 | |||
| 46 | e.g. | ||
| 47 | [compute0] | ||
| 48 | kernel: /root/images/bzImage | ||
| 49 | disk: /root/images/compute1.ext3 | ||
| 50 | |||
| 51 | [compute1] | ||
| 52 | kernel: /root/images/bzImage | ||
| 53 | disk: /root/images/compute2.ext3 | ||
| 54 | |||
| 55 | |||
| 56 | IP address assignments | ||
| 57 | ---------------------- | ||
| 58 | There is an auto_assign_ip variable under the section main. | ||
| 59 | ie | ||
| 60 | [main] | ||
| 61 | auto_assign_ip: False | ||
| 62 | |||
| 63 | This value, if True, causes ip=dhcp to be passed in the kernel's | ||
| 64 | boot parameters. | ||
| 65 | |||
| 66 | If the value is False, then each controller and compute section will be | ||
| 67 | required to have a value defined for ip. | ||
| 68 | |||
| 69 | ie. | ||
| 70 | [compute0] | ||
| 71 | kernel: /root/images/bzImage | ||
| 72 | disk: /root/images/compute1.ext3 | ||
| 73 | ip: 192.168.122.10 | ||
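
   For reference, static IP configuration on the kernel command line follows the
   syntax documented in the kernel's Documentation/filesystems/nfs/nfsroot.txt
   (how launch.py actually builds the boot parameters from this value can be
   checked in the script itself):

       ip=<client-ip>:<server-ip>:<gw-ip>:<netmask>:<hostname>:<device>:<autoconf>
       e.g. ip=192.168.122.10::192.168.122.1:255.255.255.0:compute0:eth0:off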
| 74 | |||
| 75 | |||
| 76 | Other | ||
| 77 | ----- | ||
| 78 | |||
| 79 | The configuration file also specifies the emulator to be used | ||
| 80 | for the domains: | ||
| 81 | e.g. | ||
| 82 | [main] | ||
| 83 | emulator: /usr/bin/qemu-system-x86_64 | ||
| 84 | |||
| 85 | The configuration file also specifies an auto_destroy option | ||
| 86 | e.g. | ||
| 87 | [main] | ||
| 88 | auto_destroy: True | ||
| 89 | |||
| 90 | If auto_destroy is enabled (True) and the required controller/compute domains | ||
| 91 | are already running, then the script will automatically destroy the running domains. | ||
| 92 | If it is disabled (False), then the script will display a message that the | ||
| 93 | domain is active and exit. (auto_destroy is only used when starting systems.) | ||
| 94 | |||
| 95 | |||
| 96 | Example configuration file | ||
| 97 | -------------------------- | ||
| 98 | [main] | ||
| 99 | network: 192.168.122.1 | ||
| 100 | emulator: /usr/bin/qemu-system-x86_64 | ||
| 101 | auto_destroy: True | ||
| 102 | auto_assign_ip: True | ||
| 103 | |||
| 104 | [controller] | ||
| 105 | kernel: /root/images/bzImage | ||
| 106 | disk: /root/images/controller.ext3 | ||
| 107 | |||
| 108 | [compute0] | ||
| 109 | kernel: /root/images/bzImage | ||
| 110 | disk: /root/images/compute1.ext3 | ||
| 111 | |||
| 112 | [compute1] | ||
| 113 | kernel: /root/images/bzImage | ||
| 114 | disk: /root/images/compute2.ext3 | ||
| 115 | ------------------------------------------------- | ||
| 116 | |||
diff --git a/meta-openstack/Documentation/testsystem/README.multi-compute b/meta-openstack/Documentation/testsystem/README.multi-compute new file mode 100644 index 0000000..f7e6b4e --- /dev/null +++ b/meta-openstack/Documentation/testsystem/README.multi-compute | |||
| @@ -0,0 +1,150 @@ | |||
| 1 | 0. Configure the configuration file with auto_destroy enabled; if DHCP is not available, disable | ||
| 2 | auto_assign_ip and specify the IP addresses of the controller and compute nodes as configured during the build | ||
| 3 | |||
| 4 | e.g. if DHCP is supported | ||
| 5 | |||
| 6 | [main] | ||
| 7 | network: 192.168.7.1 | ||
| 8 | emulator: /usr/bin/qemu-system-x86_64 | ||
| 9 | auto_destroy: True | ||
| 10 | auto_assign_ip: True | ||
| 11 | |||
| 12 | [controller] | ||
| 13 | kernel: $HOME/images/bzImage | ||
| 14 | disk: $HOME/images/controller.ext3 | ||
| 15 | |||
| 16 | [computeA] | ||
| 17 | kernel: $HOME/images/bzImage | ||
| 18 | disk: $HOME/images/compute.ext3 | ||
| 19 | |||
| 20 | [computeB] | ||
| 21 | kernel: $HOME/images/bzImage | ||
| 22 | disk: $HOME/images/computeB.ext3 | ||
| 23 | |||
| 24 | Start instances: | ||
| 25 | <launch.py> <config file> start | ||
| 26 | |||
| 27 | e.g. if the IP addresses are specified at build time | ||
| 28 | |||
| 29 | For the controller: | ||
| 30 | |||
| 31 | The build time IP in layers/meta-cloud-services - | ||
| 32 | layers//meta-cloud-services/meta-openstack-controller-deploy/classes/hosts.bbclass | ||
| 33 | layers//meta-cloud-services/meta-openstack-compute-deploy/classes/hosts.bbclass | ||
| 34 | CONTROLLER_IP ?= "128.224.149.121" | ||
| 35 | |||
| 36 | Use the controller's ip in the test system's configuration file: | ||
| 37 | |||
| 38 | [controller] | ||
| 39 | ip: 128.224.149.121 | ||
| 40 | |||
| 41 | For each compute node, use the controller's IP for the bitbake build's build-time CONTROLLER_IP and | ||
| 42 | set the compute node's COMPUTE_IP accordingly | ||
| 43 | |||
| 44 | computeA | ||
| 45 | The build time IP in layers/meta-cloud-services - | ||
| 46 | layers//meta-cloud-services/meta-openstack-controller-deploy/classes/hosts.bbclass | ||
| 47 | layers//meta-cloud-services/meta-openstack-compute-deploy/classes/hosts.bbclass | ||
| 48 | CONTROLLER_IP ?= "128.224.149.121" | ||
| 49 | COMPUTE_IP ?= "128.224.149.122" | ||
| 50 | |||
| 51 | computeB | ||
| 52 | The build time IP in layers/meta-cloud-services - | ||
| 53 | layers//meta-cloud-services/meta-openstack-controller-deploy/classes/hosts.bbclass | ||
| 54 | layers//meta-cloud-services/meta-openstack-compute-deploy/classes/hosts.bbclass | ||
| 55 | CONTROLLER_IP ?= "128.224.149.121" | ||
| 56 | COMPUTE_IP ?= "128.224.149.123" | ||
| 57 | |||
| 58 | And in the test system's configuration file: | ||
| 59 | |||
| 60 | [controller] | ||
| 61 | ip: 128.224.149.121 | ||
| 62 | |||
| 63 | [computeA] | ||
| 64 | ip: 128.224.149.122 | ||
| 65 | |||
| 66 | [computeB] | ||
| 67 | ip: 128.224.149.123 | ||
| 68 | |||
| 69 | Start instances: | ||
| 70 | <launch.py> <config file> start | ||
| 71 | |||
| 72 | |||
| 73 | 1. /etc/hosts - adjust for hostnames | ||
| 74 | |||
| 75 | On controller/compute nodes, configure your DNS or /etc/hosts and ensure | ||
| 76 | it is consistent across all hosts. Make sure that the three hosts can | ||
| 77 | perform name resolution with each other. As a test, use the ping command | ||
| 78 | to ping each host from one another. | ||
| 79 | |||
| 80 | $ ping HostA | ||
| 81 | $ ping HostB | ||
| 82 | $ ping HostC | ||
| 83 | |||
| 84 | e.g. /etc/hosts | ||
| 85 | 127.0.0.1 localhost.localdomain localhost | ||
| 86 | |||
| 87 | 192.168.7.2 controller | ||
| 88 | 192.168.7.4 computeA | ||
| 89 | 192.168.7.6 computeB | ||
| 90 | |||
| 91 | 2. Configure NFS host on controller | ||
| 92 | |||
| 93 | /etc/nova/instances needs to be a shared directory for migration to work. | ||
| 94 | |||
| 95 | Configure the controller to export this as an NFS export. | ||
| 96 | |||
| 97 | cat >> /etc/exports << 'EOF' | ||
| 98 | /etc/nova/instances *(rw,no_subtree_check,insecure,no_root_squash) | ||
| 99 | EOF | ||
| 100 | exportfs -a | ||
| 101 | |||
| 102 | On compute nodes: | ||
| 103 | mount controller:/etc/nova/instances /etc/nova/instances/ | ||
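
   To make this mount persistent across compute node reboots, an equivalent
   entry can be added to /etc/fstab on each compute node, for example:

       controller:/etc/nova/instances  /etc/nova/instances  nfs  defaults  0 0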
| 104 | |||
| 105 | |||
| 106 | 3. Make sure the controller can see the compute nodes | ||
| 107 | |||
| 108 | nova service-list | ||
| 109 | |||
| 110 | root@controller:/etc/nova/instances# nova service-list | ||
| 111 | +------------------+------------+----------+---------+-------+----------------------------+-----------------+ | ||
| 112 | | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason | | ||
| 113 | +------------------+------------+----------+---------+-------+----------------------------+-----------------+ | ||
| 114 | | nova-compute | computeA | nova | enabled | up | 2014-05-16T17:14:24.617143 | - | | ||
| 115 | | nova-compute | computeB | nova | enabled | up | 2014-05-16T17:14:25.228089 | - | | ||
| 116 | | nova-conductor | controller | internal | enabled | up | 2014-05-16T17:14:26.932751 | - | | ||
| 117 | | nova-scheduler | controller | internal | enabled | up | 2014-05-16T17:14:26.984656 | - | | ||
| 118 | | nova-consoleauth | controller | internal | enabled | up | 2014-05-16T17:14:27.007947 | - | | ||
| 119 | | nova-cert | controller | internal | enabled | up | 2014-05-16T17:14:27.030701 | - | | ||
| 120 | | nova-network | controller | internal | enabled | up | 2014-05-16T17:14:27.031366 | - | | ||
| 121 | +------------------+------------+----------+---------+-------+----------------------------+-----------------+ | ||
| 122 | |||
| 123 | root@controller:~# nova hypervisor-list | ||
| 124 | +----+---------------------+ | ||
| 125 | | ID | Hypervisor hostname | | ||
| 126 | +----+---------------------+ | ||
| 127 | | 1 | computeA | | ||
| 128 | | 2 | computeB | | ||
| 129 | +----+---------------------+ | ||
| 130 | |||
| 131 | Log in to horizon and select Hypervisors; both nodes will be seen | ||
| 132 | |||
| 133 | |||
| 134 | 4. Boot up a guest from the controller: | ||
| 135 | |||
| 136 | On controller: | ||
| 137 | glance image-create --name myFirstImage --is-public true --container-format bare --disk-format qcow2 --file images/cirros-0.3.0-x86_64-disk.img | ||
| 138 | neutron net-create mynetwork | ||
| 139 | nova boot --image myFirstImage --flavor 1 myinstance | ||
| 140 | |||
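| | Before attempting a migration, it may be worth confirming that the instance has reached the ACTIVE state and noting which compute node is hosting it (a sketch using the standard nova client): | ||
| | |||
| | nova list | ||
| | nova show myinstance | grep OS-EXT-SRV-ATTR:host | ||
| | |||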
| 141 | 5. Perform a migration from Horizon | ||
| 142 | |||
| 143 | From Horizon, go to Instances; myinstance should be listed there. | ||
| 144 | Wait until myinstance is in the Running state. | ||
| 145 | |||
| 146 | In Actions, select: Migrate Instance | ||
| 147 | Select: Confirm migrate/resize when prompted | ||
| 148 | |||
| 149 | myinstance is now running on the other compute node (computeB in this case) | ||
| 150 | |||
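| | The same cold migration can also be triggered from the controller's command line (a sketch; these are standard python-novaclient commands and may need adjusting to your environment): | ||
| | |||
| | nova migrate myinstance | ||
| | nova resize-confirm myinstance | ||
| | nova show myinstance | grep hypervisor_hostname | ||
| | |||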
diff --git a/meta-openstack/Documentation/testsystem/README.tests b/meta-openstack/Documentation/testsystem/README.tests new file mode 100644 index 0000000..924f883 --- /dev/null +++ b/meta-openstack/Documentation/testsystem/README.tests | |||
| @@ -0,0 +1,9 @@ | |||
| 1 | This test system enables the user to run a variety of tests against the launched nodes. | ||
| 2 | |||
| 3 | Multiple compute node testing can be performed as per: README.multi-compute | ||
| 4 | |||
| 5 | Other tests are described in Documentation.ND, for example: | ||
| 6 | |||
| 7 | Ceph testing: Documentation.ND/README.ceph-openstack | ||
| 8 | Swift/cinder: Documentation.ND/README.swift | ||
| 9 | |||
diff --git a/meta-openstack/Documentation/testsystem/launch.py b/meta-openstack/Documentation/testsystem/launch.py new file mode 100755 index 0000000..e177773 --- /dev/null +++ b/meta-openstack/Documentation/testsystem/launch.py | |||
| @@ -0,0 +1,304 @@ | |||
| 1 | #!/usr/bin/env python | ||
| 2 | |||
| 3 | import sys | ||
| 4 | import grp | ||
| 5 | import pwd | ||
| 6 | import os | ||
| 7 | import libvirt | ||
| 8 | import ConfigParser | ||
| 9 | import subprocess | ||
| 10 | import shutil | ||
| 11 | import distutils.spawn | ||
| 12 | import platform | ||
| 13 | |||
| 14 | # this does a very basic test to see if the required packages | ||
| 15 | # are installed, extend list as required | ||
| 16 | def checkPackages(): | ||
| 17 | sys_ok = True | ||
| 18 | check_apps = [ "virsh", "qemu-system-x86_64", "libvirtd" ] | ||
| 19 | for app in check_apps: | ||
| 20 | if distutils.spawn.find_executable(app) is None: | ||
| 21 | print( "Missing: " + app) | ||
| 22 | sys_ok = False | ||
| 23 | if not sys_ok: | ||
| 24 | print("The required libvirt/qemu packages are missing...") | ||
| 25 | distro = platform.dist()[0] | ||
| 26 | if distro == "debian" or distro == "Ubuntu": | ||
| 27 | print( "This appears to be a Debian/Ubuntu distribution\nPlease install " + | ||
| 28 | "packages like libvirt-bin, qemu-system-x86,..." ) | ||
| 29 | elif distro == "redhat" or distro == "fedora": | ||
| 30 | print( "This appears to be a Redhat/Fedora distribution\nPlease install " + | ||
| 31 | "packages like libvirt-client, libvirt-daemon, qemu-system-x86, ..." ) | ||
| 32 | exit(1) | ||
| 33 | return | ||
| 34 | |||
| 35 | def networkInterfaces(): | ||
| 36 | ifaces = [] | ||
| 37 | for line in open('/proc/net/dev', 'r'): | ||
| 38 | if_info = line.split(":", 1) | ||
| 39 | if len(if_info) > 1: | ||
| 40 | ifaces.append( if_info[0].strip() ) | ||
| 41 | return ifaces | ||
| 42 | |||
| 43 | def destroyNetwork(conn, network_name): | ||
| 44 | networks = conn.listNetworks() + conn.listDefinedNetworks() | ||
| 45 | if network_name in networks: | ||
| 46 | try: | ||
| 47 | nw = conn.networkLookupByName(network_name) | ||
| 48 | if nw.isActive(): | ||
| 49 | nw.destroy() | ||
| 50 | nw.undefine() | ||
| 51 | except: | ||
| 52 | print( "Failed to destroy network: %s" % network_name ) | ||
| 53 | exit( 1 ) | ||
| 54 | |||
| 55 | def restartDomain(conn, domain_name): | ||
| 56 | try: | ||
| 57 | domain = conn.lookupByName(domain_name) | ||
| 58 | except: | ||
| 59 | print( "restartDomain: Warning domain " + domain_name + " doesn't exist." ) | ||
| 60 | return | ||
| 61 | if domain.isActive(): | ||
| 62 | domain.reboot() | ||
| 63 | |||
| 64 | def destroyDomain(conn, auto_destroy, domain_name): | ||
| 65 | try: | ||
| 66 | domain = conn.lookupByName(domain_name) | ||
| 67 | except: | ||
| 68 | return | ||
| 69 | if domain.isActive(): | ||
| 70 | if auto_destroy: | ||
| 71 | print( "Auto destroy enabled, destroying old instance of domain %s" % domain_name ) | ||
| 72 | domain.destroy() | ||
| 73 | else: | ||
| 74 | print( "Domain %s is active, abort..." % domain_name ) | ||
| 75 | print( "To stop: virsh -c %s destroy %s " % ( uri , domain_name ) ) | ||
| 76 | exit(0) | ||
| 77 | domain.undefine() | ||
| 78 | |||
| 79 | def startDomain(conn, auto_destroy, domain_name, xml_desc): | ||
| 80 | print( "Starting %s...\n%s" % ( domain_name, xml_desc ) ) | ||
| 81 | destroyDomain(conn, auto_destroy, domain_name) | ||
| 82 | try: | ||
| 83 | conn.defineXML(xml_desc) | ||
| 84 | domain = conn.lookupByName(domain_name) | ||
| 85 | domain.create() | ||
| 86 | print( "Starting domain %s..." % domain_name ) | ||
| 87 | print( "To connect to the console: virsh -c %s console %s" % ( uri, domain_name ) ) | ||
| 88 | print( "To stop: virsh -c %s destroy %s" % ( uri, domain_name ) ) | ||
| 89 | except Exception as e: | ||
| 90 | print( e ) | ||
| 91 | exit(1) | ||
| 92 | |||
| 93 | def make_nw_spec(network_name, bridge_nw_interface, network, auto_assign_ip): | ||
| 94 | spec = '<network>' | ||
| 95 | spec += '<name>' + network_name + '</name>' | ||
| 96 | spec += '<bridge name="' + bridge_nw_interface + '"/>' | ||
| 97 | spec += '<forward/>' | ||
| 98 | spec += '<ip address="' + network + '" netmask="255.255.255.0">' | ||
| 99 | if auto_assign_ip: | ||
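| | # let libvirt's dnsmasq hand out DHCP addresses .2 - .254 on the /24 | ||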
| 100 | nw_parts = network.split('.') | ||
| 101 | nw_parts[-1] = "2" | ||
| 102 | start_dhcp = '.'.join(nw_parts) | ||
| 103 | nw_parts[-1] = "254" | ||
| 104 | end_dhcp = '.'.join(nw_parts) | ||
| 105 | spec += '<dhcp>' | ||
| 106 | spec += '<range start="' + start_dhcp + '" end="' + end_dhcp + '"/>' | ||
| 107 | spec += '</dhcp>' | ||
| 108 | spec += '</ip>' | ||
| 109 | spec += '</network>' | ||
| 110 | return spec | ||
| 111 | |||
| 112 | def make_spec(name, network, kernel, disk, bridge_nw_interface, emulator, auto_assign_ip, ip): | ||
| 113 | if not os.path.exists(kernel): | ||
| 114 | print( "Kernel image %s does not exist!" % kernel ) | ||
| 115 | exit(1) | ||
| 116 | if not os.path.exists(disk): | ||
| 117 | print( "Disk %s does not exist!" % disk ) | ||
| 118 | exit(1) | ||
| 119 | if auto_assign_ip: | ||
| 120 | ip_spec = 'dhcp' | ||
| 121 | else: | ||
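| | # static kernel command line; ip= format is client-ip:server-ip:gw-ip:netmask:hostname:device:autoconf | ||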
| 122 | ip_spec = ip + '::' + network + ':255.255.255.0:' + name + ':eth0:off' | ||
| 123 | spec = '<domain type=\'kvm\'>' | ||
| 124 | spec += ' <name>' + name + '</name>' | ||
| 125 | spec += ' <memory>4096000</memory>' | ||
| 126 | spec += ' <currentMemory>4096000</currentMemory>' | ||
| 127 | spec += ' <vcpu cpuset=\'1\'>1</vcpu>' | ||
| 128 | spec += ' <cpu>' | ||
| 129 | spec += ' <model>kvm64</model>' | ||
| 130 | spec += ' </cpu>' | ||
| 131 | spec += ' <os>' | ||
| 132 | spec += ' <type arch=\'x86_64\' machine=\'pc\'>hvm</type>' | ||
| 133 | spec += ' <kernel>' + kernel + '</kernel>' | ||
| 134 | spec += ' <boot dev=\'hd\'/>' | ||
| 135 | spec += ' <cmdline>root=/dev/vda rw console=ttyS0 ip=' + ip_spec + '</cmdline>' | ||
| 136 | spec += ' </os>' | ||
| 137 | spec += ' <features>' | ||
| 138 | spec += ' <acpi/>' | ||
| 139 | spec += ' <apic/>' | ||
| 140 | spec += ' <pae/>' | ||
| 141 | spec += ' </features>' | ||
| 142 | spec += ' <clock offset=\'utc\'/>' | ||
| 143 | spec += ' <on_poweroff>destroy</on_poweroff>' | ||
| 144 | # spec += ' <on_reboot>destroy</on_reboot>' | ||
| 145 | spec += ' <on_crash>destroy</on_crash>' | ||
| 146 | spec += ' <devices>' | ||
| 147 | spec += ' <emulator>' + emulator + '</emulator>' | ||
| 148 | spec += ' <disk type=\'file\' device=\'disk\'>' | ||
| 149 | spec += ' <source file=\'' + disk + '\'/>' | ||
| 150 | spec += ' <target dev=\'vda\' bus=\'virtio\'/>' | ||
| 151 | spec += ' </disk>' | ||
| 152 | spec += ' <interface type=\'bridge\'>' | ||
| 153 | spec += ' <source bridge=\'' + bridge_nw_interface + '\'/>' | ||
| 154 | spec += ' <model type=\'virtio\' />' | ||
| 155 | spec += ' </interface>' | ||
| 156 | spec += ' <serial type=\'pty\'>' | ||
| 157 | spec += ' <target port=\'0\'/>' | ||
| 158 | spec += ' <alias name=\'serial0\'/>' | ||
| 159 | spec += ' </serial>' | ||
| 160 | spec += ' <console type=\'pty\'>' | ||
| 161 | spec += ' <target type=\'serial\' port=\'0\'/>' | ||
| 162 | spec += ' <alias name=\'serial0\'/>' | ||
| 163 | spec += ' </console>' | ||
| 164 | spec += ' </devices>' | ||
| 165 | spec += '</domain>' | ||
| 166 | return spec | ||
| 167 | |||
| 168 | def getConfig(config, section, key): | ||
| 169 | try: | ||
| 170 | return os.path.expandvars(config.get(section, key)) | ||
| 171 | except: | ||
| 172 | print( "Configuration file error! Missing item (section: %s, key: %s)" % ( section, key ) ) | ||
| 173 | exit(1) | ||
| 174 | |||
| 175 | # does the user have access to libvirt? | ||
| 176 | eligible_groups = [ "libvirt", "libvirtd" ] | ||
| 177 | eligible_user = False | ||
| 178 | euid = os.geteuid() | ||
| 179 | if euid == 0: | ||
| 180 | eligible_user = True | ||
| 181 | else: | ||
| 182 | username = pwd.getpwuid(euid)[0] | ||
| 183 | groups = [g.gr_name for g in grp.getgrall() if username in g.gr_mem] | ||
| 184 | for v in eligible_groups: | ||
| 185 | if v in groups: | ||
| 186 | eligible_user = True | ||
| 187 | |||
| 188 | checkPackages() | ||
| 189 | |||
| 190 | if not eligible_user: | ||
| 191 | sys.stderr.write("You need to be the 'root' user or in group [" + '|'.join(eligible_groups) + "] to run this script.\n") | ||
| 192 | exit(1) | ||
| 193 | |||
| 194 | if len(sys.argv) != 3: | ||
| 195 | sys.stderr.write("Usage: "+sys.argv[0]+" [config file] [start|stop|restart]\n") | ||
| 196 | sys.exit(1) | ||
| 197 | |||
| 198 | if not os.path.exists(sys.argv[1]): | ||
| 199 | sys.stderr.write("Error: config file \"" + sys.argv[1] + "\" was not found!\n") | ||
| 200 | sys.exit(1) | ||
| 201 | |||
| 202 | command = sys.argv[2] | ||
| 203 | command_options = ["start", "stop", "restart"] | ||
| 204 | if command not in command_options: | ||
| 205 | sys.stderr.write("Usage: "+sys.argv[0]+" [config file] [start|stop|restart]\n") | ||
| 206 | sys.exit(1) | ||
| 207 | |||
| 208 | Config = ConfigParser.ConfigParser() | ||
| 209 | Config.read(sys.argv[1]) | ||
| 210 | |||
| 211 | network_addr = getConfig(Config, "main", "network") | ||
| 212 | getConfig(Config, "main", "auto_destroy") | ||
| 213 | auto_destroy = Config.getboolean("main", "auto_destroy") | ||
| 214 | getConfig(Config, "main", "auto_assign_ip") | ||
| 215 | auto_assign_ip = Config.getboolean("main", "auto_assign_ip") | ||
| 216 | network_name = 'ops_default' | ||
| 217 | uri = 'qemu:///system' | ||
| 218 | |||
| 219 | # Connect to libvirt | ||
| 220 | conn = libvirt.open(uri) | ||
| 221 | if conn is None: | ||
| 222 | print( "Failed to open connection to the hypervisor" ) | ||
| 223 | exit(1) | ||
| 224 | |||
| 225 | if command == "start": | ||
| 226 | destroyNetwork(conn, network_name) | ||
| 227 | |||
| 228 | # Find a free bridge device name: try virbr0, virbr1, etc. until a name | ||
| 229 | # that is not already listed in /proc/net/dev is found. | ||
| 230 | cnt = 0 | ||
| 231 | ifaces = networkInterfaces() | ||
| 232 | found_virbr = False | ||
| 233 | while not found_virbr: | ||
| 234 | if cnt > 254: | ||
| 235 | print( "Giving up on looking for a free virbr network interface!" ) | ||
| 236 | exit(1) | ||
| 237 | bridge_nw_interface = 'virbr' + str(cnt) | ||
| 238 | if bridge_nw_interface not in ifaces: | ||
| 239 | print( "bridge_nw_interface: %s" % bridge_nw_interface ) | ||
| 240 | network_spec = make_nw_spec(network_name, bridge_nw_interface, network_addr, auto_assign_ip) | ||
| 241 | try: | ||
| 242 | conn.networkDefineXML(network_spec) | ||
| 243 | nw = conn.networkLookupByName(network_name) | ||
| 244 | nw.create() | ||
| 245 | found_virbr = True | ||
| 246 | except: | ||
| 247 | print( "Network Name: %s" % network_name ) | ||
| 248 | destroyNetwork( conn, network_name ) | ||
| 249 | print( "Error creating network interface" ) | ||
| 250 | cnt += 1 | ||
| 251 | else: | ||
| 252 | # verify network exists | ||
| 253 | try: | ||
| 254 | nw = conn.networkLookupByName(network_name) | ||
| 255 | except: | ||
| 256 | print( "Error! Virtual network " + network_name + " is not defined!" ) | ||
| 257 | exit(1) | ||
| 258 | if not nw.isActive(): | ||
| 259 | print( "Error! Virtual network " + network_name + " is not running!" ) | ||
| 260 | exit(1) | ||
| 261 | |||
| 262 | emulator = getConfig(Config, "main", "emulator") | ||
| 263 | if not os.path.exists(emulator): | ||
| 264 | print( "Emulator %s does not exist!" % emulator ) | ||
| 265 | exit(1) | ||
| 266 | |||
| 267 | controller_name = 'controller' | ||
| 268 | if command == "start": | ||
| 269 | # Define the controller xml | ||
| 270 | controller_kernel = getConfig(Config, "controller", "kernel") | ||
| 271 | controller_disk = getConfig(Config, "controller", "disk") | ||
| 272 | |||
| 273 | controller_ip = None | ||
| 274 | if not auto_assign_ip: | ||
| 275 | controller_ip = getConfig(Config, "controller", "ip") | ||
| 276 | controller_spec = make_spec(controller_name, network_addr, controller_kernel, | ||
| 277 | controller_disk, bridge_nw_interface, emulator, | ||
| 278 | auto_assign_ip, controller_ip) | ||
| 279 | |||
| 280 | # Now that network is setup let's actually run the virtual images | ||
| 281 | startDomain(conn, auto_destroy, controller_name, controller_spec) | ||
| 282 | elif command == "stop": | ||
| 283 | destroyDomain(conn, True, controller_name) | ||
| 284 | elif command == "restart": | ||
| 285 | restartDomain(conn, controller_name) | ||
| 286 | |||
| 287 | for i in Config.sections(): | ||
| 288 | if i.startswith("compute"): | ||
| 289 | if command == "start": | ||
| 290 | # Define the compute xml | ||
| 291 | kernel = getConfig(Config, i, "kernel") | ||
| 292 | disk = getConfig(Config, i, "disk") | ||
| 293 | compute_ip = None | ||
| 294 | if not auto_assign_ip: | ||
| 295 | compute_ip = getConfig(Config, i, "ip") | ||
| 296 | spec = make_spec(i, network_addr, kernel, disk, bridge_nw_interface, | ||
| 297 | emulator, auto_assign_ip, compute_ip) | ||
| 298 | startDomain(conn, auto_destroy, i, spec) | ||
| 299 | elif command == "stop": | ||
| 300 | destroyDomain(conn, True, i) | ||
| 301 | elif command == "restart": | ||
| 302 | restartDomain(conn, i) | ||
| 303 | |||
| 304 | conn.close() | ||
diff --git a/meta-openstack/Documentation/testsystem/sample.cfg b/meta-openstack/Documentation/testsystem/sample.cfg new file mode 100644 index 0000000..60154cf --- /dev/null +++ b/meta-openstack/Documentation/testsystem/sample.cfg | |||
| @@ -0,0 +1,15 @@ | |||
| 1 | [main] | ||
| 2 | network: 192.168.122.1 | ||
| 3 | emulator: /usr/bin/qemu-system-x86_64 | ||
| 4 | auto_destroy: True | ||
| 5 | auto_assign_ip: False | ||
| 6 | |||
| 7 | [controller] | ||
| 8 | kernel: /root/images/bzImage | ||
| 9 | disk: /root/images/controller.ext3 | ||
| 10 | ip: 192.168.122.2 | ||
| 11 | |||
| 12 | [compute0] | ||
| 13 | kernel: /root/images/bzImage | ||
| 14 | disk: /root/images/compute.ext3 | ||
| 15 | ip: 192.168.122.3 | ||
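| | |||
| | With this sample configuration, a typical session might look like the following (a sketch; the qemu:///system URI is the default used by launch.py): | ||
| | |||
| | ./launch.py sample.cfg start | ||
| | virsh -c qemu:///system console controller | ||
| | ./launch.py sample.cfg stop | ||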
