From fb1d6f23fa01c0217ed3f6778d8033dd0030db2a Mon Sep 17 00:00:00 2001 From: Amy Fong Date: Wed, 21 May 2014 14:35:15 -0400 Subject: Testing documentation Add documentation for testing swift, ceph, heat. Create a script and instructions on a script that launches a controller and a specified number of compute nodes. Signed-off-by: Amy Fong --- meta-openstack/Documentation/README.ceph-openstack | 592 +++++++++++++++++++++ meta-openstack/Documentation/README.heat | 549 +++++++++++++++++++ meta-openstack/Documentation/README.swift | 444 ++++++++++++++++ meta-openstack/Documentation/testsystem/README | 116 ++++ .../Documentation/testsystem/README.multi-compute | 150 ++++++ .../Documentation/testsystem/README.tests | 9 + meta-openstack/Documentation/testsystem/launch.py | 304 +++++++++++ meta-openstack/Documentation/testsystem/sample.cfg | 15 + 8 files changed, 2179 insertions(+) create mode 100644 meta-openstack/Documentation/README.ceph-openstack create mode 100644 meta-openstack/Documentation/README.heat create mode 100644 meta-openstack/Documentation/README.swift create mode 100644 meta-openstack/Documentation/testsystem/README create mode 100644 meta-openstack/Documentation/testsystem/README.multi-compute create mode 100644 meta-openstack/Documentation/testsystem/README.tests create mode 100755 meta-openstack/Documentation/testsystem/launch.py create mode 100644 meta-openstack/Documentation/testsystem/sample.cfg (limited to 'meta-openstack/Documentation') diff --git a/meta-openstack/Documentation/README.ceph-openstack b/meta-openstack/Documentation/README.ceph-openstack new file mode 100644 index 0000000..8f11f2d --- /dev/null +++ b/meta-openstack/Documentation/README.ceph-openstack @@ -0,0 +1,592 @@ +Summary +======= + +This document is not intended to provide detail of how Ceph in general works +(please refer to addons/wr-ovp/layers/ovp/Documentation/README_ceph.pdf +document for such a detail), but rather, it highlights the details of how +Ceph cluster is setup and OpenStack is configured to allow various Openstack +components interact with Ceph. + + +Ceph Cluster Setup +================== + +By default Ceph cluster is setup to have the followings: + +* Ceph monitor daemon running on Controller node +* Two Ceph OSD (osd.0 and osd.1) daemons running on Controller node. + The underneath block devices for these 2 OSDs are loopback block devices. + The size of the backing up loopback files is 10Gbytes by default and can + be changed at compile time through variable CEPH_BACKING_FILE_SIZE. +* No Ceph MSD support +* Cephx authentication is enabled + +This is done through script /etc/init.d/ceph-setup. This script is +run when system is booting up. Therefore, Ceph cluster should ready +for use after booting, and no additional manual step is required. + +With current Ceph setup, only Controller node is able to run Ceph +commands which requires Ceph admin installed (file /etc/ceph/ceph.client.admin.keyring +exists). If it's desired to have node other than Controller (e.g. Compute node) +to be able to run Ceph command, then keyring for at a particular Ceph client +must be created and transfered from Controller node to that node. There is a +convenient tool for doing so in a secure manner. On Controller node, run: + + $ /etc/ceph/ceph_xfer_keyring.sh -h + $ /etc/ceph/ceph_xfer_keyring.sh [remote location] + +The way Ceph cluster is setup is mainly for demonstration purpose. One might +wants to have a different Ceph cluster setup than this setup (e.g. 
using real
+hardware block devices instead of loopback devices).
+
+
+Setup Ceph's Pool and Client Users To Be Used By OpenStack
+==========================================================
+
+After the Ceph cluster is up and running, some specific Ceph pools and
+Ceph client users must be created in order for OpenStack components
+to be able to use Ceph:
+
+* The OpenStack cinder-volume component requires the "cinder-volumes" pool
+  and the "cinder-volume" client.
+* The OpenStack cinder-backup component requires the "cinder-backups" pool
+  and the "cinder-backup" client.
+* The OpenStack Glance component requires the "images" pool and the
+  "glance" client.
+* The OpenStack nova-compute component requires the "cinder-volumes" pool
+  and the "cinder-volume" client.
+
+After the system is booted up, all of these required pools and clients
+are created automatically by the script /etc/ceph/ceph-openstack-setup.sh.
+
+
+Cinder-volume and Ceph
+======================
+
+Cinder-volume supports multiple backends, including Ceph RBD. When a volume
+is created with "--volume_type cephrbd"
+
+  $ cinder create --volume_type cephrbd --display_name cephrbd_vol_1 1
+
+where the "cephrbd" type can be created as follows:
+
+  $ cinder type-create cephrbd
+  $ cinder type-key cephrbd set volume_backend_name=RBD_CEPH
+
+then the Cinder-volume Ceph backend driver will store the volume into the
+Ceph pool named "cinder-volumes".
+
+On the Controller node, to list what is in the "cinder-volumes" pool:
+
+  $ rbd -p cinder-volumes ls
+  volume-b5294a0b-5c92-4b2f-807e-f49c5bc1896b
+
+The following configuration options in /etc/cinder/cinder.conf affect
+how cinder-volume interacts with the Ceph cluster through the
+cinder-volume Ceph backend:
+
+  volume_driver=cinder.volume.drivers.rbd.RBDDriver
+  rbd_pool=cinder-volumes
+  rbd_ceph_conf=/etc/ceph/ceph.conf
+  rbd_flatten_volume_from_snapshot=false
+  rbd_max_clone_depth=5
+  rbd_user=cinder-volume
+  volume_backend_name=RBD_CEPH
+
+
+Cinder-backup and Ceph
+======================
+
+Cinder-backup is able to store volume backups into the Ceph
+"cinder-backups" pool with the following command:
+
+  $ cinder backup-create <volume id>
+
+where <volume id> is the ID of an existing Cinder volume.
+
+Cinder-backup is not able to create a backup of any Cinder
+volume backed by NFS or GlusterFS, because the NFS and
+GlusterFS cinder-volume backend drivers do not support the
+backup functionality. In other words, only Cinder volumes
+backed by lvm-iscsi and ceph-rbd can be backed up by
+cinder-backup.
+
+On the Controller node, to list what is in the "cinder-backups" pool:
+
+  $ rbd -p "cinder-backups" ls
+
+The following configuration options in /etc/cinder/cinder.conf affect
+how cinder-backup interacts with the Ceph cluster:
+
+  backup_driver=cinder.backup.drivers.ceph
+  backup_ceph_conf=/etc/ceph/ceph.conf
+  backup_ceph_user=cinder-backup
+  backup_ceph_chunk_size=134217728
+  backup_ceph_pool=cinder-backups
+  backup_ceph_stripe_unit=0
+  backup_ceph_stripe_count=0
+  restore_discard_excess_bytes=true
+
+
+Glance and Ceph
+===============
+
+Glance can store images into the Ceph pool "images" when "default_store = rbd"
+is set in /etc/glance/glance-api.conf.
+
+By default "default_store" has the value "file", which tells Glance to
+store images in the local filesystem. The "default_store" value can be
+set at compile time through the variable GLANCE_DEFAULT_STORE.
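+
+For example, to switch a running Glance over to the Ceph backend and confirm
+that a newly uploaded image really lands in the "images" pool, the following
+minimal sketch can be used (the image name "cephimage" and the edit-and-restart
+sequence are only an example of the steps described in the test section below):
+
+  $ sed -i 's/default_store = file/default_store = rbd/' /etc/glance/glance-api.conf
+  $ /etc/init.d/glance-api restart
+  $ glance image-create --name cephimage --is-public true \
+      --container-format bare --disk-format qcow2 \
+      --file /root/images/cirros-0.3.0-x86_64-disk.img
+  $ rbd -p images ls
+
+  (the ID of the newly created image should be listed in the "images" pool)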
+
+The following configuration options in /etc/glance/glance-api.conf affect
+how Glance interacts with the Ceph cluster:
+
+  default_store = rbd
+  rbd_store_ceph_conf = /etc/ceph/ceph.conf
+  rbd_store_user = glance
+  rbd_store_pool = images
+  rbd_store_chunk_size = 8
+
+
+Nova-compute and Ceph
+=====================
+
+On the Controller node, when a VM is booted with the command:
+
+  $ nova boot --image ...
+
+then on the Compute node, if "libvirt_images_type = default" (in
+/etc/nova/nova.conf), nova-compute downloads the specified Glance image from
+the Controller node and stores it locally (on the Compute node). If
+"libvirt_images_type = rbd", then nova-compute imports the specified Glance
+image into the "cinder-volumes" Ceph pool.
+
+By default, "libvirt_images_type" has the value "default", and it can be
+changed at compile time through the variable LIBVIRT_IMAGES_TYPE.
+
+Under the hood, nova-compute uses libvirt to spawn VMs. If a Ceph Cinder
+volume is provided while booting a VM with the "--block-device" option, then
+a libvirt secret must be provided to nova-compute so that libvirt can
+authenticate with Cephx before it is able to mount the Ceph block device.
+This libvirt secret is provided through the "rbd_secret_uuid" option in
+/etc/nova/nova.conf.
+
+Therefore, on the Compute node, if "libvirt_images_type = rbd" then the
+following are required:
+
+  * /etc/ceph/ceph.client.cinder-volume.keyring must exist. This file
+    contains the Ceph client.cinder-volume key, so that nova-compute can run
+    the restricted Ceph commands allowed for the cinder-volume Ceph client.
+    For example:
+
+      $ rbd -p cinder-backups ls --id cinder-volume
+
+    should fail, as the "cinder-volume" Ceph client has no permission to
+    touch the "cinder-backups" Ceph pool, while the following should work:
+
+      $ rbd -p cinder-volumes ls --id cinder-volume
+
+  * A libvirt secret which stores the Ceph client.cinder-volume key must
+    also exist.
+
+Right now, due to security and the booting order of the Controller and
+Compute nodes, these 2 requirements are not automatically satisfied at boot
+time.
+
+A script (/etc/ceph/set_nova_compute_cephx.sh) is provided to ease the task
+of transferring ceph.client.cinder-volume.keyring from the Controller node to
+the Compute node, and to create the libvirt secret. On the Controller node,
+manually run (just one time):
+
+  $ /etc/ceph/set_nova_compute_cephx.sh cinder-volume root@compute
+
+The following configuration options in /etc/nova/nova.conf affect
+how nova-compute interacts with the Ceph cluster:
+
+  libvirt_images_type = rbd
+  libvirt_images_rbd_pool=cinder-volumes
+  libvirt_images_rbd_ceph_conf=/etc/ceph/ceph.conf
+  rbd_user=cinder-volume
+  rbd_secret_uuid=<libvirt secret uuid>
+
+
+Ceph High Availability
+======================
+
+Ceph, by design, has strong high availability features. Each Ceph object
+can be replicated and stored on multiple independent physical disks
+(each controlled by a Ceph OSD daemon), which can be in the same
+machine or in separate machines.
+
+The number of replicas is configurable. In general, the more replicas,
+the higher the Ceph availability; the downside is that more physical
+disk storage space is required.
+
+Also, in general, the replicas of each Ceph object should be stored on
+different machines so that if one machine goes down, the other replicas
+are still available.
+
+The default OpenStack Ceph cluster is configured with 2 replicas.
+However, these 2 replicas are stored on the same machine (the
+Controller node).
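+
+As a reference, the per-pool replication level can be inspected and, if
+desired, changed at run time with the standard Ceph commands below (a hedged
+sketch; "cinder-volumes" is just one of the pools created above):
+
+  $ ceph osd pool get cinder-volumes size
+  (prints the current number of replicas kept for that pool)
+
+  $ ceph osd pool set cinder-volumes size 2
+  (changes the replica count; note that with both OSDs backed by loopback
+   files on the Controller node, this does not provide real redundancy)
+
+  $ ceph osd tree
+  (shows which host each OSD runs on, useful for checking where the
+   replicas can actually be placed)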
+ + +Build Configuration Options +=========================== + +* Controller build config options: + + --enable-board=intel-xeon-core \ + --enable-rootfs=ovp-openstack-controller \ + --enable-kernel=preempt-rt \ + --enable-addons=wr-ovp-openstack,wr-ovp \ + --with-template=feature/openstack-tests \ + --enable-unsupported-config=yes + +* Compute build config options: + + --enable-board=intel-xeon-core \ + --enable-rootfs=ovp-openstack-compute \ + --enable-kernel=preempt-rt \ + --enable-addons=wr-ovp-openstack,wr-ovp \ + --enable-unsupported-config=yes + + +Testing Commands and Expected Results +===================================== + +This section describes test steps and expected results to demonstrate that +Ceph is integrated properly into OpenStack + +Please note: the following commands are carried on Controller node, unless +otherwise explicitly indicated. + + $ Start Controller and Compute node in hardware targets + + $ ps aux | grep ceph + +root 2986 0.0 0.0 1059856 22320 ? Sl 02:50 0:08 /usr/bin/ceph-mon -i controller --pid-file /var/run/ceph/mon.controller.pid -c /etc/ceph/ceph.conf +root 3410 0.0 0.2 3578292 153144 ? Ssl 02:50 0:36 /usr/bin/ceph-osd -i 0 --pid-file /var/run/ceph/osd.0.pid -c /etc/ceph/ceph.conf +root 3808 0.0 0.0 3289468 34428 ? Ssl 02:50 0:36 /usr/bin/ceph-osd -i 1 --pid-file /var/run/ceph/osd.1.pid -c /etc/ceph/ceph.conf + + $ ceph osd lspools + +0 data,1 metadata,2 rbd,3 cinder-volumes,4 cinder-backups,5 images, + + $ neutron net-create mynetwork + $ neutron net-list + ++--------------------------------------+-----------+---------+ +| id | name | subnets | ++--------------------------------------+-----------+---------+ +| 15157fda-0940-4eba-853d-52338ace3362 | mynetwork | | ++--------------------------------------+-----------+---------+ + + $ glance image-create --name myfirstimage --is-public true --container-format bare --disk-format qcow2 --file /root/images/cirros-0.3.0-x86_64-disk.img + $ nova boot --image myfirstimage --flavor 1 myinstance + $ nova list + ++--------------------------------------+------------+--------+------------+-------------+----------+ +| ID | Name | Status | Task State | Power State | Networks | ++--------------------------------------+------------+--------+------------+-------------+----------+ +| 26c2af98-dc78-465b-a6c2-bb52188d2b42 | myinstance | ACTIVE | - | Running | | ++--------------------------------------+------------+--------+------------+-------------+----------+ + + $ nova delete 26c2af98-dc78-465b-a6c2-bb52188d2b42 + ++----+------+--------+------------+-------------+----------+ +| ID | Name | Status | Task State | Power State | Networks | ++----+------+--------+------------+-------------+----------+ ++----+------+--------+------------+-------------+----------+ + + $ Modify /etc/glance/glance-api.conf, + to change "default_store = file" to "default_store = rbd", + $ /etc/init.d/glance-api restart + + $ /etc/cinder/add-cinder-volume-types.sh + $ cinder extra-specs-list + ++--------------------------------------+-----------+------------------------------------------+ +| ID | Name | extra_specs | ++--------------------------------------+-----------+------------------------------------------+ +| 4cb4ae4a-600a-45fb-9332-aa72371c5985 | lvm_iscsi | {u'volume_backend_name': u'LVM_iSCSI'} | +| 83b3ee5f-a6f6-4fea-aeef-815169ee83b9 | glusterfs | {u'volume_backend_name': u'GlusterFS'} | +| c1570914-a53a-44e4-8654-fbd960130b8e | cephrbd | {u'volume_backend_name': u'RBD_CEPH'} | +| d38811d4-741a-4a68-afe3-fb5892160d7c | nfs | 
{u'volume_backend_name': u'Generic_NFS'} | ++--------------------------------------+-----------+------------------------------------------+ + + $ glance image-create --name mysecondimage --is-public true --container-format bare --disk-format qcow2 --file /root/images/cirros-0.3.0-x86_64-disk.img + $ glance image-list + ++--------------------------------------+---------------+-------------+------------------+---------+--------+ +| ID | Name | Disk Format | Container Format | Size | Status | ++--------------------------------------+---------------+-------------+------------------+---------+--------+ +| bec1580e-2475-4d1d-8d02-cca53732d17b | myfirstimage | qcow2 | bare | 9761280 | active | +| a223e5f7-a4b5-4239-96ed-a242db2a150a | mysecondimage | qcow2 | bare | 9761280 | active | ++--------------------------------------+---------------+-------------+------------------+---------+--------+ + + $ rbd -p images ls + +a223e5f7-a4b5-4239-96ed-a242db2a150a + + $ cinder create --volume_type lvm_iscsi --image-id a223e5f7-a4b5-4239-96ed-a242db2a150a --display_name=lvm_vol_2 1 + $ cinder create --volume_type lvm_iscsi --display_name=lvm_vol_1 1 + $ cinder create --volume_type nfs --display_name nfs_vol_1 1 + $ cinder create --volume_type glusterfs --display_name glusterfs_vol_1 1 + $ cinder create --volume_type cephrbd --display_name cephrbd_vol_1 1 + $ cinder list + ++--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+ +| ID | Status | Display Name | Size | Volume Type | Bootable | Attached to | ++--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+ +| 4a32a4fb-b670-4ed7-8dc8-f4e6f9b52db3 | available | lvm_vol_2 | 1 | lvm_iscsi | true | | +| b0805546-be7a-4908-b1d5-21202fe6ea79 | available | glusterfs_vol_1 | 1 | glusterfs | false | | +| c905b9b1-10cb-413b-a949-c86ff3c1c4c6 | available | cephrbd_vol_1 | 1 | cephrbd | false | | +| cea76733-b4ce-4e9a-9bfb-24cc3066070f | available | nfs_vol_1 | 1 | nfs | false | | +| e85a0e2c-6dc4-4182-ab7d-f3950f225ee4 | available | lvm_vol_1 | 1 | lvm_iscsi | false | | ++--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+ + + $ rbd -p cinder-volumes ls + +volume-c905b9b1-10cb-413b-a949-c86ff3c1c4c6 + +(This uuid matches with the one in cinder list above) + + $ cinder backup-create e85a0e2c-6dc4-4182-ab7d-f3950f225ee4 + (create a backup for lvm-iscsi volume) + + $ cinder backup-create cea76733-b4ce-4e9a-9bfb-24cc3066070f + (create a backup for nfs volume, this should fails, as nfs volume + does not support volume backup) + + $ cinder backup-create c905b9b1-10cb-413b-a949-c86ff3c1c4c6 + (create a backup for ceph volume) + + $ cinder backup-create b0805546-be7a-4908-b1d5-21202fe6ea79 + (create a backup for gluster volume, this should fails, as glusterfs volume + does not support volume backup) + + $ cinder backup-list + ++--------------------------------------+--------------------------------------+-----------+------+------+--------------+----------------+ +| ID | Volume ID | Status | Name | Size | Object Count | Container | ++--------------------------------------+--------------------------------------+-----------+------+------+--------------+----------------+ +| 287502a0-aa4d-4065-93e0-f72fd5c239f5 | cea76733-b4ce-4e9a-9bfb-24cc3066070f | error | None | 1 | None | None | +| 2b0ca8a7-a827-4f1c-99d5-4fb7d9f25b5c | e85a0e2c-6dc4-4182-ab7d-f3950f225ee4 | available | None | 1 | None | 
cinder-backups | +| 32d10c06-a742-45d6-9e13-777767ff5545 | c905b9b1-10cb-413b-a949-c86ff3c1c4c6 | available | None | 1 | None | cinder-backups | +| e2bdf21c-d378-49b3-b5e3-b398964b925c | b0805546-be7a-4908-b1d5-21202fe6ea79 | error | None | 1 | None | None | ++--------------------------------------+--------------------------------------+-----------+------+------+--------------+----------------+ + + $ rbd -p cinder-backups ls + +volume-0c3f82ea-b3df-414e-b054-7a4977b7e354.backup.94358fed-6bd9-48f1-b67a-4d2332311a1f +volume-219a3250-50b4-4db0-9a6c-55e53041b65e.backup.base + +(There should be only 2 backup volumes in the ceph cinder-backups pool) + + $ On Compute node: rbd -p cinder-volumes --id cinder-volume ls + +2014-03-17 13:03:54.617373 7f8673602780 -1 monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication +2014-03-17 13:03:54.617378 7f8673602780 0 librados: client.admin initialization error (2) No such file or directory +rbd: couldn't connect to the cluster! + +(This should fails as compute node does not have ceph cinder-volume keyring yet) + + $ /bin/bash /etc/ceph/set_nova_compute_cephx.sh cinder-volume root@compute + +The authenticity of host 'compute (128.224.149.169)' can't be established. +ECDSA key fingerprint is 6a:79:95:fa:d6:56:0d:72:bf:5e:cb:59:e0:64:f6:7a. +Are you sure you want to continue connecting (yes/no)? yes +Warning: Permanently added 'compute,128.224.149.169' (ECDSA) to the list of known hosts. +root@compute's password: +Run virsh secret-define: +Secret 96dfc68f-3528-4bd0-a226-17a0848b05da created + +Run virsh secret-set-value: +Secret value set + + $ On Compute node: rbd -p cinder-volumes --id cinder-volume ls + +volume-c905b9b1-10cb-413b-a949-c86ff3c1c4c6 + + $ On Compute node: to allow nova-compute to save glance image into + ceph (by default it saves at the local filesystem /etc/nova/instances) + + modify /etc/nova/nova.conf to change: + + libvirt_images_type = default + + into + + libvirt_images_type = rbd + + $ On Compute node: /etc/init.d/nova-compute restart + + $ nova boot --flavor 1 \ + --image mysecondimage \ + --block-device source=volume,id=b0805546-be7a-4908-b1d5-21202fe6ea79,dest=volume,shutdown=preserve \ + --block-device source=volume,id=c905b9b1-10cb-413b-a949-c86ff3c1c4c6,dest=volume,shutdown=preserve \ + --block-device source=volume,id=cea76733-b4ce-4e9a-9bfb-24cc3066070f,dest=volume,shutdown=preserve \ + --block-device source=volume,id=e85a0e2c-6dc4-4182-ab7d-f3950f225ee4,dest=volume,shutdown=preserve \ + myinstance + + $ rbd -p cinder-volumes ls + +instance-00000002_disk +volume-219a3250-50b4-4db0-9a6c-55e53041b65e + +(We should see instance-000000xx_disk ceph object) + + $ nova list + ++--------------------------------------+------------+--------+------------+-------------+----------+ +| ID | Name | Status | Task State | Power State | Networks | ++--------------------------------------+------------+--------+------------+-------------+----------+ +| 2a6aeff9-5a35-45a1-b8c4-0730df2a767a | myinstance | ACTIVE | - | Running | | ++--------------------------------------+------------+--------+------------+-------------+----------+ + + $ From dashboard, log into VM console run "cat /proc/partitions" + +Should be able to login and see vdb, vdc, vdd, vdde 1G block devices + + $ nova delete 2a6aeff9-5a35-45a1-b8c4-0730df2a767a + + $ nova list + ++----+------+--------+------------+-------------+----------+ +| ID | Name | Status | Task State | Power State | Networks | ++----+------+--------+------------+-------------+----------+ 
++----+------+--------+------------+-------------+----------+ + + $ rbd -p cinder-volumes ls + +volume-c905b9b1-10cb-413b-a949-c86ff3c1c4c6 + +(The instance instance-00000010_disk should be gone) + + $ nova boot --flavor 1 \ + --image mysecondimage \ + --block-device source=volume,id=b0805546-be7a-4908-b1d5-21202fe6ea79,dest=volume,shutdown=preserve \ + --block-device source=volume,id=c905b9b1-10cb-413b-a949-c86ff3c1c4c6,dest=volume,shutdown=preserve \ + --block-device source=volume,id=cea76733-b4ce-4e9a-9bfb-24cc3066070f,dest=volume,shutdown=preserve \ + --block-device source=volume,id=e85a0e2c-6dc4-4182-ab7d-f3950f225ee4,dest=volume,shutdown=preserve \ + myinstance + + $ nova list + ++--------------------------------------+------------+--------+------------+-------------+----------+ +| ID | Name | Status | Task State | Power State | Networks | ++--------------------------------------+------------+--------+------------+-------------+----------+ +| c1866b5f-f731-4d9c-b855-7f82f3fb314f | myinstance | ACTIVE | - | Running | | ++--------------------------------------+------------+--------+------------+-------------+----------+ + + $ From dashboard, log into VM console run "cat /proc/partitions" + +Should be able to login and see vdb, vdc, vdd, vdde 1G block devices + + $ nova delete c1866b5f-f731-4d9c-b855-7f82f3fb314f + $ nova list + ++----+------+--------+------------+-------------+----------+ +| ID | Name | Status | Task State | Power State | Networks | ++----+------+--------+------------+-------------+----------+ ++----+------+--------+------------+-------------+----------+ + + $ cinder list + ++--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+ +| ID | Status | Display Name | Size | Volume Type | Bootable | Attached to | ++--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+ +| 4a32a4fb-b670-4ed7-8dc8-f4e6f9b52db3 | available | lvm_vol_2 | 1 | lvm_iscsi | true | | +| b0805546-be7a-4908-b1d5-21202fe6ea79 | available | glusterfs_vol_1 | 1 | glusterfs | false | | +| c905b9b1-10cb-413b-a949-c86ff3c1c4c6 | available | cephrbd_vol_1 | 1 | cephrbd | false | | +| cea76733-b4ce-4e9a-9bfb-24cc3066070f | available | nfs_vol_1 | 1 | nfs | false | | +| e85a0e2c-6dc4-4182-ab7d-f3950f225ee4 | available | lvm_vol_1 | 1 | lvm_iscsi | false | | ++--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+ + +(All the volume should be available) + + $ ceph -s + + cluster 9afd3ca8-50e0-4f71-9fc0-e9034d14adf0 + health HEALTH_OK + monmap e1: 1 mons at {controller=128.224.149.168:6789/0}, election epoch 2, quorum 0 controller + osdmap e22: 2 osds: 2 up, 2 in + pgmap v92: 342 pgs, 6 pools, 9532 kB data, 8 objects + 2143 MB used, 18316 MB / 20460 MB avail + 342 active+clean + +(Should see "health HEALTH_OK" which indicates Ceph cluster is all good) + + $ nova boot --flavor 1 \ + --image myfirstimage \ + --block-device source=volume,id=c905b9b1-10cb-413b-a949-c86ff3c1c4c6,dest=volume,shutdown=preserve \ + myinstance + +(Booting VM with only existing CephRBD Cinder volume as block device) + + $ nova list + ++--------------------------------------+------------+--------+------------+-------------+----------+ +| ID | Name | Status | Task State | Power State | Networks | ++--------------------------------------+------------+--------+------------+-------------+----------+ +| 4e984fd0-a0af-435f-84a1-ecd6b24b7256 | myinstance | ACTIVE | - | 
Running | | ++--------------------------------------+------------+--------+------------+-------------+----------+ + + $ From dashboard, log into VM console. Assume that the second partition (CephRbd) + is /dev/vdb + $ On VM, run: "sudo mkfs.ext4 /dev/vdb && sudo mkdir ceph && sudo mount /dev/vdb ceph && sudo chmod 777 -R ceph" + $ On VM, run: "echo "Hello World" > ceph/test.log && dd if=/dev/urandom of=ceph/512M bs=1M count=512 && sync" + $ On VM, run: "cat ceph/test.log && sudo umount ceph" + +Hello World + + $ /etc/init.d/ceph stop osd.0 + $ On VM, run: "sudo mount /dev/vdb ceph && echo "Hello World" > ceph/test_1.log" + $ On VM, run: "cat ceph/test*.log && sudo umount ceph" + +Hello World +Hello World + + $ /etc/init.d/ceph start osd.0 + $ Wait until "ceph -s" shows "health HEALTH_OK" + $ /etc/init.d/ceph stop osd.1 + $ On VM, run: "sudo mount /dev/vdb ceph && echo "Hello World" > ceph/test_2.log" + $ On VM, run: "cat ceph/test*.log && sudo umount ceph" + +Hello World +Hello World +Hello World + + $ /etc/init.d/ceph stop osd.0 +(Both Ceph OSD daemons are down, so no Ceph Cinder volume available) + + $ On VM, run "sudo mount /dev/vdb ceph" +(Stuck mounting forever, as Ceph Cinder volume is not available) + + $ /etc/init.d/ceph start osd.0 + $ /etc/init.d/ceph start osd.1 + $ On VM, the previous mount should pass + $ On VM, run: "sudo mount /dev/vdb ceph && echo "Hello World" > ceph/test_3.log" + $ On VM, run: "cat ceph/test*.log && sudo umount ceph" + +Hello World +Hello World +Hello World +Hello World + + $ nova delete 4e984fd0-a0af-435f-84a1-ecd6b24b7256 + $ cinder list + ++--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+ +| ID | Status | Display Name | Size | Volume Type | Bootable | Attached to | ++--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+ +| 4a32a4fb-b670-4ed7-8dc8-f4e6f9b52db3 | available | lvm_vol_2 | 1 | lvm_iscsi | true | | +| b0805546-be7a-4908-b1d5-21202fe6ea79 | available | glusterfs_vol_1 | 1 | glusterfs | false | | +| c905b9b1-10cb-413b-a949-c86ff3c1c4c6 | available | cephrbd_vol_1 | 1 | cephrbd | false | | +| cea76733-b4ce-4e9a-9bfb-24cc3066070f | available | nfs_vol_1 | 1 | nfs | false | | +| e85a0e2c-6dc4-4182-ab7d-f3950f225ee4 | available | lvm_vol_1 | 1 | lvm_iscsi | false | | ++--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+ + +(All the volume should be available) + + +Additional References +===================== + +* https://ceph.com/docs/master/rbd/rbd-openstack/ diff --git a/meta-openstack/Documentation/README.heat b/meta-openstack/Documentation/README.heat new file mode 100644 index 0000000..276ad23 --- /dev/null +++ b/meta-openstack/Documentation/README.heat @@ -0,0 +1,549 @@ +Summary +======= + +This document is not intended to provide detail of how Heat in general +works, but rather it describes how Heat is tested to ensure that Heat +is working correctly. + + +Heat Overview +============= + +Heat is template-driven orchestration engine which enables you to orchestrate +multiple composite cloud applications. Heat stack is a set of resources +(including nova compute VMs, cinder volumes, neutron IPs...) which are described +by heat orchestration template (a.k.a HOT) file. + +Heat interacts with Ceilometer to provide autoscaling feature in which when +resources within a stack reach certain watermark (e.g. 
cpu utilization hits 70%) +then Heat can add or remove resources into or out of that stack to handle +the changing demands. + + +Build Configuration Options +=========================== + +* Controller build config options: + + --enable-board=intel-xeon-core \ + --enable-rootfs=ovp-openstack-controller \ + --enable-addons=wr-ovp-openstack,wr-ovp \ + --with-template=feature/openstack-tests + +* Compute build config options: + + --enable-board=intel-xeon-core \ + --enable-rootfs=ovp-openstack-compute \ + --enable-addons=wr-ovp-openstack,wr-ovp + + +Test Steps +========== + +Please note: the following commands/steps are carried on Controller +node, unless otherwise explicitly indicated. + + $ Start Controller and Compute node + $ . /etc/nova/openrc + $ glance image-create --name myfirstimage --is-public true --container-format bare --disk-format qcow2 --file /root/images/cirros-0.3.0-x86_64-disk.img + $ glance image-list ++--------------------------------------+--------------+-------------+------------------+---------+--------+ +| ID | Name | Disk Format | Container Format | Size | Status | ++--------------------------------------+--------------+-------------+------------------+---------+--------+ +| a71b84d3-e656-43f5-afdc-ac67cf4f398a | myfirstimage | qcow2 | bare | 9761280 | active | ++--------------------------------------+--------------+-------------+------------------+---------+--------+ + + $ neutron net-create mynetwork + $ neutron net-list ++--------------------------------------+-----------+---------+ +| id | name | subnets | ++--------------------------------------+-----------+---------+ +| 8c9c8a6f-d90f-479f-82b0-7f6e963b39d7 | mynetwork | | ++--------------------------------------+-----------+---------+ + + $ /etc/cinder/add-cinder-volume-types.sh + $ cinder create --volume_type nfs --display_name nfs_vol_1 3 + $ cinder create --volume_type glusterfs --display_name glusterfs_vol_1 2 + $ cinder list ++--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+ +| ID | Status | Display Name | Size | Volume Type | Bootable | Attached to | ++--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+ +| 1d283cf7-2258-4038-a31b-5cb5fba995f3 | available | nfs_vol_1 | 3 | nfs | false | | +| c46276fb-e3df-4093-b759-2ce03c4cefd5 | available | glusterfs_vol_1 | 2 | glusterfs | false | | ++--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+ + + *** The below test steps are for testing lifecycle management *** + + $ heat stack-create -f /etc/heat/templates/two_vms_example.template -P "vol_2_id=c46276fb-e3df-4093-b759-2ce03c4cefd5" mystack + $ Keep running "heat stack-list" until seeing ++--------------------------------------+------------+-----------------+----------------------+ +| id | stack_name | stack_status | creation_time | ++--------------------------------------+------------+-----------------+----------------------+ +| 8d79a777-2e09-4d06-a04e-8430df341514 | mystack | CREATE_COMPLETE | 2014-05-08T16:36:22Z | ++--------------------------------------+------------+-----------------+----------------------+ + + $ nova list ++--------------------------------------+---------------------------+--------+------------+-------------+----------+ +| ID | Name | Status | Task State | Power State | Networks | ++--------------------------------------+---------------------------+--------+------------+-------------+----------+ +| 
e58da67a-9a22-4db2-9d1e-0a1810df2f2e | mystack-vm_1-j5tq5m2ppk3z | ACTIVE | - | Running | | +| fe1b693e-db8c-4d74-b840-7dae8acef0c1 | mystack-vm_2-oiu6mzg3jjv2 | ACTIVE | - | Running | | ++--------------------------------------+---------------------------+--------+------------+-------------+----------+ + +(Should see 2 VMs: vm_1 and vm_2 as defined in two_vms_example.template) + + $ cinder list ++--------------------------------------+-----------+----------------------------+------+-------------+----------+--------------------------------------+ +| ID | Status | Display Name | Size | Volume Type | Bootable | Attached to | ++--------------------------------------+-----------+----------------------------+------+-------------+----------+--------------------------------------+ +| 1d283cf7-2258-4038-a31b-5cb5fba995f3 | available | nfs_vol_1 | 3 | nfs | false | | +| 58b55eb9-e619-4c2e-bfd8-ebc349aff02b | in-use | mystack-vol_1-5xk2zotoxidr | 1 | lvm_iscsi | false | e58da67a-9a22-4db2-9d1e-0a1810df2f2e | +| c46276fb-e3df-4093-b759-2ce03c4cefd5 | in-use | glusterfs_vol_1 | 2 | glusterfs | false | e58da67a-9a22-4db2-9d1e-0a1810df2f2e | ++--------------------------------------+-----------+----------------------------+------+-------------+----------+--------------------------------------+ + +(2 Cinder volumes are attached to vm_1) + + $ From Dashboard, log into vm_1, and run "cat /proc/partitions" +(Should see partitions vdb of 1G and vdc of 2G) + + $ heat stack-show 8d79a777-2e09-4d06-a04e-8430df341514 + +(Command should return lot of info) + + $ heat resource-list mystack ++------------------+------------------------------+-----------------+----------------------+ +| resource_name | resource_type | resource_status | updated_time | ++------------------+------------------------------+-----------------+----------------------+ +| vol_1 | OS::Cinder::Volume | CREATE_COMPLETE | 2014-05-08T16:36:24Z | +| vm_2 | OS::Nova::Server | CREATE_COMPLETE | 2014-05-08T16:36:56Z | +| vm_1 | OS::Nova::Server | CREATE_COMPLETE | 2014-05-08T16:37:26Z | +| vol_2_attachment | OS::Cinder::VolumeAttachment | CREATE_COMPLETE | 2014-05-08T16:37:28Z | +| vol_1_attachment | OS::Cinder::VolumeAttachment | CREATE_COMPLETE | 2014-05-08T16:37:31Z | ++------------------+------------------------------+-----------------+----------------------+ + +(Should see all 5 resources defined in two_vms_example.template) + + $ heat resource-show mystack vm_1 ++------------------------+------------------------------------------------------------------------------------------------------------------------------------+ +| Property | Value | ++------------------------+------------------------------------------------------------------------------------------------------------------------------------+ +| description | | +| links | http://128.224.149.168:8004/v1/088d10de18a84442a7a03497e834e2af/stacks/mystack/8d79a777-2e09-4d06-a04e-8430df341514/resources/vm_1 | +| | http://128.224.149.168:8004/v1/088d10de18a84442a7a03497e834e2af/stacks/mystack/8d79a777-2e09-4d06-a04e-8430df341514 | +| logical_resource_id | vm_1 | +| physical_resource_id | e58da67a-9a22-4db2-9d1e-0a1810df2f2e | +| required_by | vol_1_attachment | +| | vol_2_attachment | +| resource_name | vm_1 | +| resource_status | CREATE_COMPLETE | +| resource_status_reason | state changed | +| resource_type | OS::Nova::Server | +| updated_time | 2014-05-08T16:37:26Z | 
++------------------------+------------------------------------------------------------------------------------------------------------------------------------+ + + $ heat template-show mystack + +(The content of this command should be the same as content of two_vms_example.template) + + $ heat event-list mystack ++------------------+-----+------------------------+--------------------+----------------------+ +| resource_name | id | resource_status_reason | resource_status | event_time | ++------------------+-----+------------------------+--------------------+----------------------+ +| vm_1 | 700 | state changed | CREATE_IN_PROGRESS | 2014-05-08T16:36:22Z | +| vm_1 | 704 | state changed | CREATE_COMPLETE | 2014-05-08T16:37:26Z | +| vm_2 | 699 | state changed | CREATE_IN_PROGRESS | 2014-05-08T16:36:22Z | +| vm_2 | 703 | state changed | CREATE_COMPLETE | 2014-05-08T16:36:56Z | +| vol_1 | 701 | state changed | CREATE_IN_PROGRESS | 2014-05-08T16:36:22Z | +| vol_1 | 702 | state changed | CREATE_COMPLETE | 2014-05-08T16:36:24Z | +| vol_1_attachment | 706 | state changed | CREATE_IN_PROGRESS | 2014-05-08T16:37:27Z | +| vol_1_attachment | 708 | state changed | CREATE_COMPLETE | 2014-05-08T16:37:31Z | +| vol_2_attachment | 705 | state changed | CREATE_IN_PROGRESS | 2014-05-08T16:37:26Z | +| vol_2_attachment | 707 | state changed | CREATE_COMPLETE | 2014-05-08T16:37:28Z | ++------------------+-----+------------------------+--------------------+----------------------+ + + $ heat action-suspend mystack + $ Keep running "heat stack-list" until seeing ++--------------------------------------+------------+------------------+----------------------+ +| id | stack_name | stack_status | creation_time | ++--------------------------------------+------------+------------------+----------------------+ +| 8d79a777-2e09-4d06-a04e-8430df341514 | mystack | SUSPEND_COMPLETE | 2014-05-08T16:36:22Z | ++--------------------------------------+------------+------------------+----------------------+ + + $ nova list ++--------------------------------------+---------------------------+-----------+------------+-------------+----------+ +| ID | Name | Status | Task State | Power State | Networks | ++--------------------------------------+---------------------------+-----------+------------+-------------+----------+ +| e58da67a-9a22-4db2-9d1e-0a1810df2f2e | mystack-vm_1-j5tq5m2ppk3z | SUSPENDED | - | Shutdown | | +| fe1b693e-db8c-4d74-b840-7dae8acef0c1 | mystack-vm_2-oiu6mzg3jjv2 | SUSPENDED | - | Shutdown | | ++--------------------------------------+---------------------------+-----------+------------+-------------+----------+ + +(2 VMs are in suspeded mode) + + $ Wait for 2 minutes + $ heat action-resume mystack + $ Keep running "heat stack-list" until seeing ++--------------------------------------+------------+-----------------+----------------------+ +| id | stack_name | stack_status | creation_time | ++--------------------------------------+------------+-----------------+----------------------+ +| 8d79a777-2e09-4d06-a04e-8430df341514 | mystack | RESUME_COMPLETE | 2014-05-08T16:36:22Z | ++--------------------------------------+------------+-----------------+----------------------+ + + $ nova list ++--------------------------------------+---------------------------+--------+------------+-------------+----------+ +| ID | Name | Status | Task State | Power State | Networks | ++--------------------------------------+---------------------------+--------+------------+-------------+----------+ +| 
e58da67a-9a22-4db2-9d1e-0a1810df2f2e | mystack-vm_1-j5tq5m2ppk3z | ACTIVE | - | Running | | +| fe1b693e-db8c-4d74-b840-7dae8acef0c1 | mystack-vm_2-oiu6mzg3jjv2 | ACTIVE | - | Running | | ++--------------------------------------+---------------------------+--------+------------+-------------+----------+ + + $ nova show e58da67a-9a22-4db2-9d1e-0a1810df2f2e | grep flavor +| flavor | m1.tiny (1) + +(Should see m1.tiny flavour for vm_2 by default) + + $ heat stack-update -f /etc/heat/templates/two_vms_example.template -P "vol_2_id=c46276fb-e3df-4093-b759-2ce03c4cefd5;vm_type=m1.small" mystack +(Update vm_2 flavour from m1.tiny to m1.small) + + $ Keep running "heat stack-list" until seeing ++--------------------------------------+------------+-----------------+----------------------+ +| id | stack_name | stack_status | creation_time | ++--------------------------------------+------------+-----------------+----------------------+ +| 8d79a777-2e09-4d06-a04e-8430df341514 | mystack | UPDATE_COMPLETE | 2014-05-08T16:36:22Z | ++--------------------------------------+------------+-----------------+----------------------+ + + $ nova list ++--------------------------------------+---------------------------+--------+------------+-------------+----------+ +| ID | Name | Status | Task State | Power State | Networks | ++--------------------------------------+---------------------------+--------+------------+-------------+----------+ +| e58da67a-9a22-4db2-9d1e-0a1810df2f2e | mystack-vm_1-j5tq5m2ppk3z | ACTIVE | - | Running | | +| fe1b693e-db8c-4d74-b840-7dae8acef0c1 | mystack-vm_2-oiu6mzg3jjv2 | ACTIVE | - | Running | | ++--------------------------------------+---------------------------+--------+------------+-------------+----------+ + + $ nova show e58da67a-9a22-4db2-9d1e-0a1810df2f2e | grep flavor +| flavor | m1.small (2) | + +(Should see m1.small flavour for vm_2. 
This demonstrates that its able to update) + + $ heat stack-create -f /etc/heat/templates/two_vms_example.template -P "vol_2_id=1d283cf7-2258-4038-a31b-5cb5fba995f3" mystack2 + $ Keep running "heat stack-list" until seeing ++--------------------------------------+------------+-----------------+----------------------+ +| id | stack_name | stack_status | creation_time | ++--------------------------------------+------------+-----------------+----------------------+ +| 8d79a777-2e09-4d06-a04e-8430df341514 | mystack | UPDATE_COMPLETE | 2014-05-08T16:36:22Z | +| 258e05fd-370b-4017-818a-4bb573ac3982 | mystack2 | CREATE_COMPLETE | 2014-05-08T16:46:12Z | ++--------------------------------------+------------+-----------------+----------------------+ + + $ nova list ++--------------------------------------+----------------------------+--------+------------+-------------+----------+ +| ID | Name | Status | Task State | Power State | Networks | ++--------------------------------------+----------------------------+--------+------------+-------------+----------+ +| e58da67a-9a22-4db2-9d1e-0a1810df2f2e | mystack-vm_1-j5tq5m2ppk3z | ACTIVE | - | Running | | +| fe1b693e-db8c-4d74-b840-7dae8acef0c1 | mystack-vm_2-oiu6mzg3jjv2 | ACTIVE | - | Running | | +| 181931ae-4c3b-488d-a7c6-4c4b08f6a35b | mystack2-vm_1-hmy5feakes2q | ACTIVE | - | Running | | +| c80aaea9-276f-4bdb-94d5-9e0c5434f269 | mystack2-vm_2-6b4nzxecda3j | ACTIVE | - | Running | | ++--------------------------------------+----------------------------+--------+------------+-------------+----------+ + + $ cinder list ++--------------------------------------+--------+-----------------------------+------+-------------+----------+--------------------------------------+ +| ID | Status | Display Name | Size | Volume Type | Bootable | Attached to | ++--------------------------------------+--------+-----------------------------+------+-------------+----------+--------------------------------------+ +| 1d283cf7-2258-4038-a31b-5cb5fba995f3 | in-use | nfs_vol_1 | 3 | nfs | false | 181931ae-4c3b-488d-a7c6-4c4b08f6a35b | +| 2175db16-5569-45eb-af8b-2b967e2b808c | in-use | mystack2-vol_1-ey6udylsdqbo | 1 | lvm_iscsi | false | 181931ae-4c3b-488d-a7c6-4c4b08f6a35b | +| 58b55eb9-e619-4c2e-bfd8-ebc349aff02b | in-use | mystack-vol_1-5xk2zotoxidr | 1 | lvm_iscsi | false | e58da67a-9a22-4db2-9d1e-0a1810df2f2e | +| c46276fb-e3df-4093-b759-2ce03c4cefd5 | in-use | glusterfs_vol_1 | 2 | glusterfs | false | e58da67a-9a22-4db2-9d1e-0a1810df2f2e | ++--------------------------------------+--------+-----------------------------+------+-------------+----------+--------------------------------------+ + + $ Wait for 5 minutes + $ heat stack-delete mystack2 + $ heat stack-delete mystack + $ Keep running "heat stack-list" until seeing ++----+------------+--------------+---------------+ +| id | stack_name | stack_status | creation_time | ++----+------------+--------------+---------------+ ++----+------------+--------------+---------------+ + + $ nova list ++----+------+--------+------------+-------------+----------+ +| ID | Name | Status | Task State | Power State | Networks | ++----+------+--------+------------+-------------+----------+ ++----+------+--------+------------+-------------+----------+ + + $ cinder list ++--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+ +| ID | Status | Display Name | Size | Volume Type | Bootable | Attached to | 
++--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+ +| 1d283cf7-2258-4038-a31b-5cb5fba995f3 | available | nfs_vol_1 | 3 | nfs | false | | +| c46276fb-e3df-4093-b759-2ce03c4cefd5 | available | glusterfs_vol_1 | 2 | glusterfs | false | | ++--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+ + + *** The below test steps are for testing autoscaling *** + + $ By default, Ceilometer data samples are collected every 10 minutes, therefore + to speed up the test process, change the data simple polling rates at every + 1 minutes. On Controller node modify /etc/ceilometer/pipeline.yaml to change + all value "600" to "60". + + *** The following commands are for reseting Ceilometer database *** + + $ /etc/init.d/ceilometer-agent-notification stop + $ /etc/init.d/ceilometer-agent-central stop + $ /etc/init.d/ceilometer-alarm-evaluator stop + $ /etc/init.d/ceilometer-alarm-notifier stop + $ /etc/init.d/ceilometer-api stop + $ /etc/init.d/ceilometer-collector stop + $ sudo -u postgres psql -c "DROP DATABASE ceilometer" + $ sudo -u postgres psql -c "CREATE DATABASE ceilometer" + $ ceilometer-dbsync + $ /etc/init.d/ceilometer-agent-central restart + $ /etc/init.d/ceilometer-agent-notification restart + $ /etc/init.d/ceilometer-alarm-evaluator restart + $ /etc/init.d/ceilometer-alarm-notifier restart + $ /etc/init.d/ceilometer-api restart + $ /etc/init.d/ceilometer-collector restart + + $ On Compute node, modify /etc/ceilometer/pipeline.yaml to change all + value "600" to "60", and run: + + /etc/init.d/ceilometer-agent-compute restart + + $ heat stack-create -f /etc/heat/templates/autoscaling_example.template mystack + $ Keep running "heat stack-list" until seeing ++--------------------------------------+------------+-----------------+----------------------+ +| id | stack_name | stack_status | creation_time | ++--------------------------------------+------------+-----------------+----------------------+ +| 1905a541-edfa-4a1f-b6f9-eacdf5f62d85 | mystack | CREATE_COMPLETE | 2014-05-06T22:46:08Z | ++--------------------------------------+------------+-----------------+----------------------+ + + $ ceilometer alarm-list ++--------------------------------------+-------------------------------------+-------------------+---------+------------+---------------------------------+------------------+ +| Alarm ID | Name | State | Enabled | Continuous | Alarm condition | Time constraints | ++--------------------------------------+-------------------------------------+-------------------+---------+------------+---------------------------------+------------------+ +| 14a1c510-667c-4717-a49d-eba375e48e01 | mystack-cpu_alarm_low-3jmjeyqrwd3k | insufficient data | True | False | cpu_util < 45.0 during 1 x 120s | None | +| edcc3c0c-84b7-4694-8312-72677bc50efd | mystack-cpu_alarm_high-jiziioj5zt6r | insufficient data | True | False | cpu_util > 80.0 during 1 x 300s | None | ++--------------------------------------+-------------------------------------+-------------------+---------+------------+---------------------------------+------------------+ + + $ nova list ++--------------------------------------+---------------------------------------------------------------+--------+------------+-------------+----------+ +| ID | Name | Status | Task State | Power State | Networks | 
++--------------------------------------+---------------------------------------------------------------+--------+------------+-------------+----------+ +| 49201b33-1cd9-4e2f-b182-224f29c2bb7c | mystack-server_group-etzglbdncptt-server_group-0-moxmbfm4pynk | ACTIVE | - | Running | | ++--------------------------------------+---------------------------------------------------------------+--------+------------+-------------+----------+ + + $ Keep running "ceilometer alarm-list" until seeing ++--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ +| Alarm ID | Name | State | Enabled | Continuous | Alarm condition | Time constraints | ++--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ +| 39a9cb57-46f0-4f8a-885d-9a645aa467b1 | mystack-cpu_alarm_low-jrjmwxbhzdd3 | alarm | True | True | cpu_util < 15.0 during 1 x 120s | None | +| 5dd5747e-a7a7-4ccd-994b-b0b8733728de | mystack-cpu_alarm_high-tpgsvsbcskjt | ok | True | True | cpu_util > 80.0 during 1 x 180s | None | ++--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ + + $ Wait for 5 minutes + $ ceilometer alarm-list ++--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ +| Alarm ID | Name | State | Enabled | Continuous | Alarm condition | Time constraints | ++--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ +| 39a9cb57-46f0-4f8a-885d-9a645aa467b1 | mystack-cpu_alarm_low-jrjmwxbhzdd3 | alarm | True | True | cpu_util < 15.0 during 1 x 120s | None | +| 5dd5747e-a7a7-4ccd-994b-b0b8733728de | mystack-cpu_alarm_high-tpgsvsbcskjt | ok | True | True | cpu_util > 80.0 during 1 x 180s | None | ++--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ + + $ On Dashboard, log into 49201b33-1cd9-4e2f-b182-224f29c2bb7c VM console and run + + while [ true ]; do echo "hello world"; done + + $ Keep running "ceilometer alarm-list" until seeing ++--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ +| Alarm ID | Name | State | Enabled | Continuous | Alarm condition | Time constraints | ++--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ +| 14a1c510-667c-4717-a49d-eba375e48e01 | mystack-cpu_alarm_low-3jmjeyqrwd3k | ok | True | False | cpu_util < 45.0 during 1 x 120s | None | +| edcc3c0c-84b7-4694-8312-72677bc50efd | mystack-cpu_alarm_high-jiziioj5zt6r | alarm | True | False | cpu_util > 80.0 during 1 x 300s | None | ++--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ + + $ nova list ++--------------------------------------+---------------------------------------------------------------+--------+------------+-------------+----------+ +| ID | Name | Status | Task State | Power State | Networks | 
++--------------------------------------+---------------------------------------------------------------+--------+------------+-------------+----------+ +| 49201b33-1cd9-4e2f-b182-224f29c2bb7c | mystack-server_group-etzglbdncptt-server_group-0-moxmbfm4pynk | ACTIVE | - | Running | | +| a1a21353-3400-46be-b75c-8c6a0a74a1de | mystack-server_group-etzglbdncptt-server_group-1-wzspkkrb4qay | BUILD | spawning | NOSTATE | | ++--------------------------------------+---------------------------------------------------------------+--------+------------+-------------+----------+ + +(Should see that now heat auto scales up by creating another VM) + + $ nova list ++--------------------------------------+---------------------------------------------------------------+--------+------------+-------------+----------+ +| ID | Name | Status | Task State | Power State | Networks | ++--------------------------------------+---------------------------------------------------------------+--------+------------+-------------+----------+ +| 49201b33-1cd9-4e2f-b182-224f29c2bb7c | mystack-server_group-etzglbdncptt-server_group-0-moxmbfm4pynk | ACTIVE | - | Running | | +| a1a21353-3400-46be-b75c-8c6a0a74a1de | mystack-server_group-etzglbdncptt-server_group-1-wzspkkrb4qay | ACTIVE | - | Running | | ++--------------------------------------+---------------------------------------------------------------+--------+------------+-------------+----------+ + + $ Keep running "ceilometer alarm-list" until seeing ++--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ +| Alarm ID | Name | State | Enabled | Continuous | Alarm condition | Time constraints | ++--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ +| 14a1c510-667c-4717-a49d-eba375e48e01 | mystack-cpu_alarm_low-3jmjeyqrwd3k | ok | True | False | cpu_util < 45.0 during 1 x 120s | None | +| edcc3c0c-84b7-4694-8312-72677bc50efd | mystack-cpu_alarm_high-jiziioj5zt6r | ok | True | False | cpu_util > 80.0 during 1 x 300s | None | ++--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ + +(Both alarm should be in "ok" state as average cpu_util for 2 VMs should be around 50% +which is in the range) + + $ On Dashboard, log into the a1a21353-3400-46be-b75c-8c6a0a74a1de VM console and run + + while [ true ]; do echo "hello world"; done + + $ Keep running "ceilometer alarm-list" until seeing ++--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ +| Alarm ID | Name | State | Enabled | Continuous | Alarm condition | Time constraints | ++--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ +| 14a1c510-667c-4717-a49d-eba375e48e01 | mystack-cpu_alarm_low-3jmjeyqrwd3k | ok | True | False | cpu_util < 45.0 during 1 x 120s | None | +| edcc3c0c-84b7-4694-8312-72677bc50efd | mystack-cpu_alarm_high-jiziioj5zt6r | alarm | True | False | cpu_util > 80.0 during 1 x 300s | None | ++--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ + + $ nova list 
++--------------------------------------+---------------------------------------------------------------+--------+------------+-------------+----------+ +| ID | Name | Status | Task State | Power State | Networks | ++--------------------------------------+---------------------------------------------------------------+--------+------------+-------------+----------+ +| 49201b33-1cd9-4e2f-b182-224f29c2bb7c | mystack-server_group-etzglbdncptt-server_group-0-moxmbfm4pynk | ACTIVE | - | Running | | +| a1a21353-3400-46be-b75c-8c6a0a74a1de | mystack-server_group-etzglbdncptt-server_group-1-wzspkkrb4qay | ACTIVE | - | Running | | +| 88e88660-acde-4998-a2e3-312efbc72447 | mystack-server_group-etzglbdncptt-server_group-2-jaotqnr5x7xb | BUILD | spawning | NOSTATE | | ++--------------------------------------+---------------------------------------------------------------+--------+------------+-------------+----------+ + +(Should see that now heat auto scales up by creating another VM) + + $ nova list ++--------------------------------------+---------------------------------------------------------------+--------+------------+-------------+----------+ +| ID | Name | Status | Task State | Power State | Networks | ++--------------------------------------+---------------------------------------------------------------+--------+------------+-------------+----------+ +| 49201b33-1cd9-4e2f-b182-224f29c2bb7c | mystack-server_group-etzglbdncptt-server_group-0-moxmbfm4pynk | ACTIVE | - | Running | | +| a1a21353-3400-46be-b75c-8c6a0a74a1de | mystack-server_group-etzglbdncptt-server_group-1-wzspkkrb4qay | ACTIVE | - | Running | | +| 88e88660-acde-4998-a2e3-312efbc72447 | mystack-server_group-etzglbdncptt-server_group-2-jaotqnr5x7xb | ACTIVE | - | Running | | ++--------------------------------------+---------------------------------------------------------------+--------+------------+-------------+----------+ + + $ Keep running "ceilometer alarm-list" until seeing ++--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ +| Alarm ID | Name | State | Enabled | Continuous | Alarm condition | Time constraints | ++--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ +| 14a1c510-667c-4717-a49d-eba375e48e01 | mystack-cpu_alarm_low-3jmjeyqrwd3k | ok | True | False | cpu_util < 45.0 during 1 x 120s | None | +| edcc3c0c-84b7-4694-8312-72677bc50efd | mystack-cpu_alarm_high-jiziioj5zt6r | ok | True | False | cpu_util > 80.0 during 1 x 300s | None | ++--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ + + $ Wait for 5 minutes + $ ceilometer alarm-list ++--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ +| Alarm ID | Name | State | Enabled | Continuous | Alarm condition | Time constraints | ++--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ +| 14a1c510-667c-4717-a49d-eba375e48e01 | mystack-cpu_alarm_low-3jmjeyqrwd3k | ok | True | False | cpu_util < 45.0 during 1 x 120s | None | +| edcc3c0c-84b7-4694-8312-72677bc50efd | mystack-cpu_alarm_high-jiziioj5zt6r | ok | True | False | cpu_util > 
80.0 during 1 x 300s | None | ++--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ + + $ On Dashboard, log into the a1a21353-3400-46be-b75c-8c6a0a74a1de VM console and + stop the while loop + + $ Keep running "ceilometer alarm-list" until seeing ++--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ +| Alarm ID | Name | State | Enabled | Continuous | Alarm condition | Time constraints | ++--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ +| 14a1c510-667c-4717-a49d-eba375e48e01 | mystack-cpu_alarm_low-3jmjeyqrwd3k | alarm | True | False | cpu_util < 45.0 during 1 x 120s | None | +| edcc3c0c-84b7-4694-8312-72677bc50efd | mystack-cpu_alarm_high-jiziioj5zt6r | ok | True | False | cpu_util > 80.0 during 1 x 300s | None | ++--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ + + $ nova list ++--------------------------------------+---------------------------------------------------------------+--------+------------+-------------+----------+ +| ID | Name | Status | Task State | Power State | Networks | ++--------------------------------------+---------------------------------------------------------------+--------+------------+-------------+----------+ +| 49201b33-1cd9-4e2f-b182-224f29c2bb7c | mystack-server_group-etzglbdncptt-server_group-0-moxmbfm4pynk | ACTIVE | - | Running | | +| a1a21353-3400-46be-b75c-8c6a0a74a1de | mystack-server_group-etzglbdncptt-server_group-1-wzspkkrb4qay | ACTIVE | - | Running | | ++--------------------------------------+---------------------------------------------------------------+--------+------------+-------------+----------+ + +(Heat scales down the VMs by one as cpu_alarm_low is triggered) + + $ Wait for 5 minutes + $ ceilometer alarm-list ++--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ +| Alarm ID | Name | State | Enabled | Continuous | Alarm condition | Time constraints | ++--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ +| 14a1c510-667c-4717-a49d-eba375e48e01 | mystack-cpu_alarm_low-3jmjeyqrwd3k | ok | True | False | cpu_util < 45.0 during 1 x 120s | None | +| edcc3c0c-84b7-4694-8312-72677bc50efd | mystack-cpu_alarm_high-jiziioj5zt6r | ok | True | False | cpu_util > 80.0 during 1 x 300s | None | ++--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ + + $ On Dashboard, log into the 49201b33-1cd9-4e2f-b182-224f29c2bb7c VM console and + stop the while loop + + $ Keep running "ceilometer alarm-list" until seeing ++--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ +| Alarm ID | Name | State | Enabled | Continuous | Alarm condition | Time constraints | 
++--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ +| 14a1c510-667c-4717-a49d-eba375e48e01 | mystack-cpu_alarm_low-3jmjeyqrwd3k | alarm | True | False | cpu_util < 45.0 during 1 x 120s | None | +| edcc3c0c-84b7-4694-8312-72677bc50efd | mystack-cpu_alarm_high-jiziioj5zt6r | ok | True | False | cpu_util > 80.0 during 1 x 300s | None | ++--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ + + $ nova list ++--------------------------------------+---------------------------------------------------------------+--------+------------+-------------+----------+ +| ID | Name | Status | Task State | Power State | Networks | ++--------------------------------------+---------------------------------------------------------------+--------+------------+-------------+----------+ +| 49201b33-1cd9-4e2f-b182-224f29c2bb7c | mystack-server_group-etzglbdncptt-server_group-0-moxmbfm4pynk | ACTIVE | - | Running | | ++--------------------------------------+---------------------------------------------------------------+--------+------------+-------------+----------+ + +(Heat scales down the VMs by one as cpu_alarm_low is triggered) + + $ Wait for 5 minutes + $ ceilometer alarm-list ++--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ +| Alarm ID | Name | State | Enabled | Continuous | Alarm condition | Time constraints | ++--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ +| 14a1c510-667c-4717-a49d-eba375e48e01 | mystack-cpu_alarm_low-3jmjeyqrwd3k | alarm | True | False | cpu_util < 45.0 during 1 x 120s | None | +| edcc3c0c-84b7-4694-8312-72677bc50efd | mystack-cpu_alarm_high-jiziioj5zt6r | ok | True | False | cpu_util > 80.0 during 1 x 300s | None | ++--------------------------------------+-------------------------------------+-------+---------+------------+---------------------------------+------------------+ + + $ heat stack-delete 1905a541-edfa-4a1f-b6f9-eacdf5f62d85 + $ heat stack-list ++----+------------+--------------+---------------+ +| id | stack_name | stack_status | creation_time | ++----+------------+--------------+---------------+ ++----+------------+--------------+---------------+ + + $ nova list ++----+------+--------+------------+-------------+----------+ +| ID | Name | Status | Task State | Power State | Networks | ++----+------+--------+------------+-------------+----------+ ++----+------+--------+------------+-------------+----------+ + + $ ceilometer alarm-list ++----------+------+-------+---------+------------+-----------------+------------------+ +| Alarm ID | Name | State | Enabled | Continuous | Alarm condition | Time constraints | ++----------+------+-------+---------+------------+-----------------+------------------+ ++----------+------+-------+---------+------------+-----------------+------------------+ + + +Heat Built-In Unit Tests +========================= + +This section describes how to run Heat and Heat client built-in unit +tests which are located at: + + /usr/lib64/python2.7/site-packages/heat/tests + /usr/lib64/python2.7/site-packages/heatclient/tests + +To run heat built-in unit test with nosetests: + + $ cd 
/usr/lib64/python2.7/site-packages/heat
+ $ nosetests -v tests
+
+----------------------------------------------------------------------
+Ran 1550 tests in 45.770s
+
+OK
+
+To run the heatclient built-in unit tests:
+
+ $ cd /usr/lib64/python2.7/site-packages/heatclient/tests
+ $ python -v -m subunit.run discover . | subunit2pyunit
+
+----------------------------------------------------------------------
+Ran 272 tests in 3.368s
+
+OK
+
+Please note that the Python test runner subunit.run is used instead of
+nosetests, as nosetests is not compatible with the testscenarios test
+framework used by some of the heatclient unit tests.
diff --git a/meta-openstack/Documentation/README.swift b/meta-openstack/Documentation/README.swift
new file mode 100644
index 0000000..8429a2e
--- /dev/null
+++ b/meta-openstack/Documentation/README.swift
@@ -0,0 +1,444 @@
+Summary
+=======
+
+This document is not intended to provide detail of how Swift in general
+works, but rather it highlights how the Swift cluster is set up and how
+OpenStack is configured to allow various OpenStack components to
+interact with Swift.
+
+
+Swift Overview
+==============
+
+OpenStack Swift is an object storage service. Clients can access Swift
+objects through RESTful APIs. Swift objects are grouped into
+"containers", and containers are grouped into "accounts". Each account
+or container in a Swift cluster is represented by a SQLite database
+which contains information related to that account or container. Each
+Swift object is simply the data uploaded by the user.
+
+A Swift cluster can include a massive number of storage devices. Any
+Swift storage device can be configured to store container databases
+and/or account databases and/or objects. Each Swift account database is
+identified by the tuple (account name). Each Swift container database
+is identified by the tuple (account name, container name). Each Swift
+object is identified by the tuple (account name, container name,
+object name).
+
+Swift uses a static mapping algorithm, the "ring", to identify which
+storage device hosts a given account database, container database, or
+object (similar to the way Ceph uses the CRUSH algorithm to identify
+which OSD hosts a Ceph object). A Swift cluster has 3 rings (account
+ring, container ring, and object ring) used for finding the location of
+an account database, a container database, or an object file
+respectively.
+
+The Swift service includes the following core services: proxy-server,
+which provides the RESTful APIs for Swift clients; account-server,
+which manages accounts; container-server, which manages containers;
+and object-server, which manages objects.
+
+
+Swift Cluster Setup
+===================
+
+The default Swift cluster is set up to have the following:
+
+* All Swift main process services, including proxy-server,
+  account-server, container-server, and object-server, run on the
+  Controller node.
+* 3 zones in which each zone has only 1 storage device.
+  The underlying block devices for these 3 storage devices are loopback
+  devices. The size of the backing loopback files is 2 Gbytes by default
+  and can be changed at compile time through the variable
+  SWIFT_BACKING_FILE_SIZE. Setting SWIFT_BACKING_FILE_SIZE="0G" disables
+  the loopback devices and uses the local filesystem as the Swift
+  storage device.
+* All 3 Swift rings have 2^12 partitions and 2 replicas.
+
+The default Swift cluster is mainly for demonstration purposes. One
+might want a different Swift cluster setup than this one (e.g. using
+real hardware block devices instead of loopback devices).
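+
+As a quick check that the default cluster is serving requests, the
+proxy-server's RESTful API can be exercised directly with
+python-swiftclient (present on the Controller image). The snippet below
+is only a minimal sketch: the Keystone endpoint, the "admin"/"password"
+credentials, and the container/object names are assumptions and should
+be adjusted to match the local /etc/nova/openrc settings.
+
+    import swiftclient.client
+
+    # Authenticate against Keystone v2 and talk to the Swift proxy-server.
+    conn = swiftclient.client.Connection(
+        authurl='http://127.0.0.1:5000/v2.0/',   # assumed Keystone endpoint
+        user='admin',                            # assumed credentials
+        key='password',
+        tenant_name='admin',
+        auth_version='2')
+
+    # Create a container and store a small object in it.
+    conn.put_container('sanity')
+    conn.put_object('sanity', 'hello.txt', contents='hello swift')
+
+    # Read the object back and list this account's containers.
+    headers, body = conn.get_object('sanity', 'hello.txt')
+    assert body == 'hello swift'
+    for container in conn.get_account()[1]:
+        print(container['name'])
+
+The same operations can also be performed with the swift command line
+client (swift upload/list/download), as the Test Steps section below
+demonstrates.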
+
+The script /etc/swift/swift_setup.sh is provided to ease the task of
+setting up a complicated Swift cluster. It reads a cluster config file,
+which describes what storage devices are included in what rings, and
+constructs the cluster.
+
+For details of how to use swift_setup.sh and the format of the Swift
+cluster config file, please refer to the script's help:
+
+ $ swift_setup.sh
+
+
+Glance and Swift
+================
+
+Glance can store images into the Swift cluster when "default_store = swift"
+is set in /etc/glance/glance-api.conf.
+
+By default "default_store" has the value "file", which tells Glance to
+store images into the local filesystem. The "default_store" value can be
+set at compile time through the variable GLANCE_DEFAULT_STORE.
+
+The following configuration options in /etc/glance/glance-api.conf
+affect how Glance interacts with the Swift cluster:
+
+ swift_store_auth_version = 2
+ swift_store_auth_address = http://127.0.0.1:5000/v2.0/
+ swift_store_user = service:glance
+ swift_store_key = password
+ swift_store_container = glance
+ swift_store_create_container_on_put = True
+ swift_store_large_object_size = 5120
+ swift_store_large_object_chunk_size = 200
+ swift_enable_snet = False
+
+With these default settings, images are stored in the Swift account
+corresponding to the "service" tenant ID, under the Swift container
+named "glance".
+
+
+Cinder Backup and Swift
+=======================
+
+Cinder-backup is able to store volume backups into the Swift cluster
+with the following command:
+
+ $ cinder backup-create <volume-id>
+
+where <volume-id> is the ID of an existing Cinder volume, provided the
+configuration option "backup_driver" in /etc/cinder/cinder.conf is set
+to "cinder.backup.drivers.swift".
+
+Cinder-backup is not able to create a backup for any Cinder volume
+backed by NFS or GlusterFS. This is because the NFS and GlusterFS
+cinder-volume backend drivers do not support the backup functionality.
+In other words, only Cinder volumes backed by lvm-iscsi and ceph-rbd
+can be backed up by cinder-backup.
+
+The following configuration options in /etc/cinder/cinder.conf affect
+how cinder-backup interacts with the Swift cluster:
+
+ backup_swift_url=http://controller:8888/v1/AUTH_
+ backup_swift_auth=per_user
+ #backup_swift_user=
+ #backup_swift_key=
+ backup_swift_container=cinder-backups
+ backup_swift_object_size=52428800
+ backup_swift_retry_attempts=3
+ backup_swift_retry_backoff=2
+ backup_compression_algorithm=zlib
+
+With these default settings, the tenant ID of the keystone user that
+runs the "cinder backup-create" command is used as the Swift account
+name, and the volume backups are saved into the Swift container named
+"cinder-backups".
+
+
+Build Configuration Options
+===========================
+
+* Controller build config options:
+
+ --enable-board=intel-xeon-core \
+ --enable-rootfs=ovp-openstack-controller \
+ --enable-addons=wr-ovp-openstack,wr-ovp \
+ --with-template=feature/openstack-tests \
+ --with-layer=meta-cloud-services/meta-openstack-swift-deploy
+
+* Compute build config options:
+
+ --enable-board=intel-xeon-core \
+ --enable-rootfs=ovp-openstack-compute \
+ --enable-addons=wr-ovp-openstack,wr-ovp
+
+
+Test Steps
+==========
+
+This section describes test steps and expected results to demonstrate
+that Swift is integrated properly into OpenStack.
+
+Please note: the following commands are carried out on the Controller
+node, unless otherwise explicitly indicated.
+
+ $ Start Controller and Compute node
+ $ . 
/etc/nova/openrc + $ dd if=/dev/urandom of=50M_c1.org bs=1M count=50 + $ dd if=/dev/urandom of=50M_c2.org bs=1M count=50 + $ dd if=/dev/urandom of=100M_c2.org bs=1M count=100 + $ swift upload c1 50M_c1.org && swift upload c2 50M_c2.org && swift upload c2 100M_c2.org + $ swift list + +c1 +c2 + + $ swift stat c1 + + Account: AUTH_4ebc0e00338f405c9267866c6b984e71 + Container: c1 + Objects: 1 + Bytes: 52428800 + Read ACL: + Write ACL: + Sync To: + Sync Key: + Accept-Ranges: bytes + X-Timestamp: 1396457818.76909 + X-Trans-Id: tx0564472425ad47128b378-00533c41bb + Content-Type: text/plain; charset=utf-8 + +(Should see there is 1 object) + + $ swift stat c2 + +root@controller:~# swift stat c2 + Account: AUTH_4ebc0e00338f405c9267866c6b984e71 + Container: c2 + Objects: 2 + Bytes: 157286400 + Read ACL: + Write ACL: + Sync To: + Sync Key: + Accept-Ranges: bytes + X-Timestamp: 1396457826.26262 + X-Trans-Id: tx312934d494a44bbe96a00-00533c41cd + Content-Type: text/plain; charset=utf-8 + +(Should see there are 2 objects) + + $ swift stat c3 + +Container 'c3' not found + + $ mv 50M_c1.org 50M_c1.save && mv 50M_c2.org 50M_c2.save && mv 100M_c2.org 100M_c2.save + $ swift download c1 50M_c1.org && swift download c2 50M_c2.org && swift download c2 100M_c2.org + $ md5sum 50M_c1.save 50M_c1.org && md5sum 50M_c2.save 50M_c2.org && md5sum 100M_c2.save 100M_c2.org + +a8f7d671e35fcf20b87425fb39bdaf05 50M_c1.save +a8f7d671e35fcf20b87425fb39bdaf05 50M_c1.org +353233ed20418dbdeeb2fad91ba4c86a 50M_c2.save +353233ed20418dbdeeb2fad91ba4c86a 50M_c2.org +3b7cbb444c2ba93819db69ab3584f4bd 100M_c2.save +3b7cbb444c2ba93819db69ab3584f4bd 100M_c2.org + +(The md5sums of each pair "zzz.save" and "zzz.org" files must be the same) + + $ swift delete c1 50M_c1.org && swift delete c2 50M_c2.org + $ swift stat c1 + + Account: AUTH_4ebc0e00338f405c9267866c6b984e71 + Container: c1 + Objects: 0 + Bytes: 0 + Read ACL: + Write ACL: + Sync To: + Sync Key: + Accept-Ranges: bytes + X-Timestamp: 1396457818.77284 + X-Trans-Id: tx58e4bb6d06b84276b8d7f-00533c424c + Content-Type: text/plain; charset=utf-8 + +(Should see there is no object) + + $ swift stat c2 + + Account: AUTH_4ebc0e00338f405c9267866c6b984e71 + Container: c2 + Objects: 1 + Bytes: 104857600 + Read ACL: + Write ACL: + Sync To: + Sync Key: + Accept-Ranges: bytes + X-Timestamp: 1396457826.25872 + X-Trans-Id: txdae8ab2adf4f47a4931ba-00533c425b + Content-Type: text/plain; charset=utf-8 + +(Should see there is 1 object) + + $ swift upload c1 50M_c1.org && swift upload c2 50M_c2.org + $ rm *.org + $ swift download c1 50M_c1.org && swift download c2 50M_c2.org && swift download c2 100M_c2.org + $ md5sum 50M_c1.save 50M_c1.org && md5sum 50M_c2.save 50M_c2.org && md5sum 100M_c2.save 100M_c2.org + +31147c186e7dd2a4305026d3d6282861 50M_c1.save +31147c186e7dd2a4305026d3d6282861 50M_c1.org +b9043aacef436dfbb96c39499d54b850 50M_c2.save +b9043aacef436dfbb96c39499d54b850 50M_c2.org +b1a1b6b34a6852cdd51cd487a01192cc 100M_c2.save +b1a1b6b34a6852cdd51cd487a01192cc 100M_c2.org + +(The md5sums of each pair "zzz.save" and "zzz.org" files must be the same) + + $ neutron net-create mynetwork + $ glance image-create --name myfirstimage --is-public true --container-format bare --disk-format qcow2 --file /root/images/cirros-0.3.0-x86_64-disk.img + $ glance image-list + ++--------------------------------------+--------------+-------------+------------------+---------+--------+ +| ID | Name | Disk Format | Container Format | Size | Status | 
++--------------------------------------+--------------+-------------+------------------+---------+--------+ +| 79f52103-5b22-4aa5-8159-2d146b82b0b2 | myfirstimage | qcow2 | bare | 9761280 | active | ++--------------------------------------+--------------+-------------+------------------+---------+--------+ + + $ export OS_TENANT_NAME=service && export OS_USERNAME=glance + $ swift list glance + +79f52103-5b22-4aa5-8159-2d146b82b0b2 + +(The object name in the "glance" container must be the same as glance image id just created) + + $ swift download glance 79f52103-5b22-4aa5-8159-2d146b82b0b2 + $ md5sum 79f52103-5b22-4aa5-8159-2d146b82b0b2 /root/images/cirros-0.3.0-x86_64-disk.img + +50bdc35edb03a38d91b1b071afb20a3c 79f52103-5b22-4aa5-8159-2d146b82b0b2 +50bdc35edb03a38d91b1b071afb20a3c /root/images/cirros-0.3.0-x86_64-disk.img + +(The md5sum of these 2 files must be the same) + + $ ls /etc/glance/images/ +(This should be empty) + + $ . /etc/nova/openrc + $ nova boot --image myfirstimage --flavor 1 myinstance + $ nova list + ++--------------------------------------+------------+--------+------------+-------------+----------+ +| ID | Name | Status | Task State | Power State | Networks | ++--------------------------------------+------------+--------+------------+-------------+----------+ +| bc9662a0-0dac-4bff-a7fb-b820957c55a4 | myinstance | ACTIVE | - | Running | | ++--------------------------------------+------------+--------+------------+-------------+----------+ + + $ From dashboard, log into VM console to make sure the VM is really running + $ nova delete bc9662a0-0dac-4bff-a7fb-b820957c55a4 + $ glance image-delete 79f52103-5b22-4aa5-8159-2d146b82b0b2 + $ export OS_TENANT_NAME=service && export OS_USERNAME=glance + $ swift list glance + +(Should be empty) + + $ . /etc/nova/openrc && . 
/etc/cinder/add-cinder-volume-types.sh + $ cinder create --volume_type lvm_iscsi --display_name=lvm_vol_1 1 + $ cinder list + ++--------------------------------------+-----------+--------------+------+-------------+----------+-------------+ +| ID | Status | Display Name | Size | Volume Type | Bootable | Attached to | ++--------------------------------------+-----------+--------------+------+-------------+----------+-------------+ +| 3e388ae0-2e20-42a2-80da-3f9f366cbaed | available | lvm_vol_1 | 1 | lvm_iscsi | false | | ++--------------------------------------+-----------+--------------+------+-------------+----------+-------------+ + + $ cinder backup-create 3e388ae0-2e20-42a2-80da-3f9f366cbaed + $ cinder backup-list + ++--------------------------------------+--------------------------------------+-----------+------+------+--------------+----------------+ +| ID | Volume ID | Status | Name | Size | Object Count | Container | ++--------------------------------------+--------------------------------------+-----------+------+------+--------------+----------------+ +| 1444f5d0-3a87-40bc-a7a7-f3c672768b6a | 3e388ae0-2e20-42a2-80da-3f9f366cbaed | available | None | 1 | 22 | cinder-backups | ++--------------------------------------+--------------------------------------+-----------+------+------+--------------+----------------+ + + $ swift list + +c1 +c2 +cinder-backups + +(Should see new Swift container "cinder-backup") + + $ swift list cinder-backups + +volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00001 +volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00002 +volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00003 +volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00004 +volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00005 +volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00006 +volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00007 +volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00008 +volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00009 +volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00010 +volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00011 +volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00012 +volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00013 +volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00014 +volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00015 +volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00016 +volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00017 
+volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00018 +volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00019 +volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00020 +volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00021 +volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a_metadata + + $ reboot + $ . /etc/nova/openrc && swift list cinder-backups + +volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00001 +volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00002 +volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00003 +volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00004 +volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00005 +volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00006 +volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00007 +volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00008 +volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00009 +volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00010 +volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00011 +volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00012 +volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00013 +volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00014 +volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00015 +volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00016 +volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00017 +volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00018 +volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00019 +volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00020 +volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a-00021 +volume_3e388ae0-2e20-42a2-80da-3f9f366cbaed/20140402193029/az_nova_backup_1444f5d0-3a87-40bc-a7a7-f3c672768b6a_metadata + + $ cinder backup-delete 1444f5d0-3a87-40bc-a7a7-f3c672768b6a + $ swift list cinder-backups + +(Should be empty) + + +Swift Built-In Unit Tests +========================= + +This section describes how to run Swift and Swift client built-in unit +tests which are located at: + + 
/usr/lib64/python2.7/site-packages/swift/test
+ /usr/lib64/python2.7/site-packages/swiftclient/tests
+
+using the nosetests test runner. Please make sure that the test account
+settings in /etc/swift/test.conf reflect the Keystone user account
+settings.
+
+To run the Swift built-in unit tests with nosetests:
+
+ $ To accommodate the small size of the loopback devices,
+   modify /etc/swift/swift.conf to set "max_file_size = 5242880"
+ $ /etc/init.d/swift restart
+ $ cd /usr/lib64/python2.7/site-packages/swift
+ $ nosetests -v test
+
+Ran 1633 tests in 272.930s
+
+FAILED (errors=5, failures=4)
+
+To run the swiftclient built-in unit tests with nosetests:
+
+ $ cd /usr/lib64/python2.7/site-packages/swiftclient
+ $ nosetests -v tests
+
+Ran 108 tests in 2.277s
+
+FAILED (failures=1)
+
+
+References
+==========
+
+* http://docs.openstack.org/developer/swift/deployment_guide.html
+* http://docs.openstack.org/grizzly/openstack-compute/install/yum/content/ch_installing-openstack-object-storage.html
+* https://swiftstack.com/openstack-swift/architecture/
diff --git a/meta-openstack/Documentation/testsystem/README b/meta-openstack/Documentation/testsystem/README
new file mode 100644
index 0000000..ddbc51d
--- /dev/null
+++ b/meta-openstack/Documentation/testsystem/README
@@ -0,0 +1,116 @@
+OpenStack: Minimally Viable Test System
+
+Usage:
+