author    Amy Fong <amy.fong@windriver.com>              2014-05-21 14:35:15 -0400
committer Bruce Ashfield <bruce.ashfield@windriver.com>  2014-05-23 23:42:55 -0400
commit    fb1d6f23fa01c0217ed3f6778d8033dd0030db2a
Testing documentation

Add documentation for testing swift, ceph, and heat.
Create a script, and instructions for using it, that launches a
controller and a specified number of compute nodes.

Signed-off-by: Amy Fong <amy.fong@windriver.com>
Diffstat (limited to 'meta-openstack/Documentation/README.ceph-openstack')

 meta-openstack/Documentation/README.ceph-openstack | 592
 1 file changed, 592 insertions, 0 deletions
Summary
=======

This document is not intended to describe how Ceph works in general
(refer to the addons/wr-ovp/layers/ovp/Documentation/README_ceph.pdf
document for those details); rather, it highlights how the Ceph
cluster is set up and how OpenStack is configured to allow the various
OpenStack components to interact with Ceph.


Ceph Cluster Setup
==================

By default, the Ceph cluster is set up with the following:

* A Ceph monitor daemon running on the Controller node
* Two Ceph OSD daemons (osd.0 and osd.1) running on the Controller node.
  The underlying block devices for these 2 OSDs are loopback block devices.
  The backing loopback files are 10 Gbytes each by default; the size can
  be changed at compile time through the variable CEPH_BACKING_FILE_SIZE.
* No Ceph MDS support
* Cephx authentication is enabled

This is done by the script /etc/init.d/ceph-setup, which is run while
the system is booting. Therefore, the Ceph cluster should be ready
for use after booting, and no additional manual step is required.

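Since the setup runs unattended at boot, a quick way to confirm readiness is to poll the cluster health. A minimal sketch (the ceph_ready helper name is introduced here, not part of the image):

```shell
# Minimal readiness check: succeeds once the cluster reports HEALTH_OK.
# Requires a running Ceph cluster and the admin keyring (Controller node).
ceph_ready() {
    ceph -s 2>/dev/null | grep -q 'HEALTH_OK'
}
```

Usage: ceph_ready && echo "Ceph cluster is up"
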
With the current Ceph setup, only the Controller node can run Ceph
commands that require the Ceph admin keyring to be installed (i.e. the
file /etc/ceph/ceph.client.admin.keyring exists). If a node other than
the Controller (e.g. a Compute node) needs to run Ceph commands, then a
keyring for a particular Ceph client must be created and transferred
from the Controller node to that node. There is a convenient tool for
doing so in a secure manner. On the Controller node, run:

$ /etc/ceph/ceph_xfer_keyring.sh -h
$ /etc/ceph/ceph_xfer_keyring.sh <key name> <remote login> [remote location]

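In essence, such a transfer exports the named client's key on the Controller and copies it to the remote node. The sketch below shows the idea; it is illustrative only, and the real ceph_xfer_keyring.sh may differ in details:

```shell
# Hypothetical equivalent of a keyring transfer (a sketch, not the
# actual ceph_xfer_keyring.sh):
xfer_keyring() {
    name="$1"             # Ceph client name, e.g. cinder-volume
    remote="$2"           # remote login, e.g. root@compute
    dir="${3:-/etc/ceph}" # remote location (default /etc/ceph)
    keyring="/tmp/ceph.client.${name}.keyring"
    # Export the client's keyring on the Controller, copy it over ssh,
    # then remove the local temporary copy.
    ceph auth get "client.${name}" -o "${keyring}" &&
    scp "${keyring}" "${remote}:${dir}/" &&
    rm -f "${keyring}"
}
```
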
The way the Ceph cluster is set up here is mainly for demonstration
purposes. One might want a different Ceph cluster setup (e.g. using
real hardware block devices instead of loopback devices).


Setup Ceph's Pool and Client Users To Be Used By OpenStack
==========================================================

After the Ceph cluster is up and running, some specific Ceph pools and
Ceph client users must be created in order for OpenStack components
to be able to use Ceph.

* The OpenStack cinder-volume component requires the "cinder-volumes"
  pool and the "cinder-volume" client to exist.
* The OpenStack cinder-backup component requires the "cinder-backups"
  pool and the "cinder-backup" client to exist.
* The OpenStack Glance component requires the "images" pool and the
  "glance" client to exist.
* The OpenStack nova-compute component requires the "cinder-volumes"
  pool and the "cinder-volume" client to exist.

After the system is booted up, all of these required pools and clients
are created automatically by the script /etc/ceph/ceph-openstack-setup.sh.

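For reference, creating one such pool/client pair by hand could look like the sketch below. The pg count of 128 and the capability strings are assumptions, not taken from ceph-openstack-setup.sh:

```shell
# Create a pool and a client whose access is limited to that pool.
create_pool_and_client() {
    pool="$1"; client="$2"
    ceph osd pool create "${pool}" 128
    ceph auth get-or-create "client.${client}" \
        mon 'allow r' \
        osd "allow rwx pool=${pool}" \
        -o "/etc/ceph/ceph.client.${client}.keyring"
}
# e.g. create_pool_and_client cinder-volumes cinder-volume
```
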
Cinder-volume and Ceph
======================

Cinder-volume supports multiple backends, including Ceph RBD. When a
volume is created with "--volume_type cephrbd"

$ cinder create --volume_type cephrbd --display_name cephrbd_vol_1 1

where the "cephrbd" type can be created as follows:

$ cinder type-create cephrbd
$ cinder type-key cephrbd set volume_backend_name=RBD_CEPH

then the cinder-volume Ceph backend driver will store the volume in the
Ceph pool named "cinder-volumes".

On the Controller node, to list what is in the "cinder-volumes" pool:

$ rbd -p cinder-volumes ls
volume-b5294a0b-5c92-4b2f-807e-f49c5bc1896b

The following configuration options in /etc/cinder/cinder.conf affect
how cinder-volume interacts with the Ceph cluster through the
cinder-volume Ceph backend:

volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_pool=cinder-volumes
rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot=false
rbd_max_clone_depth=5
rbd_user=cinder-volume
volume_backend_name=RBD_CEPH


Cinder-backup and Ceph
======================

Cinder-backup can store a volume backup in the Ceph "cinder-backups"
pool with the following command:

$ cinder backup-create <cinder volume ID>

where <cinder volume ID> is the ID of an existing Cinder volume.

Cinder-backup is not able to create a backup of a cinder volume that
is backed by NFS or GlusterFS, because the NFS and GlusterFS
cinder-volume backend drivers do not support the backup functionality.
In other words, only cinder volumes backed by lvm-iscsi and ceph-rbd
can be backed up by cinder-backup.

On the Controller node, to list what is in the "cinder-backups" pool:

$ rbd -p "cinder-backups" ls

The following configuration options in /etc/cinder/cinder.conf affect
how cinder-backup interacts with the Ceph cluster:

backup_driver=cinder.backup.drivers.ceph
backup_ceph_conf=/etc/ceph/ceph.conf
backup_ceph_user=cinder-backup
backup_ceph_chunk_size=134217728
backup_ceph_pool=cinder-backups
backup_ceph_stripe_unit=0
backup_ceph_stripe_count=0
restore_discard_excess_bytes=true


Glance and Ceph
===============

Glance can store images in the Ceph pool "images" when "default_store = rbd"
is set in /etc/glance/glance-api.conf.

By default, "default_store" has the value "file", which tells Glance to
store images in the local filesystem. The "default_store" value can be
set at compile time through the variable GLANCE_DEFAULT_STORE.

The following configuration options in /etc/glance/glance-api.conf affect
how Glance interacts with the Ceph cluster:

default_store = rbd
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_user = glance
rbd_store_pool = images
rbd_store_chunk_size = 8


Nova-compute and Ceph
=====================

On the Controller node, when a VM is booted with the command:

$ nova boot --image <glance image ID> ...

then on the Compute node, if "libvirt_images_type = default" (in
/etc/nova/nova.conf), nova-compute will download the specified glance
image from the Controller node and store it locally (on the Compute
node). If "libvirt_images_type = rbd", then nova-compute will import
the specified glance image into the "cinder-volumes" Ceph pool.

By default, "libvirt_images_type" has the value "default", and it can
be changed at compile time through the variable LIBVIRT_IMAGES_TYPE.

Underneath, nova-compute uses libvirt to spawn VMs. If a Ceph cinder
volume is provided while booting a VM with the option
"--block-device <options>", then a libvirt secret must be provided to
nova-compute to allow libvirt to authenticate with Cephx before libvirt
can mount the Ceph block device. This libvirt secret is provided
through the "rbd_secret_uuid" option in /etc/nova/nova.conf.

Therefore, on the Compute node, if "libvirt_images_type = rbd" then the
following are required:

* /etc/ceph/ceph.client.cinder-volume.keyring exists. This file
  contains the ceph client.cinder-volume's key, so that nova-compute
  can run the restricted Ceph commands allowed for the cinder-volume
  Ceph client. For example:

  $ rbd -p cinder-backups ls --id cinder-volume

  should fail, as the "cinder-volume" Ceph client has no permission to
  touch the "cinder-backups" ceph pool, while the following should work:

  $ rbd -p cinder-volumes ls --id cinder-volume

* Also, there must exist a libvirt secret which stores the Ceph
  client.cinder-volume's key.

Right now, due to security and the booting order of the Controller and
Compute nodes, these 2 requirements are not automatically satisfied at
boot time.

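For reference, satisfying the libvirt-secret requirement by hand could look like the following sketch. The generated UUID and the secret's usage name are assumptions, and set_nova_compute_cephx.sh automates the equivalent steps:

```shell
# Generate a libvirt secret definition for the cinder-volume Ceph key.
UUID=$(cat /proc/sys/kernel/random/uuid)
cat > /tmp/ceph-secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>${UUID}</uuid>
  <usage type='ceph'>
    <name>client.cinder-volume secret</name>
  </usage>
</secret>
EOF
# On the Compute node (requires libvirt and the transferred keyring):
#   virsh secret-define --file /tmp/ceph-secret.xml
#   virsh secret-set-value --secret ${UUID} \
#       --base64 $(ceph auth get-key client.cinder-volume)
# Then set rbd_secret_uuid=${UUID} in /etc/nova/nova.conf.
```
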
A script (/etc/ceph/set_nova_compute_cephx.sh) is provided to ease the
task of transferring ceph.client.cinder-volume.keyring from the
Controller node to the Compute node, and of creating the libvirt
secret. On the Controller node, manually run (just one time):

$ /etc/ceph/set_nova_compute_cephx.sh cinder-volume root@compute

The following configuration options in /etc/nova/nova.conf affect
how nova-compute interacts with the Ceph cluster:

libvirt_images_type = rbd
libvirt_images_rbd_pool=cinder-volumes
libvirt_images_rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_user=cinder-volume
rbd_secret_uuid=<libvirt secret UUID>


Ceph High Availability
======================

Ceph, by design, has strong high-availability features. Each Ceph
object can be replicated and stored on multiple independent physical
disk stores (controlled by Ceph OSD daemons), which are either in the
same machine or in separate machines.

The number of replicas is configurable. In general, the more replicas,
the higher the availability; the downside is that more physical disk
storage space is required.

Also, in general, each Ceph object's replicas should be stored on
different machines, so that if one machine goes down, the other
replicas are still available.

The default OpenStack Ceph cluster is configured to keep 2 replicas.
However, these 2 replicas are stored on the same machine (the
Controller node).


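The replica count is a per-pool setting. A sketch of inspecting and changing it (values are illustrative; real redundancy also needs OSDs on separate hosts, which this default single-node setup does not provide):

```shell
# Show, then change, the number of replicas kept for a pool.
set_pool_replicas() {
    pool="$1"; count="$2"
    ceph osd pool get "${pool}" size
    ceph osd pool set "${pool}" size "${count}"
}
# e.g. set_pool_replicas cinder-volumes 2
```
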
Build Configuration Options
===========================

* Controller build config options:

    --enable-board=intel-xeon-core \
    --enable-rootfs=ovp-openstack-controller \
    --enable-kernel=preempt-rt \
    --enable-addons=wr-ovp-openstack,wr-ovp \
    --with-template=feature/openstack-tests \
    --enable-unsupported-config=yes

* Compute build config options:

    --enable-board=intel-xeon-core \
    --enable-rootfs=ovp-openstack-compute \
    --enable-kernel=preempt-rt \
    --enable-addons=wr-ovp-openstack,wr-ovp \
    --enable-unsupported-config=yes


Testing Commands and Expected Results
=====================================

This section describes the test steps and expected results that
demonstrate that Ceph is integrated properly into OpenStack.

Please note: the following commands are carried out on the Controller
node, unless otherwise explicitly indicated.

$ Start the Controller and Compute nodes on the hardware targets

$ ps aux | grep ceph

root 2986 0.0 0.0 1059856 22320 ? Sl 02:50 0:08 /usr/bin/ceph-mon -i controller --pid-file /var/run/ceph/mon.controller.pid -c /etc/ceph/ceph.conf
root 3410 0.0 0.2 3578292 153144 ? Ssl 02:50 0:36 /usr/bin/ceph-osd -i 0 --pid-file /var/run/ceph/osd.0.pid -c /etc/ceph/ceph.conf
root 3808 0.0 0.0 3289468 34428 ? Ssl 02:50 0:36 /usr/bin/ceph-osd -i 1 --pid-file /var/run/ceph/osd.1.pid -c /etc/ceph/ceph.conf

$ ceph osd lspools

0 data,1 metadata,2 rbd,3 cinder-volumes,4 cinder-backups,5 images,

$ neutron net-create mynetwork
$ neutron net-list

+--------------------------------------+-----------+---------+
| id                                   | name      | subnets |
+--------------------------------------+-----------+---------+
| 15157fda-0940-4eba-853d-52338ace3362 | mynetwork |         |
+--------------------------------------+-----------+---------+

$ glance image-create --name myfirstimage --is-public true --container-format bare --disk-format qcow2 --file /root/images/cirros-0.3.0-x86_64-disk.img
$ nova boot --image myfirstimage --flavor 1 myinstance
$ nova list

+--------------------------------------+------------+--------+------------+-------------+----------+
| ID                                   | Name       | Status | Task State | Power State | Networks |
+--------------------------------------+------------+--------+------------+-------------+----------+
| 26c2af98-dc78-465b-a6c2-bb52188d2b42 | myinstance | ACTIVE | -          | Running     |          |
+--------------------------------------+------------+--------+------------+-------------+----------+

$ nova delete 26c2af98-dc78-465b-a6c2-bb52188d2b42
$ nova list

+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+

$ Modify /etc/glance/glance-api.conf
  to change "default_store = file" to "default_store = rbd"
$ /etc/init.d/glance-api restart

$ /etc/cinder/add-cinder-volume-types.sh
$ cinder extra-specs-list

+--------------------------------------+-----------+------------------------------------------+
| ID                                   | Name      | extra_specs                              |
+--------------------------------------+-----------+------------------------------------------+
| 4cb4ae4a-600a-45fb-9332-aa72371c5985 | lvm_iscsi | {u'volume_backend_name': u'LVM_iSCSI'}   |
| 83b3ee5f-a6f6-4fea-aeef-815169ee83b9 | glusterfs | {u'volume_backend_name': u'GlusterFS'}   |
| c1570914-a53a-44e4-8654-fbd960130b8e | cephrbd   | {u'volume_backend_name': u'RBD_CEPH'}    |
| d38811d4-741a-4a68-afe3-fb5892160d7c | nfs       | {u'volume_backend_name': u'Generic_NFS'} |
+--------------------------------------+-----------+------------------------------------------+

$ glance image-create --name mysecondimage --is-public true --container-format bare --disk-format qcow2 --file /root/images/cirros-0.3.0-x86_64-disk.img
$ glance image-list

+--------------------------------------+---------------+-------------+------------------+---------+--------+
| ID                                   | Name          | Disk Format | Container Format | Size    | Status |
+--------------------------------------+---------------+-------------+------------------+---------+--------+
| bec1580e-2475-4d1d-8d02-cca53732d17b | myfirstimage  | qcow2       | bare             | 9761280 | active |
| a223e5f7-a4b5-4239-96ed-a242db2a150a | mysecondimage | qcow2       | bare             | 9761280 | active |
+--------------------------------------+---------------+-------------+------------------+---------+--------+

$ rbd -p images ls

a223e5f7-a4b5-4239-96ed-a242db2a150a

$ cinder create --volume_type lvm_iscsi --image-id a223e5f7-a4b5-4239-96ed-a242db2a150a --display_name=lvm_vol_2 1
$ cinder create --volume_type lvm_iscsi --display_name=lvm_vol_1 1
$ cinder create --volume_type nfs --display_name nfs_vol_1 1
$ cinder create --volume_type glusterfs --display_name glusterfs_vol_1 1
$ cinder create --volume_type cephrbd --display_name cephrbd_vol_1 1
$ cinder list

+--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+
| ID                                   | Status    | Display Name    | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+
| 4a32a4fb-b670-4ed7-8dc8-f4e6f9b52db3 | available | lvm_vol_2       | 1    | lvm_iscsi   | true     |             |
| b0805546-be7a-4908-b1d5-21202fe6ea79 | available | glusterfs_vol_1 | 1    | glusterfs   | false    |             |
| c905b9b1-10cb-413b-a949-c86ff3c1c4c6 | available | cephrbd_vol_1   | 1    | cephrbd     | false    |             |
| cea76733-b4ce-4e9a-9bfb-24cc3066070f | available | nfs_vol_1       | 1    | nfs         | false    |             |
| e85a0e2c-6dc4-4182-ab7d-f3950f225ee4 | available | lvm_vol_1       | 1    | lvm_iscsi   | false    |             |
+--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+

$ rbd -p cinder-volumes ls

volume-c905b9b1-10cb-413b-a949-c86ff3c1c4c6

(This uuid matches the one in the cinder list above)


$ cinder backup-create e85a0e2c-6dc4-4182-ab7d-f3950f225ee4
(create a backup of the lvm-iscsi volume)

$ cinder backup-create cea76733-b4ce-4e9a-9bfb-24cc3066070f
(create a backup of the nfs volume; this should fail, as nfs volumes
do not support volume backup)

$ cinder backup-create c905b9b1-10cb-413b-a949-c86ff3c1c4c6
(create a backup of the ceph volume)

$ cinder backup-create b0805546-be7a-4908-b1d5-21202fe6ea79
(create a backup of the glusterfs volume; this should fail, as
glusterfs volumes do not support volume backup)

$ cinder backup-list

+--------------------------------------+--------------------------------------+-----------+------+------+--------------+----------------+
| ID                                   | Volume ID                            | Status    | Name | Size | Object Count | Container      |
+--------------------------------------+--------------------------------------+-----------+------+------+--------------+----------------+
| 287502a0-aa4d-4065-93e0-f72fd5c239f5 | cea76733-b4ce-4e9a-9bfb-24cc3066070f | error     | None | 1    | None         | None           |
| 2b0ca8a7-a827-4f1c-99d5-4fb7d9f25b5c | e85a0e2c-6dc4-4182-ab7d-f3950f225ee4 | available | None | 1    | None         | cinder-backups |
| 32d10c06-a742-45d6-9e13-777767ff5545 | c905b9b1-10cb-413b-a949-c86ff3c1c4c6 | available | None | 1    | None         | cinder-backups |
| e2bdf21c-d378-49b3-b5e3-b398964b925c | b0805546-be7a-4908-b1d5-21202fe6ea79 | error     | None | 1    | None         | None           |
+--------------------------------------+--------------------------------------+-----------+------+------+--------------+----------------+

$ rbd -p cinder-backups ls

volume-0c3f82ea-b3df-414e-b054-7a4977b7e354.backup.94358fed-6bd9-48f1-b67a-4d2332311a1f
volume-219a3250-50b4-4db0-9a6c-55e53041b65e.backup.base

(There should be only 2 backup volumes in the ceph cinder-backups pool)

$ On Compute node: rbd -p cinder-volumes --id cinder-volume ls

2014-03-17 13:03:54.617373 7f8673602780 -1 monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication
2014-03-17 13:03:54.617378 7f8673602780  0 librados: client.admin initialization error (2) No such file or directory
rbd: couldn't connect to the cluster!

(This should fail, as the Compute node does not have the ceph
cinder-volume keyring yet)

$ /bin/bash /etc/ceph/set_nova_compute_cephx.sh cinder-volume root@compute

The authenticity of host 'compute (128.224.149.169)' can't be established.
ECDSA key fingerprint is 6a:79:95:fa:d6:56:0d:72:bf:5e:cb:59:e0:64:f6:7a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'compute,128.224.149.169' (ECDSA) to the list of known hosts.
root@compute's password:
Run virsh secret-define:
Secret 96dfc68f-3528-4bd0-a226-17a0848b05da created

Run virsh secret-set-value:
Secret value set

$ On Compute node: rbd -p cinder-volumes --id cinder-volume ls

volume-c905b9b1-10cb-413b-a949-c86ff3c1c4c6

$ On Compute node: to allow nova-compute to save glance images into
  ceph (by default it saves them in the local filesystem at
  /etc/nova/instances), modify /etc/nova/nova.conf to change:

  libvirt_images_type = default

  into

  libvirt_images_type = rbd

$ On Compute node: /etc/init.d/nova-compute restart

$ nova boot --flavor 1 \
    --image mysecondimage \
    --block-device source=volume,id=b0805546-be7a-4908-b1d5-21202fe6ea79,dest=volume,shutdown=preserve \
    --block-device source=volume,id=c905b9b1-10cb-413b-a949-c86ff3c1c4c6,dest=volume,shutdown=preserve \
    --block-device source=volume,id=cea76733-b4ce-4e9a-9bfb-24cc3066070f,dest=volume,shutdown=preserve \
    --block-device source=volume,id=e85a0e2c-6dc4-4182-ab7d-f3950f225ee4,dest=volume,shutdown=preserve \
    myinstance

$ rbd -p cinder-volumes ls

instance-00000002_disk
volume-219a3250-50b4-4db0-9a6c-55e53041b65e

(We should see an instance-000000xx_disk ceph object)

$ nova list

+--------------------------------------+------------+--------+------------+-------------+----------+
| ID                                   | Name       | Status | Task State | Power State | Networks |
+--------------------------------------+------------+--------+------------+-------------+----------+
| 2a6aeff9-5a35-45a1-b8c4-0730df2a767a | myinstance | ACTIVE | -          | Running     |          |
+--------------------------------------+------------+--------+------------+-------------+----------+

$ From dashboard, log into the VM console and run "cat /proc/partitions"

Should be able to log in and see vdb, vdc, vdd, vde 1G block devices

$ nova delete 2a6aeff9-5a35-45a1-b8c4-0730df2a767a

$ nova list

+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+

$ rbd -p cinder-volumes ls

volume-c905b9b1-10cb-413b-a949-c86ff3c1c4c6

(The instance-000000xx_disk object should be gone)

$ nova boot --flavor 1 \
    --image mysecondimage \
    --block-device source=volume,id=b0805546-be7a-4908-b1d5-21202fe6ea79,dest=volume,shutdown=preserve \
    --block-device source=volume,id=c905b9b1-10cb-413b-a949-c86ff3c1c4c6,dest=volume,shutdown=preserve \
    --block-device source=volume,id=cea76733-b4ce-4e9a-9bfb-24cc3066070f,dest=volume,shutdown=preserve \
    --block-device source=volume,id=e85a0e2c-6dc4-4182-ab7d-f3950f225ee4,dest=volume,shutdown=preserve \
    myinstance

$ nova list

+--------------------------------------+------------+--------+------------+-------------+----------+
| ID                                   | Name       | Status | Task State | Power State | Networks |
+--------------------------------------+------------+--------+------------+-------------+----------+
| c1866b5f-f731-4d9c-b855-7f82f3fb314f | myinstance | ACTIVE | -          | Running     |          |
+--------------------------------------+------------+--------+------------+-------------+----------+

$ From dashboard, log into the VM console and run "cat /proc/partitions"

Should be able to log in and see vdb, vdc, vdd, vde 1G block devices

$ nova delete c1866b5f-f731-4d9c-b855-7f82f3fb314f
$ nova list

+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+

$ cinder list

+--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+
| ID                                   | Status    | Display Name    | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+
| 4a32a4fb-b670-4ed7-8dc8-f4e6f9b52db3 | available | lvm_vol_2       | 1    | lvm_iscsi   | true     |             |
| b0805546-be7a-4908-b1d5-21202fe6ea79 | available | glusterfs_vol_1 | 1    | glusterfs   | false    |             |
| c905b9b1-10cb-413b-a949-c86ff3c1c4c6 | available | cephrbd_vol_1   | 1    | cephrbd     | false    |             |
| cea76733-b4ce-4e9a-9bfb-24cc3066070f | available | nfs_vol_1       | 1    | nfs         | false    |             |
| e85a0e2c-6dc4-4182-ab7d-f3950f225ee4 | available | lvm_vol_1       | 1    | lvm_iscsi   | false    |             |
+--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+

(All the volumes should be available)

$ ceph -s

    cluster 9afd3ca8-50e0-4f71-9fc0-e9034d14adf0
     health HEALTH_OK
     monmap e1: 1 mons at {controller=128.224.149.168:6789/0}, election epoch 2, quorum 0 controller
     osdmap e22: 2 osds: 2 up, 2 in
      pgmap v92: 342 pgs, 6 pools, 9532 kB data, 8 objects
            2143 MB used, 18316 MB / 20460 MB avail
                 342 active+clean

(Should see "health HEALTH_OK", which indicates the Ceph cluster is all good)

$ nova boot --flavor 1 \
    --image myfirstimage \
    --block-device source=volume,id=c905b9b1-10cb-413b-a949-c86ff3c1c4c6,dest=volume,shutdown=preserve \
    myinstance

(Booting a VM with only the existing Ceph RBD Cinder volume as a block device)

$ nova list

+--------------------------------------+------------+--------+------------+-------------+----------+
| ID                                   | Name       | Status | Task State | Power State | Networks |
+--------------------------------------+------------+--------+------------+-------------+----------+
| 4e984fd0-a0af-435f-84a1-ecd6b24b7256 | myinstance | ACTIVE | -          | Running     |          |
+--------------------------------------+------------+--------+------------+-------------+----------+

$ From dashboard, log into the VM console. Assume that the second block
  device (the Ceph RBD volume) is /dev/vdb
$ On VM, run: "sudo mkfs.ext4 /dev/vdb && sudo mkdir ceph && sudo mount /dev/vdb ceph && sudo chmod 777 -R ceph"
$ On VM, run: "echo "Hello World" > ceph/test.log && dd if=/dev/urandom of=ceph/512M bs=1M count=512 && sync"
$ On VM, run: "cat ceph/test.log && sudo umount ceph"

Hello World

$ /etc/init.d/ceph stop osd.0
$ On VM, run: "sudo mount /dev/vdb ceph && echo "Hello World" > ceph/test_1.log"
$ On VM, run: "cat ceph/test*.log && sudo umount ceph"

Hello World
Hello World

$ /etc/init.d/ceph start osd.0
$ Wait until "ceph -s" shows "health HEALTH_OK"
$ /etc/init.d/ceph stop osd.1
$ On VM, run: "sudo mount /dev/vdb ceph && echo "Hello World" > ceph/test_2.log"
$ On VM, run: "cat ceph/test*.log && sudo umount ceph"

Hello World
Hello World
Hello World

$ /etc/init.d/ceph stop osd.0
(Both Ceph OSD daemons are now down, so no Ceph Cinder volume is available)

$ On VM, run "sudo mount /dev/vdb ceph"
(The mount hangs, as the Ceph Cinder volume is not available)

$ /etc/init.d/ceph start osd.0
$ /etc/init.d/ceph start osd.1
$ On VM, the previous mount should now complete
$ On VM, run: "sudo mount /dev/vdb ceph && echo "Hello World" > ceph/test_3.log"
$ On VM, run: "cat ceph/test*.log && sudo umount ceph"

Hello World
Hello World
Hello World
Hello World

$ nova delete 4e984fd0-a0af-435f-84a1-ecd6b24b7256
$ cinder list

+--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+
| ID                                   | Status    | Display Name    | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+
| 4a32a4fb-b670-4ed7-8dc8-f4e6f9b52db3 | available | lvm_vol_2       | 1    | lvm_iscsi   | true     |             |
| b0805546-be7a-4908-b1d5-21202fe6ea79 | available | glusterfs_vol_1 | 1    | glusterfs   | false    |             |
| c905b9b1-10cb-413b-a949-c86ff3c1c4c6 | available | cephrbd_vol_1   | 1    | cephrbd     | false    |             |
| cea76733-b4ce-4e9a-9bfb-24cc3066070f | available | nfs_vol_1       | 1    | nfs         | false    |             |
| e85a0e2c-6dc4-4182-ab7d-f3950f225ee4 | available | lvm_vol_1       | 1    | lvm_iscsi   | false    |             |
+--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+

(All the volumes should be available)


Additional References
=====================

* https://ceph.com/docs/master/rbd/rbd-openstack/
