Integration with OpenStack services
Instructions to integrate Ceph with OpenStack (Mitaka version)
Install ceph on OpenStack nodes
The following instructions need to be performed:
- on the controller nodes
- on all the compute nodes (if ceph will be integrated with Cinder), or only on the compute nodes that will use ceph for the nova ephemeral storage
rpm -Uvh https://download.ceph.com/rpm-jewel/el7/noarch/ceph-release-1-1.el7.noarch.rpm
yum clean all
yum install ceph-common
scp <ceph-node>:/etc/ceph/ceph.conf /etc/ceph/ceph.conf
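As a quick optional sanity check, you can verify on each node that the ceph client is installed and that the cluster configuration copied from the ceph node is in place:

# check that the ceph CLI shipped with ceph-common is available
ceph --version
# check that the cluster configuration copied from the ceph node is readable
cat /etc/ceph/ceph.conf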
Create ceph pools and ceph users
Create ceph pools:
ceph osd pool create volumes-test 128 128
ceph osd pool create images-test 128 128
ceph osd pool create backups-test 128 128
ceph osd pool create vms-test 128 128
Create ceph users:
ceph auth get-or-create client.cinder-test mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes-test, allow rwx pool=vms-test, allow rwx pool=images-test' -o /etc/ceph/ceph.client.cinder-test.keyring
ceph auth get-or-create client.glance-test mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images-test' -o /etc/ceph/ceph.client.glance-test.keyring
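To double check what has just been created, on a ceph node (where the admin keyring is available) you can list the pools and dump the capabilities of the new users, for example:

# the volumes-test, images-test, backups-test and vms-test pools should be listed
ceph osd lspools
# show the capabilities granted to the new ceph users
ceph auth get client.cinder-test
ceph auth get client.glance-test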
Configure libvirt on compute nodes
On a ceph node, extract the key for the cinder ceph user to a file:
ceph auth get-key client.cinder-test -o client.cinder-test.key
Create a secret.xml file as in the following example:
[root@ceph-mon-01 ~]# uuidgen
c5466c9f-dce2-420f-ada7-c4aa97b58a58
[root@ceph-mon-01 ~]# cat > secret.xml <<EOF
> <secret ephemeral='no' private='no'>
>   <uuid>c5466c9f-dce2-420f-ada7-c4aa97b58a58</uuid>
>   <usage type='ceph'>
>     <name>client.cinder-test secret</name>
>   </usage>
> </secret>
> EOF
The instructions reported below are needed:
- on all compute nodes, if cinder will be integrated with ceph
- or only on the compute nodes that will use ceph for the nova ephemeral storage

First of all, copy the client.cinder-test.key and secret.xml files to the compute node.
Then issue the following commands:
# virsh secret-define --file secret.xml
Secret c5466c9f-dce2-420f-ada7-c4aa97b58a58 created
# virsh secret-set-value --secret c5466c9f-dce2-420f-ada7-c4aa97b58a58 --base64 $(cat client.cinder-test.key)
Secret value set
# rm client.cinder-test.key secret.xml
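To verify that libvirt has stored the secret correctly, you can list the defined secrets and read the value back; the base64 string must match the key extracted on the ceph node:

# the uuid defined in secret.xml should appear in the list
virsh secret-list
# print the stored value; it must match the content of client.cinder-test.key
virsh secret-get-value c5466c9f-dce2-420f-ada7-c4aa97b58a58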
On all compute nodes (if ceph has been enabled for cinder), or only on the compute nodes that use ceph for the nova ephemeral storage, edit /etc/nova/nova.conf adding in the [libvirt] section:
rbd_user = cinder-test
rbd_secret_uuid = c5466c9f-dce2-420f-ada7-c4aa97b58a58
Configure ceph as additional backend for cinder
Starting point: a CentOS Mitaka cloud using gluster as backend for Cinder. The following instructions explain how to add another backend using ceph.
On the controller/storage node edit /etc/ceph/ceph.conf, adding the following lines in the [global] section:
rbd default format = 2
rbd default features = 3
This is needed to disable some rbd features not implemented in the default CentOS 7 kernel (otherwise the volume attachment doesn't work).
Copy the /etc/ceph/ceph.client.cinder-test.keyring file generated above (on a ceph node) to the controller/storage nodes, and set the proper ownership and mode:
chown cinder.cinder /etc/ceph/ceph.client.cinder-test.keyring
chmod 0640 /etc/ceph/ceph.client.cinder-test.keyring
On the controller/storage node some changes are needed in /etc/cinder/cinder.conf, to:
- enable the ceph backend (attribute enabled_backends)
- set ceph as default volume type (attribute default_volume_type)
- create two sections for the gluster and ceph volume types
- move the attributes volume_driver, glusterfs_shares_config and nas_volume_prov_type from the [DEFAULT] section into the new [gluster] section
[DEFAULT]
enabled_backends=ceph,gluster
default_volume_type=ceph

[gluster]
volume_group=gluster
volume_backend_name=gluster
volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
glusterfs_shares_config = /etc/cinder/shares
nas_volume_prov_type = thin

[ceph]
volume_group=ceph
volume_backend_name=ceph
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes-test
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder-test
rbd_secret_uuid = c5466c9f-dce2-420f-ada7-c4aa97b58a58  # uuid set in the secret.xml file (see above)
Restart the cinder services on the controller/storage nodes:
# systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service openstack-cinder-volume.service target.service
Disable the "old" cinder-volume services:
[root@blade-indigo-01 ~]# cinder service-list
+------------------+------------------------------------------+------+---------+-------+----------------------------+-----------------+
| Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+------------------+------------------------------------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | blade-indigo-01.cloud.pd.infn.it | nova | enabled | up | 2017-03-22T16:29:37.000000 | - |
| cinder-volume | blade-indigo-01.cloud.pd.infn.it | nova | enabled | down | 2017-03-21T14:03:40.000000 | - |
| cinder-volume | blade-indigo-01.cloud.pd.infn.it@ceph | nova | enabled | up | 2017-03-22T16:29:39.000000 | - |
| cinder-volume | blade-indigo-01.cloud.pd.infn.it@gluster | nova | enabled | up | 2017-03-22T16:29:39.000000 | - |
+------------------+------------------------------------------+------+---------+-------+----------------------------+-----------------+
[root@blade-indigo-01 ~]# cinder service-disable blade-indigo-01.cloud.pd.infn.it cinder-volume
+----------------------------------+---------------+----------+
| Host | Binary | Status |
+----------------------------------+---------------+----------+
| blade-indigo-01.cloud.pd.infn.it | cinder-volume | disabled |
+----------------------------------+---------------+----------+
[root@blade-indigo-01 ~]# cinder service-list
+------------------+------------------------------------------+------+----------+-------+----------------------------+-----------------+
| Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+------------------+------------------------------------------+------+----------+-------+----------------------------+-----------------+
| cinder-scheduler | blade-indigo-01.cloud.pd.infn.it | nova | enabled | up | 2017-03-22T16:34:57.000000 | - |
| cinder-volume | blade-indigo-01.cloud.pd.infn.it | nova | disabled | down | 2017-03-22T16:34:57.000000 | - |
| cinder-volume | blade-indigo-01.cloud.pd.infn.it@ceph | nova | enabled | up | 2017-03-22T16:34:59.000000 | - |
| cinder-volume | blade-indigo-01.cloud.pd.infn.it@gluster | nova | enabled | up | 2017-03-22T16:34:59.000000 | - |
+------------------+------------------------------------------+------+----------+-------+----------------------------+-----------------+
[root@blade-indigo-01 ~]#
Create the volume types:
# cinder type-create gluster
# cinder type-key gluster set volume_backend_name=gluster
# cinder type-create ceph
# cinder type-key ceph set volume_backend_name=ceph
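At this point it should be possible to create a test volume on the ceph backend and check that the corresponding rbd image appears in the volumes-test pool (the volume name below is just an example):

# create a 1 GB test volume using the ceph volume type
cinder create --volume-type ceph --name ceph-test-vol 1
# on the controller/storage node, list the rbd images in the pool with the cinder ceph user:
# a volume-<uuid> image matching the new cinder volume should be listed
rbd --id cinder-test -p volumes-test ls
# rbd info on that image should not report features (exclusive-lock, object-map, ...) that are
# unsupported by the CentOS 7 kernel, thanks to the "rbd default features = 3" setting
rbd --id cinder-test -p volumes-test info volume-<uuid>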
For the volumes created before this change, there will be problems attaching/detaching them unless some changes are made in the database, as explained in http://dischord.org/2015/12/22/cinder-multi-backend-with-multiple-ceph-pools/
Example of volume created before the change:
MariaDB [cinder]> select host from volumes where id='3af86ecd-40a8-4302-a68d-191c57f4a493';
+--------------------------------------------+
| host |
+--------------------------------------------+
| blade-indigo-01.cloud.pd.infn.it#GlusterFS |
+--------------------------------------------+
1 row in set (0.00 sec)
It is necessary to change the host (that must be <controller>@gluster#gluster):
MariaDB [cinder]> update volumes set host='blade-indigo-01.cloud.pd.infn.it@gluster#gluster' where id='3af86ecd-40a8-4302-a68d-191c57f4a493';
Query OK, 1 row affected (0.03 sec)
Rows matched: 1  Changed: 1  Warnings: 0

MariaDB [cinder]> select host from volumes where id='3af86ecd-40a8-4302-a68d-191c57f4a493';
+--------------------------------------------------+
| host |
+--------------------------------------------------+
| blade-indigo-01.cloud.pd.infn.it@gluster#gluster |
+--------------------------------------------------+
1 row in set (0.00 sec)

MariaDB [cinder]>
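To find all the volumes that still reference the old host format (and therefore need the same fix), a query like the following can be used (a sketch, assuming the standard cinder schema and that the database credentials are available to the mysql client):

# list the non-deleted volumes whose host has not yet been updated to the <host>@<backend>#<pool> form
mysql cinder -e "select id, host from volumes where deleted=0 and host not like '%@%';"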
If you want to prevent the creation of gluster volumes, "hide" the gluster type:
[root@blade-indigo-01 ~]# cinder type-list
+--------------------------------------+---------+-------------+-----------+
| ID | Name | Description | Is_Public |
+--------------------------------------+---------+-------------+-----------+
| 10f3f44b-7b31-4291-9566-7c0d23075be9 | ceph | - | True |
| abc5bd48-3259-486d-b604-e36976c00699 | gluster | - | True |
+--------------------------------------+---------+-------------+-----------+
[root@blade-indigo-01 ~]# cinder type-update --is-public false abc5bd48-3259-486d-b604-e36976c00699
+--------------------------------------+---------+-------------+-----------+
| ID | Name | Description | Is_Public |
+--------------------------------------+---------+-------------+-----------+
| abc5bd48-3259-486d-b604-e36976c00699 | gluster | - | False |
+--------------------------------------+---------+-------------+-----------+
and disable the relevant service:
[root@blade-indigo-01 ~]# cinder service-list
+------------------+------------------------------------------+------+----------+-------+----------------------------+-----------------+
| Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+------------------+------------------------------------------+------+----------+-------+----------------------------+-----------------+
| cinder-scheduler | blade-indigo-01.cloud.pd.infn.it | nova | enabled | up | 2017-03-24T14:51:41.000000 | - |
| cinder-volume | blade-indigo-01.cloud.pd.infn.it | nova | disabled | down | 2017-03-22T16:34:57.000000 | - |
| cinder-volume | blade-indigo-01.cloud.pd.infn.it@ceph | nova | enabled | up | 2017-03-24T14:51:41.000000 | - |
| cinder-volume | blade-indigo-01.cloud.pd.infn.it@gluster | nova | enabled | up | 2017-03-24T14:51:41.000000 | - |
+------------------+------------------------------------------+------+----------+-------+----------------------------+-----------------+
[root@blade-indigo-01 ~]# cinder service-disable blade-indigo-01.cloud.pd.infn.it@gluster cinder-volume
+------------------------------------------+---------------+----------+
| Host | Binary | Status |
+------------------------------------------+---------------+----------+
| blade-indigo-01.cloud.pd.infn.it@gluster | cinder-volume | disabled |
+------------------------------------------+---------------+----------+
[root@blade-indigo-01 ~]# cinder service-list
+------------------+------------------------------------------+------+----------+-------+----------------------------+-----------------+
| Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+------------------+------------------------------------------+------+----------+-------+----------------------------+-----------------+
| cinder-scheduler | blade-indigo-01.cloud.pd.infn.it | nova | enabled | up | 2017-03-24T14:52:01.000000 | - |
| cinder-volume | blade-indigo-01.cloud.pd.infn.it | nova | disabled | down | 2017-03-22T16:34:57.000000 | - |
| cinder-volume | blade-indigo-01.cloud.pd.infn.it@ceph | nova | enabled | up | 2017-03-24T14:52:01.000000 | - |
| cinder-volume | blade-indigo-01.cloud.pd.infn.it@gluster | nova | disabled | up | 2017-03-24T14:52:01.000000 | - |
+------------------+------------------------------------------+------+----------+-------+----------------------------+-----------------+
[root@blade-indigo-01 ~]#
Configure ceph as ephemeral storage for nova on a compute node
Copy the /etc/ceph/ceph.client.cinder-test.keyring file generated above (on a ceph node) to the compute node, and set the proper ownership and mode:
chown nova.nova /etc/ceph/ceph.client.cinder-test.keyring
chmod 0640 /etc/ceph/ceph.client.cinder-test.keyring
Modify nova.conf on the compute node:
[libvirt]
images_type = rbd
images_rbd_pool = vms-test
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder-test
rbd_secret_uuid = c5466c9f-dce2-420f-ada7-c4aa97b58a58  # the uuid used in the secret.xml file
disk_cachemodes="network=writeback"
live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"
Restart the nova-compute service:
systemctl restart openstack-nova-compute.service
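To verify that the ephemeral disks of new instances actually land on ceph, boot a test instance on that compute node and list the vms-test pool:

# each instance booted with rbd ephemeral storage gets a <instance-uuid>_disk image in the pool
rbd --id cinder-test -p vms-test ls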
Configure ceph as additional backend for glance
Copy the /etc/ceph/ceph.client.glance-test.keyring file generated above (on a ceph node) to the controller node and set the proper ownership and mode:
chown glance.glance /etc/ceph/ceph.client.glance-test.keyring
chmod 0640 /etc/ceph/ceph.client.glance-test.keyring
On the controller node modify the glance-api.conf file in the following way:
[DEFAULT]
...
show_multiple_locations = True
show_image_direct_url = True
...
...
[glance_store]
#stores = file,http
stores = file,http,rbd
#default_store = file
default_store = rbd
rbd_store_pool = images-test
rbd_store_user = glance-test
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8
Restart the glance services:
systemctl restart openstack-glance-api.service openstack-glance-registry.service
New images will be written in ceph (because of the default_store setting). Images on gluster will still be usable.
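To check that a newly uploaded image really ends up in ceph, you can upload a test image and then list the images-test pool with the glance ceph user (the image name and file below are just examples):

# upload a test image; with default_store = rbd it should be written to the images-test pool
glance image-create --name cirros-test --disk-format qcow2 --container-format bare --file cirros.img
# an rbd image named after the id of the new glance image should appear
rbd --id glance-test -p images-test ls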
Move cinder volumes from gluster to ceph
To move a cinder volume from the gluster backend to the ceph backend, the idea (sketched below) is to:
- create a new volume in the ceph backend
- copy the content of the volume (using rsync or any other tool)
- remove the original volume
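A possible sequence, assuming a 10 GB volume and a utility instance to which both volumes can be attached (names, ids and device paths below are only placeholders), is:

# create the destination volume on the ceph backend, with the same size as the original one
cinder create --volume-type ceph --name myvol-ceph 10
# attach both the old (gluster) and the new (ceph) volume to a utility instance
nova volume-attach <instance-id> <old-volume-id>
nova volume-attach <instance-id> <new-volume-id>
# inside the instance, copy the data block by block (here /dev/vdb is the old volume, /dev/vdc the new one)
dd if=/dev/vdb of=/dev/vdc bs=4M
# detach both volumes, verify the data on the new one, then remove the original volume
cinder delete <old-volume-id>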
Move glance images from file (gluster) backend to the ceph backend
Let's consider an image that was created before adding the ceph backend and which therefore is stored in the file (gluster) backend.
Let's suppose that we want to move the image on ceph.
We can't simply delete and re-register the image, since its id would change and this would be a problem for instances created using that image.
Let's consider that this is the image:
[root@blade-indigo-01 ~]# glance image-show c1f096cf-90e9-4448-8f46-6f3a90a779e0
+------------------+----------------------------------------------------------------------------------+
| Property | Value |
+------------------+----------------------------------------------------------------------------------+
| checksum | ae8862f78e585b65271984cf0f4b1c83 |
| container_format | bare |
| created_at | 2017-04-14T14:48:18Z |
| direct_url | file:///var/lib/glance/images/c1f096cf-90e9-4448-8f46-6f3a90a779e0 |
| disk_format | qcow2 |
| id | c1f096cf-90e9-4448-8f46-6f3a90a779e0 |
| locations | [{"url": "file:///var/lib/glance/images/c1f096cf-90e9-4448-8f46-6f3a90a779e0", "metadata": {}}] |
| min_disk | 0 |
| min_ram | 0 |
| name | c7-qcow2-glutser |
| owner | a0480526a8e749cc89d7782b4aaba279 |
| protected | False |
| size | 1360199680 |
| status | active |
| tags | [] |
| updated_at | 2017-04-14T14:48:55Z |
| virtual_size | None |
| visibility | public |
+------------------+----------------------------------------------------------------------------------+
[root@blade-indigo-01 ~]#
Let's download the image.
[root@blade-indigo-01 ~]# glance image-download --progress --file c1f096cf-90e9-4448-8f46-6f3a90a779e0.file c1f096cf-90e9-4448-8f46-6f3a90a779e0
[=============================>] 100%
[root@blade-indigo-01 ~]#
[root@blade-indigo-01 ~]# ls -l c1f096cf-90e9-4448-8f46-6f3a90a779e0.file
-rw-r--r-- 1 root root 1360199680 Apr 14 17:26 c1f096cf-90e9-4448-8f46-6f3a90a779e0.file
[root@blade-indigo-01 ~]#
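Before uploading the file to ceph it is worth checking that it matches the checksum reported by glance image-show (glance checksums are plain md5):

# the output must match the checksum field shown by glance image-show
md5sum c1f096cf-90e9-4448-8f46-6f3a90a779e0.file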
Let's upload the image to ceph using the rbd_client.py tool (implemented by Lisa Zangrando, available at https://github.com/CloudPadovana/GlanceFileToCeph/blob/master/rbd_client.py):
[root@blade-indigo-01 ~]# ./rbd_client.py -i c1f096cf-90e9-4448-8f46-6f3a90a779e0 -f c1f096cf-90e9-4448-8f46-6f3a90a779e0.file
url: rbd://8162f291-00b6-4b40-a8b4-1981a8c09b64/images-test/c1f096cf-90e9-4448-8f46-6f3a90a779e0/snap
image size: 1360199680
checksum: ae8862f78e585b65271984cf0f4b1c83
[root@blade-indigo-01 ~]#
The script returns:
- the ceph URL location
- the size (check that it matches the one reported by the glance image-show command)
- the checksum (check that it matches the one reported by the glance image-show command)
Let's add the returned rbd location to the original image, and then remove the file location:
[root@blade-indigo-01 ~]# glance location-add --url rbd://8162f291-00b6-4b40-a8b4-1981a8c09b64/images-test/c1f096cf-90e9-4448-8f46-6f3a90a779e0/snap c1f096cf-90e9-4448-8f46-6f3a90a779e0
+------------------+----------------------------------------------------------------------------------+
| Property | Value |
+------------------+----------------------------------------------------------------------------------+
| checksum | ae8862f78e585b65271984cf0f4b1c83 |
| container_format | bare |
| created_at | 2017-04-14T14:48:18Z |
| direct_url | file:///var/lib/glance/images/c1f096cf-90e9-4448-8f46-6f3a90a779e0 |
| disk_format | qcow2 |
| file | /v2/images/c1f096cf-90e9-4448-8f46-6f3a90a779e0/file |
| id | c1f096cf-90e9-4448-8f46-6f3a90a779e0 |
| locations | [{"url": "file:///var/lib/glance/images/c1f096cf-90e9-4448-8f46-6f3a90a779e0", "metadata": {}}, {"url": "rbd://8162f291-00b6-4b40-a8b4-1981a8c09b64/images-test/c1f096cf-90e9-4448-8f46-6f3a90a779e0/snap", "metadata": {}}] |
| min_disk | 0 |
| min_ram | 0 |
| name | c7-qcow2-glutser |
| owner | a0480526a8e749cc89d7782b4aaba279 |
| protected | False |
| schema | /v2/schemas/image |
| size | 1360199680 |
| status | active |
| tags | [] |
| updated_at | 2017-04-14T15:37:17Z |
| virtual_size | None |
| visibility | public |
+------------------+----------------------------------------------------------------------------------+
[root@blade-indigo-01 ~]# glance location-delete --url file:///var/lib/glance/images/c1f096cf-90e9-4448-8f46-6f3a90a779e0 c1f096cf-90e9-4448-8f46-6f3a90a779e0
[root@blade-indigo-01 ~]#
[root@blade-indigo-01 ~]# glance image-show c1f096cf-90e9-4448-8f46-6f3a90a779e0
+------------------+----------------------------------------------------------------------------------+
| Property | Value |
+------------------+----------------------------------------------------------------------------------+
| checksum | ae8862f78e585b65271984cf0f4b1c83 |
| container_format | bare |
| created_at | 2017-04-14T14:48:18Z |
| direct_url | rbd://8162f291-00b6-4b40-a8b4-1981a8c09b64/images-test/c1f096cf-90e9-4448-8f46-6f3a90a779e0/snap |
| disk_format | qcow2 |
| id | c1f096cf-90e9-4448-8f46-6f3a90a779e0 |
| locations | [{"url": "rbd://8162f291-00b6-4b40-a8b4-1981a8c09b64/images-test/c1f096cf-90e9-4448-8f46-6f3a90a779e0/snap", "metadata": {}}] |
| min_disk | 0 |
| min_ram | 0 |
| name | c7-qcow2-glutser |
| owner | a0480526a8e749cc89d7782b4aaba279 |
| protected | False |
| size | 1360199680 |
| status | active |
| tags | [] |
| updated_at | 2017-04-14T15:38:23Z |
| virtual_size | None |
| visibility | public |
+------------------+----------------------------------------------------------------------------------+
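As a final check (a sketch, using the same image id as above) you can verify that the image data and the snap snapshot referenced by the rbd:// location are present in the images-test pool:

# the glance image id is used as the rbd image name in the images-test pool
rbd --id glance-test -p images-test info c1f096cf-90e9-4448-8f46-6f3a90a779e0
# the "snap" snapshot referenced by the rbd:// location must exist
rbd --id glance-test -p images-test snap ls c1f096cf-90e9-4448-8f46-6f3a90a779e0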