======Integration with OpenStack services======

==== Install ceph on OpenStack nodes ====

The following instructions need to be applied:

  * on the controller nodes
  * on all the compute nodes (if ceph will be integrated with cinder), or only on the compute nodes that will use ceph for the nova ephemeral storage

<code bash>
rpm -Uvh https://
yum clean all
yum install ceph-common
scp <
</code>
| + | |||
| + | |||
| + | |||
| + | ==== Create ceph pools and ceph users ==== | ||
| + | |||
| + | Create ceph pools: | ||
| + | |||
| + | <code bash> | ||
| + | ceph osd pool create volumes-test 128 128 | ||
| + | ceph osd pool create images-test 128 128 | ||
| + | ceph osd pool create backups-test 128 128 | ||
| + | ceph osd pool create vms-test 128 128 | ||
| + | </ | ||
| + | |||
| + | Create ceph users: | ||
| + | |||
| + | |||
| + | <code bash> | ||
| + | ceph auth get-or-create client.cinder-test mon 'allow r' osd 'allow class-read object_prefix rbd_children, | ||
| + | |||
| + | |||
| + | ceph auth get-or-create client.glance-test mon 'allow r' osd 'allow class-read object_prefix rbd_children, | ||
| + | </ | ||
| + | |||
| + | ==== Configure libvirt on compute nodes ==== | ||
| + | |||
| + | On a ceph node extracts on a file the key for the cinder ceph user: | ||
| + | |||
| + | <code bash> | ||
| + | ceph auth get-key client.cinder-test -o client.cinder-test.key | ||
| + | </ | ||
| + | |||
| + | Create a '' | ||
| + | |||
| + | <code bash> | ||
| + | [root@ceph-mon-01 ~]# uuidgen | ||
| + | c5466c9f-dce2-420f-ada7-c4aa97b58a58 | ||
| + | [root@ceph-mon-01 ~]# cat > secret.xml <<EOF | ||
| + | > <secret ephemeral=' | ||
| + | > < | ||
| + | > < | ||
| + | > < | ||
| + | > </ | ||
| + | > </ | ||
| + | > EOF | ||
| + | |||
| + | </ | ||
| + | |||
| + | The instructions reported below are needed: | ||
| + | |||
| + | * on all compute nodes if cinder will be integrated with ceph | ||
| + | * on the compute nodes that will use ceph for the nova ephemeral storage | ||
| + | |||
| + | |||
| + | First of all copy the '' | ||
| + | |||
| + | Then issue the following commands: | ||
| + | |||
| + | |||
| + | <code bash> | ||
| + | # virsh secret-define --file secret.xml | ||
| + | Secret c5466c9f-dce2-420f-ada7-c4aa97b58a58 created | ||
| + | |||
| + | # virsh secret-set-value --secret c5466c9f-dce2-420f-ada7-c4aa97b58a58 --base64 $(cat client.cinder-test.key) | ||
| + | Secret value set | ||
| + | |||
| + | # rm client.cinder-test.key secret.xml | ||
| + | |||
| + | </ | ||
| + | |||
| + | On all compute nodes (if ceph has been enabled for cinder) or just on all compute nodes when ceph is used for the ephemeral nova storage, edit ''/ | ||
| + | |||
| + | < | ||
| + | rbd_user = cinder-test | ||
| + | rbd_secret_uuid = c5466c9f-dce2-420f-ada7-c4aa97b58a58 | ||
| + | </ | ||
==== Configure ceph as additional backend for cinder ====

**//In this deployment cinder is already configured with a gluster backend. The following instructions explain how to add another backend using ceph//**

On the controller/cinder node, add to ''/etc/ceph/ceph.conf'':

<code>
rbd default format = 2
rbd default features = 3
</code>

This is needed to disable some features not implemented in the CentOS 7 default kernel (otherwise the attachment doesn't work).

Copy the keyring file of the cinder ceph user (''/etc/ceph/ceph.client.cinder-test.keyring'') from the ceph cluster to the controller/cinder node and fix its ownership and permissions:

<code bash>
chown cinder.cinder /etc/ceph/ceph.client.cinder-test.keyring
chmod 0640 /etc/ceph/ceph.client.cinder-test.keyring
</code>

On the controller/cinder node edit the ''/etc/cinder/cinder.conf'' file:

  * to enable the ceph backend (attribute ''enabled_backends'')
  * to set ceph as default volume type (attribute ''default_volume_type'')
  * to create two sections for the gluster and ceph volume types
  * the attributes ''rbd_user'' and ''rbd_secret_uuid'' must match the ceph user and the UUID used in the ''secret.xml'' file

<code>
[DEFAULT]

enabled_backends=ceph,gluster
default_volume_type=ceph

[gluster]
volume_group=gluster
volume_backend_name=gluster
volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
glusterfs_shares_config = /etc/cinder/glusterfs_shares
nas_volume_prov_type = thin

[ceph]
volume_group=ceph
volume_backend_name=ceph
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes-test
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder-test
rbd_secret_uuid = c5466c9f-dce2-420f-ada7-c4aa97b58a58   # uuid set in the secret.xml file (see above)
</code>
| + | |||
| + | Restart the cinder services on the controller/ | ||
| + | |||
| + | <code bash> | ||
| + | # systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service openstack-cinder-volume.service target.service | ||
| + | </ | ||
| + | |||
| + | |||
| + | Disable the " | ||
| + | |||
| + | <code bash> | ||
| + | |||
| + | | ||
| + | +------------------+------------------------------------------+------+---------+-------+----------------------------+-----------------+ | ||
| + | | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason | | ||
| + | +------------------+------------------------------------------+------+---------+-------+----------------------------+-----------------+ | ||
| + | | cinder-scheduler | blade-indigo-01.cloud.pd.infn.it | nova | enabled | up | 2017-03-22T16: | ||
| + | | cinder-volume | blade-indigo-01.cloud.pd.infn.it | nova | enabled | down | 2017-03-21T14: | ||
| + | | cinder-volume | blade-indigo-01.cloud.pd.infn.it@ceph | nova | enabled | up | 2017-03-22T16: | ||
| + | | cinder-volume | blade-indigo-01.cloud.pd.infn.it@gluster | nova | enabled | up | 2017-03-22T16: | ||
| + | +------------------+------------------------------------------+------+---------+-------+----------------------------+-----------------+ | ||
| + | [root@blade-indigo-01 ~]# cinder service-disable blade-indigo-01.cloud.pd.infn.it cinder-volume | ||
| + | +----------------------------------+---------------+----------+ | ||
| + | | Host | Binary | Status | | ||
| + | +----------------------------------+---------------+----------+ | ||
| + | | blade-indigo-01.cloud.pd.infn.it | cinder-volume | disabled | | ||
| + | +----------------------------------+---------------+----------+ | ||
| + | [root@blade-indigo-01 ~]# cinder service-list | ||
| + | +------------------+------------------------------------------+------+----------+-------+----------------------------+-----------------+ | ||
| + | | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason | | ||
| + | +------------------+------------------------------------------+------+----------+-------+----------------------------+-----------------+ | ||
| + | | cinder-scheduler | blade-indigo-01.cloud.pd.infn.it | nova | enabled | up | 2017-03-22T16: | ||
| + | | cinder-volume | blade-indigo-01.cloud.pd.infn.it | nova | disabled | down | 2017-03-22T16: | ||
| + | | cinder-volume | blade-indigo-01.cloud.pd.infn.it@ceph | nova | enabled | up | 2017-03-22T16: | ||
| + | | cinder-volume | blade-indigo-01.cloud.pd.infn.it@gluster | nova | enabled | up | 2017-03-22T16: | ||
| + | +------------------+------------------------------------------+------+----------+-------+----------------------------+-----------------+ | ||
| + | [root@blade-indigo-01 ~]# | ||
| + | </ | ||
| + | |||
| + | Create the volume types: | ||
| + | |||
| + | <code bash> | ||
| + | # cinder type-create gluster | ||
| + | # cinder type-key gluster set volume_backend_name=gluster | ||
| + | # cinder type-create ceph | ||
| + | # cinder type-key ceph set volume_backend_name=ceph | ||
| + | </ | ||
| + | |||
| + | For the volumes created before this change, there will be problems attaching/ | ||
| + | |||
| + | Example of volume created before the change: | ||
| + | |||
| + | <code bash> | ||
| + | |||
| + | MariaDB [cinder]> | ||
| + | +--------------------------------------------+ | ||
| + | | host | | ||
| + | +--------------------------------------------+ | ||
| + | | blade-indigo-01.cloud.pd.infn.it# | ||
| + | +--------------------------------------------+ | ||
| + | 1 row in set (0.00 sec) | ||
| + | </ | ||
| + | |||
| + | It is necessary to change the host (that must be ''< | ||
| + | |||
| + | <code bash> | ||
| + | |||
| + | MariaDB [cinder]> | ||
| + | Query OK, 1 row affected (0.03 sec) | ||
| + | Rows matched: 1 Changed: 1 Warnings: 0 | ||
| + | |||
| + | MariaDB [cinder]> | ||
| + | +--------------------------------------------------+ | ||
| + | | host | | ||
| + | +--------------------------------------------------+ | ||
| + | | blade-indigo-01.cloud.pd.infn.it@gluster# | ||
| + | +--------------------------------------------------+ | ||
| + | 1 row in set (0.00 sec) | ||
| + | |||
| + | MariaDB [cinder]> | ||
| + | </ | ||
| + | |||
| + | If you want to prevent the creation of gluster volumes, " | ||
| + | |||
| + | <code bash> | ||
| + | [root@blade-indigo-01 ~]# cinder type-list | ||
| + | +--------------------------------------+---------+-------------+-----------+ | ||
| + | | ID | | ||
| + | +--------------------------------------+---------+-------------+-----------+ | ||
| + | | 10f3f44b-7b31-4291-9566-7c0d23075be9 | | ||
| + | | abc5bd48-3259-486d-b604-e36976c00699 | gluster | - | True | | ||
| + | +--------------------------------------+---------+-------------+-----------+ | ||
| + | [root@blade-indigo-01 ~]# cinder type-update --is-public false abc5bd48-3259-486d-b604-e36976c00699 | ||
| + | +--------------------------------------+---------+-------------+-----------+ | ||
| + | | ID | | ||
| + | +--------------------------------------+---------+-------------+-----------+ | ||
| + | | abc5bd48-3259-486d-b604-e36976c00699 | gluster | - | | ||
| + | +--------------------------------------+---------+-------------+-----------+ | ||
| + | </ | ||
| + | |||
| + | and disable the relevant service: | ||
| + | |||
| + | <code bash> | ||
| + | [root@blade-indigo-01 ~]# cinder service-list | ||
| + | +------------------+------------------------------------------+------+----------+-------+----------------------------+-----------------+ | ||
| + | | Binary | ||
| + | +------------------+------------------------------------------+------+----------+-------+----------------------------+-----------------+ | ||
| + | | cinder-scheduler | | ||
| + | | cinder-volume | ||
| + | | cinder-volume | ||
| + | | cinder-volume | ||
| + | +------------------+------------------------------------------+------+----------+-------+----------------------------+-----------------+ | ||
| + | [root@blade-indigo-01 ~]# cinder service-disable blade-indigo-01.cloud.pd.infn.it@gluster cinder-volume | ||
| + | +------------------------------------------+---------------+----------+ | ||
| + | | | ||
| + | +------------------------------------------+---------------+----------+ | ||
| + | | blade-indigo-01.cloud.pd.infn.it@gluster | cinder-volume | disabled | | ||
| + | +------------------------------------------+---------------+----------+ | ||
| + | [root@blade-indigo-01 ~]# cinder service-list | ||
| + | +------------------+------------------------------------------+------+----------+-------+----------------------------+-----------------+ | ||
| + | | Binary | ||
| + | +------------------+------------------------------------------+------+----------+-------+----------------------------+-----------------+ | ||
| + | | cinder-scheduler | | ||
| + | | cinder-volume | ||
| + | | cinder-volume | ||
| + | | cinder-volume | ||
| + | +------------------+------------------------------------------+------+----------+-------+----------------------------+-----------------+ | ||
| + | [root@blade-indigo-01 ~]# | ||
| + | </ | ||
| + | |||
| + | ==== Configure ceph as ephemeral storage for nova on a compute node ==== | ||
| + | |||
| + | |||
| + | |||
| + | Copy the ''/ | ||
| + | |||
| + | <code bash> | ||
| + | chown nova.nova / | ||
| + | chmod 0640 / | ||
| + | </ | ||
| + | |||
| + | Modify nova.conf on the compute node: | ||
| + | |||
| + | < | ||
| + | |||
| + | [libvirt] | ||
| + | images_type = rbd | ||
| + | images_rbd_pool = vms-test | ||
| + | images_rbd_ceph_conf = / | ||
| + | rbd_user = cinder-test | ||
| + | rbd_secret_uuid = c5466c9f-dce2-420f-ada7-c4aa97b58a58 # the uuid used in the secret.xml file | ||
| + | disk_cachemodes=" | ||
| + | live_migration_flag=" | ||
| + | </ | ||
| + | |||
| + | Restart the nova-compute service: | ||
| + | |||
| + | <code bash> | ||
| + | systemctl restart openstack-nova-compute.service | ||
| + | </ | ||
| + | |||
| + | |||
| + | ==== Configure ceph as additional backend for glance ==== | ||
| + | |||
| + | Copy the file ==/ | ||
| + | |||
| + | <code bash> | ||
| + | chown glance.glance / | ||
| + | chmod 0640 / | ||
| + | </ | ||
| + | |||
| + | Modify on the controller node the '' | ||
| + | |||
| + | < | ||
| + | {DEFAULT] | ||
| + | ... | ||
| + | show_multiple_locations = True | ||
| + | show_image_direct_url = True | ||
| + | ... | ||
| + | ... | ||
| + | [glance_store] | ||
| + | #stores = file,http | ||
| + | stores = file, | ||
| + | # | ||
| + | default_store = rbd | ||
| + | rbd_store_pool = images-test | ||
| + | rbd_store_user = glance-test | ||
| + | rbd_store_ceph_conf = / | ||
| + | rbd_store_chunk_size = 8 | ||
| + | </ | ||
| + | |||
| + | Restart the glance services: | ||
| + | |||
| + | < | ||
| + | systemctl restart openstack-glance-api.service | ||
| + | </ | ||
| + | |||
| + | New images will be written in ceph (because of the '' | ||
| + | |||
| + | |||
==== Move cinder volumes from gluster to ceph ====

To move a cinder volume from the gluster backend to the ceph backend the idea is to:

  * create a new volume in the ceph backend
  * copy the content of the volume (using rsync or any other tool)
  * remove the original volume
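The three steps above can be sketched as plain commands; the volume names, the size and the copy method (''dd'' between the devices of both volumes attached to a helper instance) are illustrative placeholders, not values from this deployment, so the sketch only prints the plan:

```shell
# Print a migration plan for one volume; everything here is illustrative and
# must be adapted (names, size, attachment device paths) before running.
plan_volume_migration() {
  local old_volume=$1 new_volume=$2 size_gb=$3
  cat <<EOF
cinder create --volume-type ceph --display-name ${new_volume} ${size_gb}
# attach ${old_volume} and ${new_volume} to a helper instance, then e.g.:
dd if=/dev/vdb of=/dev/vdc bs=4M conv=fsync
cinder delete ${old_volume}
EOF
}

plan_volume_migration old-gluster-vol new-ceph-vol 10
```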
| + | |||
| + | ==== Move glance images from file (gluster) backend to the ceph backend ==== | ||
| + | |||
| + | Let's consider an image that was created before adding the ceph backend and which therefore is stored in the file (gluster) backend. | ||
| + | |||
| + | Let's suppose that we want to move the image on ceph. | ||
| + | |||
| + | We can't simply delete and re-register the image since its id would change, and this would be a problem for instances created using that image | ||
| + | |||
| + | Let's consider that this is the image | ||
| + | <code bash> | ||
| + | [root@blade-indigo-01 ~]# glance image-show c1f096cf-90e9-4448-8f46-6f3a90a779e0 | ||
| + | +------------------+----------------------------------------------------------------------------------+ | ||
| + | | Property | ||
| + | +------------------+----------------------------------------------------------------------------------+ | ||
| + | | checksum | ||
| + | | container_format | bare | | ||
| + | | created_at | ||
| + | | direct_url | ||
| + | | disk_format | ||
| + | | id | c1f096cf-90e9-4448-8f46-6f3a90a779e0 | ||
| + | | locations | ||
| + | | | " | ||
| + | | min_disk | ||
| + | | min_ram | ||
| + | | name | c7-qcow2-glutser | ||
| + | | owner | a0480526a8e749cc89d7782b4aaba279 | ||
| + | | protected | ||
| + | | size | 1360199680 | ||
| + | | status | ||
| + | | tags | [] | | ||
| + | | updated_at | ||
| + | | virtual_size | ||
| + | | visibility | ||
| + | +------------------+----------------------------------------------------------------------------------+ | ||
| + | [root@blade-indigo-01 ~]# | ||
| + | </ | ||
| + | |||
| + | Let's download the image. | ||
| + | |||
| + | < | ||
| + | [root@blade-indigo-01 ~]# glance image-download --progress --file c1f096cf-90e9-4448-8f46-6f3a90a779e0.file c1f096cf-90e9-4448-8f46-6f3a90a779e0 | ||
| + | [=============================> | ||
| + | [root@blade-indigo-01 ~]# | ||
| + | [root@blade-indigo-01 ~]# ls -l c1f096cf-90e9-4448-8f46-6f3a90a779e0.file | ||
| + | -rw-r--r-- 1 root root 1360199680 Apr 14 17:26 c1f096cf-90e9-4448-8f46-6f3a90a779e0.file | ||
| + | [root@blade-indigo-01 ~]# | ||
| + | </ | ||
| + | |||
| + | |||
| + | |||
| + | Let's upload the image using the rbd-client.py tool (implemented by Lisa Zangrando, available at https:// | ||
| + | |||
| + | |||
| + | <code bash> | ||
| + | [root@blade-indigo-01 ~]# ./ | ||
| + | url: rbd:// | ||
| + | image size: 1360199680 | ||
| + | checksum: ae8862f78e585b65271984cf0f4b1c83 | ||
| + | [root@blade-indigo-01 ~]# | ||
| + | </ | ||
| + | |||
| + | The script returns: | ||
| + | * the ceph URL location | ||
| + | * the size (check that it is the same reported by the glance image-show command) | ||
| + | * the checksum (check that it is the same reported by the glance image-show command) | ||
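Since size and checksum must match before the file location is removed, a small helper (our own, not part of the procedure) can compare the downloaded image file against the md5 checksum that ''glance image-show'' reports:

```shell
# Compare a local file against the md5 checksum reported by glance image-show;
# prints OK on a match, MISMATCH otherwise. Helper name is ours.
verify_image_checksum() {
  local file=$1 expected=$2
  local actual
  actual=$(md5sum "$file" | awk '{print $1}')
  if [ "$actual" = "$expected" ]; then
    echo "OK"
  else
    echo "MISMATCH"
  fi
}
```

For instance, ''verify_image_checksum c1f096cf-90e9-4448-8f46-6f3a90a779e0.file ae8862f78e585b65271984cf0f4b1c83'' should print ''OK'' for the image above.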
| + | |||
| + | |||
| + | Let's add the returned rbd location to the original image, and let's remove the file location: | ||
| + | |||
| + | <code bash> | ||
| + | |||
| + | [root@blade-indigo-01 ~]# glance location-add --url rbd:// | ||
| + | +------------------+----------------------------------------------------------------------------------+ | ||
| + | | Property | ||
| + | +------------------+----------------------------------------------------------------------------------+ | ||
| + | | checksum | ||
| + | | container_format | bare | | ||
| + | | created_at | ||
| + | | direct_url | ||
| + | | disk_format | ||
| + | | file | / | ||
| + | | id | c1f096cf-90e9-4448-8f46-6f3a90a779e0 | ||
| + | | locations | ||
| + | | | " | ||
| + | | | / | ||
| + | | min_disk | ||
| + | | min_ram | ||
| + | | name | c7-qcow2-glutser | ||
| + | | owner | a0480526a8e749cc89d7782b4aaba279 | ||
| + | | protected | ||
| + | | schema | ||
| + | | size | 1360199680 | ||
| + | | status | ||
| + | | tags | [] | | ||
| + | | updated_at | ||
| + | | virtual_size | ||
| + | | visibility | ||
| + | +------------------+----------------------------------------------------------------------------------+ | ||
| + | |||
| + | |||
| + | [root@blade-indigo-01 ~]# glance location-delete --url file:/// | ||
| + | [root@blade-indigo-01 ~]# | ||
| + | [root@blade-indigo-01 ~]# glance image-show c1f096cf-90e9-4448-8f46-6f3a90a779e0 | ||
| + | +------------------+----------------------------------------------------------------------------------+ | ||
| + | | Property | ||
| + | +------------------+----------------------------------------------------------------------------------+ | ||
| + | | checksum | ||
| + | | container_format | bare | | ||
| + | | created_at | ||
| + | | direct_url | ||
| + | | | 90e9-4448-8f46-6f3a90a779e0/ | ||
| + | | disk_format | ||
| + | | id | c1f096cf-90e9-4448-8f46-6f3a90a779e0 | ||
| + | | locations | ||
| + | | | 90e9-4448-8f46-6f3a90a779e0/ | ||
| + | | min_disk | ||
| + | | min_ram | ||
| + | | name | c7-qcow2-glutser | ||
| + | | owner | a0480526a8e749cc89d7782b4aaba279 | ||
| + | | protected | ||
| + | | size | 1360199680 | ||
| + | | status | ||
| + | | tags | [] | | ||
| + | | updated_at | ||
| + | | virtual_size | ||
| + | | visibility | ||
| + | +------------------+----------------------------------------------------------------------------------+ | ||
| + | |||
| + | |||
| + | |||
| + | </ | ||
| + | |||
