======Integration with OpenStack services======

//**Instructions to integrate Ceph with OpenStack (mitaka version)**//

==== Install ceph on OpenStack nodes ====

The following instructions must be applied:

   * on the controller nodes
   * on all the compute nodes if ceph will be integrated with Cinder, or only on the compute nodes that will use ceph for the nova ephemeral storage

<code bash>
rpm -Uvh https://download.ceph.com/rpm-jewel/el7/noarch/ceph-release-1-1.el7.noarch.rpm
yum clean all
yum install ceph-common
scp <ceph-node>:/etc/ceph/ceph.conf /etc/ceph/ceph.conf
</code>
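
A quick sanity check that the ceph client is installed (this does not contact the cluster, so no keyring is needed yet):

<code bash>
ceph --version
rpm -q ceph-common
</code>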

==== Create ceph pools and ceph users ====

Create ceph pools (the two numbers are ''pg_num'' and ''pgp_num''):

<code bash>
ceph osd pool create volumes-test 128 128
ceph osd pool create images-test 128 128
ceph osd pool create backups-test 128 128
ceph osd pool create vms-test 128 128
</code>
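
You can verify that the pools were created with the expected placement group count, e.g.:

<code bash>
ceph osd lspools
ceph osd pool get volumes-test pg_num
</code>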

Create ceph users:

<code bash>
ceph auth get-or-create client.cinder-test mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes-test, allow rwx pool=vms-test, allow rwx pool=images-test' -o /etc/ceph/ceph.client.cinder-test.keyring

ceph auth get-or-create client.glance-test mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images-test' -o /etc/ceph/ceph.client.glance-test.keyring
</code>
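
To double-check the capabilities granted to the new users:

<code bash>
ceph auth get client.cinder-test
ceph auth get client.glance-test
</code>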

==== Configure libvirt on compute nodes ====

On a ceph node, extract the key of the cinder ceph user to a file:

<code bash>
ceph auth get-key client.cinder-test -o client.cinder-test.key
</code>

Create a ''secret.xml'' file as in the following example:

<code bash>
[root@ceph-mon-01 ~]# uuidgen
c5466c9f-dce2-420f-ada7-c4aa97b58a58
[root@ceph-mon-01 ~]# cat > secret.xml <<EOF
> <secret ephemeral='no' private='no'>
>   <uuid>c5466c9f-dce2-420f-ada7-c4aa97b58a58</uuid>
>   <usage type='ceph'>
>     <name>client.cinder-test secret</name>
>   </usage>
> </secret>
> EOF
</code>

The instructions reported below are needed:

   * on all compute nodes, if cinder will be integrated with ceph
   * on the compute nodes that will use ceph for the nova ephemeral storage

First of all, copy the ''client.cinder-test.key'' and ''secret.xml'' files to the compute node.

Then issue the following commands:

<code bash>
# virsh secret-define --file secret.xml
Secret c5466c9f-dce2-420f-ada7-c4aa97b58a58 created

# virsh secret-set-value --secret c5466c9f-dce2-420f-ada7-c4aa97b58a58 --base64 $(cat client.cinder-test.key)
Secret value set

# rm client.cinder-test.key secret.xml
</code>
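
The secret should now be known to libvirt; as a sanity check, the uuid should appear in the secret list and return the key that was set:

<code bash>
# virsh secret-list
# virsh secret-get-value c5466c9f-dce2-420f-ada7-c4aa97b58a58
</code>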

On all compute nodes (if ceph has been enabled for cinder), or just on the compute nodes where ceph is used for the nova ephemeral storage, edit ''/etc/nova/nova.conf'' adding in the ''[libvirt]'' section:

<code>
rbd_user = cinder-test
rbd_secret_uuid = c5466c9f-dce2-420f-ada7-c4aa97b58a58
</code>
==== Configure ceph as additional backend for cinder ====

**//Starting point: a CentOS mitaka Cloud using Gluster as backend for Cinder.
The following instructions explain how to add another backend using ceph.//**

On the controller/storage node edit ''/etc/ceph/ceph.conf'' adding in the ''global'' section the following lines:

<code>
rbd default format = 2
rbd default features = 3
</code>

This is needed to disable some rbd features not implemented in the CentOS 7 default kernel (otherwise the volume attachment doesn't work).
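
If you want to verify that newly created rbd images actually get the reduced feature set, you can inspect one with ''rbd info'' (the image name below is a placeholder):

<code bash>
rbd -p volumes-test ls
rbd info volumes-test/<image-name>
</code>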

Copy the ''/etc/ceph/ceph.client.cinder-test.keyring'' generated above (on a ceph node) to the controller/storage nodes, and set the proper ownership and mode:

<code bash>
chown cinder.cinder /etc/ceph/ceph.client.cinder-test.keyring
chmod 0640 /etc/ceph/ceph.client.cinder-test.keyring
</code>
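
At this point the node should be able to talk to the ceph cluster as the cinder user; a possible test:

<code bash>
ceph -s --id cinder-test --keyring /etc/ceph/ceph.client.cinder-test.keyring
</code>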

On the controller/storage node you need to make some changes to ''/etc/cinder/cinder.conf'':

   * enable the ceph backend (attribute ''enabled_backends'')
   * set ceph as default volume type (attribute ''default_volume_type'')
   * create two sections for the gluster and ceph volume types
      * the attributes ''volume_driver'', ''glusterfs_shares_config'' and ''nas_volume_prov_type'' will need to be moved from the DEFAULT section

<code>
[DEFAULT]

enabled_backends=ceph,gluster
default_volume_type=ceph

[gluster]
volume_group=gluster
volume_backend_name=gluster
volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
glusterfs_shares_config = /etc/cinder/shares
nas_volume_prov_type = thin

[ceph]
volume_group=ceph
volume_backend_name=ceph
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes-test
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder-test
# uuid set in the secret.xml file (see above)
rbd_secret_uuid = c5466c9f-dce2-420f-ada7-c4aa97b58a58
</code>

Restart the cinder services on the controller/storage nodes:

<code bash>
# systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service openstack-cinder-volume.service target.service
</code>

Disable the "old" cinder-volume service:

<code bash>
[root@blade-indigo-01 ~]# cinder service-list
+------------------+------------------------------------------+------+---------+-------+----------------------------+-----------------+
|      Binary      |                   Host                   | Zone |  Status | State |         Updated_at         | Disabled Reason |
+------------------+------------------------------------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler |     blade-indigo-01.cloud.pd.infn.it     | nova | enabled |   up  | 2017-03-22T16:29:37.000000 |        -        |
|  cinder-volume   |     blade-indigo-01.cloud.pd.infn.it     | nova | enabled |  down | 2017-03-21T14:03:40.000000 |        -        |
|  cinder-volume   |  blade-indigo-01.cloud.pd.infn.it@ceph   | nova | enabled |   up  | 2017-03-22T16:29:39.000000 |        -        |
|  cinder-volume   | blade-indigo-01.cloud.pd.infn.it@gluster | nova | enabled |   up  | 2017-03-22T16:29:39.000000 |        -        |
+------------------+------------------------------------------+------+---------+-------+----------------------------+-----------------+
[root@blade-indigo-01 ~]# cinder service-disable blade-indigo-01.cloud.pd.infn.it cinder-volume
+----------------------------------+---------------+----------+
|               Host               |     Binary    |  Status  |
+----------------------------------+---------------+----------+
| blade-indigo-01.cloud.pd.infn.it | cinder-volume | disabled |
+----------------------------------+---------------+----------+
[root@blade-indigo-01 ~]# cinder service-list
+------------------+------------------------------------------+------+----------+-------+----------------------------+-----------------+
|      Binary      |                   Host                   | Zone |  Status  | State |         Updated_at         | Disabled Reason |
+------------------+------------------------------------------+------+----------+-------+----------------------------+-----------------+
| cinder-scheduler |     blade-indigo-01.cloud.pd.infn.it     | nova | enabled  |   up  | 2017-03-22T16:34:57.000000 |        -        |
|  cinder-volume   |     blade-indigo-01.cloud.pd.infn.it     | nova | disabled |  down | 2017-03-22T16:34:57.000000 |        -        |
|  cinder-volume   |  blade-indigo-01.cloud.pd.infn.it@ceph   | nova | enabled  |   up  | 2017-03-22T16:34:59.000000 |        -        |
|  cinder-volume   | blade-indigo-01.cloud.pd.infn.it@gluster | nova | enabled  |   up  | 2017-03-22T16:34:59.000000 |        -        |
+------------------+------------------------------------------+------+----------+-------+----------------------------+-----------------+
[root@blade-indigo-01 ~]#
</code>

Create the volume types:

<code bash>
# cinder type-create gluster
# cinder type-key gluster set volume_backend_name=gluster
# cinder type-create ceph
# cinder type-key ceph set volume_backend_name=ceph
</code>
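
A quick functional test is to create a small volume of type ceph and check that it shows up in the ''volumes-test'' pool (the volume name is just an example; recent cinderclient versions use ''--name'', older ones ''--display-name''):

<code bash>
# cinder create --volume-type ceph --name test-ceph 1
# rbd -p volumes-test ls
</code>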

For the volumes created before this change, there will be problems attaching/detaching them unless some changes are made in the database, as explained in http://dischord.org/2015/12/22/cinder-multi-backend-with-multiple-ceph-pools/

Example of a volume created before the change:

<code bash>
MariaDB [cinder]> select host from volumes where id='3af86ecd-40a8-4302-a68d-191c57f4a493';
+--------------------------------------------+
| host                                       |
+--------------------------------------------+
| blade-indigo-01.cloud.pd.infn.it#GlusterFS |
+--------------------------------------------+
1 row in set (0.00 sec)
</code>

It is necessary to change the host (which must be ''<controller>@gluster#gluster''):

<code bash>
MariaDB [cinder]> update volumes set host='blade-indigo-01.cloud.pd.infn.it@gluster#gluster' where id='3af86ecd-40a8-4302-a68d-191c57f4a493';
Query OK, 1 row affected (0.03 sec)
Rows matched: 1  Changed: 1  Warnings: 0

MariaDB [cinder]> select host from volumes where id='3af86ecd-40a8-4302-a68d-191c57f4a493';
+--------------------------------------------------+
| host                                             |
+--------------------------------------------------+
| blade-indigo-01.cloud.pd.infn.it@gluster#gluster |
+--------------------------------------------------+
1 row in set (0.00 sec)

MariaDB [cinder]>
</code>

If you want to prevent the creation of gluster volumes, "hide" the gluster type:

<code bash>
[root@blade-indigo-01 ~]# cinder type-list
+--------------------------------------+---------+-------------+-----------+
|                  ID                  |   Name  | Description | Is_Public |
+--------------------------------------+---------+-------------+-----------+
| 10f3f44b-7b31-4291-9566-7c0d23075be9 |   ceph  |      -      |    True   |
| abc5bd48-3259-486d-b604-e36976c00699 | gluster |      -      |    True   |
+--------------------------------------+---------+-------------+-----------+
[root@blade-indigo-01 ~]# cinder type-update --is-public false abc5bd48-3259-486d-b604-e36976c00699
+--------------------------------------+---------+-------------+-----------+
|                  ID                  |   Name  | Description | Is_Public |
+--------------------------------------+---------+-------------+-----------+
| abc5bd48-3259-486d-b604-e36976c00699 | gluster |      -      |   False   |
+--------------------------------------+---------+-------------+-----------+
</code>

and disable the relevant service:

<code bash>
[root@blade-indigo-01 ~]# cinder service-list
+------------------+------------------------------------------+------+----------+-------+----------------------------+-----------------+
|      Binary      |                   Host                   | Zone |  Status  | State |         Updated_at         | Disabled Reason |
+------------------+------------------------------------------+------+----------+-------+----------------------------+-----------------+
| cinder-scheduler |     blade-indigo-01.cloud.pd.infn.it     | nova | enabled  |   up  | 2017-03-24T14:51:41.000000 |        -        |
|  cinder-volume   |     blade-indigo-01.cloud.pd.infn.it     | nova | disabled |  down | 2017-03-22T16:34:57.000000 |        -        |
|  cinder-volume   |  blade-indigo-01.cloud.pd.infn.it@ceph   | nova | enabled  |   up  | 2017-03-24T14:51:41.000000 |        -        |
|  cinder-volume   | blade-indigo-01.cloud.pd.infn.it@gluster | nova | enabled  |   up  | 2017-03-24T14:51:41.000000 |        -        |
+------------------+------------------------------------------+------+----------+-------+----------------------------+-----------------+
[root@blade-indigo-01 ~]# cinder service-disable blade-indigo-01.cloud.pd.infn.it@gluster cinder-volume
+------------------------------------------+---------------+----------+
|                   Host                   |     Binary    |  Status  |
+------------------------------------------+---------------+----------+
| blade-indigo-01.cloud.pd.infn.it@gluster | cinder-volume | disabled |
+------------------------------------------+---------------+----------+
[root@blade-indigo-01 ~]# cinder service-list
+------------------+------------------------------------------+------+----------+-------+----------------------------+-----------------+
|      Binary      |                   Host                   | Zone |  Status  | State |         Updated_at         | Disabled Reason |
+------------------+------------------------------------------+------+----------+-------+----------------------------+-----------------+
| cinder-scheduler |     blade-indigo-01.cloud.pd.infn.it     | nova | enabled  |   up  | 2017-03-24T14:52:01.000000 |        -        |
|  cinder-volume   |     blade-indigo-01.cloud.pd.infn.it     | nova | disabled |  down | 2017-03-22T16:34:57.000000 |        -        |
|  cinder-volume   |  blade-indigo-01.cloud.pd.infn.it@ceph   | nova | enabled  |   up  | 2017-03-24T14:52:01.000000 |        -        |
|  cinder-volume   | blade-indigo-01.cloud.pd.infn.it@gluster | nova | disabled |   up  | 2017-03-24T14:52:01.000000 |        -        |
+------------------+------------------------------------------+------+----------+-------+----------------------------+-----------------+
[root@blade-indigo-01 ~]#
</code>

==== Configure ceph as ephemeral storage for nova on a compute node ====

Copy the ''/etc/ceph/ceph.client.cinder-test.keyring'' generated above (on a ceph node) to the compute node, and set the proper ownership and mode:

<code bash>
chown nova.nova /etc/ceph/ceph.client.cinder-test.keyring
chmod 0640 /etc/ceph/ceph.client.cinder-test.keyring
</code>

Modify ''nova.conf'' on the compute node:

<code>
[libvirt]
images_type = rbd
images_rbd_pool = vms-test
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder-test
# the uuid used in the secret.xml file
rbd_secret_uuid = c5466c9f-dce2-420f-ada7-c4aa97b58a58
disk_cachemodes="network=writeback"
live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"
</code>

Restart the nova-compute service:

<code bash>
systemctl restart openstack-nova-compute.service
</code>
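
To confirm that the ephemeral disks actually land in ceph, you can boot a test instance on this compute node and list the ''vms-test'' pool; the instance disk should appear as ''<instance_uuid>_disk'':

<code bash>
rbd -p vms-test ls
</code>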

==== Configure ceph as additional backend for glance ====

Copy the file ''/etc/ceph/ceph.client.glance-test.keyring'' (generated above on a ceph node) to the controller node and set the proper ownership/mode:

<code bash>
chown glance.glance /etc/ceph/ceph.client.glance-test.keyring
chmod 0640 /etc/ceph/ceph.client.glance-test.keyring
</code>

On the controller node, modify ''glance-api.conf'' in the following way:

<code>
[DEFAULT]
...
show_multiple_locations = True
show_image_direct_url = True
...
[glance_store]
#stores = file,http
stores = file,http,rbd
#default_store = file
default_store = rbd
rbd_store_pool = images-test
rbd_store_user = glance-test
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8
</code>

Restart the glance services:

<code bash>
systemctl restart openstack-glance-api.service openstack-glance-registry.service
</code>

New images will be written in ceph (because of the ''default_store'' setting). Images already stored on gluster will still be usable.
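
After uploading a new image, a quick check is to verify that an rbd object named after the image id appears in the ''images-test'' pool (image name and file below are just examples):

<code bash>
# glance image-create --name test-rbd --disk-format qcow2 --container-format bare --file test.qcow2
# rbd -p images-test ls
</code>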

==== Move cinder volumes from gluster to ceph ====

To move a cinder volume from the gluster backend to the ceph backend, the idea is to (see the sketch after this list):

   * create a new volume in the ceph backend
   * copy the content of the original volume (using rsync or any other tool)
   * remove the original volume
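
A minimal sketch of such a migration, assuming the volume is detached and a helper instance is available to which both volumes can be attached (sizes, names and device paths are just examples):

<code bash>
# create the destination volume in the ceph backend, same size as the original
cinder create --volume-type ceph --name myvol-ceph 10

# attach both volumes to a helper instance, then, inside that instance,
# copy the data block by block (device names depend on the attach order)
dd if=/dev/vdb of=/dev/vdc bs=4M

# once the copy is verified, detach both volumes and delete the original
cinder delete <old-volume-id>
</code>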

==== Move glance images from file (gluster) backend to the ceph backend ====

Let's consider an image that was created before adding the ceph backend and which is therefore stored in the file (gluster) backend, and let's suppose that we want to move it to ceph.

We can't simply delete and re-register the image, since its id would change, and this would be a problem for instances created using that image.

Let's consider that this is the image:

<code bash>
[root@blade-indigo-01 ~]# glance image-show c1f096cf-90e9-4448-8f46-6f3a90a779e0
+------------------+----------------------------------------------------------------------------------+
| Property         | Value                                                                            |
+------------------+----------------------------------------------------------------------------------+
| checksum         | ae8862f78e585b65271984cf0f4b1c83                                                 |
| container_format | bare                                                                             |
| created_at       | 2017-04-14T14:48:18Z                                                             |
| direct_url       | file:///var/lib/glance/images/c1f096cf-90e9-4448-8f46-6f3a90a779e0               |
| disk_format      | qcow2                                                                            |
| id               | c1f096cf-90e9-4448-8f46-6f3a90a779e0                                             |
| locations        | [{"url": "file:///var/lib/glance/images/c1f096cf-90e9-4448-8f46-6f3a90a779e0",   |
|                  | "metadata": {}}]                                                                 |
| min_disk         | 0                                                                                |
| min_ram          | 0                                                                                |
| name             | c7-qcow2-glutser                                                                 |
| owner            | a0480526a8e749cc89d7782b4aaba279                                                 |
| protected        | False                                                                            |
| size             | 1360199680                                                                       |
| status           | active                                                                           |
| tags             | []                                                                               |
| updated_at       | 2017-04-14T14:48:55Z                                                             |
| virtual_size     | None                                                                             |
| visibility       | public                                                                           |
+------------------+----------------------------------------------------------------------------------+
[root@blade-indigo-01 ~]#
</code>

Let's download the image:

<code bash>
[root@blade-indigo-01 ~]# glance image-download --progress --file c1f096cf-90e9-4448-8f46-6f3a90a779e0.file c1f096cf-90e9-4448-8f46-6f3a90a779e0
[=============================>] 100%
[root@blade-indigo-01 ~]# ls -l c1f096cf-90e9-4448-8f46-6f3a90a779e0.file
-rw-r--r-- 1 root root 1360199680 Apr 14 17:26 c1f096cf-90e9-4448-8f46-6f3a90a779e0.file
[root@blade-indigo-01 ~]#
</code>

Let's upload the image using the ''rbd_client.py'' tool (implemented by Lisa Zangrando, available at https://github.com/CloudPadovana/GlanceFileToCeph/blob/master/rbd_client.py):

<code bash>
[root@blade-indigo-01 ~]# ./rbd_client.py -i c1f096cf-90e9-4448-8f46-6f3a90a779e0 -f c1f096cf-90e9-4448-8f46-6f3a90a779e0.file
url: rbd://8162f291-00b6-4b40-a8b4-1981a8c09b64/images-test/c1f096cf-90e9-4448-8f46-6f3a90a779e0/snap
image size: 1360199680
checksum: ae8862f78e585b65271984cf0f4b1c83
[root@blade-indigo-01 ~]#
</code>

The script returns:

   * the ceph URL location
   * the size (check that it is the same reported by the ''glance image-show'' command)
   * the checksum (check that it is the same reported by the ''glance image-show'' command)
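
For example, the checksum of the downloaded file can be compared locally with the one reported by glance (glance stores an md5 checksum):

<code bash>
md5sum c1f096cf-90e9-4448-8f46-6f3a90a779e0.file
</code>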

Let's add the returned rbd location to the original image, and let's remove the file location:

<code bash>
[root@blade-indigo-01 ~]# glance location-add --url rbd://8162f291-00b6-4b40-a8b4-1981a8c09b64/images-test/c1f096cf-90e9-4448-8f46-6f3a90a779e0/snap c1f096cf-90e9-4448-8f46-6f3a90a779e0
+------------------+----------------------------------------------------------------------------------+
| Property         | Value                                                                            |
+------------------+----------------------------------------------------------------------------------+
| checksum         | ae8862f78e585b65271984cf0f4b1c83                                                 |
| container_format | bare                                                                             |
| created_at       | 2017-04-14T14:48:18Z                                                             |
| direct_url       | file:///var/lib/glance/images/c1f096cf-90e9-4448-8f46-6f3a90a779e0               |
| disk_format      | qcow2                                                                            |
| file             | /v2/images/c1f096cf-90e9-4448-8f46-6f3a90a779e0/file                             |
| id               | c1f096cf-90e9-4448-8f46-6f3a90a779e0                                             |
| locations        | [{"url": "file:///var/lib/glance/images/c1f096cf-90e9-4448-8f46-6f3a90a779e0",   |
|                  | "metadata": {}}, {"url": "rbd://8162f291-00b6-4b40-a8b4-1981a8c09b64/images-test |
|                  | /c1f096cf-90e9-4448-8f46-6f3a90a779e0/snap", "metadata": {}}]                    |
| min_disk         | 0                                                                                |
| min_ram          | 0                                                                                |
| name             | c7-qcow2-glutser                                                                 |
| owner            | a0480526a8e749cc89d7782b4aaba279                                                 |
| protected        | False                                                                            |
| schema           | /v2/schemas/image                                                                |
| size             | 1360199680                                                                       |
| status           | active                                                                           |
| tags             | []                                                                               |
| updated_at       | 2017-04-14T15:37:17Z                                                             |
| virtual_size     | None                                                                             |
| visibility       | public                                                                           |
+------------------+----------------------------------------------------------------------------------+

[root@blade-indigo-01 ~]# glance location-delete --url file:///var/lib/glance/images/c1f096cf-90e9-4448-8f46-6f3a90a779e0 c1f096cf-90e9-4448-8f46-6f3a90a779e0
[root@blade-indigo-01 ~]# glance image-show c1f096cf-90e9-4448-8f46-6f3a90a779e0
+------------------+----------------------------------------------------------------------------------+
| Property         | Value                                                                            |
+------------------+----------------------------------------------------------------------------------+
| checksum         | ae8862f78e585b65271984cf0f4b1c83                                                 |
| container_format | bare                                                                             |
| created_at       | 2017-04-14T14:48:18Z                                                             |
| direct_url       | rbd://8162f291-00b6-4b40-a8b4-1981a8c09b64/images-test/c1f096cf-                 |
|                  | 90e9-4448-8f46-6f3a90a779e0/snap                                                 |
| disk_format      | qcow2                                                                            |
| id               | c1f096cf-90e9-4448-8f46-6f3a90a779e0                                             |
| locations        | [{"url": "rbd://8162f291-00b6-4b40-a8b4-1981a8c09b64/images-test/c1f096cf-       |
|                  | 90e9-4448-8f46-6f3a90a779e0/snap", "metadata": {}}]                              |
| min_disk         | 0                                                                                |
| min_ram          | 0                                                                                |
| name             | c7-qcow2-glutser                                                                 |
| owner            | a0480526a8e749cc89d7782b4aaba279                                                 |
| protected        | False                                                                            |
| size             | 1360199680                                                                       |
| status           | active                                                                           |
| tags             | []                                                                               |
| updated_at       | 2017-04-14T15:38:23Z                                                             |
| virtual_size     | None                                                                             |
| visibility       | public                                                                           |
+------------------+----------------------------------------------------------------------------------+
</code>