Installation and configuration of Ceilometer on Kilo
On each HAProxy server, add the following entry in /etc/haproxy/haproxy.cfg:

listen ceilometer-api-public
    bind 192.168.60.40:8777
    balance source
    option tcpka
    option tcplog
    server cld-ctrl-01.cloud.pd.infn.it 192.168.60.105:8777 check inter 2000 rise 2 fall 3
    server cld-ctrl-02.cloud.pd.infn.it 192.168.60.106:8777 check inter 2000 rise 2 fall 3
and then restart the haproxy process:
service haproxy restart
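Before restarting, the new configuration can be checked for syntax errors, so a typo does not take the load balancer down (a sketch; `haproxy -c` validates the file without applying it):

```shell
# Validate the configuration file; haproxy exits non-zero and prints
# the offending line if the file is invalid, so the restart only runs
# when the check passes.
haproxy -c -f /etc/haproxy/haproxy.cfg && service haproxy restart
```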
Installation and configuration of clusterized MongoDB (replica set)
We assume three database nodes are available, on which MongoDB will be installed and clusterized.
Software Install & Configuration (all database nodes)
yum -y install mongodb mongodb-server
Configure mongo to listen on the management IP (make sure the $MYIP env var contains what you expect before using it):

sed -i 's,^bind_ip,#bind_ip,' /etc/mongodb.conf
MYIP=`hostname -i`
cat << EOF >> /etc/mongodb.conf
smallfiles = true
bind_ip = $MYIP
EOF
Now set the env var $DBPATH to the actual path where you want the database to store its files:
export DBPATH=/var/lib/<SOME_MOUNTED_LARGE_FS>/mongodb
Now configure mongo to use $DBPATH:

sed -i 's,^dbpath,#dbpath,' /etc/mongodb.conf
cat << EOF >> /etc/mongodb.conf
dbpath = $DBPATH
EOF
Now configure mongo to use a replica set:

sed -i 's,^replSet,#replSet,' /etc/mongodb.conf
cat << EOF >> /etc/mongodb.conf
replSet = rs0
EOF
Now start and enable the service:
service mongod start; chkconfig mongod on
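To verify that mongod came up and is reachable on the management IP, a quick ping can be issued before moving on to the cluster setup (a sketch using the mongo shell's --eval option):

```shell
# Ask the local mongod for a ping; prints { "ok" : 1 } when the
# service is up and listening on the management IP.
mongo --host `hostname -i` --eval 'printjson(db.runCommand({ ping: 1 }))'
```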
Create the replica cluster (on one database node only)
Connect to the local mongo service with the "mongo" command line:

mongo --host `hostname -i`
While inside the mongo shell execute the following commands to initiate and create the three-nodes cluster:
rs.initiate()
rs.add('<X1.Y1.Z1>')
rs.add('<X2.Y2.Z2>')
rs.conf()
{
    "_id" : "rs0",
    "version" : 3,
    "members" : [
        { "_id" : 0, "host" : "<X0.Y0.Z0>:27017" },
        { "_id" : 1, "host" : "<X1.Y1.Z1>:27017" },
        { "_id" : 2, "host" : "<X2.Y2.Z2>:27017" }
    ]
}
where X0.Y0.Z0 is the output of "hostname -i" on the current node, and X1.Y1.Z1 and X2.Y2.Z2 are the output of "hostname -i" on the other two database nodes.
Now check the cluster status:
rs.status()
{
    "set" : "rs0",
    "date" : ISODate("2015-08-03T10:05:16Z"),
    "myState" : 1,
    "members" : [
        {
            "_id" : 0,
            "name" : "<X0.Y0.Z0>:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 339180,
            "optime" : Timestamp(1438596213, 400),
            "optimeDate" : ISODate("2015-08-03T10:03:33Z"),
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "<X1.Y1.Z1>:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 339132,
            "optime" : Timestamp(1438596213, 400),
            "optimeDate" : ISODate("2015-08-03T10:03:33Z"),
            "lastHeartbeat" : ISODate("2015-08-03T10:05:14Z"),
            "lastHeartbeatRecv" : ISODate("2015-08-03T10:05:14Z"),
            "pingMs" : 0,
            "syncingTo" : "<X0.Y0.Z0>:27017"
        },
        {
            "_id" : 2,
            "name" : "<X2.Y2.Z2>:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 339128,
            "optime" : Timestamp(1438596213, 400),
            "optimeDate" : ISODate("2015-08-03T10:03:33Z"),
            "lastHeartbeat" : ISODate("2015-08-03T10:05:14Z"),
            "lastHeartbeatRecv" : ISODate("2015-08-03T10:05:14Z"),
            "pingMs" : 0,
            "syncingTo" : "<X0.Y0.Z0>:27017"
        }
    ],
    "ok" : 1
}
Check that the "syncingTo" parameters have the value of the "PRIMARY" (the node where rs.initiate() was executed).
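The primary can also be identified non-interactively, which is handy when scripting checks across the three nodes (a sketch; rs.isMaster() is available in the mongo shell):

```shell
# Print whether the node we are connected to is the replica set
# primary, and which member the set currently considers primary.
mongo --host `hostname -i` --quiet --eval 'var m = rs.isMaster(); print("ismaster: " + m.ismaster + ", primary: " + m.primary)'
```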
Create the Ceilometer database
Log into one of the mongodb nodes and issue the command:
mongo --eval ' db = db.getSiblingDB("ceilometer"); db.addUser({user: "ceilometer", pwd: "<CEILOMETER_DB_PWD>", roles: [ "readWrite", "dbAdmin" ]})'
If you need to change the password:
mongo --host `hostname -i`
rs0:PRIMARY> use ceilometer
switched to db ceilometer
rs0:PRIMARY> show users
{
    "_id" : ObjectId("55ba1e387fc11331ca8910e9"),
    "pwd" : "470cdf1113d5b166bad0a9e572741ca3",
    "roles" : [ "readWrite", "dbAdmin" ],
    "user" : "ceilometer"
}
rs0:PRIMARY> db.changeUserPassword("ceilometer", "newpass")
Installation and configuration of first controller node
Open port 8777:
firewall-cmd --add-port=8777/tcp
firewall-cmd --permanent --add-port=8777/tcp
systemctl restart firewalld
yum install openstack-ceilometer-api openstack-ceilometer-collector \
openstack-ceilometer-notification openstack-ceilometer-central openstack-ceilometer-alarm \
python-ceilometerclient
openstack user create --password=CEILOMETER_PASS ceilometer
openstack role add --project service --user ceilometer admin
openstack service create --name ceilometer \
  --description "Telemetry" metering

openstack endpoint create \
  --publicurl http://$CONTROLLER_VIP_MGMT:8777 \
  --internalurl http://$CONTROLLER_VIP_MGMT:8777 \
  --adminurl http://$CONTROLLER_VIP_MGMT:8777 \
  --region regionOne \
  metering
MONGO_CLUSTER=192.168.60.250:27017,192.168.60.251:27017,192.168.60.252:27017
CEILOMETER_TOKEN=$(openssl rand -hex 10)
openstack-config --set /etc/ceilometer/ceilometer.conf database connection "mongodb://ceilometer:<CEILOMETER_DB_PWD>@$MONGO_CLUSTER/ceilometer"
openstack-config --set /etc/ceilometer/ceilometer.conf database metering_time_to_live 2764800
openstack-config --set /etc/ceilometer/ceilometer.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/ceilometer/ceilometer.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/ceilometer/ceilometer.conf oslo_messaging_rabbit rabbit_hosts 192.168.60.250:5672,192.168.60.251:5672,192.168.60.252:5672
openstack-config --set /etc/ceilometer/ceilometer.conf oslo_messaging_rabbit rabbit_ha_queues true
openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken auth_uri http://192.168.60.24:5000/v2.0
openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken identity_uri http://192.168.60.24:35357
openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken admin_user ceilometer
openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken admin_password CEILOMETER_PASS
openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken cafile /etc/grid-security/certificates/INFN-CA-2006.pem
openstack-config --set /etc/ceilometer/ceilometer.conf service_credentials os_auth_url http://192.168.60.24:5000/v2.0
openstack-config --set /etc/ceilometer/ceilometer.conf service_credentials os_username ceilometer
openstack-config --set /etc/ceilometer/ceilometer.conf service_credentials os_tenant_name service
openstack-config --set /etc/ceilometer/ceilometer.conf service_credentials os_password CEILOMETER_PASS
openstack-config --set /etc/ceilometer/ceilometer.conf service_credentials os_endpoint_type internalURL
openstack-config --set /etc/ceilometer/ceilometer.conf service_credentials os_region_name regionOne
openstack-config --set /etc/ceilometer/ceilometer.conf service_credentials os_cacert /etc/grid-security/certificates/INFN-CA-2006.pem
openstack-config --set /etc/ceilometer/ceilometer.conf publisher telemetry_secret $CEILOMETER_TOKEN
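With this many --set calls, it is easy to mistype a section name; openstack-config can read the values back to confirm they landed where expected (a sketch; --get prints the stored value for a section/key pair):

```shell
# Read back two of the settings just written: the database connection
# string and the telemetry secret. Each command prints the stored value.
openstack-config --get /etc/ceilometer/ceilometer.conf database connection
openstack-config --get /etc/ceilometer/ceilometer.conf publisher telemetry_secret
```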
systemctl enable openstack-ceilometer-api.service openstack-ceilometer-notification.service \
openstack-ceilometer-central.service openstack-ceilometer-collector.service \
openstack-ceilometer-alarm-evaluator.service openstack-ceilometer-alarm-notifier.service
systemctl start openstack-ceilometer-api.service openstack-ceilometer-notification.service \
openstack-ceilometer-central.service openstack-ceilometer-collector.service \
openstack-ceilometer-alarm-evaluator.service openstack-ceilometer-alarm-notifier.service
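Once the services are running, a quick probe of port 8777 confirms that ceilometer-api is answering (a sketch; since auth_strategy is keystone, an unauthenticated request should be rejected with HTTP 401, which still proves the service is up):

```shell
# Probe the Telemetry API endpoint; a 401 (Unauthorized) status means
# the service is listening and enforcing keystone authentication.
curl -s -o /dev/null -w "%{http_code}\n" http://$CONTROLLER_VIP_MGMT:8777/v2/meters
```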
Cinder in telemetry:
openstack-config --set /etc/cinder/cinder.conf DEFAULT control_exchange cinder
openstack-config --set /etc/cinder/cinder.conf DEFAULT notification_driver messagingv2
systemctl restart openstack-cinder-api
systemctl restart openstack-cinder-scheduler
systemctl restart openstack-cinder-volume
Glance in telemetry:
openstack-config --set /etc/glance/glance-api.conf DEFAULT notification_driver messagingv2
openstack-config --set /etc/glance/glance-api.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/glance/glance-api.conf DEFAULT rabbit_hosts 192.168.60.250:5672,192.168.60.251:5672,192.168.60.252:5672
openstack-config --set /etc/glance/glance-registry.conf DEFAULT notification_driver messagingv2
openstack-config --set /etc/glance/glance-registry.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/glance/glance-registry.conf DEFAULT rabbit_hosts 192.168.60.250:5672,192.168.60.251:5672,192.168.60.252:5672
systemctl restart openstack-glance-api
systemctl restart openstack-glance-registry
Installation and configuration of second controller node
Open port 8777:
firewall-cmd --add-port=8777/tcp
firewall-cmd --permanent --add-port=8777/tcp
systemctl restart firewalld
yum install openstack-ceilometer-api openstack-ceilometer-collector \
openstack-ceilometer-notification openstack-ceilometer-central openstack-ceilometer-alarm \
python-ceilometerclient
scp cld-ctrl-01:/etc/ceilometer/ceilometer.conf /etc/ceilometer/ceilometer.conf
systemctl enable openstack-ceilometer-api.service openstack-ceilometer-notification.service \
openstack-ceilometer-central.service openstack-ceilometer-collector.service \
openstack-ceilometer-alarm-evaluator.service openstack-ceilometer-alarm-notifier.service
systemctl start openstack-ceilometer-api.service openstack-ceilometer-notification.service \
openstack-ceilometer-central.service openstack-ceilometer-collector.service \
openstack-ceilometer-alarm-evaluator.service openstack-ceilometer-alarm-notifier.service
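A quick sanity check that all the ceilometer units actually came up on this node (a sketch using systemctl is-active over the six services enabled above):

```shell
# Print the active/inactive state of each ceilometer unit; any unit
# reporting "failed" or "inactive" needs its journal inspected.
for unit in api notification central collector alarm-evaluator alarm-notifier; do
    echo -n "openstack-ceilometer-$unit: "
    systemctl is-active openstack-ceilometer-$unit.service
done
```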
Cinder in telemetry:
openstack-config --set /etc/cinder/cinder.conf DEFAULT control_exchange cinder
openstack-config --set /etc/cinder/cinder.conf DEFAULT notification_driver messagingv2
systemctl restart openstack-cinder-api
systemctl restart openstack-cinder-scheduler
systemctl restart openstack-cinder-volume
Glance in telemetry:
openstack-config --set /etc/glance/glance-api.conf DEFAULT notification_driver messagingv2
openstack-config --set /etc/glance/glance-api.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/glance/glance-api.conf DEFAULT rabbit_hosts 192.168.60.250:5672,192.168.60.251:5672,192.168.60.252:5672
openstack-config --set /etc/glance/glance-registry.conf DEFAULT notification_driver messagingv2
openstack-config --set /etc/glance/glance-registry.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/glance/glance-registry.conf DEFAULT rabbit_hosts 192.168.60.250:5672,192.168.60.251:5672,192.168.60.252:5672
systemctl restart openstack-glance-api
systemctl restart openstack-glance-registry
Installation and configuration on compute nodes
yum -y install openstack-ceilometer-compute python-ceilometerclient python-pecan
openstack-config --set /etc/nova/nova.conf DEFAULT instance_usage_audit True
openstack-config --set /etc/nova/nova.conf DEFAULT instance_usage_audit_period hour
openstack-config --set /etc/nova/nova.conf DEFAULT notify_on_state_change vm_and_task_state
openstack-config --set /etc/nova/nova.conf DEFAULT notification_driver messagingv2
Restart the Compute service:
systemctl restart openstack-nova-compute
CEILOMETER_TOKEN=<the one generated on the controller node>
openstack-config --set /etc/ceilometer/ceilometer.conf publisher telemetry_secret $CEILOMETER_TOKEN
openstack-config --set /etc/ceilometer/ceilometer.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/ceilometer/ceilometer.conf oslo_messaging_rabbit rabbit_hosts 192.168.60.250:5672,192.168.60.251:5672,192.168.60.252:5672
openstack-config --set /etc/ceilometer/ceilometer.conf oslo_messaging_rabbit rabbit_ha_queues True
openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken auth_uri http://192.168.60.24:5000/v2.0
openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken identity_uri http://192.168.60.24:35357
openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken admin_user ceilometer
openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken admin_password CEILOMETER_PASS
openstack-config --set /etc/ceilometer/ceilometer.conf keystone_authtoken cafile /etc/grid-security/certificates/INFN-CA.pem
openstack-config --set /etc/ceilometer/ceilometer.conf service_credentials os_auth_url http://192.168.60.24:5000/v2.0
openstack-config --set /etc/ceilometer/ceilometer.conf service_credentials os_username ceilometer
openstack-config --set /etc/ceilometer/ceilometer.conf service_credentials os_tenant_name service
openstack-config --set /etc/ceilometer/ceilometer.conf service_credentials os_password CEILOMETER_PASS
openstack-config --set /etc/ceilometer/ceilometer.conf service_credentials os_endpoint_type internalURL
openstack-config --set /etc/ceilometer/ceilometer.conf service_credentials os_region_name regionOne
openstack-config --set /etc/ceilometer/ceilometer.conf service_credentials os_cacert /etc/grid-security/certificates/INFN-CA.pem
systemctl start openstack-ceilometer-compute
systemctl enable openstack-ceilometer-compute
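To confirm the compute agent started cleanly, check its unit state and glance at its log, where polling or credential errors would show up (a sketch; the log path assumed here is the RDO package default):

```shell
# Check that the compute agent is running, then show the last log
# lines to spot polling errors (e.g. wrong keystone credentials).
systemctl is-active openstack-ceilometer-compute
tail -n 20 /var/log/ceilometer/compute.log
```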
Disabling ceilometer (Resource Usage) tab in Dashboard
If you want to disable the ceilometer (Resource Usage) in the dashboard, in the two controllers create the file /usr/share/openstack-dashboard/openstack_dashboard/local/enabled/_99_disable_metering_dashboard.py with the following content:
# The slug of the panel to be added to HORIZON_CONFIG. Required.
PANEL = 'metering'
# The slug of the dashboard the PANEL is associated with. Required.
PANEL_DASHBOARD = 'admin'
# The slug of the panel group the PANEL is associated with.
PANEL_GROUP = 'admin'
REMOVE_PANEL = True
Then:
systemctl restart httpd
Using the correct env variables to make the ceilometer client talk to the server
The ceilometer client does not yet work with the Keystone v3 interface. The workaround is to source the following script:
unset OS_PROJECT_DOMAIN_ID
unset OS_USER_DOMAIN_ID
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=https://cloud-areapd.pd.infn.it:35357/v2.0
export OS_CACERT=/etc/grid-security/certificates/INFN-CA-2015.pem
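After sourcing the script, a simple client call verifies the whole chain (env vars, HAProxy frontend, API service, MongoDB backend); a table of meters, even an empty one, counts as success:

```shell
# List the meters known to the Telemetry service; an authentication or
# connection error here points at the env vars or the endpoint.
ceilometer meter-list
```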