We assume that the controller nodes have the following setup:

Management network: 192.168.60.0/24

Two controller nodes:
cld-blu-03.cloud.pd.infn.it (192.168.60.152)
cld-blu-04.cloud.pd.infn.it (192.168.60.153)

Automatic yum updates disabled on both nodes:

[root@cld-blu-03 ~]# grep ENA /etc/sysconfig/yum-autoupdate
# ENABLED
ENABLED="false"

A directory /var/lib/glance/images (which must be already created) where to store the machine images; note that this storage must be shared between the two controller nodes in order to make both Glance instances work correctly. This applies to both controller nodes.

SELinux configured as required in /etc/selinux/config on both controller nodes.

Virtual IPs (VIPs): 192.168.60.180 for the mgmt net and 90.147.143.10 for the public net.

Host certificate and key installed in /etc/grid-security/ on the nodes:

[root@cld-blu-05 ~]# ll /etc/grid-security/
total 8
-rw-r--r-- 1 root root 1476 May  6 16:59 hostcert.pem
-rw------- 1 root root  916 May  6 16:59 hostkey.pem

The CA certificate chain available on the controller nodes:

[root@cld-blu-03 ~]# ll /etc/grid-security/chain.pem
-rw-r--r--. 1 root root 1257 Mar 24 04:17 /etc/grid-security/chain.pem
Execute the following commands on both controller nodes (cld-blu-03 and cld-blu-04):
# allow traffic toward rabbitmq server firewall-cmd --permanent --add-port=5672/tcp firewall-cmd --permanent --add-port=4369/tcp firewall-cmd --permanent --add-port=35197/tcp firewall-cmd --permanent --add-port=9100-9110/tcp # allow traffic toward keystone firewall-cmd --permanent --add-port=5000/tcp --add-port=35357/tcp # allow traffic to glance-api firewall-cmd --permanent --add-port=9292/tcp # allow traffic to glance-registry firewall-cmd --permanent --add-port=9191/tcp # allow traffic to Nova EC2 API firewall-cmd --permanent --add-port=8773/tcp # allow traffic to Nova API firewall-cmd --permanent --add-port=8774/tcp # allow traffic to Nova Metadata server firewall-cmd --permanent --add-port=8775/tcp # allow traffic to Nova VNC proxy firewall-cmd --permanent --add-port=6080/tcp # allow traffic to Neutron Server firewall-cmd --permanent --add-port=9696/tcp # allow traffic to Dashboard firewall-cmd --permanent --add-port=80/tcp --add-port=443/tcp # allow traffic to memcached firewall-cmd --permanent --add-port=11211/tcp # allow traffic to Cinder API firewall-cmd --permanent --add-port=3260/tcp --add-port=8776/tcp # permit ntpd's udp communications firewall-cmd --permanent --add-port=123/udp firewall-cmd --reload
WARNING: firewall-cmd --reload is a destructive command with regard to any temporarily added rules (i.e. those added without --permanent).
It is used here only because rules added with the --permanent directive are not immediately active.
In the subsequent configurations, rules are added by opening ports with the pair:
firewall-cmd --add-port
firewall-cmd --permanent --add-port
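For example, to open an additional port so that it is active immediately and also persists across reloads and reboots (the port number below is purely illustrative):

# runtime rule, effective immediately
firewall-cmd --add-port=12345/tcp
# permanent rule, kept after firewall-cmd --reload or a reboot
firewall-cmd --permanent --add-port=12345/tcp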
Also the HAProxy nodes (cld-blu-05, cld-blu-06 and cld-blu-07) must allow traffic through the same TCP ports:
while read i
do
  firewall-cmd --add-port=${i}/tcp
  firewall-cmd --permanent --add-port=${i}/tcp
done << EOF
5672
4369
35197
9100-9110
5000
35357
9292
9191
8773
8774
8775
6080
8776
9696
80
11211
443
8080
8004
8000
EOF
firewall-cmd --add-port=123/udp
firewall-cmd --permanent --add-port=123/udp
The HAProxy nodes run the haproxy and keepalived daemons. HAProxy redirects connections from the external world (users who want to reach glance/nova/neutron/etc.) to the controller nodes. Keepalived is responsible for grouping the HAProxy nodes (3 in our infrastructure) in order to make them highly available through a Virtual IP (VIP).
This guide will assume (as mentioned above) that HAProxy has been already configured for the MySQL cluster. Only the additional part for OpenStack is shown here.
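For reference, the VIPs themselves are kept alive by keepalived on the three HAProxy nodes; the relevant fragment of /etc/keepalived/keepalived.conf typically looks like the sketch below (the interface name, virtual_router_id, priorities and secret are illustrative assumptions, not values taken from this infrastructure):

vrrp_instance VI_PUBLIC {
    state MASTER                 # BACKUP on the other two HAProxy nodes
    interface eth0               # assumption: use the real public interface
    virtual_router_id 51         # assumption: must be identical on all three nodes
    priority 101                 # assumption: lower values (e.g. 100, 99) on the backups
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass <VRRP_SECRET>  # assumption: shared secret
    }
    virtual_ipaddress {
        90.147.143.10            # public VIP used throughout this guide
    }
}
# a second vrrp_instance manages the management VIP 192.168.60.180 in the same way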
Log into the HAProxy node(s) and put the following lines in /etc/haproxy/haproxy.cfg
:
global log 127.0.0.1 local0 log 127.0.0.1 local1 notice maxconn 4096 uid 188 gid 188 daemon tune.ssl.default-dh-param 4096 tune.maxrewrite 65536 tune.bufsize 65536 defaults log global mode http option tcplog option dontlognull retries 3 option redispatch maxconn 8000 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout check 10s listen mysql-cluster-one bind 192.168.60.180:3306 mode tcp balance leastconn option httpchk default-server on-marked-down shutdown-sessions on-marked-up shutdown-backup-sessions server cld-blu-08.cloud.pd.infn.it 192.168.60.157:3306 check port 9200 inter 12000 rise 3 fall 3 server cld-blu-09.cloud.pd.infn.it 192.168.60.158:3306 check port 9200 inter 12000 rise 3 fall 3 backup server cld-blu-10.cloud.pd.infn.it 192.168.60.159:3306 check port 9200 inter 12000 rise 3 fall 3 backup listen mysql-cluster-two bind 192.168.60.180:4306 mode tcp balance leastconn option httpchk default-server on-marked-down shutdown-sessions on-marked-up shutdown-backup-sessions server cld-blu-08.cloud.pd.infn.it 192.168.60.157:3306 check port 9200 inter 12000 rise 3 fall 3 backup server cld-blu-09.cloud.pd.infn.it 192.168.60.158:3306 check port 9200 inter 12000 rise 3 fall 3 server cld-blu-10.cloud.pd.infn.it 192.168.60.159:3306 check port 9200 inter 12000 rise 3 fall 3 backup listen mysql-cluster-three bind 192.168.60.180:5306 mode tcp balance leastconn option httpchk default-server on-marked-down shutdown-sessions on-marked-up shutdown-backup-sessions server cld-blu-08.cloud.pd.infn.it 192.168.60.157:3306 check port 9200 inter 12000 rise 3 fall 3 backup server cld-blu-09.cloud.pd.infn.it 192.168.60.158:3306 check port 9200 inter 12000 rise 3 fall 3 backup server cld-blu-10.cloud.pd.infn.it 192.168.60.159:3306 check port 9200 inter 12000 rise 3 fall 3 listen dashboard_public_ssl bind 90.147.143.10:443 balance source option tcpka option tcplog server cld-blu-03.cloud.pd.infn.it 192.168.60.152:443 check inter 2000 rise 2 fall 3 server cld-blu-04.cloud.pd.infn.it 192.168.60.153:443 check inter 2000 rise 2 fall 3 listen dashboard_public bind 90.147.143.10:80 balance source option tcpka option tcplog server cld-blu-03.cloud.pd.infn.it 192.168.60.152:80 check inter 2000 rise 2 fall 3 server cld-blu-04.cloud.pd.infn.it 192.168.60.153:80 check inter 2000 rise 2 fall 3 listen vnc bind 192.168.60.180:6080 balance source option tcpka option tcplog server cld-blu-03.cloud.pd.infn.it 192.168.60.152:6080 check inter 2000 rise 2 fall 3 server cld-blu-04.cloud.pd.infn.it 192.168.60.153:6080 check inter 2000 rise 2 fall 3 listen vnc_public bind 90.147.143.10:6080 balance source option tcpka option tcplog server cld-blu-03.cloud.pd.infn.it 192.168.60.152:6080 check inter 2000 rise 2 fall 3 server cld-blu-04.cloud.pd.infn.it 192.168.60.153:6080 check inter 2000 rise 2 fall 3 listen keystone_auth_public bind 90.147.143.10:35357 balance source option tcpka option httpchk option tcplog server cld-blu-03.cloud.pd.infn.it 192.168.60.152:35357 check inter 2000 rise 2 fall 3 server cld-blu-04.cloud.pd.infn.it 192.168.60.153:35357 check inter 2000 rise 2 fall 3 listen keystone_api_public bind 90.147.143.10:5000 balance source option tcpka option httpchk option tcplog server cld-blu-03.cloud.pd.infn.it 192.168.60.152:5000 check inter 2000 rise 2 fall 3 server cld-blu-04.cloud.pd.infn.it 192.168.60.153:5000 check inter 2000 rise 2 fall 3 listen keystone_auth bind 192.168.60.180:35357 balance source option tcpka option httpchk option tcplog server 
cld-blu-03.cloud.pd.infn.it 192.168.60.152:35357 check inter 2000 rise 2 fall 3 server cld-blu-04.cloud.pd.infn.it 192.168.60.153:35357 check inter 2000 rise 2 fall 3 listen keystone_api bind 192.168.60.180:5000 balance source option tcpka option httpchk option tcplog server cld-blu-03.cloud.pd.infn.it 192.168.60.152:5000 check inter 2000 rise 2 fall 3 server cld-blu-04.cloud.pd.infn.it 192.168.60.153:5000 check inter 2000 rise 2 fall 3 listen glance_api bind 192.168.60.180:9292 balance source option tcpka option httpchk option tcplog server cld-blu-03.cloud.pd.infn.it 192.168.60.152:9292 check inter 2000 rise 2 fall 3 server cld-blu-04.cloud.pd.infn.it 192.168.60.153:9292 check inter 2000 rise 2 fall 3 listen glance_api_public bind 90.147.143.10:9292 balance source option tcpka option httpchk option tcplog server cld-blu-03.cloud.pd.infn.it 192.168.60.152:9292 check inter 2000 rise 2 fall 3 server cld-blu-04.cloud.pd.infn.it 192.168.60.153:9292 check inter 2000 rise 2 fall 3 listen glance_registry bind 192.168.60.180:9191 balance source option tcpka option tcplog server cld-blu-03.cloud.pd.infn.it 192.168.60.152:9191 check inter 2000 rise 2 fall 3 server cld-blu-04.cloud.pd.infn.it 192.168.60.153:9191 check inter 2000 rise 2 fall 3 listen novaec2-api bind 192.168.60.180:8773 balance source option tcpka option tcplog server cld-blu-03.cloud.pd.infn.it 192.168.60.152:8773 check inter 2000 rise 2 fall 3 server cld-blu-04.cloud.pd.infn.it 192.168.60.153:8773 check inter 2000 rise 2 fall 3 listen novaec2-api_public bind 90.147.143.10:8773 balance source option tcpka option tcplog server cld-blu-03.cloud.pd.infn.it 192.168.60.152:8773 check inter 2000 rise 2 fall 3 server cld-blu-04.cloud.pd.infn.it 192.168.60.153:8773 check inter 2000 rise 2 fall 3 listen nova-api bind 192.168.60.180:8774 balance source option tcpka option httpchk option tcplog server cld-blu-03.cloud.pd.infn.it 192.168.60.152:8774 check inter 2000 rise 2 fall 3 server cld-blu-04.cloud.pd.infn.it 192.168.60.153:8774 check inter 2000 rise 2 fall 3 listen nova-api_public bind 90.147.143.10:8774 balance source option tcpka option httpchk option tcplog server cld-blu-03.cloud.pd.infn.it 192.168.60.152:8774 check inter 2000 rise 2 fall 3 server cld-blu-04.cloud.pd.infn.it 192.168.60.153:8774 check inter 2000 rise 2 fall 3 listen nova-metadata bind 192.168.60.180:8775 balance source option tcpka option tcplog server cld-blu-03.cloud.pd.infn.it 192.168.60.152:8775 check inter 2000 rise 2 fall 3 server cld-blu-04.cloud.pd.infn.it 192.168.60.153:8775 check inter 2000 rise 2 fall 3 listen nova-metadata_public bind 90.147.143.10:8775 balance source option tcpka option tcplog server cld-blu-03.cloud.pd.infn.it 192.168.60.152:8775 check inter 2000 rise 2 fall 3 server cld-blu-04.cloud.pd.infn.it 192.168.60.153:8775 check inter 2000 rise 2 fall 3 listen cinder-api_public bind 90.147.143.10:8776 balance source option tcpka option tcplog option httpchk server cld-blu-03.cloud.pd.infn.it 192.168.60.152:8776 check inter 2000 rise 2 fall 3 server cld-blu-04.cloud.pd.infn.it 192.168.60.153:8776 check inter 2000 rise 2 fall 3 listen neutron-server bind 192.168.60.180:9696 balance source option tcpka option httpchk option tcplog server cld-blu-03.cloud.pd.infn.it 192.168.60.152:9696 check inter 2000 rise 2 fall 3 server cld-blu-04.cloud.pd.infn.it 192.168.60.153:9696 check inter 2000 rise 2 fall 3 listen neutron-server_public bind 90.147.143.10:9696 balance source option tcpka option httpchk option tcplog server cld-blu-03.cloud.pd.infn.it 
192.168.60.152:9696 check inter 2000 rise 2 fall 3 server cld-blu-04.cloud.pd.infn.it 192.168.60.153:9696 check inter 2000 rise 2 fall 3 listen rabbitmq-server bind 192.168.60.180:5672 balance roundrobin mode tcp server cld-blu-03.cloud.pd.infn.it 192.168.60.152:5672 check inter 2000 rise 2 fall 3 server cld-blu-04.cloud.pd.infn.it 192.168.60.153:5672 check inter 2000 rise 2 fall 3 listen epmd bind 192.168.60.180:4369 balance roundrobin server cld-blu-03.cloud.pd.infn.it 192.168.60.152:4369 check inter 2000 rise 2 fall 3 server cld-blu-04.cloud.pd.infn.it 192.168.60.153:4369 check inter 2000 rise 2 fall 3 listen memcached_cluster bind 192.168.60.180:11211 balance source option tcpka option tcplog server cld-blu-03.cloud.pd.infn.it 192.168.60.152:11211 check inter 2000 rise 2 fall 5 server cld-blu-04.cloud.pd.infn.it 192.168.60.153:11211 check inter 2000 rise 2 fall 5
Check the syntax of the file you've just modified:
[root@cld-blu-05 ~]# haproxy -c -f /etc/haproxy/haproxy.cfg
Configuration file is valid
To enable logging of the HAProxy traffic, allow the rsyslog service on cld-blu-05, cld-blu-06 and cld-blu-07 to accept UDP connections from the haproxy daemons. In /etc/rsyslog.conf, uncomment and modify as follows:
# Provides UDP syslog reception
$ModLoad imudp
$UDPServerAddress 127.0.0.1
$UDPServerRun 514
Then add the specific haproxy targets by issuing the command:
cat << EOF >> /etc/rsyslog.d/haproxy.conf
# Save haproxy messages also to haproxy.log
local0.*;local1.* /var/log/haproxy.log
EOF
Setup logrotate for haproxy.log:
cat << EOF >> /etc/logrotate.d/haproxy
compress
/var/log/haproxy.log {
  weekly
  rotate 4
  missingok
  compress
  minsize 100k
}
EOF
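You can dry-run the new logrotate stanza to make sure it parses correctly (an illustrative check; -d only simulates, nothing is rotated):

logrotate -d /etc/logrotate.d/haproxy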
Restart HAProxy and syslog:
systemctl restart rsyslog
systemctl restart haproxy
Login into the MySQL node.
Remove previously created users and databases, if any:
mysql -u root

DROP DATABASE IF EXISTS keystone;
DROP DATABASE IF EXISTS glance;
DROP DATABASE IF EXISTS nova;
DROP DATABASE IF EXISTS neutron;
DROP DATABASE IF EXISTS cinder;
/* Following commands will raise errors if users are nonexistent.
   'drop user if exists' is not implemented in MySQL.
   http://bugs.mysql.com/bug.php?id=19166 */
DROP USER 'keystone'@'localhost';
DROP USER 'keystone'@'192.168.60.%';
DROP USER 'glance'@'localhost';
DROP USER 'glance'@'192.168.60.%';
DROP USER 'nova'@'localhost';
DROP USER 'nova'@'192.168.60.%';
DROP USER 'neutron'@'localhost';
DROP USER 'neutron'@'192.168.60.%';
DROP USER 'cinder'@'192.168.60.%';
DROP USER 'cinder'@'localhost';
FLUSH PRIVILEGES;
quit
Create database and grant users:
mysql -u root

CREATE DATABASE keystone;
GRANT ALL ON keystone.* TO 'keystone'@'192.168.60.%' IDENTIFIED BY '<KEYSTONE_DB_PWD>';
GRANT ALL ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '<KEYSTONE_DB_PWD>';
CREATE DATABASE glance;
GRANT ALL ON glance.* TO 'glance'@'192.168.60.%' IDENTIFIED BY '<GLANCE_DB_PWD>';
GRANT ALL ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '<GLANCE_DB_PWD>';
CREATE DATABASE nova;
GRANT ALL ON nova.* TO 'nova'@'192.168.60.%' IDENTIFIED BY '<NOVA_DB_PWD>';
GRANT ALL ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '<NOVA_DB_PWD>';
CREATE DATABASE neutron;
GRANT ALL ON neutron.* TO 'neutron'@'192.168.60.%' IDENTIFIED BY '<NEUTRON_DB_PWD>';
GRANT ALL ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '<NEUTRON_DB_PWD>';
CREATE DATABASE cinder;
GRANT ALL ON cinder.* TO 'cinder'@'192.168.60.%' IDENTIFIED BY '<CINDER_DB_PWD>';
GRANT ALL ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY '<CINDER_DB_PWD>';
FLUSH PRIVILEGES;
commit;
quit
Logout from MySQL node.
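Before moving on, you may want to check that the new accounts can reach their databases through the HAProxy VIP that will be used in the configuration files below. An illustrative check, run from any host with the mysql client that can reach the VIP (it will prompt for <KEYSTONE_DB_PWD> and should list the keystone database):

mysql -h 192.168.60.180 -P 3306 -u keystone -p -e 'SHOW DATABASES;'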
First install the YUM repo from RDO:
yum install -y https://repos.fedorapeople.org/repos/openstack/EOL/openstack-icehouse/rdo-release-icehouse-4.noarch.rpm
Install the packages for Keystone, Glance, Nova, Neutron and Horizon (Dashboard):
yum install -y openstack-keystone python-keystoneclient openstack-utils \
  openstack-nova python-novaclient rabbitmq-server openstack-glance \
  python-kombu python-anyjson python-amqplib openstack-neutron \
  python-neutron python-neutronclient openstack-neutron-openvswitch mariadb \
  memcached python-memcached mod_wsgi openstack-dashboard \
  openstack-cinder openstack-utils mod_ssl openstack-neutron-ml2
Apply a workaround to a known bug (see this page for more info):
openstack-config --set /etc/keystone/keystone.conf token expiration 32400
Proceed with Keystone setup:
export SERVICE_TOKEN=$(openssl rand -hex 10)
echo $SERVICE_TOKEN > ~/ks_admin_token
openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token $SERVICE_TOKEN
openstack-config --set /etc/keystone/keystone.conf sql connection "mysql://keystone:<KEYSTONE_DB_PWD>@192.168.60.180:3306/keystone"
openstack-config --set /etc/keystone/keystone.conf DEFAULT bind_host 0.0.0.0
keystone-manage pki_setup --keystone-user keystone --keystone-group keystone
chown -R keystone:keystone /var/log/keystone /etc/keystone/ssl/
su keystone -s /bin/sh -c "keystone-manage db_sync"
Start Keystone:
systemctl start openstack-keystone
systemctl enable openstack-keystone
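As a quick sanity check that Keystone is listening on both the public and admin ports (an illustrative check, assuming the ss utility from iproute, standard on CentOS 7):

ss -tlnp | grep -E ':(5000|35357) '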
Get access to Keystone and create the admin user and tenant:
export OS_SERVICE_TOKEN=`cat ~/ks_admin_token`
export OS_SERVICE_ENDPOINT=http://192.168.60.180:35357/v2.0
keystone service-create --name=keystone --type=identity --description="Keystone Identity Service"
The system will respond something like:
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |    Keystone Identity Service     |
|   enabled   |               True               |
|      id     | 5363ecce39614aefa80ce8c2f9404691 |
|     name    |             keystone             |
|     type    |             identity             |
+-------------+----------------------------------+
Subsequent output from the system is suppressed.
keystone endpoint-create --service keystone --publicurl http://90.147.143.10:5000/v2.0 --adminurl http://90.147.143.10:35357/v2.0 --internalurl http://192.168.60.180:5000/v2.0
keystone user-create --name admin --pass ADMIN_PASS
keystone role-create --name admin
keystone tenant-create --name admin
keystone role-create --name Member
keystone user-role-add --user admin --role admin --tenant admin
\rm -f $HOME/keystone_admin.sh
echo "export OS_USERNAME=admin" > $HOME/keystone_admin.sh
echo "export OS_TENANT_NAME=admin" >> $HOME/keystone_admin.sh
echo "export OS_PASSWORD=ADMIN_PASS" >> $HOME/keystone_admin.sh
echo "export OS_AUTH_URL=http://90.147.143.10:5000/v2.0/" >> $HOME/keystone_admin.sh
keystone tenant-create --name services --description "Services Tenant"
In order to check that the Keystone service is properly installed, copy the keystone_admin.sh script you've just created to another machine, even your desktop. Install the Keystone command line client on it (yum -y install python-keystoneclient); then source the keystone_admin.sh script and try the command:
$ keystone user-list
+----------------------------------+-------+---------+-------+
|                id                |  name | enabled | email |
+----------------------------------+-------+---------+-------+
| 60aa8974cf4d4736b28b04ffa52492ab | admin |   True  |       |
+----------------------------------+-------+---------+-------+
It's better to do this on both controller nodes.
See origin of the problem here.
Create the file /usr/local/bin/keystone_token_flush.sh
:
cat << EOF >> /usr/local/bin/keystone_token_flush.sh
#!/bin/sh
logger -t keystone-cleaner "Starting token cleanup"
/usr/bin/keystone-manage -v -d token_flush
logger -t keystone-cleaner "Ending token cleanup"
EOF
Since the openstack-keystone package rotates all logs in /var/log/keystone/
there is no need to configure the logrotate process any further.
Execute:
cat << EOF > /etc/cron.hourly/keystone_token_flush
/usr/local/bin/keystone_token_flush.sh >> /var/log/keystone/keystone_token_flush.log 2>&1
EOF
chmod +x /usr/local/bin/keystone_token_flush.sh
chmod 0755 /etc/cron.hourly/keystone_token_flush
Define the TCP port range allowed for inter-node communication (this is needed for cluster mode of RabbitMQ)
\rm -f /etc/rabbitmq/rabbitmq.config
cat << EOF >> /etc/rabbitmq/rabbitmq.config
[{kernel, [ {inet_dist_listen_min, 9100}, {inet_dist_listen_max, 9110} ]}].
EOF
Correct the logrotate configuration file to use rabbitmqctl instead of the deprecated 'service' command syntax
sed -i '/rabbitmq-server rotate-logs/s+/sbin/service rabbitmq-server rotate-logs+/usr/sbin/rabbitmqctl rotate_logs+' /etc/logrotate.d/rabbitmq-server
Start and enable Rabbit
systemctl start rabbitmq-server
systemctl enable rabbitmq-server
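At this point you can already verify that the broker is running (the same commands will be used again after the second node joins the cluster):

rabbitmqctl status
rabbitmqctl cluster_status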
Login into the primary controller node, or wherever you've installed the Keystone's command line, and source the script keystone_admin.sh
that you created above:
source keystone_admin.sh
export OS_SERVICE_TOKEN=`cat ~/ks_admin_token`
export OS_SERVICE_ENDPOINT=http://192.168.60.180:35357/v2.0
Ensure OS_SERVICE_TOKEN and OS_SERVICE_ENDPOINT are both set.
Then create the Glance user and image service in the Keystone's database:
keystone user-create --name glance --pass GLANCE_PASS
keystone user-role-add --user glance --role admin --tenant services
keystone service-create --name glance --type image --description "Glance Image Service"
keystone endpoint-create --service glance --publicurl "http://90.147.143.10:9292" --adminurl "http://90.147.143.10:9292" --internalurl "http://192.168.60.180:9292"
Login into the primary controller node and modify the relevant configuration files:
glance-api.conf
while read i
do
  openstack-config --set /etc/glance/glance-api.conf ${i}
done << EOF
DEFAULT bind_host 0.0.0.0
DEFAULT registry_host 192.168.60.180
DEFAULT notification_driver noop
DEFAULT sql_connection "mysql://glance:<GLANCE_DB_PWD>@192.168.60.180:5306/glance"
DEFAULT sql_idle_timeout 30
keystone_authtoken auth_host 192.168.60.180
keystone_authtoken auth_port 35357
keystone_authtoken auth_protocol http
keystone_authtoken auth_uri http://192.168.60.180:35357/v2.0
keystone_authtoken admin_tenant_name services
keystone_authtoken admin_user glance
keystone_authtoken admin_password GLANCE_PASS
paste_deploy flavor "keystone+cachemanagement"
EOF
# The following parameter should equal the number of CPUs
openstack-config --set /etc/glance/glance-api.conf DEFAULT workers 8
glance-registry.conf
while read i
do
  openstack-config --set /etc/glance/glance-registry.conf ${i}
done << EOF
DEFAULT bind_host 0.0.0.0
keystone_authtoken admin_tenant_name services
keystone_authtoken admin_user glance
keystone_authtoken admin_password GLANCE_PASS
keystone_authtoken auth_host 192.168.60.180
keystone_authtoken auth_port 35357
keystone_authtoken auth_protocol http
keystone_authtoken auth_uri http://192.168.60.180:35357/v2.0
database connection "mysql://glance:<GLANCE_DB_PWD>@192.168.60.180:5306/glance"
database idle_timeout 30
paste_deploy flavor keystone
EOF
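To double check that the values landed in the expected sections, openstack-config can also read them back (illustrative examples using parameters set above):

openstack-config --get /etc/glance/glance-api.conf DEFAULT registry_host
openstack-config --get /etc/glance/glance-registry.conf keystone_authtoken admin_user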
While still logged into the primary controller node, prepare the paths:
mkdir -p /var/run/glance /var/log/glance
chown -R glance /var/log/glance
chown -R glance /var/run/glance
chown -R glance:glance /var/lib/glance
… and initialize the Glance's database:
# su glance -s /bin/sh
/root $ glance-manage db_sync
/root $ exit
As of 26/02/2015 there is a bug that might prevent the script from working. Check /var/log/glance/api.log
for messages like these:
CRITICAL glance [-] ValueError: Tables "migrate_version" have non utf8 collation, please make sure all tables are CHARSET=utf8
glance Traceback (most recent call last):
glance   File "/usr/bin/glance-manage", line 10, in <module>
glance     sys.exit(main())
glance   File "/usr/lib/python2.7/site-packages/glance/cmd/manage.py", line 259, in main
glance     return CONF.command.action_fn()
glance   File "/usr/lib/python2.7/site-packages/glance/cmd/manage.py", line 160, in sync
glance     CONF.command.current_version)
glance   File "/usr/lib/python2.7/site-packages/glance/cmd/manage.py", line 137, in sync
glance     sanity_check=self._need_sanity_check())
glance   File "/usr/lib/python2.7/site-packages/glance/openstack/common/db/sqlalchemy/migration.py", line 195, in db_sync
glance     _db_schema_sanity_check(engine)
glance   File "/usr/lib/python2.7/site-packages/glance/openstack/common/db/sqlalchemy/migration.py", line 221, in _db_schema_sanity_check
glance     ) % ','.join(table_names))
glance ValueError: Tables "migrate_version" have non utf8 collation, please make sure all tables are CHARSET=utf8
The workaround is (see https://bugs.launchpad.net/oslo-incubator/+bug/1301036/comments/17 ):
mysql -u glance -h 192.168.60.10 -p
use glance;
alter table migrate_version convert to character set utf8;
exit;
Then reissue:
# su glance -s /bin/sh
/root $ glance-manage db_sync
/root $ exit
To prevent unprivileged users from registering public images, change the policy in /etc/glance/policy.json from:
"publicize_image": "",
to
"publicize_image": "role:admin",
While still on the primary controller node, start and enable the Glance services:
systemctl start openstack-glance-registry
systemctl start openstack-glance-api
systemctl enable openstack-glance-registry
systemctl enable openstack-glance-api
… and finally create the credential file for glance
cat << EOF > glancerc
export OS_USERNAME=glance
export OS_TENANT_NAME=services
export OS_PASSWORD=GLANCE_PASS
export OS_AUTH_URL=http://192.168.60.180:35357/v2.0/
EOF
You can copy the credential file to any machine you like, where you've installed the Python Glance's command line (yum -y install python-glanceclient
). From this machine you can access the Glance service (list images, create images, delete images, etc.).
In order to check that Glance is correctly installed, log into any machine where you've installed the Glance command line client and source the glancerc script that you've copied from the primary controller node; then try these commands:
pcmazzon ~ $ wget http://cdn.download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-disk.img [...] Saving to: “cirros-0.3.1-x86_64-disk.img” [...] 2013-12-06 12:25:03 (3.41 MB/s) - “cirros-0.3.1-x86_64-disk.img” saved [13147648/13147648] pcmazzon ~ $ glance image-create --name=cirros --disk-format=qcow2 --container-format=bare --is-public=True < cirros-0.3.1-x86_64-disk.img +------------------+--------------------------------------+ | Property | Value | +------------------+--------------------------------------+ | checksum | d972013792949d0d3ba628fbe8685bce | | container_format | bare | | created_at | 2015-02-26T16:14:40 | | deleted | False | | deleted_at | None | | disk_format | qcow2 | | id | 0fb09e44-a25c-49e2-a046-191a7989aebc | | is_public | True | | min_disk | 0 | | min_ram | 0 | | name | cirros | | owner | 1af77118d9db4c9a959810aa7d67c6d8 | | protected | False | | size | 13147648 | | status | active | | updated_at | 2015-02-26T16:14:43 | | virtual_size | None | +------------------+--------------------------------------+ pcmazzon ~ $ glance index ID Name Disk Format Container Format Size ------------------------------------ ------------------------------ -------------------- -------------------- -------------- 0fb09e44-a25c-49e2-a046-191a7989aebc cirros qcow2 bare 13147648
Login into the primary controller node, or wherever you've installed the Keystone's command line, and source the script keystone_admin.sh that you created above:
source keystone_admin.sh
Add NOVA service, user and endpoint to Keystone's database:
keystone user-create --name nova --pass NOVA_PASS
keystone user-role-add --user nova --role admin --tenant services
keystone service-create --name nova --type compute --description "OpenStack Compute Service"
SERVICE_NOVA_ID=`keystone service-list|grep nova|awk '{print $2}'`
keystone endpoint-create --service-id $SERVICE_NOVA_ID \
  --publicurl http://90.147.143.10:8774/v2/%\(tenant_id\)s \
  --adminurl http://90.147.143.10:8774/v2/%\(tenant_id\)s \
  --internalurl http://192.168.60.180:8774/v2/%\(tenant_id\)s
keystone service-create --name nova_ec2 --type ec2 --description "EC2 Service"
SERVICE_EC2_ID=`keystone service-list|grep nova_ec2|awk '{print $2}'`
keystone endpoint-create --service-id $SERVICE_EC2_ID \
  --publicurl http://90.147.143.10:8773/services/Cloud \
  --adminurl http://90.147.143.10:8773/services/Admin \
  --internalurl http://192.168.60.180:8773/services/Cloud
Login into the primary controller node and modify the relevant configuration files:
nova.conf:
while read i
do
  openstack-config --set /etc/nova/nova.conf ${i}
done << EOF
database connection "mysql://nova:<NOVA_DB_PWD>@192.168.60.180:5306/nova"
database idle_timeout 30
DEFAULT rpc_backend nova.openstack.common.rpc.impl_kombu
DEFAULT rabbit_hosts 192.168.60.152:5672,192.168.60.153:5672
DEFAULT rabbit_ha_queues True
DEFAULT glance_host 192.168.60.180
DEFAULT my_ip 192.168.60.180
DEFAULT vncserver_listen 90.147.143.10
DEFAULT vncserver_proxyclient_address 192.168.60.180
DEFAULT auth_strategy keystone
keystone_authtoken auth_host 192.168.60.180
keystone_authtoken auth_protocol http
keystone_authtoken auth_port 35357
keystone_authtoken admin_user nova
keystone_authtoken admin_tenant_name services
keystone_authtoken admin_password NOVA_PASS
DEFAULT api_paste_config /etc/nova/api-paste.ini
DEFAULT neutron_metadata_proxy_shared_secret METADATA_PASS
DEFAULT service_neutron_metadata_proxy true
DEFAULT memcached_servers 192.168.60.152:11211,192.168.60.153:11211
DEFAULT enabled_apis ec2,osapi_compute,metadata
DEFAULT ec2_listen 0.0.0.0
DEFAULT ec2_listen_port 8773
DEFAULT cpu_allocation_ratio 4.0
EOF
# removed:
# openstack-config --set /etc/nova/nova.conf DEFAULT libvirt_vif_driver nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
api-paste.ini:
while read i
do
  openstack-config --set /etc/nova/api-paste.ini ${i}
done << EOF
filter:authtoken paste.filter_factory keystoneclient.middleware.auth_token:filter_factory
filter:authtoken auth_host 192.168.60.180
filter:authtoken auth_port 35357
filter:authtoken auth_protocol http
filter:authtoken auth_uri http://192.168.60.180:5000/v2.0
filter:authtoken admin_tenant_name services
filter:authtoken admin_user nova
filter:authtoken admin_password NOVA_PASS
EOF
While still logged into the primary controller node, initialize the database (NOTE that this is db sync without '_'):
# su nova -s /bin/sh
/root $ nova-manage db sync
/root $ exit
Modify the file /etc/nova/policy.json so that users can manage only their VMs:
# cd /etc/nova/
# patch -p0 << EOP
--- /etc/nova/policy.json.orig	2015-03-04 10:23:54.042132305 +0100
+++ /etc/nova/policy.json	2015-03-04 10:37:32.581084403 +0100
@@ -1,7 +1,8 @@
 {
     "context_is_admin": "role:admin",
     "admin_or_owner": "is_admin:True or project_id:%(project_id)s",
-    "default": "rule:admin_or_owner",
+    "admin_or_user": "is_admin:True or user_id:%(user_id)s",
+    "default": "rule:admin_or_user",
 
     "cells_scheduler_filter:TargetCellFilter": "is_admin:True",
 
@@ -9,6 +10,7 @@
     "compute:create:attach_network": "",
     "compute:create:attach_volume": "",
     "compute:create:forced_host": "is_admin:True",
+    "compute:get": "rule:admin_or_owner",
     "compute:get_all": "",
     "compute:get_all_tenants": "",
     "compute:start": "rule:admin_or_owner",
EOP
You should receive the message:
patching file policy.json
Start and enable the nova services:
systemctl start openstack-nova-api
systemctl start openstack-nova-cert
systemctl start openstack-nova-consoleauth
systemctl start openstack-nova-scheduler
systemctl start openstack-nova-conductor
systemctl start openstack-nova-novncproxy
systemctl enable openstack-nova-api
systemctl enable openstack-nova-cert
systemctl enable openstack-nova-consoleauth
systemctl enable openstack-nova-scheduler
systemctl enable openstack-nova-conductor
systemctl enable openstack-nova-novncproxy
Preferably from your desktop, or wherever you've copied the keystone_admin.sh script and installed the Nova command line client, try to execute:
pcmazzon ~ $ nova service-list +------------------+-----------------------------+----------+---------+-------+----------------------------+-----------------+ | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason | +------------------+-----------------------------+----------+---------+-------+----------------------------+-----------------+ | nova-consoleauth | cld-blu-03.cloud.pd.infn.it | internal | enabled | up | 2015-03-04T09:43:08.000000 | - | | nova-conductor | cld-blu-03.cloud.pd.infn.it | internal | enabled | up | 2015-03-04T09:43:08.000000 | - | | nova-scheduler | cld-blu-03.cloud.pd.infn.it | internal | enabled | up | 2015-03-04T09:43:08.000000 | - | | nova-cert | cld-blu-03.cloud.pd.infn.it | internal | enabled | up | 2015-03-04T09:43:08.000000 | - | +------------------+-----------------------------+----------+---------+-------+----------------------------+-----------------+ pcmazzon ~ $ nova availability-zone-list +--------------------------------+----------------------------------------+ | Name | Status | +--------------------------------+----------------------------------------+ | internal | available | | |- cld-blu-03.cloud.pd.infn.it | | | | |- nova-conductor | enabled :-) 2015-03-04T09:45:18.000000 | | | |- nova-consoleauth | enabled :-) 2015-03-04T09:45:18.000000 | | | |- nova-scheduler | enabled :-) 2015-03-04T09:45:18.000000 | | | |- nova-cert | enabled :-) 2015-03-04T09:45:18.000000 | +--------------------------------+----------------------------------------+ pcmazzon ~ $ nova endpoints +-------------+----------------------------------+ | glance | Value | +-------------+----------------------------------+ | adminURL | http://90.147.143.10:9292 | | id | 3d9b63cc4b624220a3db1a2da99b241f | | internalURL | http://192.168.60.180:9292 | | publicURL | http://90.147.143.10:9292 | | region | regionOne | +-------------+----------------------------------+ +-------------+----------------------------------------------------------------+ | nova | Value | +-------------+----------------------------------------------------------------+ | adminURL | http://90.147.143.10:8774/v2/7114e3c97144459a9c8afbb9f09b4def | | id | 04fcb15180a34ec7a239888decfd55dd | | internalURL | http://192.168.60.180:8774/v2/7114e3c97144459a9c8afbb9f09b4def | | publicURL | http://90.147.143.10:8774/v2/7114e3c97144459a9c8afbb9f09b4def | | region | regionOne | | serviceName | nova | +-------------+----------------------------------------------------------------+ +-------------+-------------------------------------------+ | nova_ec2 | Value | +-------------+-------------------------------------------+ | adminURL | http://90.147.143.10:8773/services/Admin | | id | 36d3d9f4007a4aeabb639530f4400d89 | | internalURL | http://192.168.60.180:8773/services/Cloud | | publicURL | http://90.147.143.10:8773/services/Cloud | | region | regionOne | +-------------+-------------------------------------------+ +-------------+----------------------------------+ | keystone | Value | +-------------+----------------------------------+ | adminURL | http://90.147.143.10:35357/v2.0 | | id | 208f9156abf945509993babdb46579d9 | | internalURL | http://192.168.60.180:5000/v2.0 | | publicURL | http://90.147.143.10:5000/v2.0 | | region | regionOne | +-------------+----------------------------------+
Even better if the above commands can be tried from your desktop, after sourcing the keystone_admin.sh
.
usermod -s /bin/bash nova
mkdir -p -m 700 ~nova/.ssh
chown nova.nova ~nova/.ssh
su - nova
$ cd .ssh
$ ssh-keygen -f id_rsa -b 1024 -P ""
$ cp id_rsa.pub authorized_keys
$ cat << EOF >> config
Host *
  StrictHostKeyChecking no
  UserKnownHostsFile=/dev/null
EOF
$ exit
Distribute the content of ~nova/.ssh to the second controller node.
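A possible way to do it (a sketch: it assumes nova's home directory is /var/lib/nova, the default on RDO installations):

# copy the key material to the second controller, then fix ownership and permissions there
scp -rp /var/lib/nova/.ssh root@cld-blu-04.cloud.pd.infn.it:/var/lib/nova/
ssh root@cld-blu-04.cloud.pd.infn.it "chown -R nova:nova /var/lib/nova/.ssh && chmod 700 /var/lib/nova/.ssh"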
Login into the primary controller node, or wherever you've installed the Keystone's command line, and source the script keystone_admin.sh that you created above:
source ~/keystone_admin.sh
Then, create the endpoint, service and user information in the Keystone's database for Neutron:
keystone user-create --name neutron --pass NEUTRON_PASS
keystone user-role-add --user neutron --role admin --tenant services
keystone service-create --name neutron --type network --description "OpenStack Networking Service"
SERVICE_NEUTRON_ID=`keystone service-list|grep neutron|awk '{print $2}'`
keystone endpoint-create --service-id $SERVICE_NEUTRON_ID \
  --publicurl "http://90.147.143.10:9696" \
  --adminurl "http://90.147.143.10:9696" \
  --internalurl "http://192.168.60.180:9696"
Login into the primary controller node and modify the configuration files.
neutron.conf:
# note: nova_admin_tenant_id below takes its value from the output of the keystone command
while read i
do
  openstack-config --set /etc/neutron/neutron.conf ${i}
done << EOF
keystone_authtoken auth_host 192.168.60.180
keystone_authtoken admin_tenant_name services
keystone_authtoken admin_user neutron
keystone_authtoken admin_password NEUTRON_PASS
keystone_authtoken auth_url http://192.168.60.180:35357/v2.0
keystone_authtoken auth_uri http://192.168.60.180:35357/v2.0
DEFAULT auth_strategy keystone
DEFAULT rpc_backend neutron.openstack.common.rpc.impl_kombu
DEFAULT rabbit_hosts 192.168.60.152:5672,192.168.60.153:5672
DEFAULT rabbit_ha_queues True
DEFAULT core_plugin ml2
DEFAULT service_plugins router
database connection "mysql://neutron:<NEUTRON_DB_PWD>@192.168.60.180:4306/neutron"
DEFAULT verbose False
DEFAULT dhcp_agents_per_network 2
DEFAULT dhcp_lease_duration 86400
DEFAULT agent_down_time 75
DEFAULT notify_nova_on_port_status_changes True
DEFAULT notify_nova_on_port_data_changes True
DEFAULT nova_url http://192.168.60.180:8774/v2
DEFAULT nova_admin_username nova
DEFAULT nova_admin_tenant_id $(keystone tenant-list | awk '/ services / { print $2 }')
DEFAULT nova_admin_password NOVA_PASS
DEFAULT nova_admin_auth_url http://192.168.60.180:35357/v2.0
agent report_interval 30
EOF
openstack-config --set /etc/neutron/neutron.conf agent root_helper "sudo neutron-rootwrap /etc/neutron/rootwrap.conf"
api-paste.ini:
while read i
do
  openstack-config --set /etc/neutron/api-paste.ini ${i}
done << EOF
filter:authtoken paste.filter_factory keystoneclient.middleware.auth_token:filter_factory
filter:authtoken admin_tenant_name services
filter:authtoken admin_user neutron
filter:authtoken admin_password NEUTRON_PASS
EOF
ml2_conf.ini:
while read i
do
  openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ${i}
done << EOF
ml2 type_drivers gre,vlan,flat
ml2 tenant_network_types gre
ml2 mechanism_drivers openvswitch
ml2_type_gre tunnel_id_ranges 1:1000
securitygroup firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
securitygroup enable_security_group True
EOF
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
nova.conf:
while read i
do
  openstack-config --set /etc/nova/nova.conf ${i}
done << EOF
DEFAULT network_api_class nova.network.neutronv2.api.API
DEFAULT neutron_url http://192.168.60.180:9696
DEFAULT neutron_auth_strategy keystone
DEFAULT neutron_admin_tenant_name services
DEFAULT neutron_admin_username neutron
DEFAULT neutron_admin_password NEUTRON_PASS
DEFAULT neutron_admin_auth_url http://192.168.60.180:35357/v2.0
DEFAULT linuxnet_interface_driver nova.network.linux_net.LinuxOVSInterfaceDriver
DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
DEFAULT security_group_api neutron
EOF
Restart NOVA's services (since you've just modified its configuration file)
systemctl restart openstack-nova-api
systemctl restart openstack-nova-scheduler
systemctl restart openstack-nova-conductor
While still logged into the primary controller node, start and enable the Neutron server:
neutron-db-manage --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini stamp head
Its output should look like:
No handlers could be found for logger "neutron.common.legacy"
INFO  [alembic.migration] Context impl MySQLImpl.
INFO  [alembic.migration] Will assume non-transactional DDL.
Now start neutron-server
systemctl start neutron-server
systemctl enable neutron-server
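To verify that neutron-server answers through the VIP, you can source keystone_admin.sh and list the loaded extensions (an illustrative check):

source ~/keystone_admin.sh
neutron ext-list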
Login into the primary controller node, or wherever you've installed the Keystone's command line, and source the script keystone_admin.sh
that you created above:
source ~/keystone_admin.sh
Then, create the endpoint, service and user information in the Keystone's database for Cinder:
keystone user-create --name cinder --pass CINDER_PASS
keystone user-role-add --user cinder --role admin --tenant services
keystone service-create --name cinder --type volume --description "Cinder Volume Service"
keystone service-create --name=cinderv2 --type=volumev2 --description="Cinder Volume Service V2"
keystone endpoint-create --service cinder --publicurl http://90.147.143.10:8776/v1/%\(tenant_id\)s --internalurl http://192.168.60.180:8776/v1/%\(tenant_id\)s --adminurl http://90.147.143.10:8776/v1/%\(tenant_id\)s
keystone endpoint-create --service cinderv2 --publicurl http://90.147.143.10:8776/v2/%\(tenant_id\)s --internalurl http://192.168.60.180:8776/v2/%\(tenant_id\)s --adminurl http://90.147.143.10:8776/v2/%\(tenant_id\)s
Login into the primary controller node and modify the configuration files.
cinder.conf:
while read i
do
  openstack-config --set /etc/cinder/cinder.conf ${i}
done << EOF
DEFAULT auth_strategy keystone
keystone_authtoken auth_host 192.168.60.180
keystone_authtoken admin_tenant_name services
keystone_authtoken admin_user cinder
keystone_authtoken admin_password CINDER_PASS
DEFAULT rpc_backend cinder.openstack.common.rpc.impl_kombu
DEFAULT rabbit_hosts 192.168.60.152:5672,192.168.60.153:5672
DEFAULT rabbit_ha_queues True
DEFAULT sql_idle_timeout 30
DEFAULT rootwrap_config /etc/cinder/rootwrap.conf
DEFAULT api_paste_config /etc/cinder/api-paste.ini
DEFAULT control_exchange cinder
DEFAULT sql_connection "mysql://cinder:<CINDER_DB_PWD>@192.168.60.180:4306/cinder"
EOF
#openstack-config --set /etc/cinder/cinder.conf DEFAULT osapi_volume_listen 192.168.60.180
Initialize the Cinder database (NOTE that this is db sync without '_'):
# su cinder -s /bin/sh
/root $ cinder-manage db sync
/root $ exit
Modify the file /etc/cinder/policy.json so that users can manage only their volumes:
# cd /etc/cinder/
# patch -p0 << EOP
--- policy.json.orig	2015-09-23 10:38:52.499132043 +0200
+++ policy.json	2015-09-22 16:14:20.894780883 +0200
@@ -1,7 +1,8 @@
 {
     "context_is_admin": [["role:admin"]],
     "admin_or_owner": [["is_admin:True"], ["project_id:%(project_id)s"]],
-    "default": [["rule:admin_or_owner"]],
+    "admin_or_user": [["is_admin:True"], ["user_id:%(user_id)s"]],
+    "default": [["rule:admin_or_user"]],
 
     "admin_api": [["is_admin:True"]],
 
EOP
You should receive the message:
patching file policy.json
And finally start API services:
systemctl start openstack-cinder-api
systemctl enable openstack-cinder-api
systemctl start openstack-cinder-scheduler
systemctl enable openstack-cinder-scheduler
Modify the file /etc/openstack-dashboard/local_settings: look for OPENSTACK_HYPERVISOR_FEATURES = { and set 'can_set_password': True.
Modify the file /etc/openstack-dashboard/local_settings: look for the CACHES string and substitute whatever is there with:
CACHES = {
    'default': {
        'BACKEND' : 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION' : '192.168.60.152:11211',
    }
}
you can try this command:
sed -i "s+django\.core\.cache\.backends\.locmem\.LocMemCache'+django\.core\.cache\.backends\.memcached\.MemcachedCache',\n\t'LOCATION' : '192.168.60.152:11211',+" /etc/openstack-dashboard/local_settings
Note that the TCP port 11211 and the IP address must match the ones contained in the file /etc/sysconfig/memcached:
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 192.168.60.152"
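A simple way to verify that memcached answers on that address and port (a sketch, assuming the nc utility is installed; "quit" closes the connection so the command returns):

printf 'stats\nquit\n' | nc 192.168.60.152 11211 | head -5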
Now, look for the string OPENSTACK_HOST
in /etc/openstack-dashboard/local_settings and set it to:
OPENSTACK_HOST = "192.168.60.180"
by executing this command:
sed -i 's+OPENSTACK_HOST = "127.0.0.1"+OPENSTACK_HOST = "192.168.60.180"+' /etc/openstack-dashboard/local_settings
Modify the ALLOWED_HOSTS parameter in /etc/openstack-dashboard/local_settings and set it to:
ALLOWED_HOSTS = ['*']
by executing the command
sed -i "s+ALLOWED_HOSTS = .*+ALLOWED_HOSTS = ['*']+" /etc/openstack-dashboard/local_settings
Execute the following commands:
sed -i 's+^Listen.*+Listen 192.168.60.152:80+' /etc/httpd/conf/httpd.conf
echo "ServerName cld-blu-03.cedc.unipd.it:80" >> /etc/httpd/conf/httpd.conf
echo "RedirectMatch permanent ^/$ /dashboard/" >> /etc/httpd/conf.d/openstack-dashboard.conf
echo "RedirectMatch ^/$ /dashboard/" > /etc/httpd/conf.d/rootredirect.conf
To address an observed problem related to number of open files execute the following command:
cat << EOF >> /etc/security/limits.conf
*    soft    nofile    4096
*    hard    nofile    4096
EOF
Start and enable the WebServer:
systemctl start httpd
systemctl start memcached
systemctl enable httpd
systemctl enable memcached
Please do not consider this configuration optional: it should be done in order to encrypt the users' passwords.
Install the mod_ssl
package on both controller nodes:
yum -y install mod_ssl
Execute the following commands:
#sed -i 's+^Listen.*+Listen 8443+' /etc/httpd/conf.d/ssl.conf
#sed -i 's+VirtualHost _default_:443+VirtualHost _default_:8443+' /etc/httpd/conf.d/ssl.conf
sed -i 's+^SSLCertificateFile.*+SSLCertificateFile /etc/grid-security/hostcert.pem+' /etc/httpd/conf.d/ssl.conf
sed -i 's+^SSLCertificateKeyFile.*+SSLCertificateKeyFile /etc/grid-security/hostkey.pem+' /etc/httpd/conf.d/ssl.conf
echo "RewriteEngine On" >> /etc/httpd/conf/httpd.conf
echo "RewriteCond %{HTTPS} !=on" >> /etc/httpd/conf/httpd.conf
echo "RewriteRule ^/?(.*) https://%{SERVER_NAME}:443/\$1 [R,L]" >> /etc/httpd/conf/httpd.conf
Restart httpd:
systemctl restart httpd
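To confirm that the redirect is in place, you can request the dashboard over plain HTTP and look for a Location header pointing at https (an illustrative check, run on the primary controller node):

curl -sI http://192.168.60.152/dashboard/ | grep -i '^Location'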
You can stop here if you need neither the High Availability with the second node nor the SSL support.
Login into the secondary controller node and configure the RabbitMQ to use the already specified TCP port range:
\rm -f /etc/rabbitmq/rabbitmq.config
cat << EOF >> /etc/rabbitmq/rabbitmq.config
[{kernel, [ {inet_dist_listen_min, 9100}, {inet_dist_listen_max, 9110} ]}].
EOF
While still logged into the secondary controller node, start and enable Rabbit:
systemctl start rabbitmq-server
systemctl enable rabbitmq-server
This first start has generated the erlang cookie. Then stop the server:
systemctl stop rabbitmq-server
RabbitMQ's clustering requires that the nodes have the same Erlang cookie… then copy erlang cookie from the primary node and restart the server:
scp root@cld-blu-03.cloud.pd.infn.it:/var/lib/rabbitmq/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie
Change cookie's ownership and restart the rabbit server
chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie
systemctl start rabbitmq-server
While logged into the secondary controller node, stop the application:
rabbitmqctl stop_app
rabbitmqctl reset
… then join the server running in the primary node:
rabbitmqctl join_cluster rabbit@cld-blu-03
Clustering node 'rabbit@cld-blu-04' with 'rabbit@cld-blu-03' ...
...done.
rabbitmqctl start_app
Starting node 'rabbit@cld-blu-04' ...
...done.
# see: http://goo.gl/y0aVmp
rabbitmqctl set_policy HA '^(?!amq\.).*' '{"ha-mode": "all"}'
Verify the cluster status and the HA policy on both nodes:

[root@cld-blu-04 ~]# rabbitmqctl cluster_status
Cluster status of node 'rabbit@cld-blu-04' ...
[{nodes,[{disc,['rabbit@cld-blu-03','rabbit@cld-blu-04']}]},
 {running_nodes,['rabbit@cld-blu-03','rabbit@cld-blu-04']},
 {partitions,[]}]
...done.
[root@cld-blu-04 ~]# rabbitmqctl list_policies
Listing policies ...
/  HA  ^(?!amq\\.).*  {"ha-mode":"all"}  0
...done.
[root@cld-blu-03 ~]# rabbitmqctl cluster_status
Cluster status of node 'rabbit@cld-blu-03' ...
[{nodes,[{disc,['rabbit@cld-blu-03','rabbit@cld-blu-04']}]},
 {running_nodes,['rabbit@cld-blu-04','rabbit@cld-blu-03']},
 {partitions,[]}]
...done.
[root@cld-blu-03 ~]# rabbitmqctl list_policies
Listing policies ...
/  HA  ^(?!amq\\.).*  {"ha-mode":"all"}  0
...done.
Login into the secondary controller node; copy Keystone, Glance, Nova, Neutron, Cinder and Horizon's configurations from primary controller node:
scp cld-blu-03.cloud.pd.infn.it:/etc/openstack-dashboard/local_settings /etc/openstack-dashboard/
scp -r cld-blu-03.cloud.pd.infn.it:/etc/keystone /etc/
scp -r cld-blu-03.cloud.pd.infn.it:/etc/neutron /etc/
scp -r cld-blu-03.cloud.pd.infn.it:/etc/cinder /etc/
scp -r cld-blu-03.cloud.pd.infn.it:/etc/glance /etc/
scp -r cld-blu-03.cloud.pd.infn.it:/etc/nova /etc/
scp cld-blu-03.cloud.pd.infn.it:/etc/sysconfig/memcached /etc/sysconfig/
\rm -f /etc/neutron/plugin.ini
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
While still logged into the secondary controller node, finalize the setup:
keystone-manage pki_setup --keystone-user keystone --keystone-group keystone
mkdir -p /var/run/glance /var/log/glance
mkdir -p /var/run/keystone /var/log/keystone
chown -R glance:glance /var/log/glance /var/lib/glance /var/run/glance
chown -R keystone:keystone /var/run/keystone /var/log/keystone /var/lib/keystone /etc/keystone/ssl/
chown -R neutron:neutron /var/lib/neutron
Setup Dashboard's parameters:
sed -i 's+^Listen.*+Listen 192.168.60.153:80+' /etc/httpd/conf/httpd.conf
echo "ServerName cloud.cedc.csia.unipd.it:80" >> /etc/httpd/conf/httpd.conf
echo "RedirectMatch permanent ^/$ /dashboard/" >> /etc/httpd/conf.d/openstack-dashboard.conf
echo "RedirectMatch ^/$ /dashboard/" > /etc/httpd/conf.d/rootredirect.conf
Setup HTTPS for Dashboard:
sed -i 's+^SSLCertificateFile.*+SSLCertificateFile /etc/grid-security/hostcert.pem+' /etc/httpd/conf.d/ssl.conf
sed -i 's+^SSLCertificateKeyFile.*+SSLCertificateKeyFile /etc/grid-security/hostkey.pem+' /etc/httpd/conf.d/ssl.conf
echo "RewriteEngine On" >> /etc/httpd/conf/httpd.conf
echo "RewriteCond %{HTTPS} !=on" >> /etc/httpd/conf/httpd.conf
echo "RewriteRule ^/?(.*) https://%{SERVER_NAME}:443/\$1 [R,L]" >> /etc/httpd/conf/httpd.conf
Increase the number of allowed open files:
cat << EOF >> /etc/security/limits.conf
*    soft    nofile    4096
*    hard    nofile    4096
EOF
… change the memcached's listening IP address
sed -i 's+192.168.60.152+192.168.60.153+' /etc/sysconfig/memcached
… change the location of the memcached service in the dashboard's config file:
sed -i 's+192.168.60.152:11211+192.168.60.153:11211+' /etc/openstack-dashboard/local_settings
… and finally turn all services ON, and enable them:
systemctl start openstack-keystone
systemctl start openstack-glance-registry
systemctl start openstack-glance-api
systemctl start openstack-nova-api
systemctl start openstack-nova-cert
systemctl start openstack-nova-consoleauth
systemctl start openstack-nova-scheduler
systemctl start openstack-nova-conductor
systemctl start openstack-nova-novncproxy
systemctl start neutron-server
systemctl start httpd
systemctl start memcached
systemctl start openstack-cinder-api
systemctl start openstack-cinder-scheduler
systemctl enable openstack-keystone
systemctl enable openstack-glance-registry
systemctl enable openstack-glance-api
systemctl enable openstack-nova-api
systemctl enable openstack-nova-cert
systemctl enable openstack-nova-consoleauth
systemctl enable openstack-nova-scheduler
systemctl enable openstack-nova-conductor
systemctl enable openstack-nova-novncproxy
systemctl enable neutron-server
systemctl enable httpd
systemctl enable memcached
systemctl enable openstack-cinder-api
systemctl enable openstack-cinder-scheduler
On your desktop, source the file keystone_admin.sh
and try the commands:
bash-4.1$ nova availability-zone-list +--------------------------------+----------------------------------------+ | Name | Status | +--------------------------------+----------------------------------------+ | internal | available | | |- cld-blu-03.cloud.pd.infn.it | | | | |- nova-conductor | enabled :-) 2015-03-10T09:22:08.000000 | | | |- nova-consoleauth | enabled :-) 2015-03-10T09:21:59.000000 | | | |- nova-scheduler | enabled :-) 2015-03-10T09:22:04.000000 | | | |- nova-cert | enabled :-) 2015-03-10T09:22:02.000000 | | |- cld-blu-04.cloud.pd.infn.it | | | | |- nova-conductor | enabled :-) 2015-03-10T09:22:00.000000 | | | |- nova-cert | enabled :-) 2015-03-10T09:22:01.000000 | | | |- nova-consoleauth | enabled :-) 2015-03-10T09:22:04.000000 | | | |- nova-scheduler | enabled :-) 2015-03-10T09:22:00.000000 | +--------------------------------+----------------------------------------+ bash-4.1$ cinder service-list +------------------+-----------------------------+------+---------+-------+----------------------------+ | Binary | Host | Zone | Status | State | Updated_at | +------------------+-----------------------------+------+---------+-------+----------------------------+ | cinder-scheduler | cld-blu-03.cloud.pd.infn.it | nova | enabled | up | 2015-03-10T09:23:09.000000 | | cinder-scheduler | cld-blu-04.cloud.pd.infn.it | nova | enabled | up | 2015-03-10T09:23:05.000000 | +------------------+-----------------------------+------+---------+-------+----------------------------+
First of all, on both controller nodes, switch off all the OpenStack services except Keystone (and do not stop memcached either):
systemctl stop openstack-glance-registry
systemctl stop openstack-glance-api
systemctl stop openstack-nova-api
systemctl stop openstack-nova-cert
systemctl stop openstack-nova-consoleauth
systemctl stop openstack-nova-scheduler
systemctl stop openstack-nova-conductor
systemctl stop openstack-nova-novncproxy
systemctl stop neutron-server
systemctl stop httpd
systemctl stop openstack-cinder-api
systemctl stop openstack-cinder-scheduler
Before proceeding, note that the hostcert.pem and hostkey.pem files must be concatenated (with the cat command) to create the single file hostcertkey.pem.
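For example, on each HAProxy node (paths as in the prerequisites; keeping the combined file readable by root only is a reasonable precaution):

cat /etc/grid-security/hostcert.pem /etc/grid-security/hostkey.pem > /etc/grid-security/hostcertkey.pem
chmod 600 /etc/grid-security/hostcertkey.pem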
Modify the content of haproxy.cfg on the HAProxy nodes by substituting the lines listed above with the following (do not modify the two sections global and defaults):
listen mysql-cluster-roundrobin bind 192.168.60.180:3306 mode tcp balance roundrobin option httpchk server cld-blu-10.cloud.pd.infn.it 192.168.60.159:3306 check port 9200 inter 12000 rise 3 fall 3 server cld-blu-09.cloud.pd.infn.it 192.168.60.158:3306 check port 9200 inter 12000 rise 3 fall 3 server cld-blu-08.cloud.pd.infn.it 192.168.60.157:3306 check port 9200 inter 12000 rise 3 fall 3 listen mysql-cluster-roundrobin-public bind 90.147.143.10:3306 mode tcp balance roundrobin option httpchk server cld-blu-10.cloud.pd.infn.it 192.168.60.159:3306 check port 9200 inter 12000 rise 3 fall 3 server cld-blu-09.cloud.pd.infn.it 192.168.60.158:3306 check port 9200 inter 12000 rise 3 fall 3 server cld-blu-08.cloud.pd.infn.it 192.168.60.157:3306 check port 9200 inter 12000 rise 3 fall 3 listen nova-metadata bind 192.168.60.180:8775 balance source option tcpka option tcplog server cld-blu-03.cloud.pd.infn.it 192.168.60.152:8775 check inter 2000 rise 2 fall 3 server cld-blu-04.cloud.pd.infn.it 192.168.60.153:8775 check inter 2000 rise 2 fall 3 listen glance-registry bind 192.168.60.180:9191 balance source option tcpka option tcplog server cld-blu-03.cloud.pd.infn.it 192.168.60.152:9191 check inter 2000 rise 2 fall 3 server cld-blu-04.cloud.pd.infn.it 192.168.60.153:9191 check inter 2000 rise 2 fall 3 listen rabbitmq-server bind 192.168.60.180:5672 balance roundrobin mode tcp server cld-blu-03.cloud.pd.infn.it 192.168.60.152:5672 check inter 2000 rise 2 fall 3 server cld-blu-04.cloud.pd.infn.it 192.168.60.153:5672 check inter 2000 rise 2 fall 3 listen epmd bind 192.168.60.180:4369 balance roundrobin server cld-blu-03.cloud.pd.infn.it 192.168.60.152:4369 check inter 2000 rise 2 fall 3 server cld-blu-04.cloud.pd.infn.it 192.168.60.153:4369 check inter 2000 rise 2 fall 3 listen memcached-cluster bind 192.168.60.180:11211 balance source option tcpka option tcplog server cld-blu-03.cloud.pd.infn.it 192.168.60.152:11211 check inter 2000 rise 2 fall 5 server cld-blu-04.cloud.pd.infn.it 192.168.60.153:11211 check inter 2000 rise 2 fall 5 frontend dashboard-ssl_public bind 90.147.143.10:443 option tcplog mode tcp default_backend dashboard_ssl_nodes frontend keystone-auth bind 192.168.60.180:35357 ssl crt /etc/grid-security/hostcertkey.pem ca-file /etc/grid-security/chain.pem mode http option httpclose option forwardfor reqadd X-Forwarded-Proto:\ https default_backend keystoneauth frontend keystone-api bind 192.168.60.180:5000 ssl crt /etc/grid-security/hostcertkey.pem ca-file /etc/grid-security/chain.pem mode http option httpclose option forwardfor reqadd X-Forwarded-Proto:\ https default_backend keystoneauth frontend keystone-auth_public bind 90.147.143.10:35357 ssl crt /etc/grid-security/hostcertkey.pem ca-file /etc/grid-security/chain.pem mode http option httpclose option forwardfor reqadd X-Forwarded-Proto:\ https default_backend keystoneauth frontend keystone-api_public bind 90.147.143.10:5000 ssl crt /etc/grid-security/hostcertkey.pem ca-file /etc/grid-security/chain.pem mode http option httpclose option forwardfor reqadd X-Forwarded-Proto:\ https default_backend keystoneapi frontend glance-api bind 192.168.60.180:9292 ssl crt /etc/grid-security/hostcertkey.pem ca-file /etc/grid-security/chain.pem mode http option httpclose option forwardfor reqadd X-Forwarded-Proto:\ https default_backend glanceapi frontend glance-api_public bind 90.147.143.10:9292 ssl crt /etc/grid-security/hostcertkey.pem ca-file /etc/grid-security/chain.pem mode http option httpclose option forwardfor reqadd X-Forwarded-Proto:\ https 
    default_backend glanceapi

frontend nova-api
    bind 192.168.60.180:8774 ssl crt /etc/grid-security/hostcertkey.pem ca-file /etc/grid-security/chain.pem
    mode http
    option httpclose
    option forwardfor
    reqadd X-Forwarded-Proto:\ https
    default_backend novaapi

frontend nova-api_public
    bind 90.147.143.10:8774 ssl crt /etc/grid-security/hostcertkey.pem ca-file /etc/grid-security/chain.pem
    mode http
    option httpclose
    option forwardfor
    reqadd X-Forwarded-Proto:\ https
    default_backend novaapi

frontend cinder-api
    bind 192.168.60.180:8776 ssl crt /etc/grid-security/hostcertkey.pem ca-file /etc/grid-security/chain.pem
    mode http
    option httpclose
    option forwardfor
    reqadd X-Forwarded-Proto:\ https
    default_backend cinderapi

frontend cinder-api_public
    bind 90.147.143.10:8776 ssl crt /etc/grid-security/hostcertkey.pem ca-file /etc/grid-security/chain.pem
    mode http
    option httpclose
    option forwardfor
    reqadd X-Forwarded-Proto:\ https
    default_backend cinderapi

frontend neutron-server
    bind 192.168.60.180:9696 ssl crt /etc/grid-security/hostcertkey.pem ca-file /etc/grid-security/chain.pem
    mode http
    option httpclose
    option forwardfor
    reqadd X-Forwarded-Proto:\ https
    default_backend neutronapi

frontend neutron-server_public
    bind 90.147.143.10:9696 ssl crt /etc/grid-security/hostcertkey.pem ca-file /etc/grid-security/chain.pem
    mode http
    option httpclose
    option forwardfor
    reqadd X-Forwarded-Proto:\ https
    default_backend neutronapi

frontend vnc_public
    bind 90.147.143.10:6080 ssl crt /etc/grid-security/hostcertkey.pem ca-file /etc/grid-security/chain.pem
    mode http
    option httpclose
    option forwardfor
    reqadd X-Forwarded-Proto:\ https
    default_backend novnc

frontend novaec2-api
    bind 192.168.60.180:8773 ssl crt /etc/grid-security/hostcertkey.pem ca-file /etc/grid-security/chain.pem
    mode http
    option httpclose
    option forwardfor
    reqadd X-Forwarded-Proto:\ https
    default_backend ec2api

frontend novaec2-api_public
    bind 90.147.143.10:8773 ssl crt /etc/grid-security/hostcertkey.pem ca-file /etc/grid-security/chain.pem
    mode http
    option httpclose
    option forwardfor
    reqadd X-Forwarded-Proto:\ https
    default_backend ec2api

backend dashboard_ssl_nodes
    balance source
    mode tcp
    option ssl-hello-chk
    server cld-blu-03.cloud.pd.infn.it 192.168.60.152:443 check inter 2000 rise 2 fall 3
    server cld-blu-04.cloud.pd.infn.it 192.168.60.153:443 check inter 2000 rise 2 fall 3

backend keystoneauth
    mode http
    balance source
    option httpchk
    server cld-blu-03.cloud.pd.infn.it 192.168.60.152:35357 check inter 2000 rise 2 fall 3
    server cld-blu-04.cloud.pd.infn.it 192.168.60.153:35357 check inter 2000 rise 2 fall 3

backend keystoneapi
    mode http
    balance source
    option httpchk
    server cld-blu-03.cloud.pd.infn.it 192.168.60.152:5000 check inter 2000 rise 2 fall 3
    server cld-blu-04.cloud.pd.infn.it 192.168.60.153:5000 check inter 2000 rise 2 fall 3

backend glanceapi
    mode http
    balance source
    option httpchk
    server cld-blu-03.cloud.pd.infn.it 192.168.60.152:9292 check inter 2000 rise 2 fall 3
    server cld-blu-04.cloud.pd.infn.it 192.168.60.153:9292 check inter 2000 rise 2 fall 3

backend novaapi
    mode http
    balance source
    option httpchk
    server cld-blu-03.cloud.pd.infn.it 192.168.60.152:8774 check inter 2000 rise 2 fall 3
    server cld-blu-04.cloud.pd.infn.it 192.168.60.153:8774 check inter 2000 rise 2 fall 3

backend ec2api
    mode http
    balance source
    option tcpka
    option tcplog
    server cld-blu-03.cloud.pd.infn.it 192.168.60.152:8773 check inter 2000 rise 2 fall 3
    server cld-blu-04.cloud.pd.infn.it 192.168.60.153:8773 check inter 2000 rise 2 fall 3

backend cinderapi
    mode http
    balance source
    option httpchk
    server cld-blu-03.cloud.pd.infn.it 192.168.60.152:8776 check inter 2000 rise 2 fall 3
    server cld-blu-04.cloud.pd.infn.it 192.168.60.153:8776 check inter 2000 rise 2 fall 3

backend neutronapi
    mode http
    balance source
    option httpchk
    server cld-blu-03.cloud.pd.infn.it 192.168.60.152:9696 check inter 2000 rise 2 fall 3
    server cld-blu-04.cloud.pd.infn.it 192.168.60.153:9696 check inter 2000 rise 2 fall 3

backend novnc
    mode http
    balance source
    server cld-blu-03.cloud.pd.infn.it 192.168.60.152:6080 check inter 2000 rise 2 fall 3
    server cld-blu-04.cloud.pd.infn.it 192.168.60.153:6080 check inter 2000 rise 2 fall 3
… and restart the HAProxy daemon.
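A minimal sketch of that restart, assuming HAProxy is managed by systemd on these nodes (checking the configuration first is optional but catches typos before any downtime):
# Validate the configuration file, then restart HAProxy and check its status
haproxy -c -f /etc/haproxy/haproxy.cfg
systemctl restart haproxy
systemctl status haproxy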
Now log into one of the two controller nodes and, as a precaution, unset all the OS_* variables:
unset OS_USERNAME
unset OS_TENANT_NAME
unset OS_PASSWORD
unset OS_AUTH_URL
To regain access to Keystone, issue the following commands:
export SERVICE_TOKEN=`cat ~/ks_admin_token`
export SERVICE_ENDPOINT=http://192.168.60.152:35357/v2.0
The file ~/ks_admin_token was created earlier, during the initial Keystone configuration.
Change Keystone's endpoints:
KEYSTONE_SERVICE=$(keystone service-get keystone | grep ' id ' | awk '{print $4}')
KEYSTONE_ENDPOINT=$(keystone endpoint-list | grep $KEYSTONE_SERVICE | awk '{print $2}')
keystone endpoint-delete $KEYSTONE_ENDPOINT
keystone endpoint-create --region RegionOne --service-id $KEYSTONE_SERVICE \
  --publicurl "https://cloud.cedc.csia.unipd.it:\$(public_port)s/v2.0" \
  --adminurl "https://cloud.cedc.csia.unipd.it:\$(admin_port)s/v2.0" \
  --internalurl "https://cloud.cedc.csia.unipd.it:\$(public_port)s/v2.0"
Note: there is no need to restart keystone, because its own traffic is still unencrypted; encryption is applied only at the HAProxy frontend.
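A quick, optional check (not part of the original procedure) that the new public endpoint answers through HAProxy, using the CA chain already present on the nodes:
# The Keystone version document should now be served over HTTPS via the VIP
curl --cacert /etc/grid-security/chain.pem https://cloud.cedc.csia.unipd.it:5000/v2.0/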
Change the keystone_admin.sh file you created above:
sed -i 's+export OS_AUTH_URL+#export OS_AUTH_URL+' $HOME/keystone_admin.sh
echo "export OS_AUTH_URL=https://cloud.cedc.csia.unipd.it:5000/v2.0/" >> $HOME/keystone_admin.sh
echo "export OS_CACERT=/etc/grid-security/chain.pem" >> $HOME/keystone_admin.sh
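After these edits, keystone_admin.sh should look roughly like the sketch below; the credential values are illustrative placeholders, not taken from this installation:
export OS_USERNAME=admin                          # illustrative
export OS_TENANT_NAME=admin                       # illustrative
export OS_PASSWORD=ADMIN_PASS                     # placeholder
#export OS_AUTH_URL=<previous http endpoint>      # commented out by the sed above
export OS_AUTH_URL=https://cloud.cedc.csia.unipd.it:5000/v2.0/
export OS_CACERT=/etc/grid-security/chain.pem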
Note: nothing needs to be done on the second controller node. The endpoint with the https URL has been changed in the MySQL database, so the change is transparent to the keystone service.
Then verify that Keystone answers through the new HTTPS endpoint:
unset SERVICE_TOKEN
unset SERVICE_ENDPOINT
source $HOME/keystone_admin.sh
keystone user-list
Modify Glance's authentication parameters on both controller nodes:
while read i
do
  openstack-config --set /etc/glance/glance-api.conf ${i}
done << EOF
keystone_authtoken auth_host cloud.cedc.csia.unipd.it
keystone_authtoken auth_protocol https
keystone_authtoken auth_uri https://cloud.cedc.csia.unipd.it:35357/v2.0
keystone_authtoken cafile /etc/grid-security/chain.pem
EOF

while read i
do
  openstack-config --set /etc/glance/glance-registry.conf ${i}
done << EOF
keystone_authtoken auth_host cloud.cedc.csia.unipd.it
keystone_authtoken auth_protocol https
keystone_authtoken auth_uri https://cloud.cedc.csia.unipd.it:35357/v2.0
keystone_authtoken cafile /etc/grid-security/chain.pem
EOF
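An optional sanity check (a suggestion, not part of the original procedure) reads the values back with the same tool:
# Confirm that the keystone_authtoken section was updated as expected
openstack-config --get /etc/glance/glance-api.conf keystone_authtoken auth_host
openstack-config --get /etc/glance/glance-registry.conf keystone_authtoken auth_uri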
Execute this on one controller node only (or where you have the keystone_admin.sh file):
source ~/keystone_admin.sh
GLANCE_SERVICE=$(keystone service-get glance | grep ' id ' | awk '{print $4}')
GLANCE_ENDPOINT=$(keystone endpoint-list | grep $GLANCE_SERVICE | awk '{print $2}')
keystone endpoint-delete $GLANCE_ENDPOINT
keystone endpoint-create --service glance \
  --publicurl "https://cloud.cedc.csia.unipd.it:9292" \
  --adminurl "https://cloud.cedc.csia.unipd.it:9292" \
  --internalurl "https://cloud.cedc.csia.unipd.it:9292"
Restart Glance on both controller nodes:
systemctl restart openstack-glance-api
systemctl restart openstack-glance-registry
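To check that Glance is reachable through the new HTTPS endpoint, a hedged verification from a node with keystone_admin.sh (the OS_CACERT exported above lets the client validate the HAProxy certificate):
source ~/keystone_admin.sh
glance image-list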
Modify Nova's authentication parameters on both controller nodes:
while read i
do
  openstack-config --set /etc/nova/nova.conf ${i}
done << EOF
keystone_authtoken auth_host cloud.cedc.csia.unipd.it
keystone_authtoken auth_protocol https
keystone_authtoken cafile /etc/grid-security/chain.pem
DEFAULT neutron_ca_certificates_file /etc/grid-security/chain.pem
DEFAULT cinder_ca_certificates_file /etc/grid-security/chain.pem
DEFAULT glance_host cloud.cedc.csia.unipd.it
DEFAULT glance_protocol https
DEFAULT glance_api_servers https://cloud.cedc.csia.unipd.it:9292
DEFAULT glance_api_insecure true
DEFAULT neutron_url https://cloud.cedc.csia.unipd.it:9696
EOF

openstack-config --set /etc/nova/api-paste.ini filter:authtoken auth_host cloud.cedc.csia.unipd.it
openstack-config --set /etc/nova/api-paste.ini filter:authtoken auth_protocol https
openstack-config --set /etc/nova/api-paste.ini filter:authtoken auth_uri https://cloud.cedc.csia.unipd.it:5000/v2.0
On one controller node only (or where you have the keystone_admin.sh file):
NOVA_SERVICE=$(keystone service-get nova | grep ' id ' | awk '{print $4}')
NOVA_ENDPOINT=$(keystone endpoint-list | grep $NOVA_SERVICE | awk '{print $2}')
keystone endpoint-delete $NOVA_ENDPOINT
keystone endpoint-create --service-id $NOVA_SERVICE \
  --publicurl https://cloud.cedc.csia.unipd.it:8774/v2/%\(tenant_id\)s \
  --adminurl https://cloud.cedc.csia.unipd.it:8774/v2/%\(tenant_id\)s \
  --internalurl https://cloud.cedc.csia.unipd.it:8774/v2/%\(tenant_id\)s

NOVAEC2_SERVICE=$(keystone service-get nova_ec2 | grep ' id ' | awk '{print $4}')
NOVAEC2_ENDPOINT=$(keystone endpoint-list | grep $NOVAEC2_SERVICE | awk '{print $2}')
keystone endpoint-delete $NOVAEC2_ENDPOINT
keystone endpoint-create --service-id $NOVAEC2_SERVICE \
  --publicurl https://cloud.cedc.csia.unipd.it:8773/services/Cloud \
  --adminurl https://cloud.cedc.csia.unipd.it:8773/services/Cloud \
  --internalurl https://cloud.cedc.csia.unipd.it:8773/services/Cloud
Restart Nova on both controller nodes:
systemctl restart openstack-nova-api
systemctl restart openstack-nova-cert
systemctl restart openstack-nova-consoleauth
systemctl restart openstack-nova-scheduler
systemctl restart openstack-nova-novncproxy
systemctl restart openstack-nova-conductor
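A quick, optional check that the Nova services came back up and that the client works through the new HTTPS endpoints (using the credentials from keystone_admin.sh):
source ~/keystone_admin.sh
nova service-list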
Apply the patches in https://review.openstack.org/#/c/90626/ to allow SSL neutron → nova communications.
Modify Neutron's authentication parameters on both controller nodes:
while read i
do
  openstack-config --set /etc/neutron/neutron.conf ${i}
done << EOF
keystone_authtoken auth_protocol https
keystone_authtoken auth_host cloud.cedc.csia.unipd.it
keystone_authtoken auth_url https://cloud.cedc.csia.unipd.it:35357/v2.0
keystone_authtoken auth_uri https://cloud.cedc.csia.unipd.it:35357/v2.0
keystone_authtoken cafile /etc/grid-security/chain.pem
DEFAULT nova_protocol https
DEFAULT nova_url https://cloud.cedc.csia.unipd.it:8774/v2
DEFAULT nova_admin_username nova
#DEFAULT nova_admin_tenant_id=$(keystone tenant-list | awk '/ services / { print$2 }')
DEFAULT nova_admin_tenant_id 1af77118d9db4c9a959810aa7d67c6d8
DEFAULT nova_admin_password NOVA_PASS
DEFAULT nova_admin_auth_url https://cloud.cedc.csia.unipd.it:35357/v2.0
DEFAULT nova_ca_certificates_file /etc/grid-security/chain.pem
DEFAULT notify_nova_on_port_status_changes True
DEFAULT notify_nova_on_port_data_changes True
EOF

openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_auth_url https://cloud.cedc.csia.unipd.it:35357/v2.0
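The hard-coded nova_admin_tenant_id above is specific to this installation; the commented-out line shows how it was obtained. A sketch of the lookup, assuming the tenant is named services:
# Print the id of the 'services' tenant, to be used as nova_admin_tenant_id
source ~/keystone_admin.sh
keystone tenant-list | awk '/ services / { print $2 }'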
On one controller only (or where you have the keystone_admin.sh file):
NEUTRON_SERVICE=$(keystone service-get neutron | grep ' id ' | awk '{print $4}')
NEUTRON_ENDPOINT=$(keystone endpoint-list | grep $NEUTRON_SERVICE | awk '{print $2}')
keystone endpoint-delete $NEUTRON_ENDPOINT
keystone endpoint-create --service-id $NEUTRON_SERVICE \
  --publicurl "https://cloud.cedc.csia.unipd.it:9696" \
  --adminurl "https://cloud.cedc.csia.unipd.it:9696" \
  --internalurl "https://cloud.cedc.csia.unipd.it:9696"
Restart Neutron and Nova on both controller nodes (Nova must be restarted because its configuration file has changed):
systemctl restart neutron-server
systemctl restart openstack-nova-api
systemctl restart openstack-nova-cert
systemctl restart openstack-nova-consoleauth
systemctl restart openstack-nova-scheduler
systemctl restart openstack-nova-conductor
systemctl restart openstack-nova-novncproxy
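An optional, hedged check that both services answer again through the HAProxy endpoints:
source ~/keystone_admin.sh
neutron net-list
nova list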
Modify Cinder's authentication parameters on both controller nodes:
while read i
do
  openstack-config --set /etc/cinder/cinder.conf ${i}
done << EOF
keystone_authtoken auth_host cloud.cedc.csia.unipd.it
keystone_authtoken auth_protocol https
keystone_authtoken auth_uri https://cloud.cedc.csia.unipd.it:5000/v2.0
keystone_authtoken cafile /etc/grid-security/chain.pem
EOF
On one controller only (or where you have the keystone_admin.sh file):
CINDER_SERVICE=$(keystone service-get cinder | grep ' id ' | awk '{print $4}')
CINDER_ENDPOINT=$(keystone endpoint-list | grep $CINDER_SERVICE | awk '{print $2}')
keystone endpoint-delete $CINDER_ENDPOINT
CINDER_SERVICE=$(keystone service-get cinderv2 | grep ' id ' | awk '{print $4}')
CINDER_ENDPOINT=$(keystone endpoint-list | grep $CINDER_SERVICE | awk '{print $2}')
keystone endpoint-delete $CINDER_ENDPOINT
keystone endpoint-create --service cinder \
  --publicurl https://cloud.cedc.csia.unipd.it:8776/v1/%\(tenant_id\)s \
  --adminurl https://cloud.cedc.csia.unipd.it:8776/v1/%\(tenant_id\)s \
  --internalurl https://cloud.cedc.csia.unipd.it:8776/v1/%\(tenant_id\)s
keystone endpoint-create --service cinderv2 \
  --publicurl https://cloud.cedc.csia.unipd.it:8776/v2/%\(tenant_id\)s \
  --adminurl https://cloud.cedc.csia.unipd.it:8776/v2/%\(tenant_id\)s \
  --internalurl https://cloud.cedc.csia.unipd.it:8776/v2/%\(tenant_id\)s
Restart Cinder on both controller nodes:
systemctl restart openstack-cinder-api
systemctl restart openstack-cinder-scheduler
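An optional, hedged check that the Cinder API answers through the new HTTPS endpoints:
source ~/keystone_admin.sh
cinder list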
Set up the dashboard's secure connection to Keystone:
sed -i 's+OPENSTACK_HOST = "192.168.60.180"+OPENSTACK_HOST = "cloud.cedc.csia.unipd.it"+' /etc/openstack-dashboard/local_settings
sed -i 's+OPENSTACK_KEYSTONE_URL = "http:+OPENSTACK_KEYSTONE_URL = "https:+' /etc/openstack-dashboard/local_settings
sed -i 's+# OPENSTACK_SSL_CACERT.*+OPENSTACK_SSL_CACERT="/etc/grid-security/chain.pem"+' /etc/openstack-dashboard/local_settings
The following two settings are recommended when the dashboard is served over SSL:
sed -i 's+#CSRF_COOKIE_SECURE = True+CSRF_COOKIE_SECURE = True+' /etc/openstack-dashboard/local_settings
sed -i 's+#SESSION_COOKIE_SECURE = True+SESSION_COOKIE_SECURE = True+' /etc/openstack-dashboard/local_settings
Prepare to patch Horizon's source files:
#
# NOT APPLIED
# Is it still needed?
#
yum install -y patch
curl -o os_auth_patch_01.diff https://raw.githubusercontent.com/CloudPadovana/SSL_Patches/master/os_auth_patch_01.diff
curl -o os_auth_patch_02.diff https://raw.githubusercontent.com/CloudPadovana/SSL_Patches/master/os_auth_patch_02.diff
curl -o os_auth_patch_03.diff https://raw.githubusercontent.com/CloudPadovana/SSL_Patches/master/os_auth_patch_03.diff
patch -R /usr/lib/python2.6/site-packages/openstack_auth/views.py < os_auth_patch_01.diff
patch -R /usr/lib/python2.6/site-packages/openstack_auth/backend.py < os_auth_patch_02.diff
patch -R /usr/lib/python2.6/site-packages/openstack_auth/user.py < os_auth_patch_03.diff
Restart apache web server:
systemctl restart httpd
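A hedged check that the dashboard is reachable over HTTPS through the VIP; the /dashboard path is the usual default on these packages and may differ in your installation:
# Expect an HTTP 200 or a redirect to the login page
curl -I --cacert /etc/grid-security/chain.pem https://cloud.cedc.csia.unipd.it/dashboard/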
See here