Author:
Changes:
- Changed memcached to the management IP (also for the dashboard's conf file)
- Modified haproxy.cfg for this problem: http://www.serverphorums.com/read.php?10,351306
- Changed libvirt_vif_driver (see Matteo Panella's mail 12/06/2014 12:14PM, Subject: "Havana e security group (fixed!)")
- Added neutron-db-manage's output
- Added expiration=32400 in the keystone.conf file to workaround a bug cited here
- Changed the cpu_allocation_ratio setting to 4.0
- Added rabbitmqctl set_policy HA '^(?!amq\.).*' '{"ha-mode": "all"}'), see here
- Updated the iptables config file
- Added mod_ssl to set up the Dashboard over https
- Set the public IP (90.147.77.40) for the vncserver_listen property in nova.conf
- Set the public IP (90.147.77.40) for the publicurl property of all endpoints
- Set "nova.virt.firewall.NoopFirewallDriver" in nova.conf, to be synchronized with the compute node's nova.conf
- Changed flavor from "keystone" to "keystone+cachemanagement" for glance-api.conf
- Added sql_idle_timeout = 30 to glance-api.conf, as suggested by the comment in the file and shown here
- Added the rabbit_ha_queues parameter to nova.conf for Nova and Neutron
- Added the glance_host parameter to nova.conf
- Added the rabbit_hosts parameter to neutron.conf and nova.conf for HA RabbitMQ
Note: the HAProxy link above refers to the configuration for the highly available MySQL cluster. Below we explain how to configure HAProxy for the OpenStack services as well.
Two nodes with:
- yum-autoupdate disabled:
  [root@controller-01 ~]# grep ENA /etc/sysconfig/yum-autoupdate
  # ENABLED
  ENABLED="false"
- a directory /var/lib/glance/images (which must be already created) where to store the machine images; note that this storage must be shared between the two controller nodes in order for both Glance instances to work correctly. This applies to both controller nodes.
- SELinux disabled (/etc/selinux/config)
- the MySQL cluster's virtual IP (192.168.60.10)
- the HAProxy virtual IPs (192.168.60.40 for mgmt net and 90.147.77.40 for public net)
- the host certificate and key installed on the HAProxy nodes:
  [root@ha-proxy-01 ~]# ll /etc/grid-security/
  total 8
  -rw-r--r-- 1 root root 1476 May  6 16:59 hostcert.pem
  -rw------- 1 root root  916 May  6 16:59 hostkey.pem
- the CA certificate installed:
  [root@controller-01 ~]# ll /etc/grid-security/certificates/INFN-CA-2006.pem
  -rw-r--r--. 1 root root 1257 Mar 24 04:17 /etc/grid-security/certificates/INFN-CA-2006.pem
Execute the following commands:
# allow traffic toward rabbitmq server
iptables -A INPUT -p tcp -m tcp --dport 5672 -j ACCEPT
iptables -A INPUT -p tcp -m tcp --dport 4369 -j ACCEPT
iptables -A INPUT -p tcp -m tcp --dport 35197 -j ACCEPT
iptables -A INPUT -p tcp -m state --state NEW -m tcp --dport 9100:9110 -j ACCEPT
# allow traffic toward keystone
iptables -A INPUT -p tcp -m multiport --dports 5000,35357 -j ACCEPT
# allow traffic to glance-api
iptables -A INPUT -p tcp -m multiport --dports 9292 -j ACCEPT
# allow traffic to glance-registry
iptables -A INPUT -p tcp -m multiport --dports 9191 -j ACCEPT
# allow traffic to Nova EC2 API
iptables -A INPUT -p tcp -m multiport --dports 8773 -j ACCEPT
# allow traffic to Nova API
iptables -A INPUT -p tcp -m multiport --dports 8774 -j ACCEPT
# allow traffic to Nova Metadata server
iptables -A INPUT -p tcp -m multiport --dports 8775 -j ACCEPT
# allow traffic to Nova VNC proxy
iptables -A INPUT -p tcp -m multiport --dports 6080 -j ACCEPT
# allow traffic to Neutron Server
iptables -A INPUT -p tcp -m multiport --dports 9696 -j ACCEPT
# allow traffic to Dashboard
iptables -A INPUT -p tcp -m multiport --dports 80 -j ACCEPT
iptables -A INPUT -p tcp -m multiport --dports 443 -j ACCEPT
# allow traffic to memcached
iptables -A INPUT -p tcp -m multiport --dports 11211 -j ACCEPT
# allow traffic to Cinder API
iptables -A INPUT -p tcp -m multiport --dports 3260,8776 -j ACCEPT
# permit ntpd's udp communications
iptables -A INPUT -p udp -m state --state NEW -m udp --dport 123 -j ACCEPT
mv /etc/sysconfig/iptables /etc/sysconfig/iptables.orig
iptables-save > /etc/sysconfig/iptables
chkconfig iptables on
chkconfig ip6tables off
service iptables restart
The HAProxy nodes must also allow traffic through the same TCP ports:
iptables -A INPUT -p tcp -m tcp --dport 5672 -j ACCEPT
iptables -A INPUT -p tcp -m tcp --dport 4369 -j ACCEPT
iptables -A INPUT -p tcp -m tcp --dport 35197 -j ACCEPT
iptables -A INPUT -p tcp -m state --state NEW -m tcp --dport 9100:9110 -j ACCEPT
iptables -A INPUT -p tcp -m multiport --dports 5000,35357 -j ACCEPT
iptables -A INPUT -p tcp -m multiport --dports 9292 -j ACCEPT
iptables -A INPUT -p tcp -m multiport --dports 9191 -j ACCEPT
iptables -A INPUT -p tcp -m multiport --dports 8773 -j ACCEPT
iptables -A INPUT -p tcp -m multiport --dports 8774 -j ACCEPT
iptables -A INPUT -p tcp -m multiport --dports 8775 -j ACCEPT
iptables -A INPUT -p tcp -m multiport --dports 6080 -j ACCEPT
iptables -A INPUT -p tcp -m multiport --dports 8776 -j ACCEPT
iptables -A INPUT -p tcp -m multiport --dports 9696 -j ACCEPT
iptables -A INPUT -p tcp -m multiport --dports 80 -j ACCEPT
iptables -A INPUT -p tcp -m multiport --dports 443 -j ACCEPT
iptables -A INPUT -p tcp -m multiport --dports 8080 -j ACCEPT
iptables -A INPUT -p tcp -m multiport --dports 8004 -j ACCEPT
iptables -A INPUT -p tcp -m multiport --dports 8000 -j ACCEPT
# permit ntpd's udp communications
iptables -A INPUT -p udp -m state --state NEW -m udp --dport 123 -j ACCEPT
mv /etc/sysconfig/iptables /etc/sysconfig/iptables.orig
iptables-save > /etc/sysconfig/iptables
chkconfig iptables on
chkconfig ip6tables off
service iptables restart
The HAProxy nodes run the haproxy and keepalived daemons. HAProxy redirects connections from the external world (users who want to reach glance/nova/neutron/etc.) to the controller nodes. Keepalived is responsible for grouping the HAProxy nodes (3 in our infrastructure) in order to make them highly available behind a Virtual Public IP (VIP).
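The keepalived side of this setup is not covered by this guide. As a rough sketch only, the two VIPs could be declared in /etc/keepalived/keepalived.conf roughly as follows; the interface names, virtual_router_id, and priority values below are assumptions, not taken from this installation:

```
vrrp_instance openstack_vips {
    state MASTER               # BACKUP on the other HAProxy nodes
    interface eth0             # assumed mgmt interface name
    virtual_router_id 51       # assumed id, must match on all nodes
    priority 101               # lower value on the BACKUP nodes
    virtual_ipaddress {
        192.168.60.40 dev eth0   # mgmt VIP
        90.147.77.40 dev eth1    # public VIP (assumed interface)
    }
}
```

The node holding MASTER state owns both VIPs; if it fails, keepalived moves them to one of the BACKUP nodes.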
This guide assumes (as mentioned above) that HAProxy has already been configured for the MySQL cluster, so only the additional part for OpenStack is shown here.
Log into the HAProxy node(s) and add the following lines to the file /etc/haproxy/haproxy.cfg:
global
    chroot /var/lib/haproxy
    log 127.0.0.1 local0
    log 127.0.0.1 local1 notice
    maxconn 4096
    uid 188
    gid 188
    daemon
    tune.ssl.default-dh-param 4096
    tune.maxrewrite 65536
    tune.bufsize 65536

defaults
    log global
    maxconn 8000
    option redispatch
    retries 3
    timeout http-request 10s
    timeout queue 1m
    timeout connect 10s
    timeout client 1m
    timeout server 1m
    timeout check 10s

listen dashboard_public_ssl
    bind 90.147.77.40:443
    balance source
    option tcpka
    option tcplog
    server controller-01.cloud.pd.infn.it 192.168.60.41:443 check inter 2000 rise 2 fall 3
    server controller-02.cloud.pd.infn.it 192.168.60.44:443 check inter 2000 rise 2 fall 3

listen dashboard_public
    bind 90.147.77.40:80
    balance source
    option tcpka
    option tcplog
    server controller-01.cloud.pd.infn.it 192.168.60.41:80 check inter 2000 rise 2 fall 3
    server controller-02.cloud.pd.infn.it 192.168.60.44:80 check inter 2000 rise 2 fall 3

listen vnc
    bind 192.168.60.40:6080
    balance source
    option tcpka
    option tcplog
    server controller-01.cloud.pd.infn.it 192.168.60.41:6080 check inter 2000 rise 2 fall 3
    server controller-02.cloud.pd.infn.it 192.168.60.44:6080 check inter 2000 rise 2 fall 3

listen vnc_public
    bind 90.147.77.40:6080
    balance source
    option tcpka
    option tcplog
    server controller-01.cloud.pd.infn.it 192.168.60.41:6080 check inter 2000 rise 2 fall 3
    server controller-02.cloud.pd.infn.it 192.168.60.44:6080 check inter 2000 rise 2 fall 3

listen keystone_auth_public
    bind 90.147.77.40:35357
    balance source
    option tcpka
    option httpchk
    option tcplog
    server controller-01.cloud.pd.infn.it 192.168.60.41:35357 check inter 2000 rise 2 fall 3
    server controller-02.cloud.pd.infn.it 192.168.60.44:35357 check inter 2000 rise 2 fall 3

listen keystone_api_public
    bind 90.147.77.40:5000
    balance source
    option tcpka
    option httpchk
    option tcplog
    server controller-01.cloud.pd.infn.it 192.168.60.41:5000 check inter 2000 rise 2 fall 3
    server controller-02.cloud.pd.infn.it 192.168.60.44:5000 check inter 2000 rise 2 fall 3

listen keystone_auth
    bind 192.168.60.40:35357
    balance source
    option tcpka
    option httpchk
    option tcplog
    server controller-01.cloud.pd.infn.it 192.168.60.41:35357 check inter 2000 rise 2 fall 3
    server controller-02.cloud.pd.infn.it 192.168.60.44:35357 check inter 2000 rise 2 fall 3

listen keystone_api
    bind 192.168.60.40:5000
    balance source
    option tcpka
    option httpchk
    option tcplog
    server controller-01.cloud.pd.infn.it 192.168.60.41:5000 check inter 2000 rise 2 fall 3
    server controller-02.cloud.pd.infn.it 192.168.60.44:5000 check inter 2000 rise 2 fall 3

listen glance_api
    bind 192.168.60.40:9292
    balance source
    option tcpka
    option httpchk
    option tcplog
    server controller-01.cloud.pd.infn.it 192.168.60.41:9292 check inter 2000 rise 2 fall 3
    server controller-02.cloud.pd.infn.it 192.168.60.44:9292 check inter 2000 rise 2 fall 3

listen glance_api_public
    bind 90.147.77.40:9292
    balance source
    option tcpka
    option httpchk
    option tcplog
    server controller-01.cloud.pd.infn.it 192.168.60.41:9292 check inter 2000 rise 2 fall 3
    server controller-02.cloud.pd.infn.it 192.168.60.44:9292 check inter 2000 rise 2 fall 3

listen glance_registry
    bind 192.168.60.40:9191
    balance source
    option tcpka
    option tcplog
    server controller-01.cloud.pd.infn.it 192.168.60.41:9191 check inter 2000 rise 2 fall 3
    server controller-02.cloud.pd.infn.it 192.168.60.44:9191 check inter 2000 rise 2 fall 3

listen novaec2-api
    bind 192.168.60.40:8773
    balance source
    option tcpka
    option tcplog
    server controller-01.cloud.pd.infn.it 192.168.60.41:8773 check inter 2000 rise 2 fall 3
    server controller-02.cloud.pd.infn.it 192.168.60.44:8773 check inter 2000 rise 2 fall 3

listen novaec2-api_public
    bind 90.147.77.40:8773
    balance source
    option tcpka
    option tcplog
    server controller-01.cloud.pd.infn.it 192.168.60.41:8773 check inter 2000 rise 2 fall 3
    server controller-02.cloud.pd.infn.it 192.168.60.44:8773 check inter 2000 rise 2 fall 3

listen nova-api
    bind 192.168.60.40:8774
    balance source
    option tcpka
    option httpchk
    option tcplog
    server controller-01.cloud.pd.infn.it 192.168.60.41:8774 check inter 2000 rise 2 fall 3
    server controller-02.cloud.pd.infn.it 192.168.60.44:8774 check inter 2000 rise 2 fall 3

listen nova-api_public
    bind 90.147.77.40:8774
    balance source
    option tcpka
    option httpchk
    option tcplog
    server controller-01.cloud.pd.infn.it 192.168.60.41:8774 check inter 2000 rise 2 fall 3
    server controller-02.cloud.pd.infn.it 192.168.60.44:8774 check inter 2000 rise 2 fall 3

listen nova-metadata
    bind 192.168.60.40:8775
    balance source
    option tcpka
    option tcplog
    server controller-01.cloud.pd.infn.it 192.168.60.41:8775 check inter 2000 rise 2 fall 3
    server controller-02.cloud.pd.infn.it 192.168.60.44:8775 check inter 2000 rise 2 fall 3

listen nova-metadata_public
    bind 90.147.77.40:8775
    balance source
    option tcpka
    option tcplog
    server controller-01.cloud.pd.infn.it 192.168.60.41:8775 check inter 2000 rise 2 fall 3
    server controller-02.cloud.pd.infn.it 192.168.60.44:8775 check inter 2000 rise 2 fall 3

listen cinder-api_public
    bind 90.147.77.40:8776
    balance source
    option tcpka
    option tcplog
    option httpchk
    server controller-01.cloud.pd.infn.it 192.168.60.41:8776 check inter 2000 rise 2 fall 3
    server controller-02.cloud.pd.infn.it 192.168.60.44:8776 check inter 2000 rise 2 fall 3

listen neutron-server
    bind 192.168.60.40:9696
    balance source
    option tcpka
    option httpchk
    option tcplog
    server controller-01.cloud.pd.infn.it 192.168.60.41:9696 check inter 2000 rise 2 fall 3
    server controller-02.cloud.pd.infn.it 192.168.60.44:9696 check inter 2000 rise 2 fall 3

listen neutron-server_public
    bind 90.147.77.40:9696
    balance source
    option tcpka
    option httpchk
    option tcplog
    server controller-01.cloud.pd.infn.it 192.168.60.41:9696 check inter 2000 rise 2 fall 3
    server controller-02.cloud.pd.infn.it 192.168.60.44:9696 check inter 2000 rise 2 fall 3

listen rabbitmq-server
    bind 192.168.60.40:5672
    balance roundrobin
    mode tcp
    server controller-01.cloud.pd.infn.it 192.168.60.41:5672 check inter 2000 rise 2 fall 3
    server controller-02.cloud.pd.infn.it 192.168.60.44:5672 check inter 2000 rise 2 fall 3

listen epmd
    bind 192.168.60.40:4369
    balance roundrobin
    server controller-01.cloud.pd.infn.it 192.168.60.41:4369 check inter 2000 rise 2 fall 3
    server controller-02.cloud.pd.infn.it 192.168.60.44:4369 check inter 2000 rise 2 fall 3

listen memcached_cluster
    bind 192.168.60.40:11211
    balance source
    option tcpka
    option tcplog
    server controller-01.cloud.pd.infn.it 192.168.60.41:11211 check inter 2000 rise 2 fall 5
    server controller-02.cloud.pd.infn.it 192.168.60.44:11211 check inter 2000 rise 2 fall 5
Check the syntax of the file you've just modified:
[root@ha-proxy-04 haproxy]# haproxy -c -f /etc/haproxy/haproxy.cfg
Configuration file is valid
Restart HAProxy:
service haproxy restart
Log into the MySQL node.
Remove previously created users and databases, if any:
mysql -u root
drop database if exists keystone;
drop database if exists glance;
drop database if exists nova;
drop database if exists neutron;
drop database if exists cinder;
drop user 'keystone'@'localhost';
drop user 'keystone'@'192.168.60.%';
drop user 'glance'@'localhost';
drop user 'glance'@'192.168.60.%';
drop user 'nova'@'localhost';
drop user 'nova'@'192.168.60.%';
drop user 'neutron'@'localhost';
drop user 'neutron'@'192.168.60.%';
drop user 'cinder'@'192.168.60.%';
drop user 'cinder'@'localhost';
flush privileges;
quit
Create database and grant users:
mysql -u root
CREATE DATABASE keystone;
GRANT ALL ON keystone.* TO 'keystone'@'192.168.60.%' IDENTIFIED BY '<KEYSTONE_DB_PWD>';
GRANT ALL ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '<KEYSTONE_DB_PWD>';
CREATE DATABASE glance;
GRANT ALL ON glance.* TO 'glance'@'192.168.60.%' IDENTIFIED BY '<GLANCE_DB_PWD>';
GRANT ALL ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '<GLANCE_DB_PWD>';
CREATE DATABASE nova;
GRANT ALL ON nova.* TO 'nova'@'192.168.60.%' IDENTIFIED BY '<NOVA_DB_PWD>';
GRANT ALL ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '<NOVA_DB_PWD>';
CREATE DATABASE neutron;
GRANT ALL ON neutron.* TO 'neutron'@'192.168.60.%' IDENTIFIED BY '<NEUTRON_DB_PWD>';
GRANT ALL ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '<NEUTRON_DB_PWD>';
CREATE DATABASE cinder;
GRANT ALL ON cinder.* TO 'cinder'@'192.168.60.%' IDENTIFIED BY '<CINDER_DB_PWD>';
GRANT ALL ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY '<CINDER_DB_PWD>';
FLUSH PRIVILEGES;
commit;
quit
Logout from MySQL node.
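Optionally, verify from one of the controller nodes that the grants work through the MySQL virtual IP. The invocation below is our own sketch (substitute the real password); it should list the keystone database without errors:

```
mysql -h 192.168.60.10 -u keystone -p'<KEYSTONE_DB_PWD>' -e 'SHOW DATABASES;'
```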
We assume that the controller nodes have the following setup:
- management network: 192.168.60.0/24
- first node: controller-01.cloud.pd.infn.it (192.168.60.41), controller-01.pd.infn.it (90.147.77.41)
- second node: controller-02.cloud.pd.infn.it (192.168.60.44), controller-02.pd.infn.it (90.147.77.44)

First install the YUM repo from RDO:
yum install -y http://rdo.fedorapeople.org/openstack-havana/rdo-release-havana.rpm
When support for Havana is decommissioned, the repo will change; in that case do the following:
yum -y install https://repos.fedorapeople.org/repos/openstack/EOL/openstack-havana/rdo-release-havana-9.noarch.rpm
sed -i 's+openstack/+openstack/EOL/+' /etc/yum.repos.d/rdo-release.repo
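The sed command above rewrites the repo's baseurl so that it points at the EOL archive. A quick illustration of the substitution on a sample line (the line is illustrative, not copied from the real rdo-release.repo):

```shell
# the first "openstack/" occurrence on the line gains an "EOL/" suffix
echo 'baseurl=http://repos.fedorapeople.org/repos/openstack/openstack-havana/epel-6/' \
  | sed 's+openstack/+openstack/EOL/+'
# prints baseurl=http://repos.fedorapeople.org/repos/openstack/EOL/openstack-havana/epel-6/
```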
Install the packages for Keystone, Glance, Nova, Neutron and Horizon(Dashboard):
yum install -y openstack-keystone python-keystoneclient openstack-utils \
  openstack-nova python-novaclient rabbitmq-server openstack-glance \
  python-kombu python-anyjson python-amqplib openstack-neutron \
  python-neutron python-neutronclient openstack-neutron-openvswitch mysql \
  memcached python-memcached mod_wsgi openstack-dashboard \
  openstack-cinder openstack-utils mod_ssl
Apply a workaround to a known bug (see this page for more info):
openstack-config --set /etc/keystone/keystone.conf token expiration 32400
Proceed with Keystone setup:
export SERVICE_TOKEN=$(openssl rand -hex 10)
echo $SERVICE_TOKEN > ~/ks_admin_token
openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token $SERVICE_TOKEN
openstack-config --set /etc/keystone/keystone.conf sql connection "mysql://keystone:<KEYSTONE_DB_PWD>@192.168.60.10/keystone"
openstack-config --set /etc/keystone/keystone.conf DEFAULT bind_host 0.0.0.0
keystone-manage pki_setup --keystone-user keystone --keystone-group keystone
chown -R keystone:keystone /var/log/keystone /etc/keystone/ssl/
su keystone -s /bin/sh -c "keystone-manage db_sync"
With a recent update to havana-9, the last command (keystone-manage db_sync) produces this output (or a similar one with different numbers):
2014-07-23 08:58:35.037 31399 CRITICAL keystone [-] 10 is not 11
Just re-execute the command once again.
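If it keeps failing on the first attempt, the re-execution advice can be automated with a small retry loop. The retry helper below is our own sketch, not part of the guide:

```shell
# retry MAX CMD [ARGS...] -- re-run CMD until it exits 0, at most MAX times
retry() {
    max="$1"; shift
    n=1
    until "$@"; do
        if [ "$n" -ge "$max" ]; then return 1; fi
        echo "attempt $n failed, retrying..." >&2
        n=$((n+1))
    done
}

# e.g.: retry 3 su keystone -s /bin/sh -c "keystone-manage db_sync"
```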
Start Keystone:
service openstack-keystone start
chkconfig openstack-keystone on
Get access to Keystone and create the admin user and tenant:
export SERVICE_TOKEN=`cat ~/ks_admin_token`
export SERVICE_ENDPOINT=http://192.168.60.40:35357/v2.0
keystone service-create --name=keystone --type=identity --description="Keystone Identity Service"
keystone endpoint-create --service keystone --publicurl http://90.147.77.40:5000/v2.0 --adminurl http://90.147.77.40:35357/v2.0 --internalurl http://192.168.60.40:5000/v2.0
keystone user-create --name admin --pass ADMIN_PASS
keystone role-create --name admin
keystone tenant-create --name admin
keystone role-create --name Member
keystone user-role-add --user admin --role admin --tenant admin
\rm -f $HOME/keystone_admin.sh
echo "export OS_USERNAME=admin" > $HOME/keystone_admin.sh
echo "export OS_TENANT_NAME=admin" >> $HOME/keystone_admin.sh
echo "export OS_PASSWORD=ADMIN_PASS" >> $HOME/keystone_admin.sh
echo "export OS_AUTH_URL=http://90.147.77.40:5000/v2.0/" >> $HOME/keystone_admin.sh
keystone tenant-create --name services --description "Services Tenant"
In order to check that the Keystone service is correctly installed, copy the keystone_admin.sh script you've just created to another machine, even your desktop. Install the Python Keystone command line on it (yum -y install python-keystoneclient); then source keystone_admin.sh and try the command:
$ keystone user-list
+----------------------------------+-------+---------+-------+
|                id                |  name | enabled | email |
+----------------------------------+-------+---------+-------+
| c91e623581374e7397c30a85f7a3e462 | admin |   True  |       |
+----------------------------------+-------+---------+-------+
It's better to do this on both controller nodes.
See origin of the problem here.
Create the file /usr/local/bin/keystone_token_flush.sh
:
cat << EOF >> /usr/local/bin/keystone_token_flush.sh
#!/bin/sh
logger -t keystone-cleaner "Starting token cleanup"
/usr/bin/keystone-manage -v -d token_flush
logger -t keystone-cleaner "Ending token cleanup"
EOF
Create the file /etc/logrotate.d/keystone_token_flush
to rotate the log:
cat << EOF >> /etc/logrotate.d/keystone_token_flush
compress
/var/log/keystone_token_flush.log {
    weekly
    rotate 4
    missingok
    compress
    minsize 100k
}
EOF
Execute:
cat << EOF > /etc/cron.d/keystone_token_flush
0 7 * * * root /usr/local/bin/keystone_token_flush.sh >> /var/log/keystone_token_flush.log 2>&1
EOF
chmod +x /usr/local/bin/keystone_token_flush.sh
chmod 0644 /etc/cron.d/keystone_token_flush
Define the TCP port range allowed for inter-node communication (needed for RabbitMQ cluster mode):
\rm -f /etc/rabbitmq/rabbitmq.config
cat << EOF >> /etc/rabbitmq/rabbitmq.config
[{kernel, [
  {inet_dist_listen_min, 9100},
  {inet_dist_listen_max, 9110}
]}].
EOF
Start and enable RabbitMQ:
service rabbitmq-server start
chkconfig rabbitmq-server on
You should see an output like this in the file /var/log/rabbitmq/startup_log
:
              RabbitMQ 3.1.5. Copyright (C) 2007-2013 GoPivotal, Inc.
  ##  ##      Licensed under the MPL.  See http://www.rabbitmq.com/
  ##  ##
  ##########  Logs: /var/log/rabbitmq/rabbit@openstack1.log
  ######  ##        /var/log/rabbitmq/rabbit@openstack1-sasl.log
  ##########
              Starting broker... completed with 0 plugins.
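The Changes list at the top mentions declaring an HA policy so that all queues except RabbitMQ's internal amq.* ones are mirrored across the cluster nodes; this complements the rabbit_ha_queues setting used later for Nova and Neutron. Once the brokers on both controllers are clustered, it can be run on one node:

```
rabbitmqctl set_policy HA '^(?!amq\.).*' '{"ha-mode": "all"}'
```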
Log into the primary controller node, or wherever you've installed the Keystone command line, and source the keystone_admin.sh script that you created above:
source keystone_admin.sh
Then create the Glance user and image service in the Keystone's database:
keystone user-create --name glance --pass GLANCE_PASS
keystone user-role-add --user glance --role admin --tenant services
keystone service-create --name glance --type image --description "Glance Image Service"
keystone endpoint-create --service glance --publicurl "http://90.147.77.40:9292" --adminurl "http://90.147.77.40:9292" --internalurl "http://192.168.60.40:9292"
Log into the primary controller node and modify the relevant configuration files:
glance-api.conf
openstack-config --set /etc/glance/glance-api.conf DEFAULT sql_connection "mysql://glance:<GLANCE_DB_PWD>@192.168.60.10/glance"
openstack-config --set /etc/glance/glance-api.conf DEFAULT sql_idle_timeout 30
openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor "keystone+cachemanagement"
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_host 192.168.60.40
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_uri http://192.168.60.40:35357/v2.0
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_tenant_name services
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_user glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_password GLANCE_PASS
openstack-config --set /etc/glance/glance-api.conf DEFAULT bind_host 0.0.0.0
openstack-config --set /etc/glance/glance-api.conf DEFAULT registry_host 192.168.60.40
# TO CHANGE IN FUTURE (IceHouse) when they've fixed the messaging in glance (by including the oslo framework)
openstack-config --set /etc/glance/glance-api.conf DEFAULT notifier_strategy noop
# The following parameter should equal the CPU number
openstack-config --set /etc/glance/glance-api.conf DEFAULT workers 4
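For reference, after the commands above glance-api.conf should contain an excerpt like the following (only the touched options are shown; the exact layout of the stock file may differ):

```
[DEFAULT]
sql_connection = mysql://glance:<GLANCE_DB_PWD>@192.168.60.10/glance
sql_idle_timeout = 30
bind_host = 0.0.0.0
registry_host = 192.168.60.40
notifier_strategy = noop
workers = 4

[paste_deploy]
flavor = keystone+cachemanagement

[keystone_authtoken]
auth_host = 192.168.60.40
auth_port = 35357
auth_protocol = http
auth_uri = http://192.168.60.40:35357/v2.0
admin_tenant_name = services
admin_user = glance
admin_password = GLANCE_PASS
```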
glance-registry.conf
openstack-config --set /etc/glance/glance-registry.conf DEFAULT sql_connection "mysql://glance:<GLANCE_DB_PWD>@192.168.60.10/glance"
openstack-config --set /etc/glance/glance-registry.conf DEFAULT sql_idle_timeout 30
openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_host 192.168.60.40
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_uri http://192.168.60.40:35357/v2.0
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_tenant_name services
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_user glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_password GLANCE_PASS
openstack-config --set /etc/glance/glance-registry.conf DEFAULT bind_host 0.0.0.0
While still logged into the primary controller node, prepare the paths:
mkdir -p /var/run/glance /var/log/glance
chown -R glance /var/log/glance
chown -R glance /var/run/glance
chown -R glance:glance /var/lib/glance
… and initialize the Glance's database:
su glance -s /bin/sh -c "glance-manage db_sync"
If you get an error like this:
2013-12-20 09:38:23.855 2002 TRACE glance     (self.version, startver))
2013-12-20 09:38:23.855 2002 TRACE glance InvalidVersionError: 5 is not 6
2013-12-20 09:38:23.855 2002 TRACE glance
just execute the same command once again.
To prevent unprivileged users from registering public images, add the following policy in /etc/glance/policy.json
:
"publicize_image": "role:admin"
Still on the primary controller node, start and enable the Glance services:
service openstack-glance-registry start
service openstack-glance-api start
chkconfig openstack-glance-registry on
chkconfig openstack-glance-api on
… and finally create the credential file for glance
cat << EOF > glancerc
export OS_USERNAME=glance
export OS_TENANT_NAME=services
export OS_PASSWORD=GLANCE_PASS
export OS_AUTH_URL=http://192.168.60.40:35357/v2.0/
EOF
You can copy the credential file to any machine where you've installed the Python Glance command line (yum -y install python-glanceclient). From that machine you can access the Glance service (list, create, and delete images, etc.).
In order to check that Glance is correctly installed, log into any machine where you've installed the Glance command line and source the glancerc script that you've copied from the primary controller node; then try these commands:
[root@lxadorigo ~]# wget http://cdn.download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-disk.img
[...]
Saving to: “cirros-0.3.1-x86_64-disk.img”
[...]
2013-12-06 12:25:03 (3.41 MB/s) - “cirros-0.3.1-x86_64-disk.img” saved [13147648/13147648]

[root@lxadorigo ~]# glance image-create --name=cirros --disk-format=qcow2 --container-format=bare --is-public=True < cirros-0.3.1-x86_64-disk.img
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | d972013792949d0d3ba628fbe8685bce     |
| container_format | bare                                 |
| created_at       | 2013-12-06T11:25:04                  |
| deleted          | False                                |
| deleted_at       | None                                 |
| disk_format      | qcow2                                |
| id               | 7cc84fd0-fa20-485f-86e9-c0d4015bacd5 |
| is_public        | True                                 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | cirros                               |
| owner            | 4d7df634c2a7445c975c4fabcaced0e0     |
| protected        | False                                |
| size             | 13147648                             |
| status           | active                               |
| updated_at       | 2013-12-06T11:25:04                  |
+------------------+--------------------------------------+

[root@lxadorigo ~]# glance index
ID                                   Name   Disk Format  Container Format  Size
------------------------------------ ------ ------------ ----------------- --------
7cc84fd0-fa20-485f-86e9-c0d4015bacd5 cirros qcow2        bare              13147648
Glance can use a lot of disk space to cache images. This cache needs to be cleaned periodically to avoid running out of disk space.
On both controller nodes, create the file /etc/cron.d/glance_cache_purger by executing the following commands:
cat << EOF > /etc/cron.d/glance_cache_purger
0 8 * * * root /usr/bin/glance-cache-pruner >> /var/log/glance_cache_purger.log 2>&1
EOF
Then create the cleaner script's log rotate /etc/logrotate.d/glance_cache_purger
:
cat << EOF > /etc/logrotate.d/glance_cache_purger
compress
/var/log/glance_cache_purger.log {
    weekly
    rotate 4
    missingok
    compress
    minsize 100k
}
EOF
Log into the primary controller node, or wherever you've installed the Keystone command line, and source the keystone_admin.sh script that you created above:
source keystone_admin.sh
Add NOVA service, user and endpoint to Keystone's database:
keystone user-create --name nova --pass NOVA_PASS
keystone user-role-add --user nova --role admin --tenant services
keystone service-create --name nova --type compute --description "OpenStack Compute Service"
SERVICE_NOVA_ID=`keystone service-list|grep nova|awk '{print $2}'`
keystone endpoint-create --service-id $SERVICE_NOVA_ID \
  --publicurl http://90.147.77.40:8774/v2/%\(tenant_id\)s \
  --adminurl http://90.147.77.40:8774/v2/%\(tenant_id\)s \
  --internalurl http://192.168.60.40:8774/v2/%\(tenant_id\)s
keystone service-create --name nova_ec2 --type ec2 --description "EC2 Service"
SERVICE_EC2_ID=`keystone service-list|grep nova_ec2|awk '{print $2}'`
keystone endpoint-create --service-id $SERVICE_EC2_ID \
  --publicurl http://90.147.77.40:8773/services/Cloud \
  --adminurl http://90.147.77.40:8773/services/Admin \
  --internalurl http://192.168.60.40:8773/services/Cloud
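The SERVICE_NOVA_ID extraction above relies on the pipe-delimited table layout of keystone service-list: the id is the second whitespace-separated field of the matching row. A quick sanity check of that pipeline on a fabricated sample row (the id value is made up):

```shell
# simulate one row of keystone service-list output (shape assumed)
row='| 1b2c3d4e5f | nova | compute | OpenStack Compute Service |'
echo "$row" | grep nova | awk '{print $2}'
# prints 1b2c3d4e5f
```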
Log into the primary controller node and modify the relevant configuration files:
nova.conf:
openstack-config --set /etc/nova/nova.conf database connection "mysql://nova:<NOVA_DB_PWD>@192.168.60.10/nova"
openstack-config --set /etc/nova/nova.conf database idle_timeout 30
openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend nova.openstack.common.rpc.impl_kombu
openstack-config --set /etc/nova/nova.conf DEFAULT rabbit_hosts 192.168.60.41:5672,192.168.60.44:5672
openstack-config --set /etc/nova/nova.conf DEFAULT rabbit_ha_queues True
openstack-config --set /etc/nova/nova.conf DEFAULT glance_host 192.168.60.40
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.60.40
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 90.147.77.40
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address 192.168.60.40
openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_host 192.168.60.40
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name services
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password NOVA_PASS
openstack-config --set /etc/nova/nova.conf DEFAULT api_paste_config /etc/nova/api-paste.ini
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_metadata_proxy_shared_secret METADATA_PASS
openstack-config --set /etc/nova/nova.conf DEFAULT service_neutron_metadata_proxy true
openstack-config --set /etc/nova/nova.conf DEFAULT memcached_servers 192.168.60.41:11211,192.168.60.44:11211
openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis ec2,osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf DEFAULT ec2_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf DEFAULT ec2_listen_port 8773
openstack-config --set /etc/nova/nova.conf DEFAULT cpu_allocation_ratio 4.0
openstack-config --set /etc/nova/nova.conf DEFAULT libvirt_vif_driver nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
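The HA-relevant part of the resulting nova.conf (both brokers listed, mirrored queues, both memcached instances, services reached through the VIPs) looks like this excerpt, derived from the commands above:

```
[DEFAULT]
rabbit_hosts = 192.168.60.41:5672,192.168.60.44:5672
rabbit_ha_queues = True
memcached_servers = 192.168.60.41:11211,192.168.60.44:11211
glance_host = 192.168.60.40

[database]
connection = mysql://nova:<NOVA_DB_PWD>@192.168.60.10/nova
idle_timeout = 30
```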
api-paste.ini:
openstack-config --set /etc/nova/api-paste.ini filter:authtoken paste.filter_factory keystoneclient.middleware.auth_token:filter_factory
openstack-config --set /etc/nova/api-paste.ini filter:authtoken auth_host 192.168.60.40
openstack-config --set /etc/nova/api-paste.ini filter:authtoken auth_port 35357
openstack-config --set /etc/nova/api-paste.ini filter:authtoken auth_protocol http
openstack-config --set /etc/nova/api-paste.ini filter:authtoken auth_uri http://192.168.60.40:5000/v2.0
openstack-config --set /etc/nova/api-paste.ini filter:authtoken admin_tenant_name services
openstack-config --set /etc/nova/api-paste.ini filter:authtoken admin_user nova
openstack-config --set /etc/nova/api-paste.ini filter:authtoken admin_password NOVA_PASS
While still logged into the primary controller node, initialize the database:
su nova -s /bin/sh -c "nova-manage db sync"
Modify the file /etc/nova/policy.json
so that users can manage only their VMs:
# diff -c /etc/nova/policy.json /etc/nova/policy.json.orig
*** /etc/nova/policy.json	2014-06-03 12:17:38.313909830 +0200
--- /etc/nova/policy.json.orig	2014-06-03 12:16:27.573839167 +0200
***************
*** 1,8 ****
  {
      "context_is_admin": "role:admin",
      "admin_or_owner": "is_admin:True or project_id:%(project_id)s",
!     "admin_or_user": "is_admin:True or user_id:%(user_id)s",
!     "default": "rule:admin_or_user",
      "cells_scheduler_filter:TargetCellFilter": "is_admin:True",
--- 1,7 ----
  {
      "context_is_admin": "role:admin",
      "admin_or_owner": "is_admin:True or project_id:%(project_id)s",
!     "default": "rule:admin_or_owner",
      "cells_scheduler_filter:TargetCellFilter": "is_admin:True",
***************
*** 10,16 ****
      "compute:create:attach_network": "",
      "compute:create:attach_volume": "",
      "compute:create:forced_host": "is_admin:True",
-     "compute:get": "rule:admin_or_owner",
      "compute:get_all": "",
      "compute:get_all_tenants": "",
      "compute:unlock_override": "rule:admin_api",
--- 9,14 ----
… and start and enable the Nova services:
service openstack-nova-api start
service openstack-nova-cert start
service openstack-nova-consoleauth start
service openstack-nova-scheduler start
service openstack-nova-conductor start
service openstack-nova-novncproxy start
chkconfig openstack-nova-api on
chkconfig openstack-nova-cert on
chkconfig openstack-nova-consoleauth on
chkconfig openstack-nova-scheduler on
chkconfig openstack-nova-conductor on
chkconfig openstack-nova-novncproxy on
From your desktop, or wherever you've copied keystone_admin.sh
and installed the Nova command-line client, try to execute:
bash-4.1$ nova service-list +------------------+--------------------------------+----------+---------+-------+----------------------------+-----------------+ | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason | +------------------+--------------------------------+----------+---------+-------+----------------------------+-----------------+ | nova-consoleauth | controller-01.cloud.pd.infn.it | internal | enabled | up | 2014-02-17T14:11:24.000000 | None | | nova-conductor | controller-01.cloud.pd.infn.it | internal | enabled | up | 2014-02-17T14:11:24.000000 | None | | nova-scheduler | controller-01.cloud.pd.infn.it | internal | enabled | up | 2014-02-17T14:11:23.000000 | None | | nova-cert | controller-01.cloud.pd.infn.it | internal | enabled | up | 2014-02-17T14:11:24.000000 | None | +------------------+--------------------------------+----------+---------+-------+----------------------------+-----------------+ bash-4.1$ nova availability-zone-list +-----------------------------------+----------------------------------------+ | Name | Status | +-----------------------------------+----------------------------------------+ | internal | available | | |- controller-01.cloud.pd.infn.it | | | | |- nova-conductor | enabled :-) 2014-02-17T14:12:04.000000 | | | |- nova-consoleauth | enabled :-) 2014-02-17T14:12:04.000000 | | | |- nova-scheduler | enabled :-) 2014-02-17T14:12:03.000000 | | | |- nova-cert | enabled :-) 2014-02-17T14:12:04.000000 | +-----------------------------------+----------------------------------------+ bash-4.1$ nova endpoints +-------------+----------------------------------+ | glance | Value | +-------------+----------------------------------+ | adminURL | http://192.168.60.40:9292 | | region | regionOne | | publicURL | http://192.168.60.40:9292 | | internalURL | http://192.168.60.40:9292 | | id | 62364ed9384d4231b09841901d415e5a | +-------------+----------------------------------+ 
+-------------+---------------------------------------------------------------+ | nova | Value | +-------------+---------------------------------------------------------------+ | adminURL | http://192.168.60.40:8774/v2/9de88ad06ed64bbb8f721711bf4d7bd8 | | region | regionOne | | id | 190cb5922b2f4ede868a328003422322 | | serviceName | nova | | internalURL | http://192.168.60.40:8774/v2/9de88ad06ed64bbb8f721711bf4d7bd8 | | publicURL | http://192.168.60.40:8774/v2/9de88ad06ed64bbb8f721711bf4d7bd8 | +-------------+---------------------------------------------------------------+ +-------------+----------------------------------+ | keystone | Value | +-------------+----------------------------------+ | adminURL | http://192.168.60.40:35357/v2.0 | | region | regionOne | | publicURL | http://192.168.60.40:5000/v2.0 | | internalURL | http://192.168.60.40:5000/v2.0 | | id | 511771b79f3946c5901c48d72e9a324c | +-------------+----------------------------------+
Even better, try the above commands from your desktop after sourcing the keystone_admin.sh
script.
usermod -s /bin/bash nova mkdir -p -m 700 ~nova/.ssh chown nova.nova ~nova/.ssh su - nova cd .ssh ssh-keygen -f id_rsa -b 1024 -P "" cp id_rsa.pub authorized_keys cat << EOF >> config Host * StrictHostKeyChecking no UserKnownHostsFile=/dev/null EOF
Distribute the content of ~nova/.ssh
to the second controller node and to all the compute nodes.
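A sketch of that distribution step (the helper name and the host list are assumptions; on RHEL-derived systems nova's home directory is /var/lib/nova — adjust the node names to your deployment):

```shell
# Copy the nova SSH material to each remote node and fix ownership there.
# distribute_nova_ssh is a hypothetical helper; pass your real node names.
distribute_nova_ssh() {
    for node in "$@"; do
        scp -rp /var/lib/nova/.ssh "root@$node:/var/lib/nova/"
        ssh "root@$node" 'chown -R nova:nova /var/lib/nova/.ssh && chmod 700 /var/lib/nova/.ssh'
    done
}
```

For example: `distribute_nova_ssh controller-02.cloud.pd.infn.it compute-01.cloud.pd.infn.it` (the compute node name here is a placeholder).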
Login into the primary controller node, or wherever you've installed the Keystone's command line, and source the script keystone_admin.sh that you created above:
source ~/keystonerc_admin
Then, create the endpoint, service and user information in the Keystone's database for Neutron:
keystone user-create --name neutron --pass NEUTRON_PASS keystone user-role-add --user neutron --role admin --tenant services keystone service-create --name neutron --type network --description "OpenStack Networking Service" SERVICE_NEUTRON_ID=`keystone service-list|grep neutron|awk '{print $2}'` keystone endpoint-create --service-id $SERVICE_NEUTRON_ID \ --publicurl "http://90.147.77.40:9696" \ --adminurl "http://90.147.77.40:9696" \ --internalurl "http://192.168.60.40:9696"
Login into the primary controller node and modify the configuration files.
neutron.conf:
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_host 192.168.60.40 openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_tenant_name services openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_user neutron openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_password NEUTRON_PASS openstack-config --set /etc/neutron/neutron.conf \ keystone_authtoken auth_url http://192.168.60.40:35357/v2.0 openstack-config --set /etc/neutron/neutron.conf \ keystone_authtoken auth_uri http://192.168.60.40:35357/v2.0 openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone openstack-config --set /etc/neutron/neutron.conf \ DEFAULT rpc_backend neutron.openstack.common.rpc.impl_kombu openstack-config --set /etc/neutron/neutron.conf \ DEFAULT rabbit_hosts 192.168.60.41:5672,192.168.60.44:5672 openstack-config --set /etc/neutron/neutron.conf DEFAULT rabbit_ha_queues True openstack-config --set /etc/neutron/neutron.conf \ DEFAULT core_plugin neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2 openstack-config --set /etc/neutron/neutron.conf \ agent root_helper "sudo neutron-rootwrap /etc/neutron/rootwrap.conf" openstack-config --set /etc/neutron/neutron.conf \ database connection "mysql://neutron:<NEUTRON_DB_PWD>@192.168.60.10/neutron" openstack-config --set /etc/neutron/neutron.conf DEFAULT verbose False openstack-config --set /etc/neutron/neutron.conf DEFAULT dhcp_agents_per_network 2 openstack-config --set /etc/neutron/neutron.conf DEFAULT dhcp_lease_duration 86400 openstack-config --set /etc/neutron/neutron.conf DEFAULT agent_down_time 75 openstack-config --set /etc/neutron/neutron.conf agent report_interval 30
api-paste.ini:
openstack-config --set /etc/neutron/api-paste.ini \ filter:authtoken paste.filter_factory keystoneclient.middleware.auth_token:filter_factory openstack-config --set /etc/neutron/api-paste.ini filter:authtoken admin_tenant_name services openstack-config --set /etc/neutron/api-paste.ini filter:authtoken admin_user neutron openstack-config --set /etc/neutron/api-paste.ini filter:authtoken admin_password NEUTRON_PASS
ovs_neutron_plugin.ini:
openstack-config --set /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini ovs tenant_network_type gre openstack-config --set /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini ovs enable_tunneling True openstack-config --set /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini ovs tunnel_id_ranges 1:1000 openstack-config --set /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini \ securitygroup firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
nova.conf:
openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.neutronv2.api.API openstack-config --set /etc/nova/nova.conf DEFAULT neutron_url http://192.168.60.40:9696 openstack-config --set /etc/nova/nova.conf DEFAULT neutron_auth_strategy keystone openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_tenant_name services openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_username neutron openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_password NEUTRON_PASS openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_auth_url http://192.168.60.40:35357/v2.0 openstack-config --set /etc/nova/nova.conf DEFAULT \ linuxnet_interface_driver nova.network.linux_net.LinuxOVSInterfaceDriver openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api neutron
While still logged into the primary controller node, configure the OVS plugin.
cd /etc/neutron ln -s plugins/openvswitch/ovs_neutron_plugin.ini plugin.ini cd -
… and restart Nova's services (as you've just modified its configuration file):
service openstack-nova-api restart service openstack-nova-scheduler restart service openstack-nova-conductor restart
While still logged into the primary controller node, start and enable the Neutron server:
neutron-db-manage --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini stamp head
Its output should look like:
No handlers could be found for logger "neutron.common.legacy" INFO [alembic.migration] Context impl MySQLImpl. INFO [alembic.migration] Will assume non-transactional DDL.
Now start and enable neutron-server:
service neutron-server start chkconfig neutron-server on
Login into the primary controller node, or wherever you've installed the Keystone's command line, and source the script keystone_admin.sh
that you created above:
source ~/keystonerc_admin
Then, create the endpoint, service and user information in the Keystone's database for Cinder:
keystone user-create --name cinder --pass CINDER_PASS keystone user-role-add --user cinder --role admin --tenant services keystone service-create --name cinder --type volume --description "Cinder Volume Service" keystone service-create --name=cinderv2 --type=volumev2 --description="Cinder Volume Service V2" keystone endpoint-create --service cinder --publicurl http://90.147.77.40:8776/v1/%\(tenant_id\)s --internalurl http://192.168.60.40:8776/v1/%\(tenant_id\)s --adminurl http://90.147.77.40:8776/v1/%\(tenant_id\)s keystone endpoint-create --service cinderv2 --publicurl http://90.147.77.40:8776/v2/%\(tenant_id\)s --internalurl http://192.168.60.40:8776/v2/%\(tenant_id\)s --adminurl http://90.147.77.40:8776/v2/%\(tenant_id\)s
Login into the primary controller node and modify the configuration files.
cinder.conf:
openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_host 192.168.60.40 openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_tenant_name services openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_user cinder openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_password CINDER_PASS openstack-config --set /etc/cinder/cinder.conf DEFAULT rpc_backend cinder.openstack.common.rpc.impl_kombu openstack-config --set /etc/cinder/cinder.conf DEFAULT rabbit_hosts 192.168.60.41:5672,192.168.60.44:5672 openstack-config --set /etc/cinder/cinder.conf DEFAULT rabbit_ha_queues True openstack-config --set /etc/cinder/cinder.conf DEFAULT sql_idle_timeout 30 #openstack-config --set /etc/cinder/cinder.conf DEFAULT osapi_volume_listen 192.168.60.40 openstack-config --set /etc/cinder/cinder.conf DEFAULT rootwrap_config /etc/cinder/rootwrap.conf openstack-config --set /etc/cinder/cinder.conf DEFAULT api_paste_config /etc/cinder/api-paste.ini openstack-config --set /etc/cinder/cinder.conf DEFAULT control_exchange cinder openstack-config --set /etc/cinder/cinder.conf DEFAULT sql_connection "mysql://cinder:<CINDER_DB_PWD>@192.168.60.10/cinder"
Initialize the Cinder database:
su cinder -s /bin/sh -c "cinder-manage db sync"
And finally start API services:
service openstack-cinder-api start chkconfig openstack-cinder-api on service openstack-cinder-scheduler start chkconfig openstack-cinder-scheduler on
Modify the file /etc/openstack-dashboard/local_settings
: look for the CACHES string, and substitute whatever is there with:
CACHES = { 'default': { 'BACKEND' : 'django.core.cache.backends.memcached.MemcachedCache', 'LOCATION' : '192.168.60.41:11211', } }
Alternatively, you can apply this change with the following command:
sed -i "s+django\.core\.cache\.backends\.locmem\.LocMemCache'+django\.core\.cache\.backends\.memcached\.MemcachedCache',\n\t'LOCATION' : '192.168.60.41:11211',+" /etc/openstack-dashboard/local_settings
Note that the TCP port 11211 and the IP address must match those in the file /etc/sysconfig/memcached
:
PORT="11211" USER="memcached" MAXCONN="1024" CACHESIZE="64" OPTIONS="-l 192.168.60.41"
Now, look for the string OPENSTACK_HOST
and set it to:
OPENSTACK_HOST = "192.168.60.40"
by executing this command:
sed -i 's+OPENSTACK_HOST = "127.0.0.1"+OPENSTACK_HOST = "192.168.60.40"+' /etc/openstack-dashboard/local_settings
Modify the ALLOWED_HOSTS
parameter:
ALLOWED_HOSTS = ['*']
by executing the command
sed -i "s+ALLOWED_HOSTS = .*+ALLOWED_HOSTS = ['*']+" /etc/openstack-dashboard/local_settings
Execute the following commands:
sed -i 's+^Listen.*+Listen 90.147.77.41:80+' /etc/httpd/conf/httpd.conf echo "ServerName cloud-areapd.pd.infn.it:80" >> /etc/httpd/conf/httpd.conf echo "RedirectMatch permanent ^/$ /dashboard/" >> /etc/httpd/conf.d/openstack-dashboard.conf echo "RedirectMatch ^/$ /dashboard/" > /etc/httpd/conf.d/rootredirect.conf
To address an observed problem related to the number of open files, execute the following command on both controller nodes:
cat << EOF >> /etc/security/limits.conf * soft nofile 4096 * hard nofile 4096 EOF
Start and enable the WebServer:
service httpd start service memcached start chkconfig httpd on chkconfig memcached on
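An optional smoke test (the `check_memcached` helper is ours, not part of the guide): ask memcached for its version over the management address configured above, assuming a netcat (`nc`) binary is installed.

```shell
# Query a memcached instance for its version; a "VERSION x.y.z" reply
# means the daemon is listening on the given address and port.
check_memcached() {
    printf 'version\r\nquit\r\n' | nc "$1" "$2"
}
```

For example, `check_memcached 192.168.60.41 11211` should print a line starting with `VERSION`.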
Please do not consider this configuration optional: it is needed to encrypt users' passwords in transit.
Install the mod_ssl
package on both controller nodes:
yum -y install mod_ssl
Execute the following commands:
#sed -i 's+^Listen.*+Listen 8443+' /etc/httpd/conf.d/ssl.conf #sed -i 's+VirtualHost _default_:443+VirtualHost _default_:8443+' /etc/httpd/conf.d/ssl.conf sed -i 's+^SSLCertificateFile.*+SSLCertificateFile /etc/grid-security/hostcert.pem+' /etc/httpd/conf.d/ssl.conf sed -i 's+^SSLCertificateKeyFile.*+SSLCertificateKeyFile /etc/grid-security/hostkey.pem+' /etc/httpd/conf.d/ssl.conf echo "RewriteEngine On" >> /etc/httpd/conf/httpd.conf echo "RewriteCond %{HTTPS} !=on" >> /etc/httpd/conf/httpd.conf echo "RewriteRule ^/?(.*) https://%{SERVER_NAME}:443/\$1 [R,L]" >> /etc/httpd/conf/httpd.conf
Restart httpd:
service httpd restart
You can stop here if you need neither high availability via the second node nor SSL support.
Login into the secondary controller node and configure the RabbitMQ to use the already specified TCP port range:
\rm -f /etc/rabbitmq/rabbitmq.config cat << EOF >> /etc/rabbitmq/rabbitmq.config [{kernel, [ {inet_dist_listen_min, 9100}, {inet_dist_listen_max, 9110} ]}]. EOF
While still logged into the secondary controller node, start and enable Rabbit:
service rabbitmq-server start chkconfig rabbitmq-server on
This first start generates the Erlang cookie. Now stop the server:
service rabbitmq-server stop
RabbitMQ clustering requires that all nodes share the same Erlang cookie, so copy the cookie from the primary node:
scp root@controller-01.cloud.pd.infn.it:/var/lib/rabbitmq/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie
Change the cookie's ownership and restart the RabbitMQ server:
chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie service rabbitmq-server start
While logged into the secondary controller node, stop the application:
rabbitmqctl stop_app rabbitmqctl reset
… then join the server running in the primary node:
rabbitmqctl join_cluster rabbit@controller-01 Clustering node 'rabbit@controller-02' with 'rabbit@controller-01' ... ...done. rabbitmqctl start_app Starting node 'rabbit@controller-02' ... ...done. # see: http://goo.gl/y0aVmp rabbitmqctl set_policy HA '^(?!amq\.).*' '{"ha-mode": "all"}'
[root@controller-02 ~]# rabbitmqctl cluster_status Cluster status of node 'rabbit@controller-02' ... [{nodes,[{disc,['rabbit@controller-01','rabbit@controller-02']}]}, {running_nodes,['rabbit@controller-01','rabbit@controller-02']}, {partitions,[]}] ...done. [root@controller-02 ~]# rabbitmqctl list_policies Listing policies ... / HA ^(?!amq\\.).* {"ha-mode":"all"} 0 ...done. [root@controller-01 ~]# rabbitmqctl cluster_status Cluster status of node 'rabbit@controller-01' ... [{nodes,[{disc,['rabbit@controller-01','rabbit@controller-02']}]}, {running_nodes,['rabbit@controller-02','rabbit@controller-01']}, {partitions,[]}] ...done. [root@controller-02 ~]# rabbitmqctl list_policies Listing policies ... / HA ^(?!amq\\.).* {"ha-mode":"all"} 0 ...done.
Login into the secondary controller node; copy Keystone, Glance, Nova, Neutron, Cinder and Horizon's configurations from primary controller node:
scp controller-01.cloud.pd.infn.it:/etc/openstack-dashboard/local_settings /etc/openstack-dashboard/ scp -r controller-01.cloud.pd.infn.it:/etc/keystone /etc/ scp -r controller-01.cloud.pd.infn.it:/etc/neutron /etc/ scp -r controller-01.cloud.pd.infn.it:/etc/cinder /etc/ scp -r controller-01.cloud.pd.infn.it:/etc/glance /etc/ scp -r controller-01.cloud.pd.infn.it:/etc/nova /etc/ scp controller-01.cloud.pd.infn.it:/etc/sysconfig/memcached /etc/sysconfig/ \rm -f /etc/neutron/plugin.ini ln -s /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini /etc/neutron/plugin.ini
While still logged into the secondary controller node, finalize the setup:
keystone-manage pki_setup --keystone-user keystone --keystone-group keystone mkdir -p /var/run/glance /var/log/glance mkdir -p /var/run/keystone /var/log/keystone chown -R glance:glance /var/log/glance /var/lib/glance /var/run/glance chown -R keystone:keystone /var/run/keystone /var/log/keystone /var/lib/keystone /etc/keystone/ssl/ chown -R neutron:neutron /var/lib/neutron
… Dashboard's parameters (be careful to use the correct IP: above we've used 90.147.77.41, here .44 must be used):
sed -i 's+^Listen.*+Listen 90.147.77.44:80+' /etc/httpd/conf/httpd.conf echo "ServerName cloud-areapd.pd.infn.it:80" >> /etc/httpd/conf/httpd.conf echo "RedirectMatch permanent ^/$ /dashboard/" >> /etc/httpd/conf.d/openstack-dashboard.conf echo "RedirectMatch ^/$ /dashboard/" > /etc/httpd/conf.d/rootredirect.conf
… HTTPS for Dashboard:
#sed -i 's+^Listen.*+Listen 8443+' /etc/httpd/conf.d/ssl.conf #sed -i 's+VirtualHost _default_:443+VirtualHost _default_:8443+' /etc/httpd/conf.d/ssl.conf sed -i 's+^SSLCertificateFile.*+SSLCertificateFile /etc/grid-security/hostcert.pem+' /etc/httpd/conf.d/ssl.conf sed -i 's+^SSLCertificateKeyFile.*+SSLCertificateKeyFile /etc/grid-security/hostkey.pem+' /etc/httpd/conf.d/ssl.conf echo "RewriteEngine On" >> /etc/httpd/conf/httpd.conf echo "RewriteCond %{HTTPS} !=on" >> /etc/httpd/conf/httpd.conf echo "RewriteRule ^/?(.*) https://%{SERVER_NAME}:443/\$1 [R,L]" >> /etc/httpd/conf/httpd.conf
… increase the number of allowed open files:
cat << EOF >> /etc/security/limits.conf * soft nofile 4096 * hard nofile 4096 EOF
… change memcached's listening IP address:
sed -i 's+192.168.60.41+192.168.60.44+' /etc/sysconfig/memcached
… change the location of the memcached service in the dashboard's config file:
sed -i 's+192.168.60.41:11211+192.168.60.44:11211+' /etc/openstack-dashboard/local_settings
… and finally turn all services ON, and enable them:
service openstack-keystone start service openstack-glance-registry start service openstack-glance-api start service openstack-nova-api start service openstack-nova-cert start service openstack-nova-consoleauth start service openstack-nova-scheduler start service openstack-nova-conductor start service openstack-nova-novncproxy start service neutron-server start service httpd start service memcached start service openstack-cinder-api start service openstack-cinder-scheduler start chkconfig openstack-keystone on chkconfig openstack-glance-registry on chkconfig openstack-glance-api on chkconfig openstack-nova-api on chkconfig openstack-nova-cert on chkconfig openstack-nova-consoleauth on chkconfig openstack-nova-scheduler on chkconfig openstack-nova-conductor on chkconfig openstack-nova-novncproxy on chkconfig neutron-server on chkconfig httpd on chkconfig memcached on chkconfig openstack-cinder-api on chkconfig openstack-cinder-scheduler on
On your desktop, source the file keystone_admin.sh
and try the commands:
bash-4.1$ nova availability-zone-list +-----------------------------------+----------------------------------------+ | Name | Status | +-----------------------------------+----------------------------------------+ | internal | available | | |- controller-02.cloud.pd.infn.it | | | | |- nova-conductor | enabled :-) 2014-02-17T14:17:01.000000 | | | |- nova-consoleauth | enabled :-) 2014-02-17T14:17:10.000000 | | | |- nova-scheduler | enabled :-) 2014-02-17T14:17:10.000000 | | | |- nova-cert | enabled :-) 2014-02-17T14:17:01.000000 | | |- controller-01.cloud.pd.infn.it | | | | |- nova-conductor | enabled :-) 2014-02-17T14:17:04.000000 | | | |- nova-consoleauth | enabled :-) 2014-02-17T14:17:04.000000 | | | |- nova-scheduler | enabled :-) 2014-02-17T14:17:03.000000 | | | |- nova-cert | enabled :-) 2014-02-17T14:17:04.000000 | +-----------------------------------+----------------------------------------+ bash-4.1$ cinder service-list +------------------+--------------------------------+------+---------+-------+----------------------------+ | Binary | Host | Zone | Status | State | Updated_at | +------------------+--------------------------------+------+---------+-------+----------------------------+ | cinder-scheduler | controller-01.cloud.pd.infn.it | nova | enabled | up | 2014-05-09T09:54:29.000000 | | cinder-scheduler | controller-02.cloud.pd.infn.it | nova | enabled | up | 2014-05-09T09:54:21.000000 | +------------------+--------------------------------+------+---------+-------+----------------------------+
First of all, on both controller nodes, stop all OpenStack services except Keystone (do not stop memcached
either):
service openstack-glance-registry stop service openstack-glance-api stop service openstack-nova-api stop service openstack-nova-cert stop service openstack-nova-consoleauth stop service openstack-nova-scheduler stop service openstack-nova-conductor stop service openstack-nova-novncproxy stop service neutron-server stop service httpd stop service openstack-cinder-api stop service openstack-cinder-scheduler stop
Before proceeding, note that the hostcert.pem
and hostkey.pem
files must be concatenated (with the cat
command) into the single file hostcertkey.pem
To upgrade to HAProxy 1.5.x on SL6/CentOS 6, execute the following commands:
wget --no-check-certificate --user=cloud_pd --ask-password https://ci-01.cnaf.infn.it/igi-mw/generic_sources/cloud_pd/rpms/haproxy.repo mv haproxy.repo /etc/yum.repos.d yum clean all yum -y update haproxy
Modify haproxy.cfg
on the HAProxy nodes, substituting the lines listed above with the following (do not modify the two sections global
and defaults
):
listen dashboard_public_ssl bind 90.147.77.40:443 balance source option tcpka option tcplog server controller-01.cloud.pd.infn.it 192.168.60.41:443 check inter 2000 rise 2 fall 3 server controller-02.cloud.pd.infn.it 192.168.60.44:443 check inter 2000 rise 2 fall 3 listen nova_metadata_server bind 192.168.60.40:8775 balance source option tcpka option tcplog server controller-01.cloud.pd.infn.it 192.168.60.41:8775 check inter 2000 rise 2 fall 3 server controller-02.cloud.pd.infn.it 192.168.60.44:8775 check inter 2000 rise 2 fall 3 listen glance_registry bind 192.168.60.40:9191 balance source option tcpka option tcplog server controller-01.cloud.pd.infn.it 192.168.60.41:9191 check inter 2000 rise 2 fall 3 server controller-02.cloud.pd.infn.it 192.168.60.44:9191 check inter 2000 rise 2 fall 3 listen rabbitmq-server bind 192.168.60.40:5672 balance roundrobin mode tcp server controller-01.cloud.pd.infn.it 192.168.60.41:5672 check inter 2000 rise 2 fall 3 server controller-02.cloud.pd.infn.it 192.168.60.44:5672 check inter 2000 rise 2 fall 3 listen epmd bind 192.168.60.40:4369 balance roundrobin server controller-01.cloud.pd.infn.it 192.168.60.41:4369 check inter 2000 rise 2 fall 3 server controller-02.cloud.pd.infn.it 192.168.60.44:4369 check inter 2000 rise 2 fall 3 listen nova_memcached_cluster bind 192.168.60.40:11211 balance source option tcpka option tcplog server controller-01.cloud.pd.infn.it 192.168.60.41:11211 check inter 2000 rise 2 fall 5 server controller-02.cloud.pd.infn.it 192.168.60.44:11211 check inter 2000 rise 2 fall 5 frontend keystone-admin_pub bind 90.147.77.40:35357 ssl crt /etc/grid-security/hostcertkey.pem ca-file /etc/grid-security/certificates/INFN-CA-2006.pem mode http option httpclose option forwardfor reqadd X-Forwarded-Proto:\ https default_backend keystone-admin frontend keystone-public_pub bind 90.147.77.40:5000 ssl crt /etc/grid-security/hostcertkey.pem ca-file /etc/grid-security/certificates/INFN-CA-2006.pem mode http option httpclose 
option forwardfor reqadd X-Forwarded-Proto:\ https default_backend keystone-public frontend glanceapi bind 192.168.60.40:9292 ssl crt /etc/grid-security/hostcertkey.pem ca-file /etc/grid-security/certificates/INFN-CA-2006.pem mode http option httpclose option forwardfor reqadd X-Forwarded-Proto:\ https default_backend glanceapi frontend glanceapi_pub bind 90.147.77.40:9292 ssl crt /etc/grid-security/hostcertkey.pem ca-file /etc/grid-security/certificates/INFN-CA-2006.pem mode http option httpclose option forwardfor reqadd X-Forwarded-Proto:\ https default_backend glanceapi frontend novaapi bind 192.168.60.40:8774 ssl crt /etc/grid-security/hostcertkey.pem ca-file /etc/grid-security/certificates/INFN-CA-2006.pem mode http option httpclose option forwardfor reqadd X-Forwarded-Proto:\ https default_backend novaapi frontend novaapi_pub bind 90.147.77.40:8774 ssl crt /etc/grid-security/hostcertkey.pem ca-file /etc/grid-security/certificates/INFN-CA-2006.pem mode http option httpclose option forwardfor reqadd X-Forwarded-Proto:\ https default_backend novaapi frontend cinder bind 192.168.60.40:8776 ssl crt /etc/grid-security/hostcertkey.pem ca-file /etc/grid-security/certificates/INFN-CA-2006.pem mode http option httpclose option forwardfor reqadd X-Forwarded-Proto:\ https default_backend cinderapi frontend cinder_pub bind 90.147.77.40:8776 ssl crt /etc/grid-security/hostcertkey.pem ca-file /etc/grid-security/certificates/INFN-CA-2006.pem mode http option httpclose option forwardfor reqadd X-Forwarded-Proto:\ https default_backend cinderapi frontend neutron bind 192.168.60.40:9696 ssl crt /etc/grid-security/hostcertkey.pem ca-file /etc/grid-security/certificates/INFN-CA-2006.pem mode http option httpclose option forwardfor reqadd X-Forwarded-Proto:\ https default_backend neutronapi frontend neutron_pub bind 90.147.77.40:9696 ssl crt /etc/grid-security/hostcertkey.pem ca-file /etc/grid-security/certificates/INFN-CA-2006.pem mode http option httpclose option forwardfor 
reqadd X-Forwarded-Proto:\ https default_backend neutronapi frontend novnc_pub bind 90.147.77.40:6080 ssl crt /etc/grid-security/hostcertkey.pem ca-file /etc/grid-security/certificates/INFN-CA-2006.pem mode http option httpclose option forwardfor reqadd X-Forwarded-Proto:\ https default_backend novnc frontend ec2_api bind 192.168.60.40:8773 ssl crt /etc/grid-security/hostcertkey.pem ca-file /etc/grid-security/certificates/INFN-CA-2006.pem mode http option httpclose option forwardfor reqadd X-Forwarded-Proto:\ https default_backend ec2api frontend ec2_api_pub bind 90.147.77.40:8773 ssl crt /etc/grid-security/hostcertkey.pem ca-file /etc/grid-security/certificates/INFN-CA-2006.pem mode http option httpclose option forwardfor reqadd X-Forwarded-Proto:\ https default_backend ec2api backend keystone-admin mode http balance source option httpchk server controller-01.cloud.pd.infn.it 192.168.60.41:35357 check inter 2000 rise 2 fall 3 server controller-02.cloud.pd.infn.it 192.168.60.44:35357 check inter 2000 rise 2 fall 3 backend keystone-public mode http balance source option httpchk server controller-01.cloud.pd.infn.it 192.168.60.41:5000 check inter 2000 rise 2 fall 3 server controller-02.cloud.pd.infn.it 192.168.60.44:5000 check inter 2000 rise 2 fall 3 backend glanceapi mode http balance source option httpchk server controller-01.cloud.pd.infn.it 192.168.60.41:9292 check inter 2000 rise 2 fall 3 server controller-02.cloud.pd.infn.it 192.168.60.44:9292 check inter 2000 rise 2 fall 3 backend novaapi mode http balance source option httpchk server controller-01.cloud.pd.infn.it 192.168.60.41:8774 check inter 2000 rise 2 fall 3 server controller-02.cloud.pd.infn.it 192.168.60.44:8774 check inter 2000 rise 2 fall 3 backend ec2api mode http balance source option tcpka option tcplog server controller-01.cloud.pd.infn.it 192.168.60.41:8773 check inter 2000 rise 2 fall 3 server controller-02.cloud.pd.infn.it 192.168.60.44:8773 check inter 2000 rise 2 fall 3 backend cinderapi 
mode http balance source option httpchk server controller-01.cloud.pd.infn.it 192.168.60.41:8776 check inter 2000 rise 2 fall 3 server controller-02.cloud.pd.infn.it 192.168.60.44:8776 check inter 2000 rise 2 fall 3 backend neutronapi mode http balance source option httpchk server controller-01.cloud.pd.infn.it 192.168.60.41:9696 check inter 2000 rise 2 fall 3 server controller-02.cloud.pd.infn.it 192.168.60.44:9696 check inter 2000 rise 2 fall 3 backend novnc mode http balance source server controller-01.cloud.pd.infn.it 192.168.60.41:6080 check inter 2000 rise 2 fall 3 server controller-02.cloud.pd.infn.it 192.168.60.44:6080 check inter 2000 rise 2 fall 3 backend dashboard mode http balance source server controller-01.cloud.pd.infn.it 192.168.60.41:80 check inter 2000 rise 2 fall 3 server controller-02.cloud.pd.infn.it 192.168.60.44:80 check inter 2000 rise 2 fall 3
… and restart the HAProxy daemon.
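After the restart, it is worth checking that the SSL frontends actually terminate TLS with the expected certificate. A quick sketch (`check_frontend` is our helper; the CA path is the one used in haproxy.cfg above):

```shell
# Fetch a URL through an HAProxy SSL frontend, validating the server
# certificate against the INFN CA, and print only the HTTP status code.
# check_frontend is a hypothetical helper for illustration.
check_frontend() {
    curl -s -o /dev/null -w '%{http_code}' \
         --cacert /etc/grid-security/certificates/INFN-CA-2006.pem "$1"
}
```

For example, `check_frontend https://cloud-areapd.pd.infn.it:5000/v2.0/` should print an HTTP status such as 200 rather than failing with a certificate error.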
Now login into one of the two controller nodes and, as a precaution, unset all the OS_* variables:
unset OS_USERNAME unset OS_TENANT_NAME unset OS_PASSWORD unset OS_AUTH_URL
To get back access to Keystone issue the following commands:
export SERVICE_TOKEN=`cat ~/ks_admin_token` export SERVICE_ENDPOINT=http://192.168.60.41:35357/v2.0
~/ks_admin_token
was created above during the very first Keystone configuration. Now change Keystone's endpoints:
KEYSTONE_SERVICE=$(keystone service-get keystone | grep ' id ' | awk '{print $4}') KEYSTONE_ENDPOINT=$(keystone endpoint-list | grep $KEYSTONE_SERVICE|awk '{print $2}') keystone endpoint-delete $KEYSTONE_ENDPOINT keystone endpoint-create --region regionOne --service-id $KEYSTONE_SERVICE --publicurl "https://cloud-areapd.pd.infn.it:\$(public_port)s/v2.0" --adminurl "https://cloud-areapd.pd.infn.it:\$(admin_port)s/v2.0" --internalurl "https://cloud-areapd.pd.infn.it:\$(public_port)s/v2.0"
Note: there is no need to restart keystone
because its own traffic is still unencrypted; encryption occurs only at the HAProxy frontend.
Change the keystone_admin.sh
you created above:
sed -i 's+export OS_AUTH_URL+#export OS_AUTH_URL+' $HOME/keystone_admin.sh echo "export OS_AUTH_URL=https://cloud-areapd.pd.infn.it:5000/v2.0/" >> $HOME/keystone_admin.sh echo "export OS_CACERT=/etc/grid-security/certificates/INFN-CA-2006.pem" >> $HOME/keystone_admin.sh
Note: there is no need to do anything on the second controller node. The endpoint with the https URL has been changed in the MySQL database; all this is transparent to the keystone
service.
unset SERVICE_TOKEN unset SERVICE_ENDPOINT source $HOME/keystone_admin.sh keystone user-list
Modify authentication parameters on both controller nodes:
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_host cloud-areapd.pd.infn.it openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_protocol https openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_uri https://cloud-areapd.pd.infn.it:35357/v2.0 openstack-config --set /etc/glance/glance-api.conf keystone_authtoken cafile /etc/grid-security/certificates/INFN-CA-2006.pem openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_host cloud-areapd.pd.infn.it openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_protocol https openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_uri https://cloud-areapd.pd.infn.it:35357/v2.0 openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken cafile /etc/grid-security/certificates/INFN-CA-2006.pem
Execute this on one controller node only (or where you have the file keystone_admin.sh
):
source ~/keystone_admin.sh
GLANCE_SERVICE=$(keystone service-get glance | grep ' id ' | awk '{print $4}')
GLANCE_ENDPOINT=$(keystone endpoint-list | grep $GLANCE_SERVICE | awk '{print $2}')
keystone endpoint-delete $GLANCE_ENDPOINT
keystone endpoint-create --service glance --publicurl "https://cloud-areapd.pd.infn.it:9292" --adminurl "https://cloud-areapd.pd.infn.it:9292" --internalurl "https://cloud-areapd.pd.infn.it:9292"
Restart Glance on both controller nodes:
service openstack-glance-api restart
service openstack-glance-registry restart
Modify authentication parameters on both controller nodes:
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_host cloud-areapd.pd.infn.it
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_protocol https
openstack-config --set /etc/nova/nova.conf keystone_authtoken cafile /etc/grid-security/certificates/INFN-CA-2006.pem
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_ca_certificates_file /etc/grid-security/certificates/INFN-CA-2006.pem
openstack-config --set /etc/nova/nova.conf DEFAULT cinder_ca_certificates_file /etc/grid-security/certificates/INFN-CA-2006.pem
openstack-config --set /etc/nova/nova.conf DEFAULT glance_host cloud-areapd.pd.infn.it
openstack-config --set /etc/nova/nova.conf DEFAULT glance_protocol https
openstack-config --set /etc/nova/nova.conf DEFAULT glance_api_servers https://cloud-areapd.pd.infn.it:9292
openstack-config --set /etc/nova/nova.conf DEFAULT glance_api_insecure true
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_url https://cloud-areapd.pd.infn.it:9696
openstack-config --set /etc/nova/api-paste.ini filter:authtoken auth_host cloud-areapd.pd.infn.it
openstack-config --set /etc/nova/api-paste.ini filter:authtoken auth_protocol https
openstack-config --set /etc/nova/api-paste.ini filter:authtoken auth_uri https://cloud-areapd.pd.infn.it:5000/v2.0
On one controller node only (or where you have keystone_admin.sh
):
NOVA_SERVICE=$(keystone service-get nova | grep ' id ' | awk '{print $4}')
NOVA_ENDPOINT=$(keystone endpoint-list | grep $NOVA_SERVICE | awk '{print $2}')
keystone endpoint-delete $NOVA_ENDPOINT
keystone endpoint-create --service-id $NOVA_SERVICE --publicurl https://cloud-areapd.pd.infn.it:8774/v2/%\(tenant_id\)s --adminurl https://cloud-areapd.pd.infn.it:8774/v2/%\(tenant_id\)s --internalurl https://cloud-areapd.pd.infn.it:8774/v2/%\(tenant_id\)s
NOVAEC2_SERVICE=$(keystone service-get nova_ec2 | grep ' id ' | awk '{print $4}')
NOVAEC2_ENDPOINT=$(keystone endpoint-list | grep $NOVAEC2_SERVICE | awk '{print $2}')
keystone endpoint-delete $NOVAEC2_ENDPOINT
keystone endpoint-create --service-id $NOVAEC2_SERVICE --publicurl https://cloud-areapd.pd.infn.it:8773/services/Cloud --adminurl https://cloud-areapd.pd.infn.it:8773/services/Cloud --internalurl https://cloud-areapd.pd.infn.it:8773/services/Cloud
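Here the parentheses in `%\(tenant_id\)s` are escaped only so the shell passes them through untouched; the literal string `%(tenant_id)s` ends up in the endpoint record, and Keystone substitutes the tenant id at request time. You can verify locally what the shell hands over:

```shell
# The escaped parentheses survive as literal characters; Keystone later
# replaces %(tenant_id)s with the requesting tenant's id.
echo https://cloud-areapd.pd.infn.it:8774/v2/%\(tenant_id\)s
# → https://cloud-areapd.pd.infn.it:8774/v2/%(tenant_id)s
```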
Restart Nova on both controller nodes:
service openstack-nova-api restart
service openstack-nova-cert restart
service openstack-nova-consoleauth restart
service openstack-nova-scheduler restart
service openstack-nova-conductor restart
service openstack-nova-novncproxy restart
Modify authentication parameters on both controller nodes:
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_protocol https
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_host cloud-areapd.pd.infn.it
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url https://cloud-areapd.pd.infn.it:35357/v2.0
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri https://cloud-areapd.pd.infn.it:35357/v2.0
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken cafile /etc/grid-security/certificates/INFN-CA-2006.pem
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_auth_url https://cloud-areapd.pd.infn.it:35357/v2.0
On one controller only (or where you have the keystone_admin.sh
):
NEUTRON_SERVICE=$(keystone service-get neutron | grep ' id ' | awk '{print $4}')
NEUTRON_ENDPOINT=$(keystone endpoint-list | grep $NEUTRON_SERVICE | awk '{print $2}')
keystone endpoint-delete $NEUTRON_ENDPOINT
keystone endpoint-create --service-id $NEUTRON_SERVICE --publicurl "https://cloud-areapd.pd.infn.it:9696" --adminurl "https://cloud-areapd.pd.infn.it:9696" --internalurl "https://cloud-areapd.pd.infn.it:9696"
Restart Neutron and Nova on both controller nodes (Nova must be restarted because its configuration file has changed):
service neutron-server restart
service openstack-nova-api restart
service openstack-nova-cert restart
service openstack-nova-consoleauth restart
service openstack-nova-scheduler restart
service openstack-nova-conductor restart
service openstack-nova-novncproxy restart
Modify authentication parameters on both controller nodes:
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_host cloud-areapd.pd.infn.it
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_protocol https
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_uri https://cloud-areapd.pd.infn.it:5000/v2.0
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken cafile /etc/grid-security/certificates/INFN-CA-2006.pem
On one controller only (or where you have the keystone_admin.sh
):
CINDER_SERVICE=$(keystone service-get cinder | grep ' id ' | awk '{print $4}')
CINDER_ENDPOINT=$(keystone endpoint-list | grep $CINDER_SERVICE | awk '{print $2}')
keystone endpoint-delete $CINDER_ENDPOINT
CINDER_SERVICE=$(keystone service-get cinderv2 | grep ' id ' | awk '{print $4}')
CINDER_ENDPOINT=$(keystone endpoint-list | grep $CINDER_SERVICE | awk '{print $2}')
keystone endpoint-delete $CINDER_ENDPOINT
keystone endpoint-create --service cinder --publicurl https://cloud-areapd.pd.infn.it:8776/v1/%\(tenant_id\)s --adminurl https://cloud-areapd.pd.infn.it:8776/v1/%\(tenant_id\)s --internalurl https://cloud-areapd.pd.infn.it:8776/v1/%\(tenant_id\)s
keystone endpoint-create --service cinderv2 --publicurl https://cloud-areapd.pd.infn.it:8776/v2/%\(tenant_id\)s --adminurl https://cloud-areapd.pd.infn.it:8776/v2/%\(tenant_id\)s --internalurl https://cloud-areapd.pd.infn.it:8776/v2/%\(tenant_id\)s
Restart Cinder on both controller nodes:
service openstack-cinder-api restart
service openstack-cinder-scheduler restart
Setup secure connection to Keystone:
sed -i 's+OPENSTACK_HOST = "192.168.60.40"+OPENSTACK_HOST = "cloud-areapd.pd.infn.it"+' /etc/openstack-dashboard/local_settings
sed -i 's+OPENSTACK_KEYSTONE_URL = "http:+OPENSTACK_KEYSTONE_URL = "https:+' /etc/openstack-dashboard/local_settings
sed -i 's+# OPENSTACK_SSL_CACERT.*+OPENSTACK_SSL_CACERT="/etc/grid-security/certificates/INFN-CA-2006.pem"+' /etc/openstack-dashboard/local_settings
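After these substitutions, the relevant lines of /etc/openstack-dashboard/local_settings should look roughly like the sketch below (the exact form of the OPENSTACK_KEYSTONE_URL line depends on the distribution's default file; surrounding settings are omitted):

```python
# Sketch of the expected result in /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "cloud-areapd.pd.infn.it"
OPENSTACK_KEYSTONE_URL = "https://%s:5000/v2.0" % OPENSTACK_HOST
OPENSTACK_SSL_CACERT="/etc/grid-security/certificates/INFN-CA-2006.pem"
```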
Prepare to patch Horizon's source files, or skip to "download and install Dashboard patched RPM":
yum install -y patch
curl -o os_auth_patch_01.diff https://raw.githubusercontent.com/CloudPadovana/SSL_Patches/master/os_auth_patch_01.diff
curl -o os_auth_patch_02.diff https://raw.githubusercontent.com/CloudPadovana/SSL_Patches/master/os_auth_patch_02.diff
curl -o os_auth_patch_03.diff https://raw.githubusercontent.com/CloudPadovana/SSL_Patches/master/os_auth_patch_03.diff
patch -R /usr/lib/python2.6/site-packages/openstack_auth/views.py < os_auth_patch_01.diff
patch -R /usr/lib/python2.6/site-packages/openstack_auth/backend.py < os_auth_patch_02.diff
patch -R /usr/lib/python2.6/site-packages/openstack_auth/user.py < os_auth_patch_03.diff
Alternatively, download and install the patched Dashboard RPM:
TODO... (waiting for an 'official' RPM repo)
Restart the Apache web server:
service httpd restart
To address this bug, apply this patch, or follow the instructions below:
curl -o agent.py https://raw.githubusercontent.com/CloudPadovana/SSL_Patches/master/agent.py
mv /usr/lib/python2.6/site-packages/neutron/agent/metadata/agent.py /usr/lib/python2.6/site-packages/neutron/agent/metadata/agent.py.bak
cp agent.py /usr/lib/python2.6/site-packages/neutron/agent/metadata/agent.py
service neutron-metadata-agent restart
See here