+ | ====== Installation and Configuration of OpenStack Controller Node ====== | ||
+ | |||
+ | Author: | ||
+ | * Alvise Dorigo (INFN Padova) | ||
+ | |||
+ | Changes: | ||
+ | * 26/03/2015 - Added new Havana repo URL to download from (and command to fix it) | ||
+ | * 19/01/2015 - Restrict IP listening address for '' | ||
+ | * 18/09/2014 - Port renumber: 8443 -> 443 | ||
+ | * 18/09/2014 - Added instructions to upgrade HAProxy to version 1.5.x on SL6/CentOS6 | ||
+ | * 17/09/2014 - Added setup for glance cache' | ||
+ | * 15/09/2014 - Added workaround configuration in '' | ||
+ | * 25/07/2014 - Added haproxy' | ||
+ | * 30/06/2014 - Added instructions to increase the limit of open files (to address a Dashboard' | ||
+ | * 19/06/2014 - Integrated Massimo' | ||
+ | * 12/06/2014 - Added setting of parameter '' | ||
+ | * 11/06/2014 - Added '' | ||
+ | * 28/05/2014 - Added (optional) procedure to configure Keystone/ | ||
+ | * 28/05/2014 - Added pre-requisite about installed INFN CA | ||
+ | * 27/05/2014 - Added crontab entry to periodically flush expired keystone' | ||
+ | * 26/05/2014 - Added '' | ||
+ | * 23/05/2014 - Added '' | ||
+ | * 20/05/2014 - Added rabbitmq HA configuration ('' | ||
+ | * 13/05/2014 - Added reference links to HA (Mirantis' | ||
+ | * 09/05/2014 - Added instruction for installation and configuration of Cinder API | ||
+ | * 06/05/2014 - Added instructions to setup the Dashboard using the HTTPS | ||
+ | * 06/05/2014 - Added certificate and key installation as pre-requisite, | ||
+ | * 06/05/2014 - Added installation of '' | ||
+ | * 05/05/2014 - Using public Virtual IP ('' | ||
+ | * 05/05/2014 - Using public Virtual IP ('' | ||
+ | * 05/05/2014 - Added '' | ||
+ | * 30/04/2014 - SELinux is to set to Disabled | ||
+ | * 29/04/2014 - Added RedHat' | ||
+ | * 29/04/2014 - Added configuration of EC2 interface | ||
+ | * 23/04/2014 - Using "'' | ||
+ | * 22/04/2014 - Modified some IPs to reflect the actual cloud test farm | ||
+ | * 13/04/2014 - Added " | ||
+ | * 06/03/2014 - Changed '' | ||
+ | * 06/03/2014 - Added '' | ||
+ | * 03/03/2014 - Added memcached TCP port to iptables filters | ||
+ | * 03/03/2014 - Added memcached cluster configuration to '' | ||
+ | * 03/03/2014 - As Glance does not support '' | ||
+ | * 03/03/2014 - Added '' | ||
+ | * 27/02/2014 - Added instruction for installation and setup of Horizon on the secondary node | ||
+ | * 26/02/2014 - Added guide for installation of Horizon(Dashboard) on the primary node | ||
+ | * 25/02/2014 - Added '' | ||
+ | * 25/02/2014 - Added '' | ||
+ | * 25/02/2014 - Commented out some deep rabbit configuration from '' | ||
+ | ===== Reference Links ===== | ||
+ | * [[http:// | ||
+ | * [[https:// | ||
+ | * [[progetti: | ||
+ | * [[http:// | ||
+ | * [[http:// | ||
+ | |||
+ | Note: The HAProxy link above refers to a configuration for the highly available MySQL cluster. Below, it is explained how to configure HAProxy also for the OpenStack services. | ||
+ | ===== Prerequisites ===== | ||
+ | |||
+ | Two nodes with: | ||
+ | * Updated SL6/CentOS6 (6.4 or 6.5) | ||
+ | * Make sure that yum autoupdate is disabled | ||
+ | <code bash> | ||
+ | root@controller-01 ~]# grep ENA / | ||
+ | # ENABLED | ||
+ | ENABLED=" | ||
+ | </ | ||
+ | * At least 20GB HD for operating system and OpenStack software and related log files | ||
+ | * Dedicated remote storage mounted on ''/ | ||
+ | * SELinux configured as " | ||
+ | * EPEL 6-8 | ||
+ | * A MySQL (possibly a HA cluster) endpoint each OpenStack service can connect to (in this guide we're using our MySQL Percona cluster' | ||
+ | * A HAProxy/ | ||
  * On the three nodes running HAProxy and on the controller nodes, the service certificate and key must be installed and owned by root, with the following permissions:
+ | <code bash> | ||
+ | [root@ha-proxy-01 ~]# ll / | ||
+ | total 8 | ||
+ | -rw-r--r-- 1 root root 1476 May 6 16:59 hostcert.pem | ||
+ | -rw------- 1 root root 916 May 6 16:59 hostkey.pem | ||
+ | </ | ||
  * The INFN CA certificate installed on all the nodes (OpenStack controller nodes and HAProxy nodes)
+ | <code bash> | ||
+ | [root@controller-01 ~]# ll / | ||
+ | -rw-r--r--. 1 root root 1257 Mar 24 04:17 / | ||
+ | </ | ||
+ | ===== Configure IP tables to allow traffic through relevant TCP ports on both nodes ===== | ||
+ | Execute the following commands: | ||
+ | <code bash> | ||
+ | # allow traffic toward rabbitmq server | ||
+ | iptables -A INPUT -p tcp -m tcp --dport 5672 -j ACCEPT | ||
+ | iptables -A INPUT -p tcp -m tcp --dport 4369 -j ACCEPT | ||
+ | iptables -A INPUT -p tcp -m tcp --dport 35197 -j ACCEPT | ||
+ | iptables -A INPUT -p tcp -m state --state NEW -m tcp --dport 9100:9110 -j ACCEPT | ||
+ | # allow traffic toward keystone | ||
+ | iptables -A INPUT -p tcp -m multiport --dports 5000,35357 -j ACCEPT | ||
+ | # allow traffic to glance-api | ||
+ | iptables -A INPUT -p tcp -m multiport --dports 9292 -j ACCEPT | ||
+ | # allow traffic to glance-registry | ||
+ | iptables -A INPUT -p tcp -m multiport --dports 9191 -j ACCEPT | ||
+ | # allow traffic to Nova EC2 API | ||
+ | iptables -A INPUT -p tcp -m multiport --dports 8773 -j ACCEPT | ||
+ | # allow traffic to Nova API | ||
+ | iptables -A INPUT -p tcp -m multiport --dports 8774 -j ACCEPT | ||
+ | # allow traffic to Nova Metadata server | ||
+ | iptables -A INPUT -p tcp -m multiport --dports 8775 -j ACCEPT | ||
+ | # allow traffic to Nova VNC proxy | ||
+ | iptables -A INPUT -p tcp -m multiport --dports 6080 -j ACCEPT | ||
+ | # allow traffic to Neutron Server | ||
+ | iptables -A INPUT -p tcp -m multiport --dports 9696 -j ACCEPT | ||
+ | # allow traffic to Dashboard | ||
+ | iptables -A INPUT -p tcp -m multiport --dports 80 -j ACCEPT | ||
+ | iptables -A INPUT -p tcp -m multiport --dports 443 -j ACCEPT | ||
+ | # allow traffic to memcached | ||
+ | iptables -A INPUT -p tcp -m multiport --dports 11211 -j ACCEPT | ||
+ | # allow traffic to Cinder API | ||
+ | iptables -A INPUT -p tcp -m multiport --dports 3260,8776 -j ACCEPT | ||
+ | # permit ntpd's udp communications | ||
+ | iptables -A INPUT -p udp -m state --state NEW -m udp --dport 123 -j ACCEPT | ||
+ | |||
+ | mv / | ||
+ | iptables-save > / | ||
+ | chkconfig iptables on | ||
+ | chkconfig ip6tables off | ||
+ | service iptables restart | ||
+ | |||
+ | </ | ||
The HAProxy nodes must also allow traffic through the same TCP ports:
+ | <code bash> | ||
+ | iptables -A INPUT -p tcp -m tcp --dport 5672 -j ACCEPT | ||
+ | iptables -A INPUT -p tcp -m tcp --dport 4369 -j ACCEPT | ||
+ | iptables -A INPUT -p tcp -m tcp --dport 35197 -j ACCEPT | ||
+ | iptables -A INPUT -p tcp -m state --state NEW -m tcp --dport 9100:9110 -j ACCEPT | ||
+ | iptables -A INPUT -p tcp -m multiport --dports 5000,35357 -j ACCEPT | ||
+ | iptables -A INPUT -p tcp -m multiport --dports 9292 -j ACCEPT | ||
+ | iptables -A INPUT -p tcp -m multiport --dports 9191 -j ACCEPT | ||
+ | iptables -A INPUT -p tcp -m multiport --dports 8773 -j ACCEPT | ||
+ | iptables -A INPUT -p tcp -m multiport --dports 8774 -j ACCEPT | ||
+ | iptables -A INPUT -p tcp -m multiport --dports 8775 -j ACCEPT | ||
+ | iptables -A INPUT -p tcp -m multiport --dports 6080 -j ACCEPT | ||
+ | iptables -A INPUT -p tcp -m multiport --dports 8776 -j ACCEPT | ||
+ | iptables -A INPUT -p tcp -m multiport --dports 9696 -j ACCEPT | ||
+ | iptables -A INPUT -p tcp -m multiport --dports 80 -j ACCEPT | ||
+ | iptables -A INPUT -p tcp -m multiport --dports 443 -j ACCEPT | ||
+ | iptables -A INPUT -p tcp -m multiport --dports 8080 -j ACCEPT | ||
+ | iptables -A INPUT -p tcp -m multiport --dports 8004 -j ACCEPT | ||
+ | iptables -A INPUT -p tcp -m multiport --dports 8000 -j ACCEPT | ||
+ | # permit ntpd's udp communications | ||
+ | iptables -A INPUT -p udp -m state --state NEW -m udp --dport 123 -j ACCEPT | ||
+ | |||
+ | mv / | ||
+ | iptables-save > / | ||
+ | chkconfig iptables on | ||
+ | chkconfig ip6tables off | ||
+ | service iptables restart | ||
+ | </ | ||
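A quick sanity check that the rules are active and will survive a reboot (a minimal sketch, assuming the rules were saved to the standard ''/etc/sysconfig/iptables'' location):
<code bash>
# list the active INPUT rules and make sure the OpenStack ports are present
iptables -nL INPUT | grep -E '5000|35357|9292|9191|8774|443'
# the same ports must also appear in the saved rules file
grep -E '5000|35357|9292' /etc/sysconfig/iptables
</code>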
+ | ===== Configure HAProxy ===== | ||
The HAProxy nodes run the haproxy and keepalived daemons. HAProxy redirects connections from the external world to the controller nodes (users who want to connect to glance/
+ | |||
This guide assumes (as mentioned above) that HAProxy has already been configured for the MySQL cluster, so only the additional part for OpenStack is shown here.
+ | |||
Log into the HAProxy node(s) and add the following lines to the file(s) ''/
+ | <code bash> | ||
+ | global | ||
+ | chroot | ||
+ | log 127.0.0.1 | ||
+ | log 127.0.0.1 | ||
+ | maxconn 4096 | ||
+ | uid 188 | ||
+ | gid 188 | ||
+ | daemon | ||
+ | tune.ssl.default-dh-param 4096 | ||
+ | tune.maxrewrite 65536 | ||
+ | tune.bufsize 65536 | ||
+ | | ||
+ | defaults | ||
+ | log global | ||
+ | maxconn | ||
+ | option | ||
+ | retries | ||
+ | timeout | ||
+ | timeout | ||
+ | timeout | ||
+ | timeout | ||
+ | timeout | ||
+ | timeout | ||
+ | |||
+ | listen dashboard_public_ssl | ||
+ | bind 90.147.77.40: | ||
+ | balance source | ||
+ | option | ||
+ | option | ||
+ | server controller-01.cloud.pd.infn.it 192.168.60.41: | ||
+ | server controller-02.cloud.pd.infn.it 192.168.60.44: | ||
+ | | ||
+ | listen dashboard_public | ||
+ | bind 90.147.77.40: | ||
+ | balance source | ||
+ | option | ||
+ | option | ||
+ | server controller-01.cloud.pd.infn.it 192.168.60.41: | ||
+ | server controller-02.cloud.pd.infn.it 192.168.60.44: | ||
+ | | ||
+ | listen vnc | ||
+ | bind 192.168.60.40: | ||
+ | balance source | ||
+ | option | ||
+ | option | ||
+ | server | ||
+ | server | ||
+ | |||
+ | listen vnc_public | ||
+ | bind 90.147.77.40: | ||
+ | balance source | ||
+ | option | ||
+ | option | ||
+ | server controller-01.cloud.pd.infn.it 192.168.60.41: | ||
+ | server controller-02.cloud.pd.infn.it 192.168.60.44: | ||
+ | |||
+ | listen keystone_auth_public | ||
+ | bind 90.147.77.40: | ||
+ | balance source | ||
+ | option | ||
+ | option | ||
+ | option | ||
+ | server controller-01.cloud.pd.infn.it 192.168.60.41: | ||
+ | server controller-02.cloud.pd.infn.it 192.168.60.44: | ||
+ | |||
+ | listen keystone_api_public | ||
+ | bind 90.147.77.40: | ||
+ | balance | ||
+ | option | ||
+ | option | ||
+ | option | ||
+ | server controller-01.cloud.pd.infn.it 192.168.60.41: | ||
+ | server controller-02.cloud.pd.infn.it 192.168.60.44: | ||
+ | |||
+ | listen keystone_auth | ||
+ | bind 192.168.60.40: | ||
+ | balance source | ||
+ | option | ||
+ | option | ||
+ | option | ||
+ | server controller-01.cloud.pd.infn.it 192.168.60.41: | ||
+ | server controller-02.cloud.pd.infn.it 192.168.60.44: | ||
+ | |||
+ | listen keystone_api | ||
+ | bind 192.168.60.40: | ||
+ | balance source | ||
+ | option | ||
+ | option | ||
+ | option | ||
+ | server controller-01.cloud.pd.infn.it 192.168.60.41: | ||
+ | server controller-02.cloud.pd.infn.it 192.168.60.44: | ||
+ | |||
+ | listen glance_api | ||
+ | bind 192.168.60.40: | ||
+ | balance source | ||
+ | option | ||
+ | option | ||
+ | option | ||
+ | server controller-01.cloud.pd.infn.it 192.168.60.41: | ||
+ | server controller-02.cloud.pd.infn.it 192.168.60.44: | ||
+ | |||
+ | listen glance_api_public | ||
+ | bind 90.147.77.40: | ||
+ | balance source | ||
+ | option | ||
+ | option | ||
+ | option | ||
+ | server controller-01.cloud.pd.infn.it 192.168.60.41: | ||
+ | server controller-02.cloud.pd.infn.it 192.168.60.44: | ||
+ | |||
+ | listen glance_registry | ||
+ | bind 192.168.60.40: | ||
+ | balance source | ||
+ | option | ||
+ | option | ||
+ | server controller-01.cloud.pd.infn.it 192.168.60.41: | ||
+ | server controller-02.cloud.pd.infn.it 192.168.60.44: | ||
+ | |||
+ | listen novaec2-api | ||
+ | bind 192.168.60.40: | ||
+ | balance source | ||
+ | option | ||
+ | option | ||
+ | server | ||
+ | server | ||
+ | |||
+ | listen novaec2-api_public | ||
+ | bind 90.147.77.40: | ||
+ | balance source | ||
+ | option | ||
+ | option | ||
+ | server | ||
+ | server | ||
+ | |||
+ | listen nova-api | ||
+ | bind 192.168.60.40: | ||
+ | balance source | ||
+ | option | ||
+ | option | ||
+ | option | ||
+ | server | ||
+ | server | ||
+ | |||
+ | listen nova-api_public | ||
+ | bind 90.147.77.40: | ||
+ | balance source | ||
+ | option | ||
+ | option | ||
+ | option | ||
+ | server | ||
+ | server | ||
+ | |||
+ | listen nova-metadata | ||
+ | bind 192.168.60.40: | ||
+ | balance source | ||
+ | option | ||
+ | option | ||
+ | server | ||
+ | server | ||
+ | |||
+ | listen nova-metadata_public | ||
+ | bind 90.147.77.40: | ||
+ | balance source | ||
+ | option | ||
+ | option | ||
+ | server | ||
+ | server | ||
+ | |||
+ | listen cinder-api_public | ||
+ | bind 90.147.77.40: | ||
+ | balance source | ||
+ | option | ||
+ | option | ||
+ | option | ||
+ | server | ||
+ | server | ||
+ | |||
+ | |||
+ | listen neutron-server | ||
+ | bind 192.168.60.40: | ||
+ | balance source | ||
+ | option | ||
+ | option | ||
+ | option | ||
+ | server | ||
+ | server | ||
+ | |||
+ | listen neutron-server_public | ||
+ | bind 90.147.77.40: | ||
+ | balance source | ||
+ | option | ||
+ | option | ||
+ | option | ||
+ | server | ||
+ | server | ||
+ | |||
+ | listen rabbitmq-server | ||
+ | bind 192.168.60.40: | ||
+ | balance roundrobin | ||
+ | mode tcp | ||
+ | server | ||
+ | server | ||
+ | |||
+ | listen epmd | ||
+ | bind 192.168.60.40: | ||
+ | balance roundrobin | ||
+ | server | ||
+ | server | ||
+ | |||
+ | listen memcached_cluster | ||
+ | bind 192.168.60.40: | ||
+ | balance source | ||
+ | option | ||
+ | option | ||
+ | server | ||
+ | server | ||
+ | |||
+ | </ | ||
Check the syntax of the file you've just modified:
+ | <code bash> | ||
+ | [root@ha-proxy-04 haproxy]# haproxy -c -f / | ||
+ | Configuration file is valid | ||
+ | </ | ||
+ | Restart HAProxy: | ||
+ | <code bash> | ||
+ | service haproxy restart | ||
+ | </ | ||
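A minimal sketch to confirm the new frontends are actually up (the IP/port values come from the configuration above; adapt them to your setup):
<code bash>
# check that HAProxy is listening on the balanced frontend ports
netstat -lntp | grep haproxy
# optionally probe one of the balanced endpoints through the management VIP
curl -s -o /dev/null -w "%{http_code}\n" http://192.168.60.40:5000/
</code>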
===== Create databases and users =====
+ | Login into the MySQL node. | ||
+ | |||
+ | Remove previously created users and databases, if any: | ||
+ | <code bash> | ||
+ | mysql -u root | ||
+ | drop database if exists keystone; | ||
+ | drop database if exists glance; | ||
+ | drop database if exists nova; | ||
+ | drop database if exists neutron; | ||
+ | drop database if exists cinder; | ||
+ | drop user ' | ||
+ | drop user ' | ||
+ | drop user ' | ||
+ | drop user ' | ||
+ | drop user ' | ||
+ | drop user ' | ||
+ | drop user ' | ||
+ | drop user ' | ||
+ | drop user ' | ||
+ | drop user ' | ||
+ | flush privileges; | ||
+ | quit | ||
+ | </ | ||
+ | Create database and grant users: | ||
+ | <code bash> | ||
+ | mysql -u root | ||
+ | CREATE DATABASE keystone; | ||
+ | GRANT ALL ON keystone.* TO ' | ||
+ | GRANT ALL ON keystone.* TO ' | ||
+ | CREATE DATABASE glance; | ||
+ | GRANT ALL ON glance.* TO ' | ||
+ | GRANT ALL ON glance.* TO ' | ||
+ | CREATE DATABASE nova; | ||
+ | GRANT ALL ON nova.* TO ' | ||
+ | GRANT ALL ON nova.* TO ' | ||
+ | CREATE DATABASE neutron; | ||
+ | GRANT ALL ON neutron.* TO ' | ||
+ | GRANT ALL ON neutron.* TO ' | ||
+ | CREATE DATABASE cinder; | ||
+ | GRANT ALL ON cinder.* TO ' | ||
+ | GRANT ALL ON cinder.* TO ' | ||
+ | FLUSH PRIVILEGES; | ||
+ | commit; | ||
+ | quit | ||
+ | </ | ||
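For reference, each pair of truncated ''GRANT'' statements above typically has the following shape; the password and the allowed client hosts are placeholders to be replaced with your own values:
<code bash>
# illustrative only: KEYSTONE_DBPASS and the '%' wildcard host are placeholders
GRANT ALL ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'KEYSTONE_DBPASS';
GRANT ALL ON keystone.* TO 'keystone'@'%'         IDENTIFIED BY 'KEYSTONE_DBPASS';
</code>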
+ | Logout from MySQL node. | ||
+ | ===== Naming conventions and networking assumptions ===== | ||
+ | We assume that the controller nodes have the following setup: | ||
+ | * They have 2 network interfaces connected to two different networks: **management network**, **public network** | ||
+ | * **Management network** is: '' | ||
+ | * **Public network** (also called " | ||
+ | * First node is named: '' | ||
+ | * Second node is named: '' | ||
+ | |||
+ | ===== Install OpenStack software on both nodes ===== | ||
+ | First install the YUM repo from RDO: | ||
+ | <code bash> | ||
+ | yum install -y http:// | ||
+ | </ | ||
When the support to Havana is decommissioned, install the repository package from its end-of-life (EOL) location and fix the repository URL as follows:
+ | <code bash> | ||
+ | yum -y install https:// | ||
+ | sed -i ' | ||
+ | </ | ||
+ | Install the packages for Keystone, Glance, Nova, Neutron and Horizon(Dashboard): | ||
+ | <code bash> | ||
+ | yum install -y openstack-keystone python-keystoneclient openstack-utils \ | ||
+ | | ||
+ | | ||
+ | | ||
+ | | ||
+ | | ||
+ | </ | ||
+ | ===== Configure Keystone on primary node ===== | ||
+ | Apply a workaround to a known bug (see this [[http:// | ||
+ | <code bash> | ||
+ | openstack-config --set / | ||
+ | </ | ||
+ | Proceed with Keystone setup: | ||
+ | <code bash> | ||
+ | export SERVICE_TOKEN=$(openssl rand -hex 10) | ||
+ | echo $SERVICE_TOKEN > ~/ | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | keystone-manage pki_setup --keystone-user keystone --keystone-group keystone | ||
+ | chown -R keystone: | ||
+ | su keystone -s /bin/sh -c " | ||
+ | </ | ||
+ | |||
+ | With a recent update to havana-9, the last command ('' | ||
+ | <code bash> | ||
+ | 2014-07-23 08: | ||
+ | </ | ||
+ | Just re-execute the command once again. | ||
+ | |||
+ | Start Keystone: | ||
+ | |||
+ | <code bash> | ||
+ | service openstack-keystone start | ||
+ | chkconfig openstack-keystone on | ||
+ | </ | ||
+ | |||
+ | Get access to Keystone and create the admin user and tenant: | ||
+ | <code bash> | ||
+ | export SERVICE_TOKEN=`cat ~/ | ||
+ | export SERVICE_ENDPOINT=http:// | ||
+ | keystone service-create --name=keystone --type=identity --description=" | ||
+ | keystone endpoint-create --service keystone --publicurl http:// | ||
+ | keystone user-create --name admin --pass ADMIN_PASS | ||
+ | keystone role-create --name admin | ||
+ | keystone tenant-create --name admin | ||
+ | keystone role-create --name Member | ||
+ | keystone user-role-add --user admin --role admin --tenant admin | ||
+ | \rm -f $HOME/ | ||
+ | echo " | ||
+ | echo " | ||
+ | echo " | ||
+ | echo " | ||
+ | keystone tenant-create --name services --description " | ||
+ | </ | ||
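The ''echo'' lines above build the ''keystone_admin.sh'' credential file. As a sketch of what it typically ends up containing (the password and the VIP address are placeholders following this guide's conventions):
<code bash>
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://90.147.77.40:5000/v2.0/
</code>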
+ | ==== Check it ==== | ||
In order to check that the Keystone service is correctly installed, copy the ''
+ | <code bash> | ||
+ | $ keystone user-list | ||
+ | +----------------------------------+-------+---------+-------+ | ||
+ | | id | name | enabled | email | | ||
+ | +----------------------------------+-------+---------+-------+ | ||
+ | | c91e623581374e7397c30a85f7a3e462 | admin | | ||
+ | +----------------------------------+-------+---------+-------+ | ||
+ | </ | ||
+ | |||
+ | ==== Setup recurring token flush ==== | ||
+ | It's better to do this on both controller nodes. | ||
+ | |||
+ | See origin of the problem [[http:// | ||
+ | |||
+ | Create the file ''/ | ||
+ | <code bash> | ||
+ | cat << EOF >> / | ||
+ | #!/bin/sh | ||
+ | logger -t keystone-cleaner " | ||
+ | / | ||
+ | logger -t keystone-cleaner " | ||
+ | EOF | ||
+ | </ | ||
+ | Create the file ''/ | ||
+ | <code bash> | ||
+ | cat << EOF >> / | ||
+ | compress | ||
+ | |||
+ | / | ||
+ | weekly | ||
+ | rotate 4 | ||
+ | missingok | ||
+ | compress | ||
+ | minsize 100k | ||
+ | } | ||
+ | EOF | ||
+ | </ | ||
+ | Execute: | ||
+ | <code bash> | ||
+ | cat << EOF > / | ||
+ | 0 7 * * * root / | ||
+ | EOF | ||
+ | |||
+ | chmod +x / | ||
+ | chmod 0644 / | ||
+ | </ | ||
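To make sure the cleanup works before waiting for cron, you can run the flush by hand and watch the syslog messages emitted by the wrapper script (a quick test, assuming the wrapper calls ''keystone-manage token_flush''):
<code bash>
# run the flush once as the keystone user
su keystone -s /bin/sh -c "keystone-manage token_flush"
# the cron wrapper logs through 'logger -t keystone-cleaner'; check its messages
grep keystone-cleaner /var/log/messages | tail
</code>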
+ | ===== Configure RabbitMQ message broker on primary node ===== | ||
+ | Define the TCP port range allowed for inter-node communication (this is needed for cluster mode of RabbitMQ) | ||
+ | <code bash> | ||
+ | \rm -f / | ||
+ | cat << EOF >> / | ||
+ | [{kernel, [ {inet_dist_listen_min, | ||
+ | EOF | ||
+ | </ | ||
+ | Start and enable Rabbit | ||
+ | <code bash> | ||
+ | service rabbitmq-server start | ||
+ | chkconfig rabbitmq-server on | ||
+ | </ | ||
+ | ==== Check it ==== | ||
+ | You should see an output like this in the file ''/ | ||
+ | <code bash> | ||
+ | RabbitMQ 3.1.5. Copyright (C) 2007-2013 GoPivotal, Inc. | ||
+ | ## ## Licensed under the MPL. See http:// | ||
+ | ## ## | ||
+ | ########## | ||
+ | ###### | ||
+ | ########## | ||
+ | Starting broker... completed with 0 plugins. | ||
+ | </ | ||
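You can also query the broker directly (a quick check, assuming the service started cleanly):
<code bash>
# the broker should report itself as running and, for now, as a single-node cluster
rabbitmqctl status | head -20
rabbitmqctl cluster_status
</code>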
+ | |||
+ | ===== Configure Glance on primary node ===== | ||
+ | Login into the primary controller node, or wherever you've installed the Keystone' | ||
+ | <code bash> | ||
+ | source keystone_admin.sh | ||
+ | </ | ||
+ | Then create the Glance user and image service in the Keystone' | ||
+ | <code bash> | ||
+ | keystone user-create --name glance --pass GLANCE_PASS | ||
+ | keystone user-role-add --user glance --role admin --tenant services | ||
+ | keystone service-create --name glance --type image --description " | ||
+ | keystone endpoint-create --service glance --publicurl " | ||
+ | </ | ||
+ | Login into the primary controller node, modify the relevant configuration files: | ||
+ | |||
+ | **glance-api.conf** | ||
+ | <code bash> | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | # TO CHANGE IN FUTURE (IceHouse) when they' | ||
+ | openstack-config --set / | ||
+ | # The following parameter should equals the CPU number | ||
+ | openstack-config --set / | ||
+ | </ | ||
+ | **glance-registry.conf** | ||
+ | <code bash> | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | </ | ||
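Since the ''openstack-config --set'' lines above are long, it is easy to mistype a section or key; ''openstack-config'' also supports ''--get'', so you can read a value back to be sure it landed where you expect (the keys shown below are only examples):
<code bash>
# read back a couple of values to verify the edits (example keys)
openstack-config --get /etc/glance/glance-api.conf DEFAULT workers
openstack-config --get /etc/glance/glance-registry.conf keystone_authtoken admin_user
</code>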
+ | While still logged into the primary controller node, prepare the paths: | ||
+ | <code bash> | ||
+ | mkdir -p / | ||
+ | chown -R glance / | ||
+ | chown -R glance / | ||
+ | chown -R glance: | ||
+ | </ | ||
... and initialize the Glance database:
+ | <code bash> | ||
+ | su glance -s /bin/sh -c " | ||
+ | </ | ||
+ | If you get an error like this: | ||
+ | <code bash> | ||
+ | 2013-12-20 09: | ||
+ | 2013-12-20 09: | ||
+ | 2013-12-20 09: | ||
+ | </ | ||
+ | just execute the same command once again. | ||
+ | |||
To prevent unprivileged users from registering public images, add the following policy in ''/
+ | <code bash> | ||
+ | " | ||
+ | </ | ||
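The policy line is truncated above; a common way to write it, assuming image publishing should be reserved to the admin role, is:
<code bash>
"publicize_image": "role:admin",
</code>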
+ | |||
+ | Always sitting on the primary controller node, start and enable the Glance services: | ||
+ | <code bash> | ||
+ | service openstack-glance-registry start | ||
+ | service openstack-glance-api start | ||
+ | chkconfig openstack-glance-registry on | ||
+ | chkconfig openstack-glance-api on | ||
+ | </ | ||
+ | ... and finally create the credential file for glance | ||
+ | <code bash> | ||
+ | cat << EOF > glancerc | ||
+ | export OS_USERNAME=glance | ||
+ | export OS_TENANT_NAME=services | ||
+ | export OS_PASSWORD=GLANCE_PASS | ||
+ | export OS_AUTH_URL=http:// | ||
+ | EOF | ||
+ | </ | ||
+ | You can copy the credential file to any machine you like, where you've installed the Python Glance' | ||
+ | |||
+ | === Check it === | ||
+ | In order to check that Glance is correctly installed, login into any machines where you've installed the Glance' | ||
+ | <code bash> | ||
+ | [root@lxadorigo ~]# wget http:// | ||
+ | [...] | ||
+ | Saving to: “cirros-0.3.1-x86_64-disk.img” | ||
+ | [...] | ||
+ | 2013-12-06 12:25:03 (3.41 MB/s) - “cirros-0.3.1-x86_64-disk.img” saved [13147648/ | ||
+ | |||
+ | [root@lxadorigo ~]# glance image-create --name=cirros --disk-format=qcow2 --container-format=bare --is-public=True < cirros-0.3.1-x86_64-disk.img | ||
+ | +------------------+--------------------------------------+ | ||
+ | | Property | ||
+ | +------------------+--------------------------------------+ | ||
+ | | checksum | ||
+ | | container_format | bare | | ||
+ | | created_at | ||
+ | | deleted | ||
+ | | deleted_at | ||
+ | | disk_format | ||
+ | | id | 7cc84fd0-fa20-485f-86e9-c0d4015bacd5 | | ||
+ | | is_public | ||
+ | | min_disk | ||
+ | | min_ram | ||
+ | | name | cirros | ||
+ | | owner | 4d7df634c2a7445c975c4fabcaced0e0 | ||
+ | | protected | ||
+ | | size | 13147648 | ||
+ | | status | ||
+ | | updated_at | ||
+ | +------------------+--------------------------------------+ | ||
+ | [root@lxadorigo ~]# glance index | ||
+ | ID | ||
+ | ------------------------------------ ------------------------------ -------------------- -------------------- -------------- | ||
+ | 7cc84fd0-fa20-485f-86e9-c0d4015bacd5 cirros | ||
+ | </ | ||
=== Setup Glance cache's periodic cleanup ===
Glance can use a lot of disk space to cache images; this cache needs to be cleaned periodically to avoid running out of disk space.
On both controller nodes, edit the file ''/
+ | <code bash> | ||
+ | cat << EOF > / | ||
+ | 0 8 * * * root / | ||
+ | EOF | ||
+ | |||
+ | </ | ||
Then create the logrotate configuration for the cleaner script's log:
+ | <code bash> | ||
+ | cat << EOF > / | ||
+ | compress | ||
+ | |||
+ | / | ||
+ | weekly | ||
+ | rotate 4 | ||
+ | missingok | ||
+ | compress | ||
+ | minsize 100k | ||
+ | } | ||
+ | EOF | ||
+ | |||
+ | </ | ||
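The crontab entry above calls a small cleaner script whose path is truncated on this page. A minimal sketch of such a script, based on the ''glance-cache-pruner'' and ''glance-cache-cleaner'' tools shipped with Glance (the script and log file paths are assumptions):
<code bash>
cat << EOF > /usr/local/bin/glance-cache-cleaner.sh
#!/bin/sh
# prune the image cache down to the configured size and drop invalid/stalled entries
logger -t glance-cache-cleaner "Starting glance cache cleanup"
/usr/bin/glance-cache-pruner  >> /var/log/glance/cache-cleaner.log 2>&1
/usr/bin/glance-cache-cleaner >> /var/log/glance/cache-cleaner.log 2>&1
logger -t glance-cache-cleaner "Ending glance cache cleanup"
EOF
chmod +x /usr/local/bin/glance-cache-cleaner.sh
</code>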
+ | ===== Configure Nova on primary node ===== | ||
+ | Login into the primary controller node, or wherever you've installed the Keystone' | ||
+ | <code bash> | ||
+ | source keystone_admin.sh | ||
+ | </ | ||
+ | Add NOVA service, user and endpoint to Keystone' | ||
+ | <code bash> | ||
+ | keystone user-create --name nova --pass NOVA_PASS | ||
+ | keystone user-role-add --user nova --role admin --tenant services | ||
+ | keystone service-create --name nova --type compute --description " | ||
+ | SERVICE_NOVA_ID=`keystone service-list|grep nova|awk ' | ||
+ | keystone endpoint-create --service-id $SERVICE_NOVA_ID \ | ||
+ | | ||
+ | | ||
+ | | ||
+ | |||
+ | keystone service-create --name nova_ec2 --type ec2 --description "EC2 Service" | ||
+ | SERVICE_EC2_ID=`keystone service-list|grep nova_ec2|awk ' | ||
+ | keystone endpoint-create --service-id $SERVICE_EC2_ID \ | ||
+ | | ||
+ | | ||
+ | | ||
+ | </ | ||
+ | Login into the primary controller node and modify the relevant configuration files: | ||
+ | |||
+ | **nova.conf: | ||
+ | <code bash> | ||
+ | openstack-config --set / | ||
+ | database connection " | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | DEFAULT rpc_backend nova.openstack.common.rpc.impl_kombu | ||
+ | openstack-config --set / | ||
+ | DEFAULT rabbit_hosts 192.168.60.41: | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | |||
+ | </ | ||
+ | **api-paste.ini: | ||
+ | <code bash> | ||
+ | openstack-config --set / | ||
+ | filter: | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | </ | ||
+ | While still logged into the primary controller node, initialize the database: | ||
+ | <code bash> | ||
+ | su nova -s /bin/sh -c " | ||
+ | </ | ||
+ | Modify the file ''/ | ||
+ | <code bash> | ||
+ | # diff -c / | ||
+ | *** / | ||
+ | --- / | ||
+ | *************** | ||
+ | *** 1,8 **** | ||
+ | { | ||
+ | " | ||
+ | " | ||
+ | ! " | ||
+ | ! " | ||
+ | |||
+ | " | ||
+ | |||
+ | --- 1,7 ---- | ||
+ | { | ||
+ | " | ||
+ | " | ||
+ | ! " | ||
+ | |||
+ | " | ||
+ | |||
+ | *************** | ||
+ | *** 10,16 **** | ||
+ | " | ||
+ | " | ||
+ | " | ||
+ | - " | ||
+ | " | ||
+ | " | ||
+ | " | ||
+ | --- 9,14 ---- | ||
+ | </ | ||
+ | ... and turn ON and enable the nova services: | ||
+ | <code bash> | ||
+ | service openstack-nova-api start | ||
+ | service openstack-nova-cert start | ||
+ | service openstack-nova-consoleauth start | ||
+ | service openstack-nova-scheduler start | ||
+ | service openstack-nova-conductor start | ||
+ | service openstack-nova-novncproxy start | ||
+ | |||
+ | chkconfig openstack-nova-api on | ||
+ | chkconfig openstack-nova-cert on | ||
+ | chkconfig openstack-nova-consoleauth on | ||
+ | chkconfig openstack-nova-scheduler on | ||
+ | chkconfig openstack-nova-conductor on | ||
+ | chkconfig openstack-nova-novncproxy on | ||
+ | </ | ||
+ | |||
+ | ==== Check it ==== | ||
+ | From your desktop, or wherever you've copied the '' | ||
+ | <code bash> | ||
+ | bash-4.1$ nova service-list | ||
+ | +------------------+--------------------------------+----------+---------+-------+----------------------------+-----------------+ | ||
+ | | Binary | ||
+ | +------------------+--------------------------------+----------+---------+-------+----------------------------+-----------------+ | ||
+ | | nova-consoleauth | controller-01.cloud.pd.infn.it | internal | enabled | up | 2014-02-17T14: | ||
+ | | nova-conductor | ||
+ | | nova-scheduler | ||
+ | | nova-cert | ||
+ | +------------------+--------------------------------+----------+---------+-------+----------------------------+-----------------+ | ||
+ | |||
+ | bash-4.1$ nova availability-zone-list | ||
+ | +-----------------------------------+----------------------------------------+ | ||
+ | | Name | Status | ||
+ | +-----------------------------------+----------------------------------------+ | ||
+ | | internal | ||
+ | | |- controller-01.cloud.pd.infn.it | | | ||
+ | | | |- nova-conductor | ||
+ | | | |- nova-consoleauth | ||
+ | | | |- nova-scheduler | ||
+ | | | |- nova-cert | ||
+ | +-----------------------------------+----------------------------------------+ | ||
+ | |||
+ | bash-4.1$ nova endpoints | ||
+ | +-------------+----------------------------------+ | ||
+ | | glance | ||
+ | +-------------+----------------------------------+ | ||
+ | | adminURL | ||
+ | | region | ||
+ | | publicURL | ||
+ | | internalURL | http:// | ||
+ | | id | 62364ed9384d4231b09841901d415e5a | | ||
+ | +-------------+----------------------------------+ | ||
+ | +-------------+---------------------------------------------------------------+ | ||
+ | | nova | Value | | ||
+ | +-------------+---------------------------------------------------------------+ | ||
+ | | adminURL | ||
+ | | region | ||
+ | | id | 190cb5922b2f4ede868a328003422322 | ||
+ | | serviceName | nova | | ||
+ | | internalURL | http:// | ||
+ | | publicURL | ||
+ | +-------------+---------------------------------------------------------------+ | ||
+ | +-------------+----------------------------------+ | ||
+ | | keystone | ||
+ | +-------------+----------------------------------+ | ||
+ | | adminURL | ||
+ | | region | ||
+ | | publicURL | ||
+ | | internalURL | http:// | ||
+ | | id | 511771b79f3946c5901c48d72e9a324c | | ||
+ | +-------------+----------------------------------+ | ||
+ | </ | ||
Even better, try the above commands from your desktop after sourcing the ''
==== Create the nova user's keypair and distribute it to the other nodes ====
+ | <code bash> | ||
+ | usermod -s /bin/bash nova | ||
+ | mkdir -p -m 700 ~nova/.ssh | ||
+ | chown nova.nova ~nova/.ssh | ||
+ | su - nova | ||
+ | cd .ssh | ||
+ | ssh-keygen -f id_rsa -b 1024 -P "" | ||
+ | cp id_rsa.pub authorized_keys | ||
+ | |||
+ | cat << EOF >> config | ||
+ | Host * | ||
+ | | ||
+ | | ||
+ | EOF | ||
+ | </ | ||
+ | Distribute the content of '' | ||
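A hedged example of how the distribution could be done with ''scp'' (the compute node names are placeholders; ''/var/lib/nova'' is the nova user's home on RDO installations):
<code bash>
# copy the nova user's SSH material to every compute node (hostnames are examples)
for host in compute-01.cloud.pd.infn.it compute-02.cloud.pd.infn.it; do
    scp -r ~nova/.ssh root@$host:/var/lib/nova/
    ssh root@$host 'chown -R nova:nova /var/lib/nova/.ssh && chmod 700 /var/lib/nova/.ssh'
done
</code>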
+ | ===== Configure Neutron on primary node ===== | ||
+ | Login into the primary controller node, or wherever you've installed the Keystone' | ||
+ | <code bash> | ||
+ | source ~/ | ||
+ | </ | ||
+ | Then, create the endpoint, service and user information in the Keystone' | ||
+ | <code bash> | ||
+ | keystone user-create --name neutron --pass NEUTRON_PASS | ||
+ | keystone user-role-add --user neutron --role admin --tenant services | ||
+ | keystone service-create --name neutron --type network --description " | ||
+ | SERVICE_NEUTRON_ID=`keystone service-list|grep neutron|awk ' | ||
+ | keystone endpoint-create --service-id $SERVICE_NEUTRON_ID \ | ||
+ | | ||
+ | | ||
+ | | ||
+ | </ | ||
+ | |||
+ | Login into the primary controller node and modify the configuration files. | ||
+ | |||
+ | **neutron.conf: | ||
+ | <code bash> | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | keystone_authtoken auth_url http:// | ||
+ | openstack-config --set / | ||
+ | keystone_authtoken auth_uri http:// | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | DEFAULT rpc_backend neutron.openstack.common.rpc.impl_kombu | ||
+ | openstack-config --set / | ||
+ | DEFAULT rabbit_hosts 192.168.60.41: | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | DEFAULT core_plugin neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2 | ||
+ | openstack-config --set / | ||
+ | agent root_helper "sudo neutron-rootwrap / | ||
+ | openstack-config --set / | ||
+ | database connection " | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | </ | ||
+ | **api-paste.ini: | ||
+ | <code bash> | ||
+ | openstack-config --set / | ||
+ | filter: | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | </ | ||
+ | **ovs_neutron_plugin.ini: | ||
+ | <code bash> | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | securitygroup firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver | ||
+ | </ | ||
+ | **nova.conf: | ||
+ | <code bash> | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | linuxnet_interface_driver nova.network.linux_net.LinuxOVSInterfaceDriver | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | </ | ||
+ | While still logged into the primary controller node, configure the OVS plugin. | ||
+ | <code bash> | ||
+ | cd / | ||
+ | ln -s plugins/ | ||
+ | cd - | ||
+ | </ | ||
+ | ... and restart NOVA's services (as you've just modified its configuration file) | ||
+ | <code bash> | ||
+ | service openstack-nova-api restart | ||
+ | service openstack-nova-scheduler restart | ||
+ | service openstack-nova-conductor restart | ||
+ | </ | ||
+ | While still logged into the primary controller node, start and enable the Neutron server: | ||
+ | <code bash> | ||
+ | neutron-db-manage --config-file / | ||
+ | </ | ||
Its output should look like:
+ | <code bash> | ||
+ | No handlers could be found for logger " | ||
+ | INFO [alembic.migration] Context impl MySQLImpl. | ||
+ | INFO [alembic.migration] Will assume non-transactional DDL. | ||
+ | </ | ||
+ | Now start '' | ||
+ | <code bash> | ||
+ | service neutron-server start | ||
+ | chkconfig neutron-server on | ||
+ | |||
+ | </ | ||
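A quick check that the Neutron server answers through Keystone (run from wherever you sourced ''keystone_admin.sh''):
<code bash>
source ~/keystone_admin.sh
# both commands should return an (initially empty) table rather than an error
neutron net-list
neutron agent-list
</code>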
+ | ===== Configure Cinder on primary node ===== | ||
+ | |||
+ | Login into the primary controller node, or wherever you've installed the Keystone' | ||
+ | |||
+ | <code bash> | ||
+ | source ~/ | ||
+ | </ | ||
+ | |||
+ | Then, create the endpoint, service and user information in the Keystone' | ||
+ | |||
+ | <code bash> | ||
+ | keystone user-create --name cinder --pass CINDER_PASS | ||
+ | keystone user-role-add --user cinder --role admin --tenant services | ||
+ | keystone service-create --name cinder --type volume --description " | ||
+ | keystone service-create --name=cinderv2 --type=volumev2 --description=" | ||
+ | |||
+ | keystone endpoint-create --service cinder | ||
+ | keystone endpoint-create --service cinderv2 --publicurl http:// | ||
+ | </ | ||
+ | Login into the primary controller node and modify the configuration files. | ||
+ | |||
+ | **cinder.conf**: | ||
+ | <code bash> | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | # | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | </ | ||
+ | Initialize the Cinder database: | ||
+ | <code bash> | ||
+ | su cinder -s /bin/sh -c " | ||
+ | </ | ||
+ | And finally start API services: | ||
+ | <code bash> | ||
+ | service openstack-cinder-api start | ||
+ | chkconfig openstack-cinder-api on | ||
+ | service openstack-cinder-scheduler start | ||
+ | chkconfig openstack-cinder-scheduler on | ||
+ | </ | ||
+ | |||
+ | |||
+ | ===== Configure Horizon (Dashboard) on primary node ===== | ||
+ | Modify the file ''/ | ||
+ | <code bash> | ||
+ | CACHES = { | ||
+ | ' | ||
+ | ' | ||
+ | ' | ||
+ | } | ||
+ | } | ||
+ | </ | ||
+ | you can try this command: | ||
+ | <code bash> | ||
+ | sed -i " | ||
+ | </ | ||
Note that the TCP port 11211 and the IP address must match those contained in the file ''/
+ | <code bash> | ||
+ | PORT=" | ||
+ | USER=" | ||
+ | MAXCONN=" | ||
+ | CACHESIZE=" | ||
+ | OPTIONS=" | ||
+ | </ | ||
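Once memcached is running (it is started a few steps below), you can verify it is reachable on the address/port referenced by the Dashboard (values reflect the defaults shown above; adjust if you changed them):
<code bash>
# ask memcached for its stats; an immediate reply means the cache backend is reachable
printf 'stats\r\nquit\r\n' | nc 127.0.0.1 11211 | head -5
</code>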
+ | |||
+ | Now, look for the string '' | ||
+ | <code bash> | ||
+ | OPENSTACK_HOST = " | ||
+ | </ | ||
+ | by executing this command: | ||
+ | <code bash> | ||
+ | sed -i ' | ||
+ | </ | ||
+ | |||
+ | Modify the '' | ||
+ | <code bash> | ||
+ | ALLOWED_HOSTS = [' | ||
+ | </ | ||
+ | by executing the command | ||
+ | <code bash> | ||
+ | sed -i " | ||
+ | </ | ||
+ | Execute the following commands: | ||
+ | <code bash> | ||
+ | sed -i ' | ||
+ | echo " | ||
+ | echo " | ||
+ | echo " | ||
+ | </ | ||
+ | |||
To address an observed problem related to the number of open files, execute the following command on both controller nodes:
+ | <code bash> | ||
+ | cat << EOF >> / | ||
+ | * soft nofile | ||
+ | * hard nofile | ||
+ | EOF | ||
+ | |||
+ | </ | ||
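The new limits are applied by PAM at login time; a quick sanity check (not part of the original procedure):
<code bash>
# after logging in again, the shell should report the raised limit
ulimit -n
# once httpd is running, its effective limit can be checked too
grep 'open files' /proc/$(pgrep httpd | head -1)/limits
</code>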
+ | Start and enable the WebServer: | ||
+ | <code bash> | ||
+ | service httpd start | ||
+ | service memcached start | ||
+ | chkconfig httpd on | ||
+ | chkconfig memcached on | ||
+ | </ | ||
+ | ==== Configure Dashboard for SSL sessions ==== | ||
**Please do not consider this configuration optional: it should be done in order to encrypt the users' passwords.**
+ | |||
+ | Install the '' | ||
+ | <code bash> | ||
+ | yum -y install mod_ssl | ||
+ | </ | ||
+ | Execute the following commands: | ||
+ | <code bash> | ||
+ | #sed -i ' | ||
+ | #sed -i ' | ||
+ | sed -i ' | ||
+ | sed -i ' | ||
+ | echo " | ||
+ | echo " | ||
+ | echo " | ||
+ | </ | ||
+ | |||
+ | Restart httpd: | ||
+ | <code bash> | ||
+ | service httpd restart | ||
+ | </ | ||
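To verify the SSL setup locally on the controller (''/dashboard'' is the default path used by the RDO Horizon package; ''-k'' skips certificate verification for this quick test):
<code bash>
# a 200 or a redirect to the login page means Horizon answers over HTTPS
curl -k -I https://localhost/dashboard/
</code>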
+ | ===== ===== | ||
+ | |||
+ | |||
+ | ---- | ||
+ | |||
+ | |||
+ | |||
**You can stop here if you need neither the High Availability provided by the second node nor the SSL support.**
+ | |||
+ | |||
+ | ---- | ||
+ | |||
+ | |||
===== Configure and "clusterize" RabbitMQ on the secondary node =====
+ | Login into the secondary controller node and configure the RabbitMQ to use the already specified TCP port range: | ||
+ | <code bash> | ||
+ | \rm -f / | ||
+ | cat << EOF >> / | ||
+ | [{kernel, [ {inet_dist_listen_min, | ||
+ | EOF | ||
+ | </ | ||
+ | While still logged into the secondary controller node, start and enable Rabbit: | ||
+ | <code bash> | ||
+ | service rabbitmq-server start | ||
+ | chkconfig rabbitmq-server on | ||
+ | </ | ||
This first start generates the Erlang cookie. Now stop the server:
+ | <code bash> | ||
+ | service rabbitmq-server stop | ||
+ | </ | ||
RabbitMQ's Erlang cookie must now be copied from the primary node:
+ | <code bash> | ||
+ | scp root@controller-01.cloud.pd.infn.it:/ | ||
+ | </ | ||
Change the cookie's ownership and start the server again:
+ | <code bash> | ||
+ | chown rabbitmq: | ||
+ | service rabbitmq-server start | ||
+ | |||
+ | </ | ||
+ | While logged into the secondary controller node, stop the application: | ||
+ | <code bash> | ||
+ | rabbitmqctl stop_app | ||
+ | rabbitmqctl reset | ||
+ | </ | ||
+ | ... then join the server running in the primary node: | ||
+ | <code bash> | ||
+ | rabbitmqctl join_cluster rabbit@controller-01 | ||
+ | Clustering node ' | ||
+ | ...done. | ||
+ | |||
+ | rabbitmqctl start_app | ||
+ | Starting node ' | ||
+ | ...done. | ||
+ | |||
+ | # see: http:// | ||
+ | rabbitmqctl set_policy HA ' | ||
+ | </ | ||
+ | === Check it === | ||
+ | <code bash> | ||
+ | [root@controller-02 ~]# rabbitmqctl cluster_status | ||
+ | Cluster status of node ' | ||
+ | [{nodes, | ||
+ | | ||
+ | | ||
+ | ...done. | ||
+ | |||
+ | [root@controller-02 ~]# rabbitmqctl list_policies | ||
+ | Listing policies ... | ||
+ | / | ||
+ | ...done. | ||
+ | |||
+ | [root@controller-01 ~]# rabbitmqctl cluster_status | ||
+ | Cluster status of node ' | ||
+ | [{nodes, | ||
+ | | ||
+ | | ||
+ | ...done. | ||
+ | |||
+ | [root@controller-02 ~]# rabbitmqctl list_policies | ||
+ | Listing policies ... | ||
+ | / | ||
+ | ...done. | ||
+ | |||
+ | </ | ||
+ | ===== Configure services on secondary node ===== | ||
Log into the secondary controller node; copy Keystone, Glance, Nova, Neutron, Cinder and Horizon's configuration files from the primary node:
+ | <code bash> | ||
+ | scp controller-01.cloud.pd.infn.it:/ | ||
+ | scp -r controller-01.cloud.pd.infn.it:/ | ||
+ | scp -r controller-01.cloud.pd.infn.it:/ | ||
+ | scp -r controller-01.cloud.pd.infn.it:/ | ||
+ | scp -r controller-01.cloud.pd.infn.it:/ | ||
+ | scp -r controller-01.cloud.pd.infn.it:/ | ||
+ | scp controller-01.cloud.pd.infn.it:/ | ||
+ | \rm -f / | ||
+ | ln -s / | ||
+ | </ | ||
+ | While still logged into the secondary controller node, finalize the setup: | ||
+ | <code bash> | ||
+ | keystone-manage pki_setup --keystone-user keystone --keystone-group keystone | ||
+ | mkdir -p / | ||
+ | mkdir -p / | ||
+ | chown -R glance: | ||
+ | chown -R keystone: | ||
+ | chown -R neutron: | ||
+ | </ | ||
+ | |||
+ | ... Dashboard' | ||
+ | |||
+ | <code bash> | ||
+ | sed -i ' | ||
+ | echo " | ||
+ | echo " | ||
+ | echo " | ||
+ | </ | ||
+ | |||
+ | … HTTPS for Dashboard: | ||
+ | <code bash> | ||
+ | #sed -i ' | ||
+ | #sed -i ' | ||
+ | sed -i ' | ||
+ | sed -i ' | ||
+ | echo " | ||
+ | echo " | ||
+ | echo " | ||
+ | </ | ||
+ | |||
+ | ... increase the number of allowed open files: | ||
+ | |||
+ | <code bash> | ||
+ | cat << EOF >> / | ||
+ | * soft nofile | ||
+ | * hard nofile | ||
+ | EOF | ||
+ | |||
+ | </ | ||
+ | |||
+ | ... change the memcached' | ||
+ | <code bash> | ||
+ | sed -i ' | ||
+ | </ | ||
+ | |||
+ | ... change the location of the memcached service in the dashboard' | ||
+ | <code bash> | ||
+ | sed -i ' | ||
+ | </ | ||
+ | |||
+ | ... and finally turn all services ON, and enable them: | ||
+ | |||
+ | <code bash> | ||
+ | service openstack-keystone start | ||
+ | service openstack-glance-registry start | ||
+ | service openstack-glance-api start | ||
+ | service openstack-nova-api start | ||
+ | service openstack-nova-cert start | ||
+ | service openstack-nova-consoleauth start | ||
+ | service openstack-nova-scheduler start | ||
+ | service openstack-nova-conductor start | ||
+ | service openstack-nova-novncproxy start | ||
+ | service neutron-server start | ||
+ | service httpd start | ||
+ | service memcached start | ||
+ | service openstack-cinder-api start | ||
+ | service openstack-cinder-scheduler start | ||
+ | |||
+ | chkconfig openstack-keystone on | ||
+ | chkconfig openstack-glance-registry on | ||
+ | chkconfig openstack-glance-api on | ||
+ | chkconfig openstack-nova-api on | ||
+ | chkconfig openstack-nova-cert on | ||
+ | chkconfig openstack-nova-consoleauth on | ||
+ | chkconfig openstack-nova-scheduler on | ||
+ | chkconfig openstack-nova-conductor on | ||
+ | chkconfig openstack-nova-novncproxy on | ||
+ | chkconfig neutron-server on | ||
+ | chkconfig httpd on | ||
+ | chkconfig memcached on | ||
+ | chkconfig openstack-cinder-api on | ||
+ | chkconfig openstack-cinder-scheduler on | ||
+ | </ | ||
+ | ==== Check it ==== | ||
+ | On your desktop, source the file '' | ||
+ | <code bash> | ||
+ | bash-4.1$ nova availability-zone-list | ||
+ | +-----------------------------------+----------------------------------------+ | ||
+ | | Name | Status | ||
+ | +-----------------------------------+----------------------------------------+ | ||
+ | | internal | ||
+ | | |- controller-02.cloud.pd.infn.it | | | ||
+ | | | |- nova-conductor | ||
+ | | | |- nova-consoleauth | ||
+ | | | |- nova-scheduler | ||
+ | | | |- nova-cert | ||
+ | | |- controller-01.cloud.pd.infn.it | | | ||
+ | | | |- nova-conductor | ||
+ | | | |- nova-consoleauth | ||
+ | | | |- nova-scheduler | ||
+ | | | |- nova-cert | ||
+ | +-----------------------------------+----------------------------------------+ | ||
+ | |||
+ | bash-4.1$ cinder service-list | ||
+ | +------------------+--------------------------------+------+---------+-------+----------------------------+ | ||
+ | | Binary | ||
+ | +------------------+--------------------------------+------+---------+-------+----------------------------+ | ||
+ | | cinder-scheduler | controller-01.cloud.pd.infn.it | nova | enabled | | ||
+ | | cinder-scheduler | controller-02.cloud.pd.infn.it | nova | enabled | | ||
+ | +------------------+--------------------------------+------+---------+-------+----------------------------+ | ||
+ | </ | ||
+ | |||
+ | ===== Optional: SSL configuration & INFN-AAI ===== | ||
+ | |||
First of all, on both controller nodes, switch off all the OpenStack services:
+ | <code bash> | ||
+ | service openstack-glance-registry stop | ||
+ | service openstack-glance-api stop | ||
+ | service openstack-nova-api stop | ||
+ | service openstack-nova-cert stop | ||
+ | service openstack-nova-consoleauth stop | ||
+ | service openstack-nova-scheduler stop | ||
+ | service openstack-nova-conductor stop | ||
+ | service openstack-nova-novncproxy stop | ||
+ | service neutron-server stop | ||
+ | service httpd stop | ||
+ | service openstack-cinder-api stop | ||
+ | service openstack-cinder-scheduler stop | ||
+ | </ | ||
+ | ==== Configure HAProxy to act like an SSL terminator ==== | ||
Before proceeding, note that:
+ | * **HAProxy 1.5.x** is required to support an SSL frontend. | ||
+ | * The '' | ||
+ | |||
+ | To upgrade to HAProxy 1.5.x in a SL6/CentOS6 execute the following commands: | ||
+ | <code bash> | ||
+ | wget --no-check-certificate --user=cloud_pd --ask-password https:// | ||
+ | mv haproxy.repo / | ||
+ | yum clean all | ||
+ | yum -y update haproxy | ||
+ | </ | ||
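After the update, confirm that the running binary is indeed a 1.5.x release:
<code bash>
haproxy -v | head -1
# expect something like: HA-Proxy version 1.5.x ...
</code>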
+ | |||
+ | Modify '' | ||
+ | <code bash> | ||
+ | listen dashboard_public_ssl | ||
+ | bind 90.147.77.40: | ||
+ | balance source | ||
+ | option | ||
+ | option | ||
+ | server controller-01.cloud.pd.infn.it 192.168.60.41: | ||
+ | server controller-02.cloud.pd.infn.it 192.168.60.44: | ||
+ | |||
+ | listen nova_metadata_server | ||
+ | bind 192.168.60.40: | ||
+ | balance source | ||
+ | option | ||
+ | option | ||
+ | server | ||
+ | server | ||
+ | |||
+ | listen glance_registry | ||
+ | bind 192.168.60.40: | ||
+ | balance source | ||
+ | option | ||
+ | option | ||
+ | server | ||
+ | server | ||
+ | |||
+ | listen rabbitmq-server | ||
+ | bind 192.168.60.40: | ||
+ | balance roundrobin | ||
+ | mode tcp | ||
+ | server | ||
+ | server | ||
+ | |||
+ | listen epmd | ||
+ | bind 192.168.60.40: | ||
+ | balance roundrobin | ||
+ | server | ||
+ | server | ||
+ | |||
+ | listen nova_memcached_cluster | ||
+ | bind 192.168.60.40: | ||
+ | balance source | ||
+ | option | ||
+ | option | ||
+ | server | ||
+ | server | ||
+ | |||
+ | frontend keystone-admin_pub | ||
+ | bind 90.147.77.40: | ||
+ | mode http | ||
+ | option httpclose | ||
+ | option forwardfor | ||
+ | reqadd X-Forwarded-Proto: | ||
+ | default_backend keystone-admin | ||
+ | |||
+ | frontend keystone-public_pub | ||
+ | bind 90.147.77.40: | ||
+ | mode http | ||
+ | option httpclose | ||
+ | option forwardfor | ||
+ | reqadd X-Forwarded-Proto: | ||
+ | default_backend keystone-public | ||
+ | |||
+ | frontend glanceapi | ||
+ | bind 192.168.60.40: | ||
+ | mode http | ||
+ | option httpclose | ||
+ | option forwardfor | ||
+ | reqadd X-Forwarded-Proto: | ||
+ | default_backend glanceapi | ||
+ | |||
+ | frontend glanceapi_pub | ||
+ | bind 90.147.77.40: | ||
+ | mode http | ||
+ | option httpclose | ||
+ | option forwardfor | ||
+ | reqadd X-Forwarded-Proto: | ||
+ | default_backend glanceapi | ||
+ | |||
+ | frontend novaapi | ||
+ | bind 192.168.60.40: | ||
+ | mode http | ||
+ | option httpclose | ||
+ | option forwardfor | ||
+ | reqadd X-Forwarded-Proto: | ||
+ | default_backend novaapi | ||
+ | |||
+ | frontend novaapi_pub | ||
+ | bind 90.147.77.40: | ||
+ | mode http | ||
+ | option httpclose | ||
+ | option forwardfor | ||
+ | reqadd X-Forwarded-Proto: | ||
+ | default_backend novaapi | ||
+ | |||
+ | frontend cinder | ||
+ | bind 192.168.60.40: | ||
+ | mode http | ||
+ | option httpclose | ||
+ | option forwardfor | ||
+ | reqadd X-Forwarded-Proto: | ||
+ | default_backend cinderapi | ||
+ | |||
+ | frontend cinder_pub | ||
+ | bind 90.147.77.40: | ||
+ | mode http | ||
+ | option httpclose | ||
+ | option forwardfor | ||
+ | reqadd X-Forwarded-Proto: | ||
+ | default_backend cinderapi | ||
+ | |||
+ | | ||
+ | frontend neutron | ||
+ | bind 192.168.60.40: | ||
+ | mode http | ||
+ | option httpclose | ||
+ | option forwardfor | ||
+ | reqadd X-Forwarded-Proto: | ||
+ | default_backend neutronapi | ||
+ | |||
+ | frontend neutron_pub | ||
+ | bind 90.147.77.40: | ||
+ | mode http | ||
+ | option httpclose | ||
+ | option forwardfor | ||
+ | reqadd X-Forwarded-Proto: | ||
+ | default_backend neutronapi | ||
+ | | ||
+ | frontend novnc_pub | ||
+ | bind 90.147.77.40: | ||
+ | mode http | ||
+ | option httpclose | ||
+ | option forwardfor | ||
+ | reqadd X-Forwarded-Proto: | ||
+ | default_backend novnc | ||
+ | |||
+ | frontend ec2_api | ||
+ | bind 192.168.60.40: | ||
+ | mode http | ||
+ | option httpclose | ||
+ | option forwardfor | ||
+ | reqadd X-Forwarded-Proto: | ||
+ | default_backend ec2api | ||
+ | |||
+ | frontend ec2_api_pub | ||
+ | bind 90.147.77.40: | ||
+ | mode http | ||
+ | option httpclose | ||
+ | option forwardfor | ||
+ | reqadd X-Forwarded-Proto: | ||
+ | default_backend ec2api | ||
+ | | ||
+ | backend keystone-admin | ||
+ | mode http | ||
+ | balance source | ||
+ | option | ||
+ | server | ||
+ | server | ||
+ | |||
+ | backend keystone-public | ||
+ | mode http | ||
+ | balance source | ||
+ | option | ||
+ | server | ||
+ | server | ||
+ | |||
+ | backend glanceapi | ||
+ | mode http | ||
+ | balance source | ||
+ | option | ||
+ | server | ||
+ | server | ||
+ | |||
+ | backend novaapi | ||
+ | mode http | ||
+ | balance source | ||
+ | option | ||
+ | server | ||
+ | server | ||
+ | |||
+ | backend ec2api | ||
+ | mode http | ||
+ | balance source | ||
+ | option | ||
+ | option | ||
+ | server | ||
+ | server | ||
+ | |||
+ | backend cinderapi | ||
+ | mode http | ||
+ | balance source | ||
+ | option | ||
+ | server | ||
+ | server | ||
+ | |||
+ | backend neutronapi | ||
+ | mode http | ||
+ | balance source | ||
+ | option | ||
+ | server | ||
+ | server | ||
+ | |||
+ | backend novnc | ||
+ | mode http | ||
+ | balance source | ||
+ | server | ||
+ | server | ||
+ | |||
+ | backend dashboard | ||
+ | mode http | ||
+ | balance source | ||
+ | server | ||
+ | server | ||
+ | |||
+ | </ | ||
+ | |||
+ | ... and restart the HAProxy daemon. | ||
+ | |||
Now log into one of the two controller nodes and, as a precaution, unset all the OS_* variables:
+ | <code bash> | ||
+ | unset OS_USERNAME | ||
+ | unset OS_TENANT_NAME | ||
+ | unset OS_PASSWORD | ||
+ | unset OS_AUTH_URL | ||
+ | </ | ||
+ | To get back access to Keystone issue the following commands: | ||
+ | <code bash> | ||
+ | export SERVICE_TOKEN=`cat ~/ | ||
+ | export SERVICE_ENDPOINT=http:// | ||
+ | </ | ||
+ | * Note 1: 192.168.60.41 is the IP address of the controller node you're logged into. | ||
+ | * Note 2: the file '' | ||
+ | |||
Change Keystone's endpoint so that the public URL uses HTTPS:
+ | <code bash> | ||
+ | KEYSTONE_SERVICE=$(keystone service-get keystone | grep ' id ' | awk ' | ||
+ | KEYSTONE_ENDPOINT=$(keystone endpoint-list | grep $KEYSTONE_SERVICE|awk ' | ||
+ | keystone endpoint-delete $KEYSTONE_ENDPOINT | ||
+ | |||
+ | keystone endpoint-create --region regionOne --service-id $KEYSTONE_SERVICE --publicurl " | ||
+ | </ | ||
+ | Note: no need to restart '' | ||
+ | |||
+ | Change the '' | ||
+ | <code bash> | ||
+ | sed -i ' | ||
+ | echo " | ||
+ | echo " | ||
+ | </ | ||
+ | Note: no need to do anything on the second controller node. The endpoint with the https url has been changed on the MySQL database; all this is transparent for the '' | ||
+ | ==== Check it ==== | ||
+ | <code bash> | ||
+ | unset SERVICE_TOKEN | ||
+ | unset SERVICE_ENDPOINT | ||
+ | source $HOME/ | ||
+ | keystone user-list | ||
+ | </ | ||
+ | ==== Glance ==== | ||
+ | Modify authentication parameters on both controller nodes: | ||
+ | <code bash> | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | |||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | </ | ||
+ | |||
+ | Execute this on one controller node only (or where you have the file '' | ||
+ | <code bash> | ||
+ | source ~/ | ||
+ | GLANCE_SERVICE=$(keystone service-get glance | grep ' id ' | awk ' | ||
+ | GLANCE_ENDPOINT=$(keystone endpoint-list | grep $GLANCE_SERVICE|awk ' | ||
+ | keystone endpoint-delete $GLANCE_ENDPOINT | ||
+ | keystone endpoint-create --service glance --publicurl " | ||
+ | |||
+ | </ | ||
+ | |||
+ | |||
+ | |||
+ | Restart Glance on both controller nodes: | ||
+ | <code bash> | ||
+ | service openstack-glance-api restart | ||
+ | service openstack-glance-registry restart | ||
+ | </ | ||
+ | |||
+ | ==== Nova ==== | ||
+ | Modify authentication parameters on both controller nodes: | ||
+ | <code bash> | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | |||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | </ | ||
+ | On one controller node only (or where you have '' | ||
+ | <code bash> | ||
+ | NOVA_SERVICE=$(keystone service-get nova | grep ' id ' | awk ' | ||
+ | NOVA_ENDPOINT=$(keystone endpoint-list | grep $NOVA_SERVICE|awk ' | ||
+ | keystone endpoint-delete $NOVA_ENDPOINT | ||
+ | keystone endpoint-create --service-id $NOVA_SERVICE --publicurl https:// | ||
+ | |||
+ | NOVAEC2_SERVICE=$(keystone service-get nova_ec2 | grep ' id ' | awk ' | ||
+ | NOVAEC2_ENDPOINT=$(keystone endpoint-list | grep $NOVAEC2_SERVICE|awk ' | ||
+ | keystone endpoint-delete $NOVAEC2_ENDPOINT | ||
+ | keystone endpoint-create --service-id $NOVAEC2_SERVICE --publicurl https:// | ||
+ | |||
+ | </ | ||
+ | |||
+ | Restart Nova on both controller nodes: | ||
+ | <code bash> | ||
+ | service openstack-nova-api restart | ||
+ | service openstack-nova-cert restart | ||
+ | service openstack-nova-consoleauth restart | ||
+ | service openstack-nova-scheduler restart | ||
+ | service openstack-nova-conductor restart | ||
+ | service openstack-nova-novncproxy restart | ||
+ | </ | ||
+ | ==== Neutron ==== | ||
+ | Modify authentication parameters | ||
+ | <code bash> | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | </ | ||
+ | On one controller only (or where you have the '' | ||
+ | <code bash> | ||
+ | NEUTRON_SERVICE=$(keystone service-get neutron | grep ' id ' | awk ' | ||
+ | NEUTRON_ENDPOINT=$(keystone endpoint-list | grep $NEUTRON_SERVICE|awk ' | ||
+ | keystone endpoint-delete $NEUTRON_ENDPOINT | ||
+ | keystone endpoint-create --service-id $NEUTRON_SERVICE --publicurl " | ||
+ | </ | ||
+ | Restart Neutron and Nova on both controller nodes (nova needs to be restarted because its conf file has been changed): | ||
+ | <code bash> | ||
+ | service neutron-server restart | ||
+ | service openstack-nova-api restart | ||
+ | service openstack-nova-cert restart | ||
+ | service openstack-nova-consoleauth restart | ||
+ | service openstack-nova-scheduler restart | ||
+ | service openstack-nova-conductor restart | ||
+ | service openstack-nova-novncproxy restart | ||
+ | </ | ||
+ | ==== Cinder ==== | ||
+ | Modify authentication parameters on both controller nodes: | ||
+ | <code bash> | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | openstack-config --set / | ||
+ | </ | ||
+ | On one controller only (or where you have the '' | ||
+ | <code bash> | ||
+ | CINDER_SERVICE=$(keystone service-get cinder | grep ' id ' | awk ' | ||
+ | CINDER_ENDPOINT=$(keystone endpoint-list | grep $CINDER_SERVICE|awk ' | ||
+ | keystone endpoint-delete $CINDER_ENDPOINT | ||
+ | CINDER_SERVICE=$(keystone service-get cinderv2 | grep ' id ' | awk ' | ||
+ | CINDER_ENDPOINT=$(keystone endpoint-list | grep $CINDER_SERVICE|awk ' | ||
+ | keystone endpoint-delete $CINDER_ENDPOINT | ||
+ | keystone endpoint-create --service cinder --publicurl https:// | ||
+ | keystone endpoint-create --service cinderv2 --publicurl https:// | ||
+ | </ | ||
+ | Restart Cinder on both controller nodes: | ||
+ | <code bash> | ||
+ | service openstack-cinder-api restart | ||
+ | service openstack-cinder-scheduler restart | ||
+ | </ | ||
+ | ==== Horizon (both controller nodes) ==== | ||
+ | Setup secure connection to Keystone: | ||
+ | <code bash> | ||
+ | sed -i ' | ||
+ | sed -i ' | ||
+ | sed -i 's+# OPENSTACK_SSL_CACERT.*+OPENSTACK_SSL_CACERT="/ | ||
+ | </ | ||
Prepare to patch Horizon's authentication code:
+ | <code bash> | ||
+ | yum install -y patch | ||
+ | curl -o os_auth_patch_01.diff https:// | ||
+ | curl -o os_auth_patch_02.diff https:// | ||
+ | curl -o os_auth_patch_03.diff https:// | ||
+ | patch -R / | ||
+ | patch -R / | ||
+ | patch -R / | ||
+ | |||
+ | </ | ||
+ | Alternatively **download and install Dashboard patched RPM**: | ||
+ | <code bash> | ||
+ | TODO... (waiting for an ' | ||
+ | </ | ||
+ | Restart apache web server: | ||
+ | <code bash> | ||
+ | service httpd restart | ||
+ | |||
+ | </ | ||
+ | |||
+ | ==== Fix metadata agent (on both controller nodes) ==== | ||
+ | To address this [[https:// | ||
+ | <code bash> | ||
+ | curl -o agent.py https:// | ||
+ | mv / | ||
+ | cp agent.py / | ||
+ | |||
+ | service neutron-server restart | ||
+ | </ | ||
+ | |||
+ | ==== Integration of INFN-AAI in Keystone (on both controller nodes) ==== | ||
+ | |||
+ | See [[https:// | ||
+ | |||
+ | === === |