Node installed via Foreman.
Configure SELINUX=disabled in /etc/selinux/config and then reboot.
After reboot, confirm that the getenforce command returns Disabled:
# getenforce
Disabled
Add the following lines in /etc/security/limits.conf:
* soft nofile 4096
* hard nofile 4096
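To sanity-check the two entries, a small sketch can parse them back with awk; it works on a temporary copy so it can run anywhere (the /tmp path is an assumption, not part of the installation):

```shell
# Write a temp copy of the two limits lines, then confirm both
# soft and hard nofile values parse back as 4096.
cat > /tmp/limits_check.conf << 'EOF'
* soft nofile 4096
* hard nofile 4096
EOF
for t in soft hard; do
  awk -v t="$t" '$1 == "*" && $2 == t && $3 == "nofile" { print $4 }' /tmp/limits_check.conf
done
```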
Put the following lines in the file /etc/sysconfig/iptables
just before the first line where there's a REJECT rule:
# allow traffic toward rabbitmq server
-A INPUT -p tcp -m tcp --dport 5672 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 4369 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 35197 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 9100:9110 -j ACCEPT
# allow traffic toward keystone
-A INPUT -p tcp -m multiport --dports 5000,35357 -j ACCEPT
# allow traffic to glance-api
-A INPUT -p tcp -m multiport --dports 9292 -j ACCEPT
# allow traffic to glance-registry
-A INPUT -p tcp -m multiport --dports 9191 -j ACCEPT
# allow traffic to NOVA's APIs
-A INPUT -p tcp -m multiport --dports 8773 -j ACCEPT
-A INPUT -p tcp -m multiport --dports 8774 -j ACCEPT
-A INPUT -p tcp -m multiport --dports 8775 -j ACCEPT
# allow traffic to NOVA's VNC proxy
-A INPUT -p tcp -m multiport --dports 6080 -j ACCEPT
# allow traffic to NEUTRON Server
-A INPUT -p tcp -m multiport --dports 9696 -j ACCEPT
-A INPUT -p tcp -m multiport --dports 9697 -j ACCEPT
-A INPUT -p tcp -m multiport --dports 9698 -j ACCEPT
# allow traffic to Dashboard
-A INPUT -p tcp -m multiport --dports 80 -j ACCEPT
-A INPUT -p tcp -m multiport --dports 8443 -j ACCEPT
# allow traffic to memcached
-A INPUT -p tcp -m multiport --dports 11211 -j ACCEPT
# allow traffic to cinder
-A INPUT -p tcp -m multiport --dports 3260,8776 -j ACCEPT
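The placement matters because iptables evaluates rules in order: anything after the first REJECT is never reached. A sketch to locate the insertion point, run against an illustrative sample file rather than the real /etc/sysconfig/iptables:

```shell
# Sample rules file (illustrative; your real file will differ).
cat > /tmp/iptables.sample << 'EOF'
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
EOF
# Print the line number of the first REJECT rule; the new ACCEPT
# rules must go just above this line (3 in this sample).
awk '/-j REJECT/ { print NR; exit }' /tmp/iptables.sample
```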
Restart IPTables:
service iptables restart
Configure the em3
interface with an address on the public network.
Edit the file /etc/sysconfig/network-scripts/ifcfg-em3
with this content:
DEVICE="em3"
BOOTPROTO="static"
NM_CONTROLLED="no"
ONBOOT="yes"
TYPE="Ethernet"
IPADDR="90.147.77.39"
NETMASK="255.255.255.0"
Configure the em2
interface with an address on the data network.
Edit the file /etc/sysconfig/network-scripts/ifcfg-em2
with this content:
DEVICE="em2"
BOOTPROTO="static"
NM_CONTROLLED="no"
ONBOOT="yes"
TYPE="Ethernet"
IPADDR="192.168.61.111"
NETMASK="255.255.255.0"
Add the following line in the file /etc/sysconfig/network
:
GATEWAY=90.147.77.254
Reboot the machine:
shutdown -r now
Login into a MySQL node (e.g. mysql-cluster-01
).
Remove previously created users and databases, if any:
mysql -u root
drop database if exists keystone_pp;
drop database if exists glance_pp;
drop database if exists nova_pp;
drop database if exists neutron_pp;
drop database if exists cinder_pp;
drop user 'keystone_pp'@'localhost';
drop user 'keystone_pp'@'192.168.60.%';
drop user 'glance_pp'@'localhost';
drop user 'glance_pp'@'192.168.60.%';
drop user 'nova_pp'@'localhost';
drop user 'nova_pp'@'192.168.60.%';
drop user 'neutron_pp'@'localhost';
drop user 'neutron_pp'@'192.168.60.%';
drop user 'cinder_pp'@'localhost';
drop user 'cinder_pp'@'192.168.60.%';
flush privileges;
quit
Create databases and grant users:
mysql -u root
CREATE DATABASE keystone_pp;
GRANT ALL ON keystone_pp.* TO 'keystone_pp'@'192.168.60.%' IDENTIFIED BY '<KEYSTONE_DB_PWD>';
GRANT ALL ON keystone_pp.* TO 'keystone_pp'@'localhost' IDENTIFIED BY '<KEYSTONE_DB_PWD>';
CREATE DATABASE glance_pp;
GRANT ALL ON glance_pp.* TO 'glance_pp'@'192.168.60.%' IDENTIFIED BY '<GLANCE_DB_PWD>';
GRANT ALL ON glance_pp.* TO 'glance_pp'@'localhost' IDENTIFIED BY '<GLANCE_DB_PWD>';
CREATE DATABASE nova_pp;
GRANT ALL ON nova_pp.* TO 'nova_pp'@'192.168.60.%' IDENTIFIED BY '<NOVA_DB_PWD>';
GRANT ALL ON nova_pp.* TO 'nova_pp'@'localhost' IDENTIFIED BY '<NOVA_DB_PWD>';
CREATE DATABASE neutron_pp;
GRANT ALL ON neutron_pp.* TO 'neutron_pp'@'192.168.60.%' IDENTIFIED BY '<NEUTRON_DB_PWD>';
GRANT ALL ON neutron_pp.* TO 'neutron_pp'@'localhost' IDENTIFIED BY '<NEUTRON_DB_PWD>';
CREATE DATABASE cinder_pp;
GRANT ALL ON cinder_pp.* TO 'cinder_pp'@'192.168.60.%' IDENTIFIED BY '<CINDER_DB_PWD>';
GRANT ALL ON cinder_pp.* TO 'cinder_pp'@'localhost' IDENTIFIED BY '<CINDER_DB_PWD>';
FLUSH PRIVILEGES;
commit;
quit
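Every service follows the same database/user/grant pattern, so a small helper can generate the SQL per service. This is a hypothetical convenience, not part of the installation; the function name and the password placeholder are assumptions:

```shell
# Hypothetical helper: emit the CREATE/GRANT statements for one service.
# Usage: gen_grants <service> <password>
gen_grants() {
  svc="$1"; pw="$2"
  printf 'CREATE DATABASE %s_pp;\n' "$svc"
  printf "GRANT ALL ON %s_pp.* TO '%s_pp'@'192.168.60.%%' IDENTIFIED BY '%s';\n" "$svc" "$svc" "$pw"
  printf "GRANT ALL ON %s_pp.* TO '%s_pp'@'localhost' IDENTIFIED BY '%s';\n" "$svc" "$svc" "$pw"
}
# Generate the keystone statements and keep them for inspection.
gen_grants keystone '<KEYSTONE_DB_PWD>' > /tmp/grants.sql
cat /tmp/grants.sql
```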
Logout from MySQL node.
Install the packages for Keystone, Glance, Nova, Neutron and Horizon(Dashboard):
yum install -y openstack-keystone python-keystoneclient openstack-utils \
  openstack-nova python-novaclient rabbitmq-server openstack-glance \
  python-kombu python-anyjson python-amqplib openstack-neutron \
  python-neutron python-neutronclient openstack-neutron-openvswitch mysql \
  memcached python-memcached mod_wsgi openstack-dashboard openstack-cinder \
  openstack-selinux
Configure Keystone, initialize its database, start the service, and create the admin user, service and endpoint:
export SERVICE_TOKEN=$(openssl rand -hex 10)
echo $SERVICE_TOKEN > ~/ks_admin_token
openstack-config --set /etc/keystone/keystone.conf \
  DEFAULT admin_token $SERVICE_TOKEN
openstack-config --set /etc/keystone/keystone.conf \
  sql connection "mysql://keystone_pp:<KEYSTONE_DB_PWD>@192.168.60.10/keystone_pp"
openstack-config --set /etc/keystone/keystone.conf \
  DEFAULT bind_host 0.0.0.0
openstack-config --set /etc/keystone/keystone.conf token expiration 32400
keystone-manage pki_setup --keystone-user keystone --keystone-group keystone
chown -R keystone:keystone /var/log/keystone /etc/keystone/*
su keystone -s /bin/sh -c "keystone-manage db_sync"
service openstack-keystone start
chkconfig openstack-keystone on
export SERVICE_TOKEN=`cat ~/ks_admin_token`
export SERVICE_ENDPOINT=http://90.147.77.39:35357/v2.0
keystone service-create --name=keystone --type=identity --description="Keystone Identity Service"
keystone endpoint-create --service keystone \
  --publicurl http://90.147.77.39:5000/v2.0 \
  --adminurl http://192.168.60.111:35357/v2.0 \
  --internalurl http://192.168.60.111:5000/v2.0
keystone user-create --name admin --pass ADMIN_PASS
keystone role-create --name admin
keystone tenant-create --name admin
keystone role-create --name Member
keystone user-role-add --user admin --role admin --tenant admin
\rm -f $HOME/keystone_admin.sh
echo "export OS_USERNAME=admin" > $HOME/keystone_admin.sh
echo "export OS_TENANT_NAME=admin" >> $HOME/keystone_admin.sh
echo "export OS_PASSWORD=ADMIN_PASS" >> $HOME/keystone_admin.sh
echo "export OS_AUTH_URL=http://90.147.77.39:35357/v2.0/" >> $HOME/keystone_admin.sh
keystone tenant-create --name services --description "Service Tenant"
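Note that `openssl rand -hex 10` produces 10 random bytes rendered as 20 lowercase hex characters, and the same token must be reused later via ~/ks_admin_token. A quick format check (illustrative; writes to a /tmp stand-in rather than the real token file):

```shell
# The admin token is 10 random bytes as 20 hex characters.
TOKEN=$(openssl rand -hex 10)
echo "$TOKEN" > /tmp/ks_admin_token.demo   # stand-in for ~/ks_admin_token
echo "${#TOKEN}"                           # token length: 20
grep -cE '^[0-9a-f]{20}$' /tmp/ks_admin_token.demo   # 1 matching line
```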
To check that the Keystone service is correctly installed, copy the keystone_admin.sh script you've just created to another machine, even your desktop. Install the Keystone command line on it (yum -y install python-keystoneclient); then source the script keystone_admin.sh and try the command:
$ keystone user-list
+----------------------------------+--------+---------+-------+
|                id                |  name  | enabled | email |
+----------------------------------+--------+---------+-------+
| 240e36dd84b24cf0b7ca1c94ef3a78e8 | admin  |   True  |       |
+----------------------------------+--------+---------+-------+
$ keystone token-get
+-----------+------------------------------------------------------------------
... ... ...
Create the file /usr/local/bin/keystone_token_flush.sh
:
#!/bin/sh
logger -t keystone-cleaner "Starting token cleanup"
. /root/keystone_admin.sh
/usr/bin/keystone-manage -v -d token_flush
logger -t keystone-cleaner "Ending token cleanup"
Create the file /etc/logrotate.d/keystone_token_flush
to rotate the log:
/var/log/keystone_token_flush.log {
    weekly
    rotate 4
    missingok
    compress
    minsize 100k
}
Schedule the script via cron and set the permissions:
cat << EOF > /etc/cron.d/keystone_token_flush
0 7 * * * root /usr/local/bin/keystone_token_flush.sh >> /var/log/keystone_token_flush.log 2>&1
EOF
chmod +x /usr/local/bin/keystone_token_flush.sh
chmod 0644 /etc/cron.d/keystone_token_flush
Define the TCP port range allowed for inter-node communication (this is needed for RabbitMQ's cluster mode):
\rm -f /etc/rabbitmq/rabbitmq.config
cat << EOF >> /etc/rabbitmq/rabbitmq.config
[{kernel, [
  {inet_dist_listen_min, 9100},
  {inet_dist_listen_max, 9110}
]}].
EOF
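The 9100-9110 range must match the iptables rule opened earlier (--dport 9100:9110). A grep sketch to confirm the written file pins both bounds; it uses a temp copy so it doesn't touch the real /etc/rabbitmq/rabbitmq.config:

```shell
# Temp copy of the Erlang-term config written above.
cat << 'EOF' > /tmp/rabbitmq.config.demo
[{kernel, [
  {inet_dist_listen_min, 9100},
  {inet_dist_listen_max, 9110}
]}].
EOF
# Both port bounds should be present and match the firewall rule.
grep -o 'inet_dist_listen_min, 9100' /tmp/rabbitmq.config.demo
grep -o 'inet_dist_listen_max, 9110' /tmp/rabbitmq.config.demo
```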
Start and enable RabbitMQ:
service rabbitmq-server start
chkconfig rabbitmq-server on
#rabbitmqctl set_policy HA '^(?!amq\.).*' '{"ha-mode": "all"}'
You should see an output like this in the file /var/log/rabbitmq/startup_log
:
              RabbitMQ 3.1.5. Copyright (C) 2007-2013 GoPivotal, Inc.
  ##  ##      Licensed under the MPL.  See http://www.rabbitmq.com/
  ##  ##
  ##########  Logs: /var/log/rabbitmq/rabbit@openstack1.log
  ######  ##        /var/log/rabbitmq/rabbit@openstack1-sasl.log
  ##########
              Starting broker... completed with 0 plugins.
Prepare the storage for Glance images:
mkdir /OpenStack/glance_images
chown glance.glance /OpenStack/glance_images
cd /var/lib/glance/
ln -fs /OpenStack/glance_images images
chown -R glance:glance /var/lib/glance
Source the script keystone_admin.sh
that you created above:
source keystone_admin.sh
Then create the Glance user, service and endpoint in Keystone's database:
keystone user-create --name glance --pass GLANCE_PASS
keystone user-role-add --user glance --role admin --tenant services
keystone service-create --name glance --type image --description "Glance Image Service"
keystone endpoint-create --service glance \
  --publicurl "http://90.147.77.39:9292" \
  --adminurl "http://192.168.60.111:9292" \
  --internalurl "http://192.168.60.111:9292"
Login into the controller node, modify the relevant configuration files:
glance-api.conf
openstack-config --set /etc/glance/glance-api.conf \
  DEFAULT sql_connection "mysql://glance_pp:<GLANCE_DB_PWD>@192.168.60.10/glance_pp"
openstack-config --set /etc/glance/glance-api.conf DEFAULT sql_idle_timeout 30
openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor "keystone+cachemanagement"
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_host 192.168.60.111
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_uri http://192.168.60.111:35357/v2.0
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_tenant_name services
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_user glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_password GLANCE_PASS
openstack-config --set /etc/glance/glance-api.conf DEFAULT bind_host 0.0.0.0
openstack-config --set /etc/glance/glance-api.conf DEFAULT registry_host 192.168.60.111
# TO CHANGE IN FUTURE (IceHouse)
#openstack-config --set /etc/glance/glance-api.conf DEFAULT notifier_strategy rabbit
#openstack-config --set /etc/glance/glance-api.conf DEFAULT rabbit_host 192.168.60.111
openstack-config --set /etc/glance/glance-api.conf DEFAULT notifier_strategy noop
openstack-config --set /etc/glance/glance-api.conf DEFAULT workers 4
glance-registry.conf
openstack-config --set /etc/glance/glance-registry.conf \
  DEFAULT sql_connection "mysql://glance_pp:<GLANCE_DB_PWD>@192.168.60.10/glance_pp"
openstack-config --set /etc/glance/glance-registry.conf DEFAULT sql_idle_timeout 30
openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_host 192.168.60.111
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/glance/glance-registry.conf \
  keystone_authtoken auth_uri http://192.168.60.111:35357/v2.0
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_tenant_name services
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_user glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_password GLANCE_PASS
openstack-config --set /etc/glance/glance-registry.conf DEFAULT bind_host 0.0.0.0
To prevent unprivileged users from registering public images, add the following policy in /etc/glance/policy.json
:
"publicize_image": "role:admin"
While still logged into the controller node, prepare the paths:
chown -R glance /var/log/glance
chown -R glance /var/run/glance
… and initialize Glance's database:
su glance -s /bin/sh -c "glance-manage db_sync"
If you get an error like this:
2013-12-20 09:38:23.855 2002 TRACE glance     (self.version, startver))
2013-12-20 09:38:23.855 2002 TRACE glance InvalidVersionError: 5 is not 6
2013-12-20 09:38:23.855 2002 TRACE glance
just execute the same command once again.
Start and enable the Glance services:
service openstack-glance-registry start
service openstack-glance-api start
chkconfig openstack-glance-registry on
chkconfig openstack-glance-api on
To check that Glance is correctly installed, login into any machine where you've installed the Glance command line and source the keystone_admin.sh script that you've copied from the controller node; then try these commands:
# wget http://cdn.download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-disk.img
[...]
Saving to: “cirros-0.3.1-x86_64-disk.img.1”
[...]
2013-12-06 12:25:03 (3.41 MB/s) - “cirros-0.3.1-x86_64-disk.img.1” saved [13147648/13147648]
# glance image-create --name=cirros \
  --disk-format=qcow2 \
  --container-format=bare \
  --is-public=True \
  < cirros-0.3.1-x86_64-disk.img
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | d972013792949d0d3ba628fbe8685bce     |
| container_format | bare                                 |
| created_at       | 2014-04-21T19:06:17                  |
| deleted          | False                                |
| deleted_at       | None                                 |
| disk_format      | qcow2                                |
| id               | 75ba84ce-9e13-4c1c-872c-9e7ad3a361c8 |
| is_public        | True                                 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | cirros                               |
| owner            | 87826b28976543d1858455d5142bf8a5     |
| protected        | False                                |
| size             | 13147648                             |
| status           | active                               |
| updated_at       | 2014-04-21T19:06:19                  |
+------------------+--------------------------------------+
# glance index
ID                                   Name                           Disk Format          Container Format     Size
------------------------------------ ------------------------------ -------------------- -------------------- --------------
75ba84ce-9e13-4c1c-872c-9e7ad3a361c8 cirros                         qcow2                bare                 13147648
Login into the controller node, and source the script keystone_admin.sh that you created above:
source keystone_admin.sh
Add NOVA service, user and endpoint to Keystone's database:
keystone user-create --name nova --pass NOVA_PASS
keystone user-role-add --user nova --role admin --tenant services
keystone service-create --name nova --type compute --description "OpenStack Compute Service"
SERVICE_NOVA_ID=`keystone service-list|grep nova|awk '{print $2}'`
keystone endpoint-create --service-id $SERVICE_NOVA_ID \
  --publicurl http://90.147.77.39:8774/v2/%\(tenant_id\)s \
  --adminurl http://192.168.60.111:8774/v2/%\(tenant_id\)s \
  --internalurl http://192.168.60.111:8774/v2/%\(tenant_id\)s
keystone service-create --name nova_ec2 --type ec2 --description "EC2 Service"
SERVICE_EC2_ID=`keystone service-list|grep nova_ec2|awk '{print $2}'`
keystone endpoint-create --service-id $SERVICE_EC2_ID \
  --publicurl http://90.147.77.39:8773/services/Cloud \
  --adminurl http://192.168.60.111:8773/services/Admin \
  --internalurl http://192.168.60.111:8773/services/Cloud
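The SERVICE_NOVA_ID extraction relies on the layout of keystone's ASCII table: field 1 of a data row is the leading "|", so the id is field 2. A sketch against a sample table (ids and rows are illustrative, not real output):

```shell
# Sample `keystone service-list` output (values are illustrative).
cat > /tmp/service-list.demo << 'EOF'
+----------------------------------+----------+----------+---------------------------+
|                id                |   name   |   type   |        description        |
+----------------------------------+----------+----------+---------------------------+
| 0de5a59d93e74f32b41942a4fe3fe3b6 | keystone | identity | Keystone Identity Service |
| 47fd1715102a4d55ab308d5dd0fb39bc |   nova   | compute  | OpenStack Compute Service |
+----------------------------------+----------+----------+---------------------------+
EOF
# grep picks the nova row; awk's $1 is '|', so $2 is the id column.
grep nova /tmp/service-list.demo | awk '{print $2}'
```

Note that the pipeline works in the guide only because each service is created and looked up before the next one exists; once nova_ec2 is registered, a plain `grep nova` would match both rows.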
Login into the controller node and modify the relevant configuration files:
nova.conf:
openstack-config --set /etc/nova/nova.conf \
  database max_pool_size 30
openstack-config --set /etc/nova/nova.conf DEFAULT rpc_conn_pool_size 50
openstack-config --set /etc/nova/nova.conf \
  database connection "mysql://nova_pp:<NOVA_DB_PWD>@192.168.60.10/nova_pp"
openstack-config --set /etc/nova/nova.conf database idle_timeout 30
openstack-config --set /etc/nova/nova.conf \
  DEFAULT rpc_backend nova.openstack.common.rpc.impl_kombu
openstack-config --set /etc/nova/nova.conf \
  DEFAULT rabbit_hosts 192.168.60.111:5672
openstack-config --set /etc/nova/nova.conf DEFAULT glance_host 192.168.60.111
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.60.111
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 90.147.77.39
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address 90.147.77.39
openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_host 192.168.60.111
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name services
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password NOVA_PASS
openstack-config --set /etc/nova/nova.conf DEFAULT api_paste_config /etc/nova/api-paste.ini
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_metadata_proxy_shared_secret METADATA_PASS
openstack-config --set /etc/nova/nova.conf DEFAULT service_neutron_metadata_proxy true
openstack-config --set /etc/nova/nova.conf DEFAULT memcached_servers 192.168.60.111:11211
openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis ec2,osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf DEFAULT ec2_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf DEFAULT ec2_listen_port 8773
openstack-config --set /etc/nova/nova.conf DEFAULT cpu_allocation_ratio 4.0
openstack-config --set /etc/nova/nova.conf DEFAULT libvirt_vif_driver nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
api-paste.ini:
openstack-config --set /etc/nova/api-paste.ini \
  filter:authtoken paste.filter_factory keystoneclient.middleware.auth_token:filter_factory
openstack-config --set /etc/nova/api-paste.ini filter:authtoken auth_host 192.168.60.111
openstack-config --set /etc/nova/api-paste.ini filter:authtoken auth_port 35357
openstack-config --set /etc/nova/api-paste.ini filter:authtoken auth_protocol http
openstack-config --set /etc/nova/api-paste.ini filter:authtoken auth_uri http://192.168.60.111:5000/v2.0
openstack-config --set /etc/nova/api-paste.ini filter:authtoken admin_tenant_name services
openstack-config --set /etc/nova/api-paste.ini filter:authtoken admin_user nova
openstack-config --set /etc/nova/api-paste.ini filter:authtoken admin_password NOVA_PASS
While still logged into the controller node, initialize the database:
su nova -s /bin/sh -c "nova-manage db sync"
Modify the file /etc/nova/policy.json
so that users can manage only their VMs:
# diff -c /etc/nova/policy.json /etc/nova/policy.json.orig
*** /etc/nova/policy.json	2014-06-03 12:17:38.313909830 +0200
--- /etc/nova/policy.json.orig	2014-06-03 12:16:27.573839167 +0200
***************
*** 1,8 ****
  {
      "context_is_admin": "role:admin",
      "admin_or_owner": "is_admin:True or project_id:%(project_id)s",
!     "admin_or_user": "is_admin:True or user_id:%(user_id)s",
!     "default": "rule:admin_or_user",
      "cells_scheduler_filter:TargetCellFilter": "is_admin:True",
--- 1,7 ----
  {
      "context_is_admin": "role:admin",
      "admin_or_owner": "is_admin:True or project_id:%(project_id)s",
!     "default": "rule:admin_or_owner",
      "cells_scheduler_filter:TargetCellFilter": "is_admin:True",
***************
*** 10,16 ****
      "compute:create:attach_network": "",
      "compute:create:attach_volume": "",
      "compute:create:forced_host": "is_admin:True",
-     "compute:get": "rule:admin_or_owner",
      "compute:get_all": "",
      "compute:get_all_tenants": "",
      "compute:unlock_override": "rule:admin_api",
--- 9,14 ----
Create the file ~/services-nova.sh
with the following content:
#!/bin/sh
service openstack-nova-api $1
service openstack-nova-cert $1
service openstack-nova-consoleauth $1
service openstack-nova-scheduler $1
service openstack-nova-conductor $1
service openstack-nova-novncproxy $1
Start and enable the Nova services:
chmod +x ~/services-nova.sh
~/services-nova.sh start
chkconfig openstack-nova-api on
chkconfig openstack-nova-cert on
chkconfig openstack-nova-consoleauth on
chkconfig openstack-nova-scheduler on
chkconfig openstack-nova-conductor on
chkconfig openstack-nova-novncproxy on
From your desktop, or wherever you've copied keystone_admin.sh and installed the Nova command line, source the script and then try to execute:
bash-4.1$ nova service-list
+------------------+---------------------------------+----------+---------+-------+----------------------------+-----------------+
| Binary           | Host                            | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+------------------+---------------------------------+----------+---------+-------+----------------------------+-----------------+
| nova-conductor   | cloudpp-areapd.cloud.pd.infn.it | internal | enabled | up    | 2014-04-23T07:21:28.000000 | None            |
| nova-consoleauth | cloudpp-areapd.cloud.pd.infn.it | internal | enabled | up    | 2014-04-23T07:21:27.000000 | None            |
| nova-scheduler   | cloudpp-areapd.cloud.pd.infn.it | internal | enabled | up    | 2014-04-23T07:21:27.000000 | None            |
| nova-cert        | cloudpp-areapd.cloud.pd.infn.it | internal | enabled | up    | 2014-04-23T07:21:28.000000 | None            |
+------------------+---------------------------------+----------+---------+-------+----------------------------+-----------------+
bash-4.1$ nova availability-zone-list
+------------------------------------+----------------------------------------+
| Name                               | Status                                 |
+------------------------------------+----------------------------------------+
| internal                           | available                              |
| |- cloudpp-areapd.cloud.pd.infn.it |                                        |
| | |- nova-conductor                | enabled :-) 2014-04-23T07:28:18.000000 |
| | |- nova-consoleauth              | enabled :-) 2014-04-23T07:28:18.000000 |
| | |- nova-scheduler                | enabled :-) 2014-04-23T07:28:18.000000 |
| | |- nova-cert                     | enabled :-) 2014-04-23T07:28:18.000000 |
+------------------------------------+----------------------------------------+
bash-4.1$ nova endpoints
+-------------+----------------------------------+
| glance      | Value                            |
+-------------+----------------------------------+
| adminURL    | http://90.147.77.39:9292         |
| region      | regionOne                        |
| publicURL   | http://90.147.77.39:9292         |
| internalURL | http://90.147.77.39:9292         |
| id          | 1a031e89a8f44016a8d69156ef6e8f1d |
+-------------+----------------------------------+
+-------------+--------------------------------------------------------------+
| nova        | Value                                                        |
+-------------+--------------------------------------------------------------+
| adminURL    | http://90.147.77.39:8774/v2/87826b28976543d1858455d5142bf8a5 |
| region      | regionOne                                                    |
| id          | 47fd1715102a4d55ab308d5dd0fb39bc                             |
| serviceName | nova                                                         |
| internalURL | http://90.147.77.39:8774/v2/87826b28976543d1858455d5142bf8a5 |
| publicURL   | http://90.147.77.39:8774/v2/87826b28976543d1858455d5142bf8a5 |
+-------------+--------------------------------------------------------------+
+-------------+----------------------------------+
| keystone    | Value                            |
+-------------+----------------------------------+
| adminURL    | http://90.147.77.39:35357/v2.0   |
| region      | regionOne                        |
| publicURL   | http://90.147.77.39:5000/v2.0    |
| internalURL | http://90.147.77.39:5000/v2.0    |
| id          | 0de5a59d93e74f32b41942a4fe3fe3b6 |
+-------------+----------------------------------+
Login into the controller node, or wherever you've installed the Keystone's command line, and source the script keystone_admin.sh that you created above:
source ~/keystone_admin.sh
Then create the endpoint, service and user information for Neutron in Keystone's database:
keystone user-create --name neutron --pass NEUTRON_PASS
keystone user-role-add --user neutron --role admin --tenant services
keystone service-create --name neutron --type network --description "OpenStack Networking Service"
SERVICE_NEUTRON_ID=`keystone service-list|grep neutron|awk '{print $2}'`
keystone endpoint-create --service-id $SERVICE_NEUTRON_ID \
  --publicurl "http://90.147.77.39:9696" \
  --adminurl "http://192.168.60.111:9696" \
  --internalurl "http://192.168.60.111:9696"
Configure system's networking properties:
sed -i 's+^net\.ipv4.ip_forward+#net\.ipv4.ip_forward+' /etc/sysctl.conf
sed -i 's+^net\.ipv4\.conf\.default\.rp_filter+#net\.ipv4\.conf\.default\.rp_filter+' /etc/sysctl.conf
sed -i 's+^net\.ipv4\.conf\.all\.rp_filter+#net\.ipv4\.conf\.all\.rp_filter+' /etc/sysctl.conf
cat << EOF >> /etc/sysctl.conf
net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
EOF
sysctl -p
service network restart
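The sed lines comment out any existing settings so that the appended values are the only active ones. The comment-then-append pattern can be demonstrated on a throwaway file (the /tmp file and its initial contents are illustrative):

```shell
# Throwaway sysctl file with pre-existing settings.
cat > /tmp/sysctl.demo << 'EOF'
net.ipv4.ip_forward = 0
net.ipv4.conf.all.rp_filter = 1
EOF
# Comment out the old settings, then append the desired ones.
sed -i 's+^net\.ipv4.ip_forward+#net\.ipv4.ip_forward+' /tmp/sysctl.demo
sed -i 's+^net\.ipv4\.conf\.all\.rp_filter+#net\.ipv4\.conf\.all\.rp_filter+' /tmp/sysctl.demo
cat << 'EOF' >> /tmp/sysctl.demo
net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
EOF
# Exactly one uncommented line per key remains.
grep -c '^net\.ipv4\.ip_forward' /tmp/sysctl.demo
grep -c '^net\.ipv4\.conf\.all\.rp_filter' /tmp/sysctl.demo
```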
Login into the controller node and modify the configuration files.
neutron.conf:
# Let's choose the kind of authentication
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_host 192.168.60.111
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_tenant_name services
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_user neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_password NEUTRON_PASS
openstack-config --set /etc/neutron/neutron.conf \
  keystone_authtoken auth_url http://192.168.60.111:35357/v2.0
openstack-config --set /etc/neutron/neutron.conf \
  keystone_authtoken auth_uri http://192.168.60.111:35357/v2.0
openstack-config --set /etc/neutron/neutron.conf agent root_helper "sudo neutron-rootwrap /etc/neutron/rootwrap.conf"
openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend neutron.openstack.common.rpc.impl_kombu
openstack-config --set /etc/neutron/neutron.conf DEFAULT rabbit_hosts 192.168.60.111:5672
openstack-config --set /etc/neutron/neutron.conf database connection "mysql://neutron_pp:<NEUTRON_DB_PWD>@192.168.60.10/neutron_pp"
openstack-config --set /etc/neutron/neutron.conf database max_pool_size 30
openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_conn_pool_size 50
openstack-config --set /etc/neutron/neutron.conf \
  DEFAULT core_plugin neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
#openstack-config --set /etc/neutron/neutron.conf DEFAULT verbose False
api-paste.ini:
openstack-config --set /etc/neutron/api-paste.ini filter:authtoken \
  paste.filter_factory keystoneclient.middleware.auth_token:filter_factory
openstack-config --set /etc/neutron/api-paste.ini filter:authtoken auth_host 192.168.60.111
openstack-config --set /etc/neutron/api-paste.ini filter:authtoken auth_uri http://192.168.60.111:5000
openstack-config --set /etc/neutron/api-paste.ini filter:authtoken admin_tenant_name services
openstack-config --set /etc/neutron/api-paste.ini filter:authtoken admin_user neutron
openstack-config --set /etc/neutron/api-paste.ini filter:authtoken admin_password NEUTRON_PASS
l3_agent.ini:
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT \
  interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT use_namespaces True
dhcp_agent.ini
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT \
  interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT use_namespaces True
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dnsmasq_config_file /etc/neutron/dnsmasq-neutron.conf
dnsmasq-neutron.conf
cat << EOF >> /etc/neutron/dnsmasq-neutron.conf
dhcp-option-force=26,1400
EOF
This sets the MTU to 1400. It is relevant only if, for some reason, you cannot set the MTU of the data network to 9000; in that case you will have other performance problems anyway (e.g. for Cinder).
ovs_neutron_plugin.ini:
openstack-config --set /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini ovs tenant_network_type gre
openstack-config --set /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini ovs enable_tunneling True
openstack-config --set /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini ovs tunnel_id_ranges 1:1000
openstack-config --set /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini \
  securitygroup firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini ovs local_ip 192.168.61.111
openstack-config --set /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini ovs integration_bridge br-int
openstack-config --set /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini ovs tunnel_bridge br-tun
ln -s /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini /etc/neutron/plugin.ini
metadata_agent.ini
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_url http://192.168.60.111:5000/v2.0
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_region regionOne
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT admin_tenant_name services
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT admin_user neutron
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT admin_password NEUTRON_PASS
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_ip 192.168.60.111
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret METADATA_PASS
nova.conf:
openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.neutronv2.api.API
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_url http://192.168.60.111:9696
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_auth_strategy keystone
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_tenant_name services
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_username neutron
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_password NEUTRON_PASS
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_auth_url http://192.168.60.111:35357/v2.0
openstack-config --set /etc/nova/nova.conf DEFAULT \
  linuxnet_interface_driver nova.network.linux_net.LinuxOVSInterfaceDriver
openstack-config --set /etc/nova/nova.conf DEFAULT \
  firewall_driver nova.virt.firewall.NoopFirewallDriver
#openstack-config --set /etc/nova/nova.conf DEFAULT \
#  firewall_driver nova.virt.firewall.OVSHybridIptablesFirewallDriver
openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api neutron
Turn openvswitch on:
service openvswitch start
chkconfig openvswitch on
Create the bridges:
ovs-vsctl add-br br-int
ovs-vsctl add-br br-ex
ovs-vsctl add-br br-ex-2
Configure the em3
interface (the one with public IP) without an IP address and in promiscuous mode. Additionally, you must set the newly created br-ex
interface to have the IP address that formerly belonged to em3
:
cd /etc/sysconfig/network-scripts
mv ifcfg-em3 em3.orig
cat << EOF >> ifcfg-em3
DEVICE=em3
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-ex
ONBOOT=yes
BOOTPROTO=none
PROMISC=yes
EOF
cat << EOF > ifcfg-br-ex
DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
DNS1=192.84.143.16
GATEWAY=90.147.77.254
IPADDR=90.147.77.39
NETMASK=255.255.255.0
ONBOOT=yes
EOF
Configure the em4
interface without an IP address and in promiscuous mode. Additionally, you must set the newly created br-ex-2
interface:
cat << EOF > ifcfg-em4
DEVICE=em4
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-ex-2
ONBOOT=yes
BOOTPROTO=none
PROMISC=yes
EOF
cat << EOF > ifcfg-br-ex-2
DEVICE=br-ex-2
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
DNS1=192.84.143.16
IPADDR=172.25.25.2
NETMASK=255.255.255.0
ONBOOT=yes
EOF
Reboot:
shutdown -r now
Start the Neutron services and enable them at boot time:
service neutron-server start
service neutron-dhcp-agent start
service neutron-l3-agent start
service neutron-metadata-agent start
service neutron-openvswitch-agent start
chkconfig neutron-server on
chkconfig neutron-dhcp-agent on
chkconfig neutron-l3-agent on
chkconfig neutron-metadata-agent on
chkconfig neutron-openvswitch-agent on
Check if the Neutron agents are working:
# neutron agent-list
+--------------------------------------+--------------------+---------------------------------+-------+----------------+
| id                                   | agent_type         | host                            | alive | admin_state_up |
+--------------------------------------+--------------------+---------------------------------+-------+----------------+
| 2a20e0ce-71c1-493c-b66a-673cbe19f779 | Open vSwitch agent | cloudpp-areapd.cloud.pd.infn.it | :-)   | True           |
| 344dce81-936f-4ed3-8d5c-545c75dab705 | DHCP agent         | cloudpp-areapd.cloud.pd.infn.it | :-)   | True           |
| 8d8c04b1-300b-4305-b8fc-79da1a280916 | L3 agent           | cloudpp-areapd.cloud.pd.infn.it | :-)   | True           |
+--------------------------------------+--------------------+---------------------------------+-------+----------------+
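In this table, `:-)` in the alive column means the agent is healthy (a dead agent shows `xxx`). A hedged sketch for spotting dead agents with awk; the sample table below is a made-up stand-in, and in real use you would pipe `neutron agent-list` output into the same filter:

```shell
# Print the agent_type of any agent whose alive column is not ":-)".
# The sample output here stands in for `neutron agent-list`.
sample='| 2a20e0ce | Open vSwitch agent | host1 | :-) | True |
| 344dce81 | DHCP agent | host1 | xxx | True |'

dead_agents=$(printf '%s\n' "$sample" | \
    awk -F'|' 'NF > 5 && $5 !~ /:-\)/ {gsub(/^ +| +$/, "", $3); print $3}')
echo "dead agents: $dead_agents"
```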
Create the networks (note that they don't have to be shared):
neutron net-create Ext --router:external=True
neutron net-create Ext-lan --router:external=True
Configure 2 L3 agents for the two networks:
cat << EOF >> /etc/neutron/l3_agent.ini
host=first
gateway_external_network_id = <id of Ext>   ; use: neutron net-show Ext
handle_internal_only_routers = True
external_network_bridge = br-ex
metadata_port = 9697
EOF
cat << EOF > /etc/neutron/l3_agent-2.ini
[DEFAULT]
debug = False
host = second
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
use_namespaces = True
gateway_external_network_id = 9b10c3be-961d-4661-ac0c-00c6470d65d9
handle_internal_only_routers = False
external_network_bridge = br-ex-2
metadata_port = 9698
send_arp_for_ha = 3
periodic_interval = 40
periodic_fuzzy_delay = 5
enable_metadata_proxy = True
EOF
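The first config needs the id of the Ext network filled in by hand from `neutron net-show Ext`. A hedged helper that extracts the `id` field with awk, assuming the usual Field/Value table layout of the neutron client; the sample table below stands in for the live command:

```shell
# Pull the id value out of `neutron net-show Ext`-style output.
# A captured sample table replaces the live command here.
net_show_output='+-----------+--------------------------------------+
| Field     | Value                                |
+-----------+--------------------------------------+
| id        | 9b10c3be-961d-4661-ac0c-00c6470d65d9 |
| name      | Ext                                  |
+-----------+--------------------------------------+'

ext_id=$(printf '%s\n' "$net_show_output" | \
    awk -F'|' '$2 ~ /^ *id *$/ {gsub(/ /, "", $3); print $3}')
echo "$ext_id"
```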
Make sure that /etc/neutron/l3_agent-2.ini
and /etc/neutron/l3_agent.ini
have the same ownership and mode.
Modify /etc/hosts, to define the first
and second
names. The IP address is the one of the management interface:
cat << EOF >> /etc/hosts
192.168.60.111 first
192.168.60.111 second
EOF
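Appending blindly duplicates the entries if the step is re-run; a guarded variant, sketched here on a scratch file rather than the real /etc/hosts:

```shell
# Add the first/second host aliases only if not already present.
# A scratch file stands in for /etc/hosts.
hosts_file=$(mktemp)
for entry in "192.168.60.111 first" "192.168.60.111 second"; do
    grep -qxF "$entry" "$hosts_file" || echo "$entry" >> "$hosts_file"
done
```

Running the loop a second time adds nothing, since the exact-match grep guard skips entries that already exist.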
Create the script /etc/init.d/neutron-l3-agent2
.
# diff /etc/init.d/neutron-l3-agent /etc/init.d/neutron-l3-agent2
18c18
<     "/etc/$proj/l3_agent.ini" \
---
>     "/etc/$proj/l3_agent-2.ini" \
21c21
< pidfile="/var/run/$proj/$prog.pid"
---
> pidfile="/var/run/$proj/$prog-2.pid"
25c25
< lockfile=/var/lock/subsys/$prog
---
> lockfile=/var/lock/subsys/$prog-2
33c33
<     daemon --user neutron --pidfile $pidfile "$exec --log-file /var/log/$proj/$plugin.log ${configs[@]/#/--config-file } &>/dev/null & echo \$! > $pidfile"
---
>     daemon --user neutron --pidfile $pidfile "$exec --log-file /var/log/$proj/$plugin.log-2 ${configs[@]/#/--config-file } &>/dev/null & echo \$! > $pidfile"
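The diff boils down to four textual substitutions (config file, pid file, lock file, log file), so the second script can be derived mechanically with sed. A hedged sketch, demonstrated on a minimal stand-in rather than the real init script:

```shell
# Derive the neutron-l3-agent2 script from neutron-l3-agent by applying
# the four substitutions shown in the diff above.
# A minimal stand-in replaces the real /etc/init.d/neutron-l3-agent.
src=$(mktemp)
cat > "$src" << 'STANDIN'
    "/etc/$proj/l3_agent.ini" \
pidfile="/var/run/$proj/$prog.pid"
lockfile=/var/lock/subsys/$prog
--log-file /var/log/$proj/$plugin.log
STANDIN

sed -e 's|l3_agent\.ini|l3_agent-2.ini|' \
    -e 's|\$prog\.pid|$prog-2.pid|' \
    -e 's|subsys/\$prog$|subsys/$prog-2|' \
    -e 's|\$plugin\.log|$plugin.log-2|' \
    "$src" > "${src}2"
```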
Enable the start at boot of the second L3 agent and restart both L3 agents:
chkconfig neutron-l3-agent2 on
service neutron-l3-agent restart
service neutron-l3-agent2 restart
Verify that 2 L3 agents are running:
# neutron agent-list
+--------------------------------------+--------------------+---------------------------------+-------+----------------+
| id                                   | agent_type         | host                            | alive | admin_state_up |
+--------------------------------------+--------------------+---------------------------------+-------+----------------+
| 24dfcaf6-21f5-4ed5-8117-e00c3472a23e | L3 agent           | first                           | :-)   | True           |
| 2a20e0ce-71c1-493c-b66a-673cbe19f779 | Open vSwitch agent | cloudpp-areapd.cloud.pd.infn.it | :-)   | True           |
| 344dce81-936f-4ed3-8d5c-545c75dab705 | DHCP agent         | cloudpp-areapd.cloud.pd.infn.it | :-)   | True           |
| 8371acad-62bb-48b7-a4db-92be8aa0ac97 | L3 agent           | second                          | :-)   | True           |
+--------------------------------------+--------------------+---------------------------------+-------+----------------+
Create the subnets and the routers:
neutron subnet-create Ext 90.147.77.0/24 --enable-dhcp=False --allocation-pool start=90.147.77.101,end=90.147.77.120 --gateway=90.147.77.254 --name sub-Ext
neutron subnet-create Ext-lan 172.25.25.0/24 --enable-dhcp=False --allocation-pool start=172.25.25.3,end=172.25.25.3 --gateway=172.25.25.254 --name sub-Ext-lan
neutron router-create router-wan
neutron router-create router-lan
neutron router-gateway-set router-wan Ext
neutron router-gateway-set router-lan Ext-lan
neutron router-gateway-set --disable-snat router-lan Ext-lan
Create a "workaround" network to be used for uCernVM instances (because of the NOZEROCONF
issue):
neutron net-create workaround-cernvm
neutron subnet-create workaround-cernvm 169.254.0.0/16 --enable-dhcp=True \
  --name sub-workaround-cernvm --allocation-pool start=169.254.169.1,end=169.254.169.1 \
  --gateway=169.254.169.254
neutron router-interface-add router-lan sub-workaround-cernvm
Create the networks for the tenants:
neutron net-create --tenant-id ae4b3654ea08441fabe232390ae908b6 alice-lan
neutron subnet-create alice-lan 10.62.13.0/24 --enable-dhcp=True --name sub-alice-lan --tenant-id ae4b3654ea08441fabe232390ae908b6
neutron router-interface-add router-lan sub-alice-lan
neutron net-create --tenant-id ae4b3654ea08441fabe232390ae908b6 alice-wan
neutron subnet-create alice-wan 10.61.13.0/24 --enable-dhcp=True --name sub-alice-wan --tenant-id ae4b3654ea08441fabe232390ae908b6
neutron router-interface-add router-wan sub-alice-wan
Add the DNS servers via the dashboard.
Login into the controller node and prepare the glusterfs part:
yum install -y glusterfs-fuse
echo "192.168.60.100:/volume-cinder-pp" > /etc/cinder/shares
chown root:cinder /etc/cinder/shares
chmod 0640 /etc/cinder/shares
Login into the controller node, or wherever you've installed the Keystone command line, and source the keystonerc_admin
script that you created above:
source ~/keystonerc_admin
Then, create the endpoint, service and user information in the Keystone's database for Cinder:
keystone user-create --name cinder --pass CINDER_PASS
keystone user-role-add --user cinder --role admin --tenant services
keystone service-create --name cinder --type volume --description "Cinder Volume Service"
keystone endpoint-create --service cinder \
  --publicurl "http://90.147.77.39:8776/v1/\$(tenant_id)s" \
  --adminurl "http://192.168.60.111:8776/v1/\$(tenant_id)s" \
  --internalurl "http://192.168.60.111:8776/v1/\$(tenant_id)s"
keystone service-create --name=cinderv2 --type=volumev2 --description="Cinder Volume Service V2"
keystone endpoint-create --service cinderv2 \
  --publicurl=http://90.147.77.39:8776/v2/%\(tenant_id\)s \
  --internalurl=http://192.168.60.111:8776/v2/%\(tenant_id\)s \
  --adminurl=http://192.168.60.111:8776/v2/%\(tenant_id\)s
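The `%(tenant_id)s` (and, for v1, `$(tenant_id)s`) parts of the URLs are templates that Keystone fills in with the caller's tenant id at request time, not literal path components. A hedged preview of that expansion using sed, with a placeholder tenant id:

```shell
# Preview how Keystone expands the tenant_id placeholder in an endpoint.
# The tenant id below is a placeholder, not a real one.
tenant_id=ae4b3654ea08441fabe232390ae908b6
url='http://192.168.60.111:8776/v2/%(tenant_id)s'
expanded=$(printf '%s' "$url" | sed "s/%(tenant_id)s/$tenant_id/")
echo "$expanded"
```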
Login into the controller node and modify the configuration files.
cinder.conf:
openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_host 192.168.60.111
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_tenant_name services
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_user cinder
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_password CINDER_PASS
openstack-config --set /etc/cinder/cinder.conf DEFAULT rpc_backend cinder.openstack.common.rpc.impl_kombu
openstack-config --set /etc/cinder/cinder.conf DEFAULT rabbit_hosts 192.168.60.111:5672
openstack-config --set /etc/cinder/cinder.conf DEFAULT sql_idle_timeout 30
openstack-config --set /etc/cinder/cinder.conf DEFAULT rootwrap_config /etc/cinder/rootwrap.conf
openstack-config --set /etc/cinder/cinder.conf DEFAULT sql_connection "mysql://cinder_pp:<CINDER_DB_PWD>@192.168.60.10/cinder_pp"
openstack-config --set /etc/cinder/cinder.conf DEFAULT glusterfs_shares_config /etc/cinder/shares
openstack-config --set /etc/cinder/cinder.conf DEFAULT glusterfs_sparsed_volumes true
openstack-config --set /etc/cinder/cinder.conf DEFAULT volume_driver cinder.volume.drivers.glusterfs.GlusterfsDriver
openstack-config --set /etc/cinder/cinder.conf DEFAULT glance_host 192.168.60.111
#openstack-config --set /etc/cinder/cinder.conf DEFAULT osapi_volume_listen 192.168.60.111
openstack-config --set /etc/cinder/cinder.conf DEFAULT api_paste_config /etc/cinder/api-paste.ini
openstack-config --set /etc/cinder/cinder.conf DEFAULT control_exchange cinder
api-paste.ini:
openstack-config --set /etc/cinder/api-paste.ini filter:authtoken paste.filter_factory keystoneclient.middleware.auth_token:filter_factory
openstack-config --set /etc/cinder/api-paste.ini filter:authtoken auth_host 192.168.60.111
openstack-config --set /etc/cinder/api-paste.ini filter:authtoken auth_port 35357
openstack-config --set /etc/cinder/api-paste.ini filter:authtoken auth_protocol http
openstack-config --set /etc/cinder/api-paste.ini filter:authtoken auth_uri http://192.168.60.111:5000
openstack-config --set /etc/cinder/api-paste.ini filter:authtoken admin_tenant_name services
openstack-config --set /etc/cinder/api-paste.ini filter:authtoken admin_user cinder
openstack-config --set /etc/cinder/api-paste.ini filter:authtoken admin_password CINDER_PASS
Add the following line to the beginning of the /etc/tgt/targets.conf file
, if it is not already present:
include /etc/cinder/volumes/*
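A guarded way to do that prepend so it stays idempotent on re-runs; this sketch works on a scratch copy, and for real use you would point it at /etc/tgt/targets.conf:

```shell
# Prepend the cinder include line to targets.conf only if absent.
# A scratch file stands in for /etc/tgt/targets.conf.
targets=$(mktemp)
echo 'default-driver iscsi' > "$targets"

line='include /etc/cinder/volumes/*'
if ! grep -qxF "$line" "$targets"; then
    printf '%s\n%s\n' "$line" "$(cat "$targets")" > "$targets"
fi
```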
Initialize the Cinder database:
su cinder -s /bin/sh -c "cinder-manage db sync"
And finally start the services:
service openstack-cinder-api start
chkconfig openstack-cinder-api on
service openstack-cinder-scheduler start
chkconfig openstack-cinder-scheduler on
service openstack-cinder-volume start
service tgtd start
chkconfig openstack-cinder-volume on
chkconfig tgtd on
Modify the file /etc/openstack-dashboard/local_settings
: look for the CACHES string, and substitute whatever is there with:
CACHES = {
    'default': {
        'BACKEND' : 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION' : '127.0.0.1:11211',
    }
}
Note that the TCP port 11211 must match that one contained in the file /etc/sysconfig/memcached
:
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS=""
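A hedged consistency check between the memcached PORT and the port in the dashboard's CACHES LOCATION; scratch copies stand in here for the real /etc/sysconfig/memcached and /etc/openstack-dashboard/local_settings:

```shell
# Compare memcached's PORT with the port in the dashboard's LOCATION.
# Scratch copies stand in for the real configuration files.
memcached_cfg=$(mktemp)
dashboard_cfg=$(mktemp)
echo 'PORT="11211"' > "$memcached_cfg"
echo "        'LOCATION' : '127.0.0.1:11211'," > "$dashboard_cfg"

memcached_port=$(sed -n 's/^PORT="\([0-9]*\)"/\1/p' "$memcached_cfg")
dashboard_port=$(sed -n "s/.*'LOCATION' : '[^:]*:\([0-9]*\)'.*/\1/p" "$dashboard_cfg")

if [ "$memcached_port" = "$dashboard_port" ]; then
    echo "ports match: $memcached_port"
else
    echo "port mismatch: $memcached_port vs $dashboard_port"
fi
```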
Now, look for the string OPENSTACK_HOST
; set it:
OPENSTACK_HOST = "192.168.60.111"
And finally set the ALLOWED_HOSTS
parameter:
ALLOWED_HOSTS = ['*']
Edit /etc/httpd/conf/httpd.conf
setting:
Listen 90.147.77.39:80
ServerName cloudpp-areapd.pd.infn.it:80
Edit /etc/httpd/conf.d/openstack-dashboard.conf
adding at the end:
RedirectMatch permanent ^/$ /dashboard/
Create /etc/httpd/conf.d/rootredirect.conf
with this content:
RedirectMatch ^/$ /dashboard/
Start the web server and memcached, and enable their start at boot time:
service httpd start
service memcached start
chkconfig httpd on
chkconfig memcached on
Install host cert-key in /etc/grid-security
. They should have these mode and ownership:
-rw-r--r-- 1 root root 1476 Apr 30 12:02 /etc/grid-security/hostcert.pem
-rw------- 1 root root  916 Apr 30 11:54 /etc/grid-security/hostkey.pem
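Those listings correspond to mode 644 on the certificate and 600 on the key, owned by root. A hedged sketch of setting and verifying the modes, run here on scratch files since the real cert and key come from your CA:

```shell
# Set the expected modes on the host certificate and key.
# Scratch files stand in for the real hostcert.pem / hostkey.pem.
certdir=$(mktemp -d)
touch "$certdir/hostcert.pem" "$certdir/hostkey.pem"

chmod 644 "$certdir/hostcert.pem"   # world-readable certificate
chmod 600 "$certdir/hostkey.pem"    # key readable by its owner only
# chown root:root "$certdir"/host*.pem   # needs root on the real files

stat -c '%a %n' "$certdir"/host*.pem
```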
Install mod_ssl:
yum -y install mod_ssl
Edit /etc/httpd/conf.d/ssl.conf
setting the paths of the host cert-key file and replacing 443 with 8443, i.e.:
Listen 8443
<VirtualHost _default_:8443>
SSLCertificateFile /etc/grid-security/hostcert.pem
SSLCertificateKeyFile /etc/grid-security/hostkey.pem
Add the following lines at the end of /etc/httpd/conf/httpd.conf
:
RewriteEngine On
RewriteCond %{HTTPS} !=on
RewriteRule ^/?(.*) https://%{SERVER_NAME}:8443/$1 [R,L]
Restart httpd:
service httpd restart