Installation and Configuration of OpenStack Compute Node
Author:
- Alvise Dorigo (INFN Padova)
Prerequisites
At least one node with:
- Updated SL6/CentOS6 (6.4 or 6.5)
- Make sure that yum autoupdate is disabled
[root@controller-01 ~]# grep ENA /etc/sysconfig/yum-autoupdate
# ENABLED
ENABLED="false"
- At least 20GB HD for operating system and OpenStack software and related log files
- Dedicated storage mounted on /var/lib/nova/instances, where the instance images are stored (particularly important for live migration). On the Gluster server side, the admin must have set:
gluster volume set <exported_volume_name> owner-uid 162
gluster volume set <exported_volume_name> owner-gid 162
where <exported_volume_name> is the volume exported from the Gluster server to the compute node (that will be mounted on /var/lib/nova/instances), and 162 is the usual UID and GID of the nova user.
Please consider this bug, which seems to prevent live migration from working correctly with GlusterFS. Falling back to the NFS protocol seems to be the only possible workaround so far.
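For illustration only, a minimal sketch of the mount setup, assuming a hypothetical Gluster server gluster-01.cloud.pd.infn.it exporting a volume named nova-vol (adapt the names to your installation):
# Hypothetical server/volume names; adjust to your Gluster setup
yum -y install glusterfs glusterfs-fuse
echo "gluster-01.cloud.pd.infn.it:/nova-vol /var/lib/nova/instances glusterfs defaults,_netdev 0 0" >> /etc/fstab
mount /var/lib/nova/instances
# The mount point should be owned by UID/GID 162 (nova)
ls -ldn /var/lib/nova/instances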
- SELinux configured as "Disabled" (/etc/selinux/config)
- EPEL 6-8
- A MySQL (possibly a HA cluster) endpoint each OpenStack service can connect to (in this guide we're using our MySQL Percona cluster's IP 192.168.60.10)
- A HAProxy/Keepalived cluster to use for load-balancing and Virtual IP (in this guide we're using the IP 192.168.60.40 for mgmt net and 90.147.77.40 for public net)
- Installed CA INFN certificate on both nodes
[root@network-01 ~]# ll /etc/grid-security/certificates/INFN-CA-2006.pem
-rw-r--r-- 1 root root 1257 Jun  5 19:05 /etc/grid-security/certificates/INFN-CA-2006.pem
- Installed and active libvirt
yum -y install libvirt
chkconfig libvirtd on
service libvirtd start
- Activated virtualization on CPU (can be toggled in the BIOS menu):
cat /proc/cpuinfo | grep vmx
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm dca sse4_1 lahf_lm dts tpr_shadow vnmi flexpriority

lsmod | grep kvm
kvm_intel              54285  12
kvm                   332980  1 kvm_intel

lscpu | grep -i virtu
Virtualization:        VT-x
Note: kvm_intel can be substituted by kvm_amd, and VT-x by AMD-V.
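If the kvm modules do not appear in the lsmod output even though virtualization is enabled in the BIOS, they can usually be loaded by hand (a sketch; pick the module matching your CPU vendor):
# Intel CPUs (the kvm module is loaded automatically as a dependency)
modprobe kvm_intel
# AMD CPUs: modprobe kvm_amd
lsmod | grep kvm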
Naming conventions and networking assumptions
We assume that the compute node has the following network setup:
- It has two network interfaces connected to two different networks: the management network and the data network
- Management network: 192.168.60.0/24
- Data network: 192.168.61.0/24
- The node is named compute.cloud.pd.infn.it (192.168.60.43) on the management network and compute.data.infn.it (90.147.77.43)
- In this guide the controller's VIP on the management network is needed: 192.168.60.40
- In this guide the MySQL cluster's VIP on the management network is needed: 192.168.60.10
- In this guide the controller's public IP is needed: 90.147.77.40
Further prerequisite on the data network interface
In the network interface configuration file for the data network (something like /etc/sysconfig/network-scripts/ifcfg-XYZ) add the following parameter:
MTU="9000"
IPTables configuration
Execute the following commands:
# VNC's TCP ports
iptables -A INPUT -p tcp -m state --state NEW -m tcp --dport 5900:5999 -j ACCEPT
# libvirtd's TCP ports
iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 16509 -j ACCEPT
# libvirtd's ephemeral ports
iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 49152:49261 -j ACCEPT
# permit ntpd's udp communications
iptables -A INPUT -p udp -m state --state NEW -m udp --dport 123 -j ACCEPT
mv /etc/sysconfig/iptables /etc/sysconfig/iptables.orig
iptables-save > /etc/sysconfig/iptables
chkconfig iptables on
chkconfig ip6tables off
service iptables restart
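To double-check that the rules survived the restart, spot-check the live rule set:
# Each of the ports opened above should appear in the output
iptables -S INPUT | grep -E '5900:5999|16509|49152:49261|dport 123'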
Install software
Install Havana repo:
yum -y install http://rdo.fedorapeople.org/openstack-havana/rdo-release-havana.rpm
When support for Havana is decommissioned, the repository location changes. In that case do the following:
yum -y install https://repos.fedorapeople.org/repos/openstack/EOL/openstack-havana/rdo-release-havana-9.noarch.rpm
sed -i 's+openstack/+openstack/EOL/+' /etc/yum.repos.d/rdo-release.repo
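Either way, a quick check that the repository is visible to yum before proceeding:
# The Havana RDO repository should be listed as enabled
yum repolist enabled | grep -i -E 'rdo|havana'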
Install Nova and Neutron's packages, and update iproute
to support network namespaces:
yum -y install openstack-nova-compute openstack-utils openstack-neutron-openvswitch sysfsutils
yum -y update iproute
Preliminary networking setup
sed -i 's+^net\.ipv4\.conf\.default\.rp_filter+#net\.ipv4\.conf\.default\.rp_filter+' /etc/sysctl.conf
sed -i 's+^net\.ipv4\.conf\.all\.rp_filter+#net\.ipv4\.conf\.all\.rp_filter+' /etc/sysctl.conf
cat << EOF >> /etc/sysctl.conf
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
EOF
sysctl -p
service network restart
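Both reverse-path filter settings should now report 0:
sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.default.rp_filter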
Configure Nova
nova.conf
openstack-config --set /etc/nova/nova.conf database connection "mysql://nova:<NOVA_DB_PWD>@192.168.60.10/nova"
openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_host 192.168.60.40
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name services
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password NOVA_PASS
openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend nova.openstack.common.rpc.impl_kombu
openstack-config --set /etc/nova/nova.conf DEFAULT rabbit_hosts 192.168.60.41:5672,192.168.60.44:5672
openstack-config --set /etc/nova/nova.conf DEFAULT live_migration_flag VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE
# Change the following IP with the actual IP of the current compute node on the management network
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.60.43
openstack-config --set /etc/nova/nova.conf DEFAULT vnc_enabled True
# vncserver_listen MUST be 0.0.0.0 otherwise live migration won't work correctly
# (http://docs.openstack.org/havana/config-reference/content/configuring-openstack-compute-basics.html#setting-flags-in-nova-conf-file)
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf DEFAULT libvirt_vif_driver nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
# Change the following IP with the actual IP of the current compute node on the management network
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address 192.168.60.43
openstack-config --set /etc/nova/nova.conf DEFAULT novncproxy_base_url http://cloud-areapd.pd.infn.it:6080/vnc_auto.html
openstack-config --set /etc/nova/nova.conf DEFAULT glance_host 192.168.60.40
openstack-config --set /etc/nova/nova.conf DEFAULT compute_driver nova.virt.libvirt.LibvirtDriver
openstack-config --set /etc/nova/nova.conf DEFAULT api_paste_config /etc/nova/api-paste.ini
openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.neutronv2.api.API
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_url http://192.168.60.40:9696
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_auth_strategy keystone
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_tenant_name services
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_username neutron
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_password NEUTRON_PASS
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_auth_url http://192.168.60.40:35357/v2.0
openstack-config --set /etc/nova/nova.conf DEFAULT linuxnet_interface_driver nova.network.linux_net.LinuxOVSInterfaceDriver
openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api neutron
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
# the following 3 lines enable admin password injection
openstack-config --set /etc/nova/nova.conf DEFAULT libvirt_inject_password true
openstack-config --set /etc/nova/nova.conf DEFAULT libvirt_inject_key true
openstack-config --set /etc/nova/nova.conf DEFAULT libvirt_inject_partition -1
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_metadata_proxy_shared_secret METADATA_PASS
openstack-config --set /etc/nova/nova.conf DEFAULT rabbit_ha_queues True
openstack-config --set /etc/nova/nova.conf DEFAULT cpu_allocation_ratio 4.0
# this is a temporary workaround until we understand a problem of incompatible CPUs when live-migrating VMs
openstack-config --set /etc/nova/nova.conf DEFAULT libvirt_cpu_mode custom
openstack-config --set /etc/nova/nova.conf DEFAULT libvirt_cpu_model kvm64
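Since a single mistyped key silently ends up in nova.conf, it is worth reading back a sample of the values just written, e.g.:
# Spot-check a few of the settings written above
openstack-config --get /etc/nova/nova.conf DEFAULT my_ip
openstack-config --get /etc/nova/nova.conf DEFAULT vncserver_listen
openstack-config --get /etc/nova/nova.conf database connection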
api-paste.ini
openstack-config --set /etc/nova/api-paste.ini filter:authtoken paste.filter_factory keystoneclient.middleware.auth_token:filter_factory
openstack-config --set /etc/nova/api-paste.ini filter:authtoken auth_host 192.168.60.40
openstack-config --set /etc/nova/api-paste.ini filter:authtoken auth_port 35357
openstack-config --set /etc/nova/api-paste.ini filter:authtoken auth_protocol http
openstack-config --set /etc/nova/api-paste.ini filter:authtoken admin_tenant_name services
openstack-config --set /etc/nova/api-paste.ini filter:authtoken admin_user nova
openstack-config --set /etc/nova/api-paste.ini filter:authtoken admin_password NOVA_PASS
Configure LibVirt to support Live Migration
Turn off the libvirtd daemon:
service libvirtd stop
Execute:
cat << EOF >> /etc/libvirt/libvirtd.conf
listen_tls = 0
listen_tcp = 1
auth_tcp = "none"
EOF
and
cat << EOF >> /etc/sysconfig/libvirtd
LIBVIRTD_ARGS="--listen"
EOF
Modify qemu.conf:
cat << EOF >> /etc/libvirt/qemu.conf
user="nova"
group="nova"
dynamic_ownership = 0
EOF
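These settings take effect once libvirtd is started again (see "Start Services" below). After that, one way to verify the unauthenticated TCP listener, e.g. from the controller node:
# Should print the library/hypervisor versions without prompting for credentials
virsh -c qemu+tcp://compute.cloud.pd.infn.it/system version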
Configure Neutron's agents
Since the Neutron L2 agent runs on the compute node, some of Neutron's configuration files need to be customized.
neutron.conf
openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
openstack-config --set /etc/neutron/neutron.conf DEFAULT api_paste_config /etc/neutron/api-paste.ini
openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend neutron.openstack.common.rpc.impl_kombu
openstack-config --set /etc/neutron/neutron.conf DEFAULT rabbit_hosts 192.168.60.41:5672,192.168.60.44:5672
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_host 192.168.60.40
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_user neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_password NEUTRON_PASS
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://192.168.60.40:35357/v2.0
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_tenant_name services
openstack-config --set /etc/neutron/neutron.conf agent root_helper "sudo neutron-rootwrap /etc/neutron/rootwrap.conf"
openstack-config --set /etc/neutron/neutron.conf database connection "mysql://neutron:<NEUTRON_DB_PWD>@192.168.60.10/neutron"
openstack-config --set /etc/neutron/neutron.conf DEFAULT dhcp_agents_per_network 2
openstack-config --set /etc/neutron/neutron.conf DEFAULT dhcp_lease_duration 86400
openstack-config --set /etc/neutron/neutron.conf DEFAULT rabbit_ha_queues True
openstack-config --set /etc/neutron/neutron.conf DEFAULT agent_down_time 75
openstack-config --set /etc/neutron/neutron.conf agent report_interval 30
ovs_neutron_plugin.ini
openstack-config --set /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini ovs tenant_network_type gre
openstack-config --set /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini ovs tunnel_id_ranges 1:1000
openstack-config --set /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini ovs enable_tunneling True
openstack-config --set /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini ovs integration_bridge br-int
openstack-config --set /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini ovs tunnel_bridge br-tun
# Change the following IP with the actual current compute node's IP on the data network
openstack-config --set /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini ovs local_ip 192.168.61.43
openstack-config --set /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
ln -s /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini /etc/neutron/plugin.ini
(Neutron L2 agent's) api-paste.ini
openstack-config --set /etc/neutron/api-paste.ini filter:authtoken paste.filter_factory keystoneclient.middleware.auth_token:filter_factory
openstack-config --set /etc/neutron/api-paste.ini filter:authtoken auth_host 192.168.60.40
openstack-config --set /etc/neutron/api-paste.ini filter:authtoken admin_tenant_name services
openstack-config --set /etc/neutron/api-paste.ini filter:authtoken admin_user neutron
openstack-config --set /etc/neutron/api-paste.ini filter:authtoken admin_password NEUTRON_PASS
Bridge creation and start of the services
Open vSwitch start and bridge creation
service openvswitch start
chkconfig openvswitch on
ovs-vsctl add-br br-int
L2 Agent start:
service neutron-openvswitch-agent start
chkconfig neutron-openvswitch-agent on
chkconfig neutron-ovs-cleanup on
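At this point the integration bridge should exist and the agent should have created the tunnel bridge; GRE ports toward the other nodes appear on br-tun once tenant networks are in use:
# Both br-int and br-tun should be listed
ovs-vsctl list-br
# Inspect ports and (later) the GRE tunnels
ovs-vsctl show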
Start Services:
service libvirtd start
service messagebus start
service openstack-nova-compute start
chkconfig libvirtd on
chkconfig messagebus on
chkconfig openstack-nova-compute on
Check all
When done, log into the controller node, or wherever you have installed the OpenStack CLI and copied the keystone_admin.sh file (created during the controller node installation procedure). Execute the commands:
[root@controller-01 ~]# neutron agent-list
+--------------------------------------+--------------------+-----------------------------+-------+----------------+
| id                                   | agent_type         | host                        | alive | admin_state_up |
+--------------------------------------+--------------------+-----------------------------+-------+----------------+
| 188fe879-be8a-4390-b766-04e188e35c3c | L3 agent           | network-02.cloud.pd.infn.it | :-)   | True           |
| 3c460fc1-c111-4be5-a37b-88aa7ffd265a | Open vSwitch agent | compute.cloud.pd.infn.it    | :-)   | True           |
| 42647a60-dbd0-4a85-942d-8fdbb0e2ae24 | Open vSwitch agent | network-01.cloud.pd.infn.it | :-)   | True           |
| cf6f7ec2-8700-498b-b62d-49d8b5616682 | DHCP agent         | network-02.cloud.pd.infn.it | :-)   | True           |
| dc249956-e81d-465c-b51f-cff0e1e04f05 | DHCP agent         | network-01.cloud.pd.infn.it | :-)   | True           |
| e196a6a2-8a3a-4bfe-b048-b50bee14761c | Open vSwitch agent | network-02.cloud.pd.infn.it | :-)   | True           |
| eb902101-8a16-43b5-87f8-b058530407f6 | L3 agent           | network-01.cloud.pd.infn.it | :-)   | True           |
+--------------------------------------+--------------------+-----------------------------+-------+----------------+

[root@controller-01 ~]# nova service-list
+------------------+--------------------------------+----------+---------+-------+----------------------------+-----------------+
| Binary           | Host                           | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+------------------+--------------------------------+----------+---------+-------+----------------------------+-----------------+
| nova-scheduler   | controller-01.cloud.pd.infn.it | internal | enabled | up    | 2014-03-22T10:10:31.000000 | -               |
| nova-cert        | controller-01.cloud.pd.infn.it | internal | enabled | up    | 2014-03-22T10:10:29.000000 | -               |
| nova-consoleauth | controller-01.cloud.pd.infn.it | internal | enabled | up    | 2014-03-22T10:10:30.000000 | -               |
| nova-conductor   | controller-01.cloud.pd.infn.it | internal | enabled | up    | 2014-03-22T10:10:31.000000 | -               |
| nova-compute     | compute.cloud.pd.infn.it       | nova     | enabled | up    | 2014-03-22T10:10:30.000000 | -               |
| nova-consoleauth | controller-02.cloud.pd.infn.it | internal | enabled | up    | 2014-03-22T10:10:32.000000 | -               |
| nova-conductor   | controller-02.cloud.pd.infn.it | internal | enabled | up    | 2014-03-22T10:10:33.000000 | -               |
| nova-cert        | controller-02.cloud.pd.infn.it | internal | enabled | up    | 2014-03-22T10:10:32.000000 | -               |
| nova-scheduler   | controller-02.cloud.pd.infn.it | internal | enabled | up    | 2014-03-22T10:10:34.000000 | -               |
+------------------+--------------------------------+----------+---------+-------+----------------------------+-----------------+
Add passwordless SSH access for the nova user
This is needed to allow Nova to resize virtual instances (the nova user must be able to SSH between nodes without a password). Execute the following commands:
usermod -s /bin/bash nova
mkdir -p -m 700 ~nova/.ssh
chown nova.nova ~nova/.ssh
cd ~nova/.ssh
scp controller-01:/var/lib/nova/.ssh/* .
chown nova.nova *
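A quick check that key-based access works for the nova user (a sketch, assuming controller-01 is reachable and holds the matching authorized_keys):
# Should print the remote hostname without asking for a password
su - nova -c "ssh -o StrictHostKeyChecking=no controller-01 hostname"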
Optional: Configure Nova Compute for SSL
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_host cloud-areapd.pd.infn.it
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_protocol https
openstack-config --set /etc/nova/nova.conf keystone_authtoken cafile /etc/grid-security/certificates/INFN-CA-2006.pem
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_ca_certificates_file /etc/grid-security/certificates/INFN-CA-2006.pem
openstack-config --set /etc/nova/nova.conf DEFAULT cinder_ca_certificates_file /etc/grid-security/certificates/INFN-CA-2006.pem
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_auth_url https://cloud-areapd.pd.infn.it:35357/v2.0
openstack-config --set /etc/nova/nova.conf DEFAULT novncproxy_base_url https://cloud-areapd.pd.infn.it:6080/vnc_auto.html
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_host cloud-areapd.pd.infn.it
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url https://cloud-areapd.pd.infn.it:35357/v2.0
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_protocol https
openstack-config --set /etc/neutron/neutron.conf DEFAULT ssl_ca_file /etc/grid-security/certificates/INFN-CA-2006.pem
openstack-config --set /etc/neutron/api-paste.ini filter:authtoken auth_host cloud-areapd.pd.infn.it
openstack-config --set /etc/nova/api-paste.ini filter:authtoken auth_host cloud-areapd.pd.infn.it
openstack-config --set /etc/nova/api-paste.ini filter:authtoken auth_protocol https
openstack-config --set /etc/nova/nova.conf DEFAULT glance_host cloud-areapd.pd.infn.it
openstack-config --set /etc/nova/nova.conf DEFAULT glance_protocol https
openstack-config --set /etc/nova/nova.conf DEFAULT glance_api_servers https://cloud-areapd.pd.infn.it:9292
openstack-config --set /etc/nova/nova.conf DEFAULT ssl_ca_file /etc/grid-security/certificates/INFN-CA-2006.pem
#openstack-config --set /etc/nova/nova.conf ssl ca_file /etc/grid-security/certificates/INFN-CA-2006.pem
openstack-config --set /etc/nova/nova.conf DEFAULT glance_api_insecure true
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_url https://cloud-areapd.pd.infn.it:9696
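To verify that the endpoints present certificates signed by the installed CA, one check (a sketch using the Keystone admin port):
# A successful HTTP response (no SSL error) means the CA file is trusted
curl --cacert /etc/grid-security/certificates/INFN-CA-2006.pem https://cloud-areapd.pd.infn.it:35357/v2.0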
Restart L2 agent and Nova Compute
service openstack-nova-compute restart
service neutron-openvswitch-agent restart
Fix metadata agent
To address this bug, apply this patch, or follow the instructions below:
curl -o agent.py https://raw.githubusercontent.com/CloudPadovana/SSL_Patches/master/agent.py
mv /usr/lib/python2.6/site-packages/neutron/agent/metadata/agent.py /usr/lib/python2.6/site-packages/neutron/agent/metadata/agent.py.bak
cp agent.py /usr/lib/python2.6/site-packages/neutron/agent/metadata/agent.py
service openstack-nova-compute restart
service neutron-openvswitch-agent restart