Installation Instructions for Compute Node

Initial setup

The node is installed via Foreman with:

  • OS: SL6.5
  • SELinux configured as "permissive" (/etc/selinux/config)
  • EPEL 6 repo installed
  • RDO Havana repo installed
  • em1 interface configured with an address on the management network
  • em2 connected to a port with VLAN 302 enabled

Disable SELinux

Configure SELINUX=disabled in /etc/selinux/config and then reboot.
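
For example, the following one-liner (a sketch, assuming the stock layout of the file) makes the change in place before rebooting:

sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
shutdown -r now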

After the reboot, confirm that the getenforce command returns Disabled:

# getenforce
Disabled

Configure iptables to open the VNC console ports (TCP 5900-5999)

Put the following line in /etc/sysconfig/iptables, just before the first REJECT rule:

-A INPUT -p tcp -m state --state NEW -m tcp --dport 5900:5999 -j ACCEPT 
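
The relevant part of the file should then look similar to the excerpt below; the surrounding rules shown here are the SL6 defaults and may differ on your node:

-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 5900:5999 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited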

Restart iptables:

service iptables restart

Configure sysctl.conf

The following commands comment out any existing rp_filter entries and append new values that disable reverse-path filtering:

sed -i 's+^net\.ipv4\.conf\.default\.rp_filter+#net\.ipv4\.conf\.default\.rp_filter+' /etc/sysctl.conf
sed -i 's+^net\.ipv4\.conf\.all\.rp_filter+#net\.ipv4\.conf\.all\.rp_filter+' /etc/sysctl.conf
cat << EOF >> /etc/sysctl.conf
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
EOF
sysctl -p
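
To confirm that the new values are active, query the two keys directly; both should report 0:

# sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.default.rp_filter
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0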

Configure the second network interface

Configure the em2 interface with an address on the data network.

Edit the file /etc/sysconfig/network-scripts/ifcfg-em2, e.g.:

DEVICE="em2"
BOOTPROTO="static"
NM_CONTROLLED="no"
ONBOOT="yes"
TYPE="Ethernet"
IPADDR="192.168.61.131" 
NETMASK="255.255.255.0"

Reboot the machine:

shutdown -r now
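
After the reboot, check that em2 came up with the expected address (the address shown is the example one above):

# ip addr show em2 | grep 'inet '
    inet 192.168.61.131/24 brd 192.168.61.255 scope global em2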

Configure the filesystem for instances

Install the GlusterFS client:

yum install glusterfs-client 

Add the following line to /etc/fstab:

192.168.60.100:/volume-nova-pp /var/lib/nova/instances glusterfs defaults 1 1

Mount the file system:

mkdir -p /var/lib/nova/instances
mount -a
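
Verify that the GlusterFS volume is actually mounted; the output should look roughly like this:

# mount | grep nova
192.168.60.100:/volume-nova-pp on /var/lib/nova/instances type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)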

Install software

Install the Nova and Neutron packages:

yum -y install openstack-nova-compute openstack-utils openstack-neutron-openvswitch

Configure Nova

In the following, replace 192.168.60.112 with the management network IP address of your compute node.

nova.conf

openstack-config --set /etc/nova/nova.conf database connection "mysql://nova_pp:<NOVA_DB_PWD>@192.168.60.10/nova_pp"
openstack-config --set /etc/nova/nova.conf database max_pool_size 30
openstack-config --set /etc/nova/nova.conf DEFAULT rpc_conn_pool_size 50
openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_host 192.168.60.111
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name services
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password NOVA_PASS
openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend nova.openstack.common.rpc.impl_kombu
openstack-config --set /etc/nova/nova.conf DEFAULT rabbit_hosts 192.168.60.111:5672
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.60.112
openstack-config --set /etc/nova/nova.conf DEFAULT vnc_enabled True
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf DEFAULT libvirt_vif_driver nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
# Put here the compute node's management IP
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address 192.168.60.112
# Replace the following IP with the public VIP of the controller (which runs the dashboard)
openstack-config --set /etc/nova/nova.conf DEFAULT novncproxy_base_url http://90.147.77.39:6080/vnc_auto.html
openstack-config --set /etc/nova/nova.conf DEFAULT glance_host 192.168.60.111
openstack-config --set /etc/nova/nova.conf DEFAULT compute_driver nova.virt.libvirt.LibvirtDriver
openstack-config --set /etc/nova/nova.conf DEFAULT api_paste_config /etc/nova/api-paste.ini
openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.neutronv2.api.API
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_url http://192.168.60.111:9696
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_auth_strategy keystone
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_tenant_name services
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_username neutron
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_password NEUTRON_PASS
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_auth_url http://192.168.60.111:35357/v2.0
openstack-config --set /etc/nova/nova.conf DEFAULT linuxnet_interface_driver nova.network.linux_net.LinuxOVSInterfaceDriver
openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api neutron
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf DEFAULT libvirt_inject_password true
openstack-config --set /etc/nova/nova.conf DEFAULT cpu_allocation_ratio 4.0
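
openstack-config can also read values back with --get, which is a quick way to double-check a key before starting the service (shown here for my_ip; any of the keys above works the same way):

# openstack-config --get /etc/nova/nova.conf DEFAULT my_ip
192.168.60.112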

api-paste.ini

openstack-config --set /etc/nova/api-paste.ini filter:authtoken paste.filter_factory keystoneclient.middleware.auth_token:filter_factory
openstack-config --set /etc/nova/api-paste.ini filter:authtoken auth_host 192.168.60.111
openstack-config --set /etc/nova/api-paste.ini filter:authtoken auth_port 35357
openstack-config --set /etc/nova/api-paste.ini filter:authtoken auth_protocol http
openstack-config --set /etc/nova/api-paste.ini filter:authtoken admin_tenant_name services
openstack-config --set /etc/nova/api-paste.ini filter:authtoken admin_user nova
openstack-config --set /etc/nova/api-paste.ini filter:authtoken admin_password NOVA_PASS

Configure the Neutron agents

In the following, replace 192.168.61.112 with the data network IP address of your compute node.

neutron.conf

openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
openstack-config --set /etc/neutron/neutron.conf DEFAULT api_paste_config /etc/neutron/api-paste.ini
openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend neutron.openstack.common.rpc.impl_kombu
openstack-config --set /etc/neutron/neutron.conf DEFAULT rabbit_hosts 192.168.60.111:5672
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_host 192.168.60.111
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_user neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_password NEUTRON_PASS
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://192.168.60.111:35357/v2.0
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_tenant_name services
openstack-config --set /etc/neutron/neutron.conf agent root_helper "sudo neutron-rootwrap /etc/neutron/rootwrap.conf"
openstack-config --set /etc/neutron/neutron.conf database connection "mysql://neutron_pp:<NEUTRON_DB_PWD>@192.168.60.10/neutron_pp"

ovs_neutron_plugin.ini

openstack-config --set /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini ovs tenant_network_type gre
openstack-config --set /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini ovs tunnel_id_ranges 1:1000
openstack-config --set /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini ovs enable_tunneling True
openstack-config --set /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini ovs integration_bridge br-int
openstack-config --set /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini ovs tunnel_bridge br-tun
openstack-config --set /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini ovs local_ip 192.168.61.112
openstack-config --set /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini \
    securitygroup firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
 
ln -s /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini /etc/neutron/plugin.ini
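
Double-check that the symlink points at the Open vSwitch plugin configuration (owner, size and timestamp below are illustrative):

# ls -l /etc/neutron/plugin.ini
lrwxrwxrwx 1 root root 56 Apr 29 10:00 /etc/neutron/plugin.ini -> /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini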

(Neutron L2 agent's) api-paste.ini

openstack-config --set /etc/neutron/api-paste.ini filter:authtoken paste.filter_factory keystoneclient.middleware.auth_token:filter_factory
openstack-config --set /etc/neutron/api-paste.ini filter:authtoken auth_host 192.168.60.111
openstack-config --set /etc/neutron/api-paste.ini filter:authtoken admin_tenant_name services 
openstack-config --set /etc/neutron/api-paste.ini filter:authtoken admin_user neutron 
openstack-config --set /etc/neutron/api-paste.ini filter:authtoken admin_password NEUTRON_PASS

Bridge creation and service startup

Start Open vSwitch and create the integration bridge:

service openvswitch start
chkconfig openvswitch on
ovs-vsctl add-br br-int
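
ovs-vsctl show should now list the integration bridge; the UUID and version string below are illustrative, and br-tun will only appear later, once the L2 agent is running:

# ovs-vsctl show
0d0b9d58-0000-0000-0000-000000000000
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "1.11.0"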

Start the L2 agent:

service neutron-openvswitch-agent start
chkconfig neutron-openvswitch-agent on
chkconfig neutron-ovs-cleanup on
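
Once up, the agent creates the tunnel bridge by itself, so listing the bridges should now show both:

# ovs-vsctl list-br
br-int
br-tun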

Start the remaining services:

service libvirtd start
service messagebus start
service openstack-nova-compute start
chkconfig libvirtd on
chkconfig messagebus on
chkconfig openstack-nova-compute on
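
A quick local sanity check (the PID shown is illustrative); any startup error will appear in /var/log/nova/compute.log:

# service openstack-nova-compute status
openstack-nova-compute (pid  12345) is running...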

Check it

When done, log into the controller node (or wherever you have installed the OpenStack CLI and copied keystone_admin.sh) and execute the following commands:

# neutron agent-list
+--------------------------------------+--------------------+---------------------------------+-------+----------------+
| id                                   | agent_type         | host                            | alive | admin_state_up |
+--------------------------------------+--------------------+---------------------------------+-------+----------------+
| 24dfcaf6-21f5-4ed5-8117-e00c3472a23e | L3 agent           | first                           | :-)   | True           |
| 2a20e0ce-71c1-493c-b66a-673cbe19f779 | Open vSwitch agent | cloudpp-areapd.cloud.pd.infn.it | :-)   | True           |
| 344dce81-936f-4ed3-8d5c-545c75dab705 | DHCP agent         | cloudpp-areapd.cloud.pd.infn.it | :-)   | True           |
| 8371acad-62bb-48b7-a4db-92be8aa0ac97 | L3 agent           | second                          | :-)   | True           |
| 8de9e013-3271-48ef-941b-d5732a6d309b | Open vSwitch agent | cld-np-02.cloud.pd.infn.it      | :-)   | True           |
+--------------------------------------+--------------------+---------------------------------+-------+----------------+
 
# nova service-list
+------------------+---------------------------------+----------+---------+-------+----------------------------+-----------------+
| Binary           | Host                            | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+------------------+---------------------------------+----------+---------+-------+----------------------------+-----------------+
| nova-cert        | cloudpp-areapd.cloud.pd.infn.it | internal | enabled | up    | 2014-04-29T10:12:59.000000 | -               |
| nova-consoleauth | cloudpp-areapd.cloud.pd.infn.it | internal | enabled | up    | 2014-04-29T10:12:59.000000 | -               |
| nova-scheduler   | cloudpp-areapd.cloud.pd.infn.it | internal | enabled | up    | 2014-04-29T10:12:59.000000 | -               |
| nova-conductor   | cloudpp-areapd.cloud.pd.infn.it | internal | enabled | up    | 2014-04-29T10:12:59.000000 | -               |
| nova-compute     | cld-np-02.cloud.pd.infn.it      | nova     | enabled | up    | 2014-04-29T10:12:51.000000 | -               |
+------------------+---------------------------------+----------+---------+-------+----------------------------+-----------------+

Install Nagios sensors

Create the file /usr/local/bin/nagios_check_ovs.sh with this content:

#!/bin/bash
#
# Nagios server
nagios_server="cld-nagios.cloud.pd.infn.it"
#
# Nagios client configuration file
nagios_conf="/etc/nagios/send_nsca.cfg"
#
hostname=`/bin/hostname -s`
#
# capture the exit code of the service command itself, not of tr
ovs_status=`/sbin/service openvswitch status`
ovs_status_retcode=$?
ovs_status=`echo "${ovs_status}" | tr '\n' ';'`
overall_msg=${ovs_status}
if [ $ovs_status_retcode -ne 0 ]; then
  echo -e "$hostname\tOpenvSwitch\t2\t${overall_msg}\n" | /usr/sbin/send_nsca -H ${nagios_server} -c ${nagios_conf}
  exit
fi

ovs_ofctl_out=`ovs-ofctl dump-flows br-tun`
if [[ ${ovs_ofctl_out} == *table=0* ]] && [[ ${ovs_ofctl_out} == *actions=resubmit* ]]; then
  overall_msg="${overall_msg} String 'table=0' 'actions=resubmit' found in ovs_ofctl outputput"
  retcode=0
else
  overall_msg="${overall_msg} Strings 'table=0' 'actions=resubmit' not found in ovs_ofctl outputput"
  retcode=2
fi
echo -e "$hostname\tOpenvSwitch\t${retcode}\t${overall_msg}\n" | /usr/sbin/send_nsca -H ${nagios_server} -c ${nagios_conf}

Copy the script available at check_kvm.txt to /usr/local/bin/check_kvm (the file has a .txt extension only because this wiki does not accept files without an extension).

Create the script /usr/local/bin/check_kvm_wrapper.sh with this content:

#!/bin/sh
# Nagios server
nagios_server="cld-nagios.cloud.pd.infn.it"
#
# Nagios client configuration file
nagios_conf="/etc/nagios/send_nsca.cfg"
#
hostname=`/bin/hostname -s`
#
kvm_status=`/usr/local/bin/check_kvm`
kvm_status_retcode=$?
echo -e "$hostname\tKVM\t${kvm_status_retcode}\t${kvm_status}\n" | /usr/sbin/send_nsca -H ${nagios_server} -c ${nagios_conf}

Install the cron jobs that run the two checks every hour, and set the proper permissions:

cat << EOF > /etc/cron.d/nagios_check_ovs
0 */1 * * * root /usr/local/bin/nagios_check_ovs.sh
EOF
cat << EOF > /etc/cron.d/nagios_check_kvm
0 */1 * * * root /usr/local/bin/check_kvm_wrapper.sh
EOF

chmod +x /usr/local/bin/nagios_check_ovs.sh
chmod +x /usr/local/bin/check_kvm
chmod +x /usr/local/bin/check_kvm_wrapper.sh
chmod 0644 /etc/cron.d/nagios_check_ovs
chmod 0644 /etc/cron.d/nagios_check_kvm
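
The KVM check can likewise be exercised by hand before the first cron run; send_nsca reports one packet sent on success:

# /usr/local/bin/check_kvm_wrapper.sh
1 data packet(s) sent to host successfully.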