====== Installation and Configuration of OpenStack Network Node ======

Author:
   * Alvise Dorigo (INFN Padova)

===== Prerequisites =====
   * [[http://wiki.infn.it/progetti/cloud-areapd/keystone-glance_high_availability/openstack_ha/controller_node|Controller node install & setup]]

Two nodes with:
   * Updated SL6/CentOS6 (6.4 or 6.5)
   * Make sure that yum autoupdate is disabled
<code bash>
[root@controller-01 ~]# grep ENA /etc/sysconfig/yum-autoupdate
# ENABLED
ENABLED="false"
</code>
   * At least 20 GB of disk for the operating system, the OpenStack software and the related log files
   * SELinux configured as "Disabled" (''/etc/selinux/config'')
   * EPEL 6-8
   * A MySQL endpoint (ideally an HA cluster) that each OpenStack service can connect to (in this guide we use our MySQL Percona cluster's IP ''192.168.60.10'')
   * An HAProxy/Keepalived cluster providing load balancing and the Virtual IPs (in this guide we use the IP ''192.168.60.40'' for the mgmt net and ''90.147.77.40'' for the public net)
   * The INFN CA certificate installed on both nodes
<code bash>
[root@network-01 ~]# ll /etc/grid-security/certificates/INFN-CA-2006.pem 
-rw-r--r-- 1 root root 1257 Jun  5 19:05 /etc/grid-security/certificates/INFN-CA-2006.pem
</code>

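A quick way to check the SELinux and EPEL prerequisites on each node (a minimal sanity check, assuming the standard file and package names on SL6/CentOS6):
<code bash>
[root@network-01 ~]# grep ^SELINUX= /etc/selinux/config
SELINUX=disabled
[root@network-01 ~]# rpm -q epel-release
epel-release-6-8.noarch
</code>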
===== Naming conventions and networking assumptions =====
We assume that the network nodes have the following network setup:
   * They have 3 network interfaces connected to three different networks: **management network**, **public network**, **data network**
   * **Management network** is: ''192.168.60.0/24''
   * **Public network** (also called "**external network**") is: ''90.147.77.0/24''
   * **Data network** is: ''192.168.61.0/24''
   * First node is named: ''network-01.cloud.pd.infn.it'' (''192.168.60.42''), ''network-01.pd.infn.it'' (''90.147.77.42'')
   * Second node is named: ''network-02.cloud.pd.infn.it'' (''192.168.60.45''), ''network-02.pd.infn.it'' (''90.147.77.45'')
   * ''network-01'' data network's IP is ''192.168.61.42''
   * ''network-02'' data network's IP is ''192.168.61.45''
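If these hostnames are not resolvable via DNS, one simple option is to add them to ''/etc/hosts'' on both nodes (a hypothetical sketch built from the addresses listed above):
<code bash>
cat << EOF >> /etc/hosts
192.168.60.42   network-01.cloud.pd.infn.it network-01
192.168.60.45   network-02.cloud.pd.infn.it network-02
90.147.77.42    network-01.pd.infn.it
90.147.77.45    network-02.pd.infn.it
EOF
</code>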

==== Further prerequisite on the data network interface ====
In the network interface configuration file for the data network (something like ''/etc/sysconfig/network-scripts/ifcfg-XYZ'') set the following parameter:
<code bash>
MTU="9000"
</code>
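For reference, the whole data-network interface file on ''network-01'' might look like the sketch below; the device name ''eth1'' is only an assumption, use whatever NIC is actually cabled to the data network:
<code bash>
# /etc/sysconfig/network-scripts/ifcfg-eth1 -- hypothetical device name
DEVICE=eth1
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.61.42
NETMASK=255.255.255.0
MTU="9000"
</code>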
===== Considerations for High Availability =====
To make the Neutron agents highly available, simply repeat this procedure on another network node, changing only the ''local_ip'' parameter (the private IP on the data network) and setting the correct public IP address in the ''ifcfg-br-ex'' file, as shown below.
===== Install OpenStack software (both nodes) =====
First install the YUM repo from RDO:
<code bash>
yum install -y http://rdo.fedorapeople.org/openstack-havana/rdo-release-havana.rpm
</code>
When support for Havana is decommissioned, the repository moves to the EOL area; in that case do the following instead:
<code bash>
yum -y install https://repos.fedorapeople.org/repos/openstack/EOL/openstack-havana/rdo-release-havana-9.noarch.rpm
sed -i 's+openstack/+openstack/EOL/+' /etc/yum.repos.d/rdo-release.repo
</code>
Then install the OpenStack software and update ''iproute'' in order to support network namespaces:
<code bash>
yum -y install openstack-neutron openvswitch openstack-neutron-openvswitch
yum -y update iproute
</code>
===== Configure system's networking properties (both nodes) =====
<code bash>
sed -i 's+^net\.ipv4.ip_forward+#net\.ipv4.ip_forward+' /etc/sysctl.conf
sed -i 's+^net\.ipv4\.conf\.default\.rp_filter+#net\.ipv4\.conf\.default\.rp_filter+' /etc/sysctl.conf
sed -i 's+^net\.ipv4\.conf\.all\.rp_filter+#net\.ipv4\.conf\.all\.rp_filter+' /etc/sysctl.conf
cat << EOF >> /etc/sysctl.conf
net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
EOF
sysctl -p
service network restart
</code>
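To double-check that the new values are actually in effect (a quick sanity check):
<code bash>
sysctl net.ipv4.ip_forward net.ipv4.conf.all.rp_filter net.ipv4.conf.default.rp_filter
# expected output:
#   net.ipv4.ip_forward = 1
#   net.ipv4.conf.all.rp_filter = 0
#   net.ipv4.conf.default.rp_filter = 0
</code>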
===== Configure Neutron agent services (both nodes) =====
In this section we customize several configuration files related to Neutron's agents.

**neutron.conf**
<code bash>
# Let's choose the kind of authentication
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
# Let's define the IP address and TCP port of the keystone service (which is running on the controller node)
# We'll use the Controller node's Virtual IP to exploit the HA configuration
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_host 192.168.60.40
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_port 35357
# Let's define the credentials used by the Neutron agents to authenticate to the Keystone service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_tenant_name services
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_user neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_password NEUTRON_PASS
# Let's use the RabbitMQ AMQP broker in HA mode
openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend neutron.openstack.common.rpc.impl_kombu
openstack-config --set /etc/neutron/neutron.conf DEFAULT rabbit_hosts 192.168.60.41:5672,192.168.60.44:5672
openstack-config --set /etc/neutron/neutron.conf agent root_helper "sudo neutron-rootwrap /etc/neutron/rootwrap.conf"
# Let's define the MySQL endpoint and authentication credentials
openstack-config --set /etc/neutron/neutron.conf database connection "mysql://neutron:<NEUTRON_DB_PWD>@192.168.60.10/neutron"
# Let's define the L2 plugin type (Open vSwitch or LinuxBridge; we're using OVS)
openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
# The following parameter must contain the number of available DHCP agents, which is the number of network nodes (2 in our case)
openstack-config --set /etc/neutron/neutron.conf DEFAULT dhcp_agents_per_network 2
openstack-config --set /etc/neutron/neutron.conf DEFAULT dhcp_lease_duration 86400
openstack-config --set /etc/neutron/neutron.conf DEFAULT rabbit_ha_queues True
openstack-config --set /etc/neutron/neutron.conf DEFAULT agent_down_time 75
openstack-config --set /etc/neutron/neutron.conf agent report_interval 30
</code>
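''openstack-config'' can also read values back with ''--get'', which is handy to spot-check what has just been written, for example:
<code bash>
openstack-config --get /etc/neutron/neutron.conf DEFAULT core_plugin
openstack-config --get /etc/neutron/neutron.conf database connection
</code>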
**api-paste.ini**
<code bash>
openstack-config --set /etc/neutron/api-paste.ini filter:authtoken paste.filter_factory keystoneclient.middleware.auth_token:filter_factory
openstack-config --set /etc/neutron/api-paste.ini filter:authtoken auth_host 192.168.60.40
openstack-config --set /etc/neutron/api-paste.ini filter:authtoken auth_uri http://192.168.60.40:5000
openstack-config --set /etc/neutron/api-paste.ini filter:authtoken admin_tenant_name services
openstack-config --set /etc/neutron/api-paste.ini filter:authtoken admin_user neutron
openstack-config --set /etc/neutron/api-paste.ini filter:authtoken admin_password NEUTRON_PASS
</code>
**l3_agent.ini**
<code bash>
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT use_namespaces True
</code>
**dhcp_agent.ini**
<code bash>
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT use_namespaces True
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
</code>
**metadata_agent.ini**
<code bash>
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_url http://192.168.60.40:5000/v2.0
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_region regionOne
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT admin_tenant_name services
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT admin_user neutron
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT admin_password NEUTRON_PASS
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_ip 192.168.60.40
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret METADATA_PASS
</code>
**ovs_neutron_plugin.ini**
<code bash>
openstack-config --set /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini ovs tenant_network_type gre
openstack-config --set /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini ovs tunnel_id_ranges 1:1000
# In the following line set local_ip to the IP address of the NIC connected to the DATA NETWORK
openstack-config --set /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini ovs local_ip 192.168.61.42
openstack-config --set /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini ovs enable_tunneling True
openstack-config --set /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini ovs integration_bridge br-int
openstack-config --set /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini ovs tunnel_bridge br-tun
openstack-config --set /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
ln -s /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini /etc/neutron/plugin.ini
</code>
=== Optional ===
When using GRE, virtual instances can show poor network performance (as measured, for example, with iperf). This is because part of every Ethernet frame is consumed by the GRE encapsulation overhead. To solve this problem you can either raise the MTU of the data network's switch to 9000 (a value that has worked well in our experience), or apply the following additional configuration, which lowers the MTU advertised to instances via DHCP:
<code bash>
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dnsmasq_config_file /etc/neutron/dnsmasq-neutron.conf
</code>
Create the dnsmasq configuration file:
<code bash>
cat << EOF >> /etc/neutron/dnsmasq-neutron.conf
dhcp-option-force=26,1400
EOF
</code>
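DHCP option 26 is the interface MTU, so instances that obtain or renew their lease will come up with an MTU of 1400, leaving room for the GRE encapsulation. You can confirm it from inside a guest (an illustrative check; the interface name in the guest may differ):
<code bash>
# inside a virtual instance, after it has obtained its DHCP lease
ip link show eth0 | grep -o 'mtu [0-9]*'     # should print: mtu 1400
</code>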
===== Configure Open vSwitch network bridging (both nodes) =====
Start the ''openvswitch'' service and enable it at boot:
<code bash>
service openvswitch start
chkconfig openvswitch on
</code>
Create the bridges:
<code bash>
ovs-vsctl add-br br-int
ovs-vsctl add-br br-ex
</code>
Let's assume ''eth0'' is the NIC attached to the external (public) network; move the public IP address from ''eth0'' to ''br-ex'' and connect the two:
<code bash>
cd /etc/sysconfig/network-scripts
mv ifcfg-eth0 eth0.orig
cat << EOF >> ifcfg-eth0
DEVICE=eth0
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-ex
ONBOOT=yes
BOOTPROTO=none
PROMISC=yes
EOF

cat << EOF >> ifcfg-br-ex
DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
# change with your actual public IP address
IPADDR=90.147.77.42
NETMASK=255.255.255.0
ONBOOT=yes
EOF

service network restart
cd -
</code>
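After the restart you can verify that ''br-ex'' carries the public IP and that ''eth0'' has become one of its ports (output sketch, trimmed; addresses, UUIDs and the OVS version will differ on your nodes):
<code bash>
[root@network-01 ~]# ip addr show br-ex | grep 'inet '
    inet 90.147.77.42/24 brd 90.147.77.255 scope global br-ex
[root@network-01 ~]# ovs-vsctl show
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port "eth0"
            Interface "eth0"
    Bridge br-int
        ...
</code>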

Start the Neutron agents:
<code bash>
service neutron-dhcp-agent start
service neutron-l3-agent start
service neutron-metadata-agent start
service neutron-openvswitch-agent start
</code>
Enable them at boot:
<code bash>
chkconfig neutron-dhcp-agent on
chkconfig neutron-l3-agent on
chkconfig neutron-metadata-agent on
chkconfig neutron-openvswitch-agent on
</code>
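On each network node you can quickly confirm that the four agents are running (a minimal local check):
<code bash>
for s in neutron-dhcp-agent neutron-l3-agent neutron-metadata-agent neutron-openvswitch-agent; do
    service $s status
done
</code>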

===== Check agents' redundancy =====
When you are done, you should see all the agents running on every network node where you applied this procedure. Execute the following command on the controller node, or wherever you have installed the OpenStack CLI and copied the ''keystone_admin.sh'' created in the guide for the [[http://wiki.infn.it/progetti/cloud-areapd/keystone-glance_high_availability/openstack_ha/controller_node|controller node]]:
<code bash>
[root@controller-01 ~]# neutron agent-list
+--------------------------------------+--------------------+-----------------------------+-------+----------------+
| id                                   | agent_type         | host                        | alive | admin_state_up |
+--------------------------------------+--------------------+-----------------------------+-------+----------------+
| 188fe879-be8a-4390-b766-04e188e35c3c | L3 agent           | network-02.cloud.pd.infn.it | :-)   | True           |
| 42647a60-dbd0-4a85-942d-8fdbb0e2ae24 | Open vSwitch agent | network-01.cloud.pd.infn.it | :-)   | True           |
| cf6f7ec2-8700-498b-b62d-49d8b5616682 | DHCP agent         | network-02.cloud.pd.infn.it | :-)   | True           |
| dc249956-e81d-465c-b51f-cff0e1e04f05 | DHCP agent         | network-01.cloud.pd.infn.it | :-)   | True           |
| e196a6a2-8a3a-4bfe-b048-b50bee14761c | Open vSwitch agent | network-02.cloud.pd.infn.it | :-)   | True           |
| eb902101-8a16-43b5-87f8-b058530407f6 | L3 agent           | network-01.cloud.pd.infn.it | :-)   | True           |
+--------------------------------------+--------------------+-----------------------------+-------+----------------+
</code>

===== Optional: Configure Neutron's agents for SSL =====
Configure the files to use ''https'' and the fully qualified hostname:
<code bash>
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_host cloud-areapd.pd.infn.it
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_protocol https
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri https://cloud-areapd.pd.infn.it:35357/
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url https://cloud-areapd.pd.infn.it:35357/v2.0
openstack-config --set /etc/neutron/neutron.conf DEFAULT ssl_ca_file /etc/grid-security/certificates/INFN-CA-2006.pem

openstack-config --set /etc/neutron/api-paste.ini filter:authtoken auth_host cloud-areapd.pd.infn.it
openstack-config --set /etc/neutron/api-paste.ini filter:authtoken auth_uri https://cloud-areapd.pd.infn.it:5000
openstack-config --set /etc/neutron/api-paste.ini filter:authtoken auth_protocol https

openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_url https://cloud-areapd.pd.infn.it:5000/v2.0
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_ca_cert /etc/grid-security/certificates/INFN-CA-2006.pem
</code>
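Before restarting the agents, you may want to check that the Keystone endpoint is reachable over SSL with the INFN CA certificate (an illustrative test; it should return the Keystone version document):
<code bash>
curl --cacert /etc/grid-security/certificates/INFN-CA-2006.pem \
     https://cloud-areapd.pd.infn.it:5000/v2.0/
</code>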
Restart Neutron's agents:
<code bash>
service neutron-dhcp-agent restart
service neutron-l3-agent restart
service neutron-metadata-agent restart
service neutron-openvswitch-agent restart
</code>
=== Fix metadata agent ===
To address this [[https://bugs.launchpad.net/neutron/+bug/1263872|bug]], apply this [[https://review.openstack.org/#/c/79658/|patch]], or follow the instructions below:
<code bash>
curl -o agent.py https://raw.githubusercontent.com/CloudPadovana/SSL_Patches/master/agent.py
mv /usr/lib/python2.6/site-packages/neutron/agent/metadata/agent.py /usr/lib/python2.6/site-packages/neutron/agent/metadata/agent.py.bak
cp agent.py /usr/lib/python2.6/site-packages/neutron/agent/metadata/agent.py

service neutron-metadata-agent restart
</code>