Havana-SL6 Testbed
The fully integrated Resource Provider INFN-PADOVA-STACK was in production from 6 June 2014 to 21 July 2015.
EGI Monitoring
If failures show up here, check that the user running the Nagios probes does not belong to tenants other than EGI_ops.
Local monitoring
Layout
- Controller + Network node: egi-cloud.pd.infn.it
- Compute node: cloud-01.local, cloud-05.local, gilda-11.local
- Network layout available here (authorized users only)
Setting up the controller/network node
[root@egi-cloud ~]# rpm -Uvh http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
[root@egi-cloud ~]# rpm -Uvh http://repos.fedorapeople.org/repos/openstack/openstack-havana/rdo-release-havana-8.noarch.rpm
[root@egi-cloud ~]# yum clean all; yum update
[root@egi-cloud ~]# rpm -e --nodeps yum-autoupdate
[root@egi-cloud ~]# yum install -y openvswitch.x86_64
[root@egi-cloud ~]# yum install -y openstack-neutron-openvswitch.noarch
[root@egi-cloud ~]# yum install -y openstack-packstack
[root@egi-cloud ~]# yum install -y glusterfs-fuse
[root@egi-cloud ~]# cat <<EOF >>/etc/hosts
192.168.115.11 cloud-01.local
192.168.115.12 cloud-02.local
192.168.115.13 cloud-03.local
192.168.115.14 cloud-04.local
192.168.115.15 cloud-05.local
192.168.115.16 gilda-11.local
EOF
[root@egi-cloud ~]# for i in `seq 11 16`; do ssh-copy-id root@192.168.115.$i; done
[root@egi-cloud ~]# reboot
Management/Data Network Configuration
- Example of configuring eth1 on an OpenStack Compute node:
[root@cloud-05 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE="eth1"
VLAN="yes"
BOOTPROTO="none"
IPADDR="192.168.115.12"
NETMASK="255.255.255.0"
HWADDR="00:25:90:73:BB:6F"
ONBOOT="yes"
TYPE="Ethernet"
UUID="0329913a-3a0f-4d86-8603-c36fd159faee"
GlusterFS Configuration
- create the file /etc/yum.repos.d/glusterfs-epel.repo properly
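For reference, a repo file along these lines should work; note that the baseurl below is an assumption pointing at the upstream GlusterFS "LATEST" tree, so adjust it to the GlusterFS release you actually target:

```
[root@cloud-01 ~]# cat <<EOF > /etc/yum.repos.d/glusterfs-epel.repo
[glusterfs-epel]
name=GlusterFS packages for EPEL
baseurl=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-\$releasever/\$basearch/
enabled=1
gpgcheck=0
EOF
```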
- if the partitions were already created by Foreman (as is usually the case):
mkdir /export/glance/brick
mkdir /export/nova/brick
mkdir /export/swift/brick
mkdir /export/cinder/brick
mkdir -p /var/lib/nova/instances
mkdir -p /var/lib/glance/images
mkdir -p /var/lib/cinder
mkdir -p /var/lib/swift
yum install glusterfs-server
service glusterd start
# now on cloud-01 only:
gluster volume create novavolume transport tcp 192.168.115.11:/export/nova/brick
gluster volume start novavolume
gluster peer probe 192.168.115.12
gluster volume add-brick novavolume 192.168.115.12:/export/nova/brick
...
gluster volume info
cat <<EOF >> /etc/fstab
192.168.115.11:/glancevolume /var/lib/glance/images glusterfs defaults 1 1
192.168.115.11:/novavolume /var/lib/nova/instances glusterfs defaults 1 1
192.168.115.11:/cindervolume /var/lib/cinder glusterfs defaults 1 1
192.168.115.11:/swiftvolume /var/lib/swift glusterfs defaults 1 1
EOF
mount -a
# the same using 192.168.115.12 on cloud-05
- Server and bricks on cloud-01:
[root@cloud-01 ~]# df -h
Filesystem                    Size  Used Avail Use% Mounted on
/dev/sda1                      15G  2.4G   12G  17% /
tmpfs                          24G     0   24G   0% /dev/shm
/dev/sda2                     600G   18G  583G   3% /export/glance
/dev/sda6                     646G   33M  646G   1% /export/swift
192.168.115.11:/cindervolume  3.7T   66M  3.7T   1% /var/lib/cinder
192.168.115.11:/swiftvolume   1.3T   65M  1.3T   1% /var/lib/swift
192.168.115.11:/glancevolume  1.2T   18G  1.2T   2% /var/lib/glance/images
192.168.115.11:/novavolume    1.2T  2.0G  1.2T   1% /var/lib/nova/instances
/dev/sda3                     600G   33M  600G   1% /export/nova
/dev/sdb1                     1.9T   33M  1.9T   1% /export/cinder
Installation
- We used the following guide, with CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=gre and CONFIG_NEUTRON_OVS_TUNNEL_RANGES=1:200. Also, use VLAN="yes" in both /etc/sysconfig/network-scripts/ifcfg-eth0.[19,303] files.
- After restarting the network we obtained:
[root@egi-cloud ~]# ifconfig
br-ext    Link encap:Ethernet  HWaddr 00:1E:4F:1B:81:60
          inet addr:90.147.77.223  Bcast:90.147.77.255  Mask:255.255.255.0
          inet6 addr: fe80::f02c:c0ff:fe0c:2f01/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:9679047 errors:0 dropped:0 overruns:0 frame:0
          TX packets:9803653 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:64963674780 (60.5 GiB)  TX bytes:644228984 (614.3 MiB)

br-int    Link encap:Ethernet  HWaddr A6:00:73:70:C0:4E
          inet6 addr: fe80::a400:73ff:fe70:c04e/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:5321 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:560156 (547.0 KiB)  TX bytes:468 (468.0 b)

br-tun    Link encap:Ethernet  HWaddr AA:87:81:F4:7F:49
          inet6 addr: fe80::4e7:aaff:fe16:c08d/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:468 (468.0 b)

eth0      Link encap:Ethernet  HWaddr 00:1E:4F:1B:81:60
          inet6 addr: fec0::b:21e:4fff:fe1b:8160/64 Scope:Site
          inet6 addr: 2002:5a93:2915:b:21e:4fff:fe1b:8160/64 Scope:Global
          inet6 addr: fe80::21e:4fff:fe1b:8160/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:109053971 errors:0 dropped:0 overruns:0 frame:0
          TX packets:84964059 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:142212882024 (132.4 GiB)  TX bytes:90668789758 (84.4 GiB)

eth0.19   Link encap:Ethernet  HWaddr 00:1E:4F:1B:81:60
          inet addr:192.168.115.10  Bcast:192.168.115.255  Mask:255.255.255.0
          inet6 addr: fe80::21e:4fff:fe1b:8160/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:26172577 errors:0 dropped:0 overruns:0 frame:0
          TX packets:21794110 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:68240658392 (63.5 GiB)  TX bytes:85695555522 (79.8 GiB)

eth0.303  Link encap:Ethernet  HWaddr 00:1E:4F:1B:81:60
          inet6 addr: fe80::21e:4fff:fe1b:8160/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:11614874 errors:0 dropped:0 overruns:0 frame:0
          TX packets:11604145 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:67399683364 (62.7 GiB)  TX bytes:851684181 (812.2 MiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:10321076 errors:0 dropped:0 overruns:0 frame:0
          TX packets:10321076 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:54298775397 (50.5 GiB)  TX bytes:54298775397 (50.5 GiB)
[root@egi-cloud ~]# ovs-vsctl show
eb703996-b13c-422a-bcfc-efd331a7a0ca
    Bridge br-int
        Port "qr-281fc206-08"
            tag: 1
            Interface "qr-281fc206-08"
                type: internal
        Port "qr-6126abd8-f6"
            tag: 2
            Interface "qr-6126abd8-f6"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "tap32a94d9a-97"
            tag: 1
            Interface "tap32a94d9a-97"
                type: internal
        Port "tapc5fa1549-f7"
            tag: 2
            Interface "tapc5fa1549-f7"
                type: internal
    Bridge br-ext
        Port br-ext
            Interface br-ext
                type: internal
        Port "eth0.303"
            Interface "eth0.303"
        Port "qg-a54c6d06-3f"
            Interface "qg-a54c6d06-3f"
                type: internal
        Port "qg-7ac160f7-54"
            Interface "qg-7ac160f7-54"
                type: internal
    Bridge br-tun
        Port br-tun
            Interface br-tun
                type: internal
        Port "gre-2"
            Interface "gre-2"
                type: gre
                options: {in_key=flow, local_ip="192.168.115.10", out_key=flow, remote_ip="192.168.115.12"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "gre-3"
            Interface "gre-3"
                type: gre
                options: {in_key=flow, local_ip="192.168.115.10", out_key=flow, remote_ip="192.168.115.11"}
    ovs_version: "1.11.0"
- We had to set CONFIG_SWIFT_INSTALL=n to complete the packstack installation successfully, due to still-unexplained problems related to Swift.
OpenStack configuration
- We stopped following the guide after the item "dhcp_agent.ini configuration". We then created the two mandatory tenants (EGI_FCTF and EGI_ops) plus one tenant for each additional VO, a router, and various networks and subnets, either from the Horizon dashboard or from the command line, obtaining the following network topology:
- As an example, the commands below create the router and attach the wenmr network to it:
[root@egi-cloud ~]# source keystonerc_admin
[root@egi-cloud ~]# neutron router-create ext-to-vos
[root@egi-cloud ~]# tenant=$(keystone tenant-list | awk '/WeNMR/ {print $2}')
[root@egi-cloud ~]# neutron net-create int-wenmr --router:external=False --provider:network_type gre --provider:segmentation_id 103 --tenant_id $tenant
[root@egi-cloud ~]# neutron subnet-create int-wenmr 10.0.3.0/24 --enable-dhcp=True --dns-nameserver 192.84.143.16 --allocation-pool start=10.0.3.2,end=10.0.3.254 \
    --gateway=10.0.3.1 --name int-sub-wenmr --tenant_id $tenant
[root@egi-cloud ~]# neutron router-interface-add ext-to-vos int-sub-wenmr
- Because of this bug, we had to create the "Member" role in order to be able to create tenants as the admin user in the dashboard:
[root@egi-cloud ~]# source keystonerc_admin
[root@egi-cloud ~]# keystone role-create --name="Member"
[root@egi-cloud ~]# keystone role-create --name accounting
[root@egi-cloud ~]# keystone user-create --name accounting --pass <password>
# For each of the tenants, add the user with the accounting role
[root@egi-cloud ~]# keystone user-role-add --user accounting --role accounting --tenant <tenant>
- Add the following user/role: admin/Member for any new tenant (for EGI_ops, also add nova/Member,admin, needed for vmcatcher/glancepush), e.g.:
[root@egi-cloud ~]# keystone user-role-add --user admin --role Member --tenant <tenant>
- Do not forget to add the new tenant to the /etc/osssmrc file (or /etc/caso/caso.conf if you are using cASO) if you want to enable APEL accounting (see the APEL/SSM section)
- Remove “metadata” from “enabled_apis” as suggested in this guide
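A minimal sketch of that edit, demonstrated on a scratch copy so it can be dry-run anywhere; on the real controller the target is /etc/nova/nova.conf, and the enabled_apis value below is just an example:

```shell
# work on a throw-away copy of the option, not the live nova.conf
conf=$(mktemp)
echo 'enabled_apis=ec2,osapi_compute,metadata' > "$conf"
# drop "metadata" from the comma-separated list, wherever it appears
sed -i -e 's/^\(enabled_apis=.*\),metadata/\1/' \
       -e 's/^\(enabled_apis=\)metadata,/\1/' "$conf"
grep '^enabled_apis=' "$conf"
```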
- Set the variables enable_isolated_metadata = True and enable_metadata_network = True in /etc/neutron/dhcp_agent.ini as suggested in this guide
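The two flags live in the [DEFAULT] section of dhcp_agent.ini; here is a sketch of the edit on a scratch copy (the real file is /etc/neutron/dhcp_agent.ini):

```shell
# demo copy of dhcp_agent.ini with only a [DEFAULT] section
ini=$(mktemp)
printf '[DEFAULT]\n' > "$ini"
# append the two metadata flags under [DEFAULT]
cat >> "$ini" <<'EOF'
enable_isolated_metadata = True
enable_metadata_network = True
EOF
grep '^enable_' "$ini"
```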
- Run "yum install -y sheepdog" to fix a Glance error message, as suggested here
- Change the MTU to 1400 for all VMs by adding the line "dnsmasq_config_file=/etc/neutron/dnsmasq-neutron.conf" to /etc/neutron/dhcp_agent.ini and writing the line "dhcp-option-force=26,1400" in /etc/neutron/dnsmasq-neutron.conf; then run "service neutron-dhcp-agent restart"
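Put together, the MTU workaround amounts to something like the following sketch (paths and the restart command are the ones quoted above; this assumes dhcp_agent.ini has only a [DEFAULT] section, so an appended line still lands in it):

```
# force dnsmasq to advertise MTU 1400 (DHCP option 26) to all VMs
echo "dhcp-option-force=26,1400" > /etc/neutron/dnsmasq-neutron.conf
echo "dnsmasq_config_file=/etc/neutron/dnsmasq-neutron.conf" >> /etc/neutron/dhcp_agent.ini
service neutron-dhcp-agent restart
```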
- Enable VNC on compute nodes (if not set in the packstack answers file):
[root@cloud-01,cloud-05 ~]# sed -i 's|novncproxy_base_url=http://192.168.115.10:6080/vnc_auto.html|novncproxy_base_url=http://90.147.77.223:6080/vnc_auto.html|g' /etc/nova/nova.conf
- Set the right libvirt_vif_driver on all nodes:
[root@egi-cloud,cloud-01,cloud-05 ~]# sed -i 's|libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver|libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtGenericVIFDriver|g' /etc/nova/nova.conf
EGI FedCloud specific configuration
(see EGI Doc and CHAIN-REDS Doc)
- Install the CA certificates and the software for fetching CRLs on both the Controller (egi-cloud) and Compute (cloud-01, etc.) nodes:
[root@egi-cloud,cloud-01,05]# cd /etc/yum.repos.d
[root@egi-cloud,cloud-01,05]# cat << EOF > egi-trustanchors.repo
[EGI-trustanchors]
name=EGI-trustanchors
baseurl=http://repository.egi.eu/sw/production/cas/1/current/
gpgkey=http://repository.egi.eu/sw/production/cas/1/GPG-KEY-EUGridPMA-RPM-3
gpgcheck=1
enabled=1
EOF
[root@egi-cloud,cloud-01,05]# yum install -y ca-policy-egi-core
[root@egi-cloud,cloud-01,05]# yum install -y fetch-crl --nogpgcheck
[root@egi-cloud,cloud-01,05]# chkconfig fetch-crl-cron on
[root@egi-cloud,cloud-01,05]# service fetch-crl-cron start
Install the OCCI API
(only on Controller node)
[root@egi-cloud ~]# yum install -y python-pip.noarch git
[root@egi-cloud ~]# pip install pyssf
[root@egi-cloud ~]# git config --global http.sslverify false
[root@egi-cloud ~]# git clone https://github.com/EGI-FCTF/occi-os
[root@egi-cloud ~]# cd occi-os/
[root@egi-cloud occi-os]# git checkout stable/havana
[root@egi-cloud occi-os]# python setup.py install
[root@egi-cloud ~]# cat <<EOF >>/etc/nova/api-paste.ini
########
# OCCI #
########
[composite:occiapi]
use = egg:Paste#urlmap
/: occiapppipe

[pipeline:occiapppipe]
pipeline = authtoken keystonecontext occiapp
# with request body size limiting and rate limiting
# pipeline = sizelimit authtoken keystonecontext ratelimit occiapp

[app:occiapp]
use = egg:openstackocci-havana#occi_app
EOF
- Make sure the occiapi API is enabled in the /etc/nova/nova.conf configuration file:
[...]
enabled_apis=ec2,occiapi,osapi_compute
occiapi_listen_port=9000
- Add this line in /etc/nova/nova.conf (needed to allow floating-ip association via occi-client):
default_floating_pool=ext-net
- Modify the /etc/nova/policy.json file so that any user can get details about VMs they do not own, while still being unable to execute any other action (stop/suspend/pause/terminate/…) on them (see slide 7 here):
[root@egi-cloud]# sed -i 's|"admin_or_owner": "is_admin:True or project_id:%(project_id)s",|"admin_or_owner": "is_admin:True or project_id:%(project_id)s",\n "admin_or_user": "is_admin:True or user_id:%(user_id)s",|g' /etc/nova/policy.json
[root@egi-cloud]# sed -i 's|"default": "rule:admin_or_owner",|"default": "rule:admin_or_user",|g' /etc/nova/policy.json
[root@egi-cloud]# sed -i 's|"compute:get_all": "",|"compute:get": "rule:admin_or_owner",\n "compute:get_all": "",|g' /etc/nova/policy.json
- and restart the openstack-nova-* services:
[root@egi-cloud]# cd /etc/init.d/
[root@egi-cloud]# for i in $(ls openstack-nova-*); do service $i restart; done
- Enable SSL connections on port 8787 by creating the file /etc/httpd/conf.d/proxy_http.load:
[root@egi-cloud ~]# yum install mod_ssl
[root@egi-cloud ~]# cat /etc/httpd/conf.d/proxy_http.load
#
# Proxy Server directives. Uncomment the following lines to
# enable the proxy server:
LoadModule proxy_module /usr/lib64/httpd/modules/mod_proxy.so
LoadModule proxy_http_module /usr/lib64/httpd/modules/mod_proxy_http.so
LoadModule substitute_module /usr/lib64/httpd/modules/mod_substitute.so
LoadModule filter_module /usr/lib64/httpd/modules/mod_filter.so
Listen 8787
<VirtualHost _default_:8787>
  LogLevel warn
  ErrorLog /etc/httpd/logs/error.log
  CustomLog /etc/httpd/logs/ssl_access.log combined
  SSLEngine on
  SSLCertificateFile /etc/grid-security/hostcert.pem
  SSLCertificateKeyFile /etc/grid-security/hostkey.pem
  SSLCACertificatePath /etc/grid-security/certificates
  SSLCARevocationPath /etc/grid-security/certificates
  SSLVerifyClient optional
  SSLVerifyDepth 10
  SSLProtocol all -SSLv2
  SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM:+LOW
  SSLOptions +StdEnvVars +ExportCertData
  <IfModule mod_proxy.c>
    # Do not enable proxying with ProxyRequests until you have secured
    # your server.
    # Open proxy servers are dangerous both to your network and to the
    # Internet at large.
    ProxyRequests Off
    <Proxy *>
      Order deny,allow
      Deny from all
      #Allow from .example.com
    </Proxy>
    ProxyPass / http://egi-cloud.pd.infn.it:9000/ connectiontimeout=600 timeout=600
    ProxyPassReverse / http://egi-cloud.pd.infn.it:9000/
    FilterDeclare OCCIFILTER
    FilterProvider OCCIFILTER SUBSTITUTE resp=Content-Type $text/
    FilterProvider OCCIFILTER SUBSTITUTE resp=Content-Type $application/
    <Location />
      #AddOutputFilterByType SUBSTITUTE text/plain
      FilterChain OCCIFILTER
      Substitute s|http://egi-cloud.pd.infn.it:9000|https://egi-cloud.pd.infn.it:8787|n
      Order allow,deny
      Allow from all
    </Location>
  </IfModule>
</VirtualHost>
Configure VO parameters for Keystone
- Create the VO/tenant/role mapping:
[root@egi-cloud]# cat <<EOF > /etc/keystone/voms.json
{
    "fedcloud.egi.eu": {
        "tenant": "EGI_FCTF"
    },
    "ops": {
        "tenant": "EGI_ops"
    }
}
EOF
- To accept VOMS proxy certificates for the fedcloud.egi.eu and ops VOs, the following directories/files need to be created:
[root@egi-cloud]# mkdir -p /etc/grid-security/vomsdir/fedcloud.egi.eu
[root@egi-cloud]# mkdir -p /etc/grid-security/vomsdir/ops
[root@egi-cloud]# cat <<EOF >/etc/grid-security/vomsdir/fedcloud.egi.eu/voms1.egee.cesnet.cz.lsc
/DC=org/DC=terena/DC=tcs/OU=Domain Control Validated/CN=voms1.egee.cesnet.cz
/C=NL/O=TERENA/CN=TERENA eScience SSL CA
EOF
[root@egi-cloud]# cat <<EOF >/etc/grid-security/vomsdir/fedcloud.egi.eu/voms2.grid.cesnet.cz.lsc
/DC=org/DC=terena/DC=tcs/OU=Domain Control Validated/CN=voms2.grid.cesnet.cz
/C=NL/O=TERENA/CN=TERENA eScience SSL CA
EOF
[root@egi-cloud]# cat <<EOF >/etc/grid-security/vomsdir/ops/lcg-voms.cern.ch.lsc
/DC=ch/DC=cern/OU=computers/CN=lcg-voms.cern.ch
/DC=ch/DC=cern/CN=CERN Trusted Certification Authority
EOF
[root@egi-cloud]# cat <<EOF >/etc/grid-security/vomsdir/ops/voms.cern.ch.lsc
/DC=ch/DC=cern/OU=computers/CN=voms.cern.ch
/DC=ch/DC=cern/CN=CERN Trusted Certification Authority
EOF
- VOMS configuration options to be configured in /etc/keystone/keystone.conf under the [voms] section:
[root@egi-cloud]# cat <<EOF >>/etc/keystone/keystone.conf
[voms]
vomsdir_path = /etc/grid-security/vomsdir
ca_path = /etc/grid-security/certificates
voms_policy = /etc/keystone/voms.json
vomsapi_lib = libvomsapi.so.1
autocreate_users = True
EOF
- Check that the host certificate for your server is installed in the /etc/grid-security/ directory, and install it if needed:
[root@egi-cloud]# ls -l /etc/grid-security/host*
-rw-r--r-- 1 root root 1424 Feb 25 15:19 /etc/grid-security/hostcert.pem
-r-------- 1 root root  887 Feb 25 15:19 /etc/grid-security/hostkey.pem
Install OpenStack Keystone-VOMS module
[root@egi-cloud ~]# yum -y install voms m2crypto
[root@egi-cloud ~]# git clone git://github.com/IFCA/keystone-voms.git -b stable/havana
[root@egi-cloud ~]# cd keystone-voms/
[root@egi-cloud keystone-voms]# python setup.py install
- Enable the Keystone VOMS module
[root@egi-cloud]# cp /usr/share/keystone/keystone-dist-paste.ini /etc/keystone/keystone-paste.ini

- replace the line "#config_file = /usr/share/keystone/keystone-dist-paste.ini" with "config_file = /etc/keystone/keystone-paste.ini" in /etc/keystone/keystone.conf
- add the VOMS filter in /etc/keystone/keystone-paste.ini:

[filter:voms]
paste.filter_factory = keystone_voms:VomsAuthNMiddleware.factory

- add the voms filter to the public_api pipeline in /etc/keystone/keystone-paste.ini, probably before the debug, ec2_extension, user_crud_extension and public_service components. On the egi-cloud server it is:

[pipeline:public_api]
pipeline = access_log sizelimit url_normalize token_auth admin_token_auth xml_body json_body voms ec2_extension user_crud_extension public_service

- disable the plain keystone service:

[root@egi-cloud]# service openstack-keystone stop
[root@egi-cloud]# chkconfig --level 2345 openstack-keystone off
- Configuring keystone SSL support
- enable SSL with client authentication in /etc/keystone/keystone.conf:

[ssl]
enable = True
certfile = /etc/grid-security/hostcert.pem
keyfile = /etc/grid-security/hostkey.pem
ca_certs = /etc/grid-security/certificates/INFN-CA-2006.pem
cert_required = False

- add the SSL-enabled keystone URL in /etc/nova/api-paste.ini:

[filter:authtoken]
auth_uri=https://egi-cloud.pd.infn.it:5000/
- Configuring the Apache server: create the file /etc/httpd/conf.d/keystone.conf
[root@egi-cloud ~]# cat /etc/httpd/conf.d/keystone.conf
WSGIDaemonProcess keystone user=keystone group=nobody processes=3 threads=10
Listen 5000
<VirtualHost _default_:5000>
  LogLevel warn
  ErrorLog /etc/httpd/logs/error.log
  CustomLog /etc/httpd/logs/ssl_access.log combined
  SSLEngine on
  SSLCertificateFile /etc/grid-security/hostcert.pem
  SSLCertificateKeyFile /etc/grid-security/hostkey.pem
  SSLCACertificatePath /etc/grid-security/certificates
  SSLCARevocationPath /etc/grid-security/certificates
  SSLVerifyClient optional
  SSLVerifyDepth 10
  SSLProtocol all -SSLv2
  SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM:+LOW
  SSLOptions +StdEnvVars +ExportCertData
  WSGIScriptAlias / /usr/lib/cgi-bin/keystone/main
  WSGIProcessGroup keystone
</VirtualHost>
Listen 35357
<VirtualHost _default_:35357>
  LogLevel warn
  ErrorLog /etc/httpd/logs/error.log
  CustomLog /etc/httpd/logs/ssl_access.log combined
  SSLEngine on
  SSLCertificateFile /etc/grid-security/hostcert.pem
  SSLCertificateKeyFile /etc/grid-security/hostkey.pem
  SSLCACertificatePath /etc/grid-security/certificates
  SSLCARevocationPath /etc/grid-security/certificates
  SSLVerifyClient optional
  SSLVerifyDepth 10
  SSLProtocol all -SSLv2
  SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM:+LOW
  SSLOptions +StdEnvVars +ExportCertData
  WSGIScriptAlias / /usr/lib/cgi-bin/keystone/admin
  WSGIProcessGroup keystone
</VirtualHost>
- Do not forget to uncomment the log_file line in keystone.conf, otherwise permission problems affecting keystone.log break everything:
[root@egi-cloud ~]# sed -i 's|# log_file = /var/log/keystone/keystone.log|log_file = /var/log/keystone/keystone.log|g' /etc/keystone/keystone.conf
- Run keystone as WSGI application
[root@egi-cloud ~]# yum -y install python-paste-deploy
[root@egi-cloud ~]# mkdir -p /usr/lib/cgi-bin/keystone
[root@egi-cloud ~]# cp /usr/share/keystone/keystone.wsgi /usr/lib/cgi-bin/keystone/admin
[root@egi-cloud ~]# cp /usr/share/keystone/keystone.wsgi /usr/lib/cgi-bin/keystone/main
- Add the OPENSSL_ALLOW_PROXY_CERTS attribute in /etc/init.d/httpd and restart the service:
[root@egi-cloud ~]# cat /etc/rc.d/init.d/httpd
[...]
# Start httpd in the C locale by default.
export OPENSSL_ALLOW_PROXY_CERTS=1
HTTPD_LANG=${HTTPD_LANG-"C"}
[...]
[root@egi-cloud ~]# service httpd restart
- Manually adjust the keystone catalog so that the identity backend points to the correct URLs:
- public URL: https://egi-cloud.pd.infn.it:5000/v2.0
- admin URL: https://egi-cloud.pd.infn.it:35357/v2.0
- internal URL: https://egi-cloud.pd.infn.it:5000/v2.0
mysql> use keystone;
mysql> update endpoint set url="https://egi-cloud.pd.infn.it:5000/v2.0" where url="http://90.147.77.223:5000/v2.0";
mysql> update endpoint set url="https://egi-cloud.pd.infn.it:35357/v2.0" where url="http://90.147.77.223:35357/v2.0";
mysql> select id,url from endpoint;

The final select should show lines with the above URLs.
- Replace http with https in the auth_[protocol,uri,url] variables, and replace the IP address with egi-cloud.pd.infn.it in auth_[host,uri,url], in /etc/nova/nova.conf, /etc/nova/api-paste.ini, /etc/neutron/neutron.conf, /etc/neutron/api-paste.ini, /etc/neutron/metadata_agent.ini, /etc/cinder/cinder.conf, /etc/cinder/api-paste.ini, /etc/glance/glance-api.conf, /etc/glance/glance-registry.conf and /etc/glance/glance-cache.conf, then restart the services on the Controller node
- Make the same replacements in /etc/nova/nova.conf and /etc/neutron/neutron.conf on the Compute nodes, then restart their openstack-nova-compute and neutron-openvswitch-agent services.
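As a sanity check of those substitutions, here is a hypothetical sketch run against a scratch copy of one file (the sample option values are assumptions; on the real nodes the same sed would be applied to each of the files listed above):

```shell
# scratch copy standing in for e.g. /etc/nova/nova.conf
f=$(mktemp)
cat > "$f" <<'EOF'
auth_protocol=http
auth_uri=http://90.147.77.223:5000/
auth_host=90.147.77.223
EOF
# http -> https, and the controller IP -> its hostname
sed -i -e 's|^auth_protocol=http$|auth_protocol=https|' \
       -e 's|http://90.147.77.223|https://egi-cloud.pd.infn.it|' \
       -e 's|^auth_host=90.147.77.223$|auth_host=egi-cloud.pd.infn.it|' "$f"
cat "$f"
```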
- Comment out the RedirectMatch line in /etc/httpd/conf.d/openstack-dashboard.conf and then:
[root@egi-cloud ~]# mv /etc/httpd/conf.d/rootredirect.conf /etc/httpd/conf.d/rootredirect.conf.bak
- Replace http with https in the OPENSTACK_KEYSTONE_URL variable and set the OPENSTACK_HOST variable to egi-cloud.pd.infn.it in the /etc/openstack-dashboard/local_settings file.
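A sketch of that edit on a scratch copy; the two sample lines below are assumptions about how local_settings looks (the real file is /etc/openstack-dashboard/local_settings):

```shell
# scratch copy standing in for /etc/openstack-dashboard/local_settings
ls_file=$(mktemp)
cat > "$ls_file" <<'EOF'
OPENSTACK_HOST = "90.147.77.223"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v2.0" % OPENSTACK_HOST
EOF
# point the dashboard at the SSL-enabled keystone on the hostname
sed -i -e 's|OPENSTACK_HOST = "90.147.77.223"|OPENSTACK_HOST = "egi-cloud.pd.infn.it"|' \
       -e 's|OPENSTACK_KEYSTONE_URL = "http://|OPENSTACK_KEYSTONE_URL = "https://|' "$ls_file"
cat "$ls_file"
```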
- Do the following on both the Controller and Compute nodes (ca-bundle.crt seems to be hardcoded in /usr/lib/python2.6/site-packages/requests/certs.py):
[root@egi-cloud,cloud-01 ~]# mv /etc/pki/tls/certs/ca-bundle.crt /etc/pki/tls/certs/ca-bundle.crt.bak
[root@egi-cloud,cloud-01 ~]# ln -s /etc/grid-security/certificates/INFN-CA-2006.pem /etc/pki/tls/certs/ca-bundle.crt
- A more elegant solution is the following (see here):
[root@egi-cloud,cloud-01 ~]# update-ca-trust enable
[root@egi-cloud,cloud-01 ~]# cp /etc/grid-security/certificates/INFN-CA-2006.pem /etc/pki/ca-trust/source/anchors/
[root@egi-cloud,cloud-01 ~]# update-ca-trust extract
Install rOCCI Client
- We installed the rOCCI client on top of an EMI UI, with small changes from this guide:
[root@prod-ui-02]# curl -L https://get.rvm.io | bash -s stable
[root@prod-ui-02]# source /etc/profile.d/rvm.sh
[root@prod-ui-02]# rvm install ruby
[root@prod-ui-02]# gem install occi-cli
- As a normal user, basic usage looks like this:
# create ssh-key for accessing the VM as cloudadm:
[prod-ui-02]# ssh-keygen -t rsa -b 2048 -f tmpfedcloud
[prod-ui-02]# cat > tmpfedcloud.login << EOF
#cloud-config
users:
  - name: cloudadm
    sudo: ALL=(ALL) NOPASSWD:ALL
    lock-passwd: true
    ssh-import-id: cloudadm
    ssh-authorized-keys:
      - `cat tmpfedcloud.pub`
EOF
# create your VOMS proxy:
[prod-ui-02]# voms-proxy-init -voms fedcloud.egi.eu -rfc
...
# query the Cloud provider to see what is available (flavors and images):
[prod-ui-02]# occi --endpoint https://egi-cloud.pd.infn.it:8787/ --auth x509 --voms --action describe --resource resource_tpl
#####################################################################
[[ http://schemas.openstack.org/template/resource#m1-xlarge ]]
title: Flavor: m1.xlarge
term: m1-xlarge
location: /m1-xlarge/
#####################################################################
[[ http://schemas.openstack.org/template/resource#small-1core2gb40gb ]]
title: Flavor: small-1core2GB40GB
term: small-1core2gb40gb
location: /small-1core2gb40gb/
#####################################################################
[[ http://schemas.openstack.org/template/resource#m1-medium ]]
title: Flavor: m1.medium
term: m1-medium
location: /m1-medium/
#####################################################################
[[ http://schemas.openstack.org/template/resource#m1-tiny ]]
title: Flavor: m1.tiny
term: m1-tiny
location: /m1-tiny/
#####################################################################
[[ http://schemas.openstack.org/template/resource#small-1core3gb50gb ]]
title: Flavor: small-1core3GB50GB
term: small-1core3gb50gb
location: /small-1core3gb50gb/
#####################################################################
[[ http://schemas.openstack.org/template/resource#m1-small ]]
title: Flavor: m1.small
term: m1-small
location: /m1-small/
#####################################################################
[[ http://schemas.openstack.org/template/resource#m1-large ]]
title: Flavor: m1.large
term: m1-large
location: /m1-large/
#####################################################################
[[ http://schemas.openstack.org/template/resource#hpc ]]
title: Flavor: hpc
term: hpc
location: /hpc/
#####################################################################
[prod-ui-02]# occi --endpoint https://egi-cloud.pd.infn.it:8787/ --auth x509 --voms --action describe --resource os_tpl
#####################################################################
[[ http://schemas.openstack.org/template/os#b5c5e97a-2ace-48b0-8ad1-17d9314adecc ]]
title: Image: Windows
term: b5c5e97a-2ace-48b0-8ad1-17d9314adecc
location: /b5c5e97a-2ace-48b0-8ad1-17d9314adecc/
#####################################################################
[[ http://schemas.openstack.org/template/os#c64908ae-86ca-4be3-bcb3-6077aa6b5d32 ]]
title: Image: CernVM3
term: c64908ae-86ca-4be3-bcb3-6077aa6b5d32
location: /c64908ae-86ca-4be3-bcb3-6077aa6b5d32/
#####################################################################
[[ http://schemas.openstack.org/template/os#29e5d9a0-9fed-44d8-96b7-5cacd35de31a ]]
title: Image: Ubuntu 14.04
term: 29e5d9a0-9fed-44d8-96b7-5cacd35de31a
location: /29e5d9a0-9fed-44d8-96b7-5cacd35de31a/
#####################################################################
[[ http://schemas.openstack.org/template/os#2b0d2bcf-f84d-406a-bb9d-6ac3bfd260d2 ]]
title: Image: Fedora 20
term: 2b0d2bcf-f84d-406a-bb9d-6ac3bfd260d2
location: /2b0d2bcf-f84d-406a-bb9d-6ac3bfd260d2/
#####################################################################
[[ http://schemas.openstack.org/template/os#51c25157-2d1a-4e65-9fdf-1bf853666575 ]]
title: Image: SL-6.5-x86_64-minimal
term: 51c25157-2d1a-4e65-9fdf-1bf853666575
location: /51c25157-2d1a-4e65-9fdf-1bf853666575/
#####################################################################
#
# create a VM of "medium" size and OS "Ubuntu 14.04":
[prod-ui-02]# occi --endpoint https://egi-cloud.pd.infn.it:8787/ --auth x509 --voms --action create -r compute -M resource_tpl#medium -M os_tpl#29e5d9a0-9fed-44d8-96b7-5cacd35de31a --context user_data="file://$PWD/tmpfedcloud.login" --attribute occi.core.title="rOCCI-ubu"
https://egi-cloud.pd.infn.it:8787/compute/4420527f-1283-4908-b7ad-455c820aacc8
#
# assign a floating-ip to the VM:
[prod-ui-02]# occi --endpoint https://egi-cloud.pd.infn.it:8787/ --auth x509 --voms --action link --resource /compute/4420527f-1283-4908-b7ad-455c820aacc8 --link /network/public
#
# discover the floating-ip assigned:
[prod-ui-02]# occi --endpoint https://egi-cloud.pd.infn.it:8787/ --auth x509 --voms --action describe --resource /compute/4420527f-1283-4908-b7ad-455c820aacc8
...
occi.networkinterface.address = 90.147.77.226
occi.core.target = /network/public
occi.core.source = /compute/4420527f-1283-4908-b7ad-455c820aacc8
occi.core.id = /network/interface/4ade17de-e867-4300-aba9-3fad19f7dff7
...
#
# access the VM via ssh:
[prod-ui-02]# ssh -i tmpfedcloud -p 22 cloudadm@90.147.77.226
Enter passphrase for key 'tmpfedcloud':
Welcome to Ubuntu 14.04
...
Install FedCloud BDII
- See the guide here
- Add EPEL repository according to the instructions at https://fedoraproject.org/wiki/EPEL
- Add the cloud-info-provider repository to yum and install the service (it includes the resource bdii):
[root@egi-cloud ~]# wget http://repository.egi.eu/community/software/cloud.info.provider/0.x/releases/repofiles/sl-6-x86_64.repo \
    -O /etc/yum.repos.d/cloud-info-provider.repo
[root@egi-cloud ~]# yum install cloud-info-provider-service
- Customize the configuration file with the local site's information:
[root@egi-cloud ~]# sed -i 's|MySite|INFN-PADOVA-STACK|g' /etc/glite-info-static/site/site.cfg
[root@egi-cloud ~]# sed -i 's|Testing|Production|g' /etc/glite-info-static/site/site.cfg
[root@egi-cloud ~]# sed -i 's|http://www.cern.ch/gidinfo|http://www.pd.infn.it|g' /etc/glite-info-static/site/site.cfg
[root@egi-cloud ~]# sed -i 's|Geneva, Switzerland|Padova, Italy|g' /etc/glite-info-static/site/site.cfg
[root@egi-cloud ~]# sed -i 's|SITE_COUNTRY = Switzerland|SITE_COUNTRY = Italy|g' /etc/glite-info-static/site/site.cfg
[root@egi-cloud ~]# sed -i 's|SITE_LAT = 0.0|SITE_LAT = 45.41|g' /etc/glite-info-static/site/site.cfg
[root@egi-cloud ~]# sed -i 's|SITE_LONG = 0.0|SITE_LONG = 11.89|g' /etc/glite-info-static/site/site.cfg
[root@egi-cloud ~]# sed -i 's|SITE_EMAIL = admin@domain.invalid|SITE_EMAIL = cloud-prod@lists.pd.infn.it|g' /etc/glite-info-static/site/site.cfg
[root@egi-cloud ~]# sed -i 's|SITE_SECURITY_EMAIL = admin@domain.invalid|SITE_SECURITY_EMAIL = grid-sec@pd.infn.it|g' /etc/glite-info-static/site/site.cfg
[root@egi-cloud ~]# sed -i 's|SITE_SUPPORT_EMAIL = admin@domain.invalid|SITE_SUPPORT_EMAIL = cloud-prod@lists.pd.infn.it|g' /etc/glite-info-static/site/site.cfg
- Use one of the template files in /etc/cloud-info-provider as basis for creating your own YAML file with the static information of your resources. E.g:
[root@egi-cloud ~]# cp /etc/cloud-info-provider/sample.openstack.yaml /opt/cloud-info-provider/etc/bdii.yaml
- Edit the /opt/cloud-info-provider/etc/bdii.yaml configuration, setting up the site's permanent information and the OpenStack connection information. Most of the information to be provided is self-explanatory or specified in the file comments
- The site name is fetched from site → name in the template file; set it to the name defined in GOCDB. Alternatively, the site name can be fetched from /etc/glite-info-static/site/site.cfg (or from the file set with the --glite-site-info-static option)
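For example, the relevant fragment of the YAML file might look like the following (the key path is the site → name one described above; the value is this site's GOCDB name, and any surrounding keys depend on your template):

```
site:
  name: INFN-PADOVA-STACK
```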
- Be sure that keystone contains the OCCI endpoint, otherwise it will not be published by the BDII:
[root@egi-cloud ~]# keystone service-list
[root@egi-cloud ~]# keystone service-create --name nova-occi --type occi --description 'Nova OCCI Service'
[root@egi-cloud ~]# keystone endpoint-create --service_id <the one obtained above> --region RegionOne --publicurl https://$HOSTNAME:8787/ --internalurl https://$HOSTNAME:8787/ --adminurl https://$HOSTNAME:8787/
- By default, the provider script filters out images that have no marketplace URI defined in the marketplace or vmcatcher_event_ad_mpuri property. If you want to list all the image templates (including local snapshots), set the variable 'require_marketplace_id: false' under 'compute' → 'images' → 'defaults' in the YAML configuration file.
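For reference, the relevant fragment of the YAML configuration might look like the sketch below; the nesting follows the compute → images → defaults path described above (key names other than require_marketplace_id are assumptions based on the sample template):

```yaml
compute:
  images:
    defaults:
      # publish all image templates, including local snapshots
      require_marketplace_id: false
```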
- Create the file /var/lib/bdii/gip/provider/cloud-info-provider that calls the provider with the correct options for your site, for example:
#!/bin/sh
cloud-info-provider-service --yaml /opt/cloud-info-provider/etc/bdii.yaml \
    --middleware openstack \
    --os-username <username> --os-password <passwd> \
    --os-tenant-name <tenant> --os-auth-url <url>
- Run the cloud-info-provider script manually and check that the output returns the complete LDIF. To do so, execute:
[root@egi-cloud ~]# chmod +x /var/lib/bdii/gip/provider/cloud-info-provider
[root@egi-cloud ~]# /var/lib/bdii/gip/provider/cloud-info-provider
- Now you can start the bdii service:
[root@egi-cloud ~]# service bdii start
- Use the command below to see if the information is being published:
[root@egi-cloud ~]# ldapsearch -x -h localhost -p 2170 -b o=glue
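To spot-check that the endpoints actually made it into the tree without reading the whole LDIF, the ldapsearch output can be piped through grep. A minimal sketch — demonstrated here on a fabricated two-entry LDIF excerpt; in production, replace the here-doc with the `ldapsearch -x -h localhost -p 2170 -b o=glue` command above:

```shell
# Count the endpoint URLs published by the resource BDII.
# The pipeline is plain text processing; the sample LDIF below is
# illustrative only.
grep -c '^GLUE2EndpointURL:' <<'EOF'
dn: GLUE2EndpointID=occi,GLUE2GroupID=cloud,o=glue
GLUE2EndpointURL: https://egi-cloud.pd.infn.it:8787/
dn: GLUE2EndpointID=native,GLUE2GroupID=cloud,o=glue
GLUE2EndpointURL: https://egi-cloud.pd.infn.it:5000/v2.0
EOF
```

A count of zero means the provider published no endpoints and the keystone OCCI service registration above should be re-checked.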
- Information on how to set up the site-BDII in egi-cloud-sbdii.pd.infn.it is available here
- Add your cloud-info-provider to your site-BDII egi-cloud-sbdii.pd.infn.it by adding new lines in the site.def like this:
BDII_REGIONS="CLOUD BDII"
BDII_CLOUD_URL="ldap://egi-cloud.pd.infn.it:2170/GLUE2GroupID=cloud,o=glue"
BDII_BDII_URL="ldap://egi-cloud-sbdii.pd.infn.it:2170/mds-vo-name=resource,o=grid"
Install vmcatcher/glancepush
- VMcatcher allows users to subscribe to Virtual Machine image lists, cache the images referenced in a list, validate the list with X.509-based public key cryptography, validate the images against the SHA512 hashes in the list, and provide events so that further applications can process updates or expiries of images without having to validate them again (see this guide).
[root@egi-cloud ~]# useradd stack
[root@egi-cloud ~]# cat << EOF > /etc/yum.repos.d/yokel.repo
[yokel_scientific_release_6]
name=yokel_scientific_release_6
baseurl=http://www.yokel.org/pub/software/yokel.org/scientific/6/release/x86_64/rpm/
enabled=1
gpgcheck=0
EOF
[root@egi-cloud ~]# yum install vmcatcher gpvcmupdate python-glancepush
[root@egi-cloud ~]# sed -i 's|temp_dir = "/tmp/"|temp_dir = "/opt/stack/vmcatcher/tmp/"|g' /usr/bin/gpvcmupdate.py
# use gluster storage for caching images and tmp files
[root@egi-cloud ~]# ln -fs /var/lib/swift/vmcatcher /opt/stack/
#
[root@egi-cloud ~]# mkdir -p /opt/stack/vmcatcher/cache /opt/stack/vmcatcher/cache/partial /opt/stack/vmcatcher/cache/expired /opt/stack/vmcatcher/tmp
[root@egi-cloud ~]# chown stack:stack /opt/stack/vmcatcher/cache /opt/stack/vmcatcher/cache/partial /opt/stack/vmcatcher/cache/expired /opt/stack/vmcatcher/tmp
[root@egi-cloud ~]# mkdir -p /var/spool/glancepush /var/log/glancepush/ /etc/glancepush /etc/glancepush/transform /etc/glancepush/meta /etc/glancepush/test /etc/glancepush/clouds
[root@egi-cloud ~]# cp /etc/keystone/voms.json /etc/glancepush/
[root@egi-cloud ~]# chown stack:stack -R /var/spool/glancepush /etc/glancepush /var/log/glancepush/
- Now for each VO/tenant you have in voms.json write a file like this:
[root@egi-cloud ~]# su - stack
[stack@egi-cloud ~]# cat << EOF > /etc/glancepush/clouds/dteam
[general]
# Tenant for this VO. Must match the tenant defined in voms.json file
testing_tenant=EGI_dteam
# Identity service endpoint (Keystone)
endpoint_url=https://egi-cloud.pd.infn.it:35357/v2.0
# User Password
password=xxxxx
# User
username=admin
# Set this to true if you're NOT using self-signed certificates
is_secure=True
# SSH private key that will be used to perform policy checks (to be done)
#ssh_key=openstack.key
# WARNING: Only define the next variable if you're going to need it. Otherwise you may encounter problems
#cacert=path_to_your_cert
EOF
- For images not belonging to any VO, use the admin tenant:
[stack@egi-cloud ~]# cat << EOF > /etc/glancepush/clouds/openstack
[general]
# Tenant for this VO. Must match the tenant defined in voms.json file
testing_tenant=admin
# Identity service endpoint (Keystone)
endpoint_url=https://egi-cloud.pd.infn.it:35357/v2.0
# User Password
password=xxxxx
# User
username=admin
# Set this to true if you're NOT using self-signed certificates
is_secure=True
# SSH private key that will be used to perform policy checks (to be done)
#ssh_key=openstack.key
# WARNING: Only define the next variable if you're going to need it. Otherwise you may encounter problems
#cacert=path_to_your_cert
EOF
- Check that vmcatcher is running properly by listing and subscribing to an image list:
[stack@egi-cloud ~]# export VMCATCHER_RDBMS="sqlite:////opt/stack/vmcatcher/vmcatcher.db"
[stack@egi-cloud ~]# vmcatcher_subscribe -l
[stack@egi-cloud ~]# vmcatcher_subscribe -e -s https://vmcaster.appdb.egi.eu/store/vappliance/tinycorelinux/image.list
[stack@egi-cloud ~]# vmcatcher_subscribe -l
8ddbd4f6-fb95-4917-b105-c89b5df99dda  True  None  https://vmcaster.appdb.egi.eu/store/vappliance/tinycorelinux/image.list
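The UUID in the first column of the `vmcatcher_subscribe -l` output is what later subscription commands operate on; when scripting, it can be pulled out with awk. A sketch, demonstrated on the captured sample line above (in production, pipe the real `vmcatcher_subscribe -l` output instead of the here-doc):

```shell
# Extract subscription UUIDs (first column) from vmcatcher_subscribe -l output.
awk '{print $1}' <<'EOF'
8ddbd4f6-fb95-4917-b105-c89b5df99dda True None https://vmcaster.appdb.egi.eu/store/vappliance/tinycorelinux/image.list
EOF
```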
- Create a CRON wrapper for vmcatcher, named $HOME/gpvcmupdate/vmcatcher_eventHndl_OS_cron.sh, using the following code:
#!/bin/bash
# Cron handler for VMCatcher image synchronization script for OpenStack

# Vmcatcher configuration variables
export VMCATCHER_RDBMS="sqlite:////opt/stack/vmcatcher/vmcatcher.db"
export VMCATCHER_CACHE_DIR_CACHE="/opt/stack/vmcatcher/cache"
export VMCATCHER_CACHE_DIR_DOWNLOAD="/opt/stack/vmcatcher/cache/partial"
export VMCATCHER_CACHE_DIR_EXPIRE="/opt/stack/vmcatcher/cache/expired"
export VMCATCHER_CACHE_EVENT="python /usr/bin/gpvcmupdate.py -D"

# Update vmcatcher image lists
vmcatcher_subscribe -U

# Add all the new images to the cache
for a in `vmcatcher_image -l | awk '{if ($2==2) print $1}'`; do
  vmcatcher_image -a -u $a
done

# Update the cache
vmcatcher_cache -v -v

# Run glancepush
python /usr/bin/python-glancepush.py
- Test that the vmcatcher handler is working correctly by running:
[stack@egi-cloud ~]# chmod +x $HOME/gpvcmupdate/vmcatcher_eventHndl_OS_cron.sh
[stack@egi-cloud ~]# $HOME/gpvcmupdate/vmcatcher_eventHndl_OS_cron.sh
- Add the following line to the stack user crontab:
50 */6 * * * $HOME/gpvcmupdate/vmcatcher_eventHndl_OS_cron.sh >> /var/log/glancepush/vmcatcher.log 2>&1
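Since the wrapper then runs unattended, it is worth periodically scanning the log it writes for failures. A hedged sketch — the error strings grepped for are examples, not an exhaustive list of vmcatcher messages, and the two-line log below is fabricated for illustration (point the check at /var/log/glancepush/vmcatcher.log in production):

```shell
# Count obvious failure markers in a vmcatcher cron log.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
2015-01-01 06:50:01 INFO vmcatcher_cache: image cached
2015-01-01 12:50:02 ERROR vmcatcher_cache: download failed
EOF
# -c counts matching lines, -i ignores case, -E enables alternation
grep -ciE 'error|traceback' "$LOG"
rm -f "$LOG"
```

A non-zero count means at least one image update failed and the log deserves a closer look.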
- Useful links for getting VO-wide image lists that need authentication to AppDB: Vmcatcher setup, Obtaining an access token, Image list store.
Install APEL/SSM
# wget rpms from http://apel.github.io/apel/rpms/SL6/
[root@egi-cloud ~]# useradd apel
[root@egi-cloud ~]# yum localinstall apel-ssm-2.1.1-0.el6.noarch.rpm apel-client-1.1.3-0.el6.noarch.rpm apel-lib-1.1.3-0.el6.noarch.rpm
[root@egi-cloud ~]# wget ftp://ftp.in2p3.fr/ccin2p3/egi-acct-osdriver/apel-ssm-openstack/apel-ssm-openstack-latest.noarch.rpm
[root@egi-cloud ~]# yum localinstall apel-ssm-openstack-latest.noarch.rpm
[root@egi-cloud ~]# mkdir /etc/grid-security/apel
[root@egi-cloud ~]# cp /etc/grid-security/host*.pem /etc/grid-security/apel/
[root@egi-cloud ~]# chown -R apel.apel /etc/grid-security/apel/
[root@egi-cloud ~]# chown apel.apel /var/spool/apel/
[root@egi-cloud ~]# chown apel.apel /var/spool/osssm/
- create from the OpenStack dashboard the user "accounting" with role "admin", and add it to both EGI_FCTF and EGI_ops tenants
- change files /etc/apel/sender.cfg and /etc/osssmrc according to the instructions
[root@egi-cloud ~]# sed -i 's|destination:|destination:/queue/global.accounting.test.cloud.central|g' /etc/apel/sender.cfg
[root@egi-cloud ~]# sed -i 's|/etc/grid-security/hostcert.pem|/etc/grid-security/apel/hostcert.pem|g' /etc/apel/sender.cfg
[root@egi-cloud ~]# sed -i 's|/etc/grid-security/hostkey.pem|/etc/grid-security/apel/hostkey.pem|g' /etc/apel/sender.cfg
# this below is a temporary workaround (27/5/2014)
[root@egi-cloud ~]# sed -i 's|use_ssl: true|use_ssl: false|g' /etc/apel/sender.cfg
#
[root@egi-cloud ~]# sed -i 's|keystone_api_url = http://###KEYSTONE_HOSTNAME###:###PORT###/v2.0|keystone_api_url = https://egi-cloud.pd.infn.it:5000/v2.0|g' /etc/osssmrc
[root@egi-cloud ~]# sed -i 's|user = ###USER###|user = accounting|g' /etc/osssmrc
[root@egi-cloud ~]# sed -i 's|password = ###PASSWORD###|password = <put the password here>|g' /etc/osssmrc
[root@egi-cloud ~]# sed -i 's|tenants = ###TENANT_NAME_LIST###|tenants = EGI_FCTF,EGI_ops|g' /etc/osssmrc
[root@egi-cloud ~]# sed -i 's|gocdb_sitename = ###SITE_NAME###|gocdb_sitename = INFN-PADOVA-STACK|g' /etc/osssmrc
[root@egi-cloud ~]# sed -i 's|ssm_input_path = /opt/apel/ssm/messages/outgoing/openstack|ssm_input_path = /var/spool/apel/outgoing|g' /etc/osssmrc
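With this many in-place edits it is easy to miss one whose pattern did not match; a quick sanity check is to grep the edited files for leftover `###` placeholder markers. A sketch, run here against a fabricated sample file — in production, point it at /etc/osssmrc and /etc/apel/sender.cfg:

```shell
# Any remaining '###' placeholders mean a sed above did not take effect.
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
user = accounting
password = ###PASSWORD###
gocdb_sitename = INFN-PADOVA-STACK
EOF
grep -c '###' "$CFG"   # non-zero output: configuration still incomplete
rm -f "$CFG"
```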
- apply the fix described here to send VO info to the accounting portal
[root@egi-cloud ~]# sed -i "s|'FQAN': nullValue,|'FQAN': vo,|g" /usr/share/pyshared/osssm.py
- Record extraction and pushing to GOC Accounting are controlled by the cron file /var/lib/osssm/cron
- create or destroy some VMs, then check that files in /var/spool/apel/outgoing/ and /var/spool/osssm are created as expected:
[root@egi-cloud ~]# su - apel
[apel@egi-cloud ~]$ /usr/bin/osssm.extract
[apel@egi-cloud ~]$ ll /var/spool/osssm/
total 4
-rw-rw-r-- 1 apel apel 17848 May 27 12:33 servers
-rw-rw-r-- 1 apel apel     0 May 27 12:32 timestamp
[apel@egi-cloud ~]$ /usr/bin/osssm.push
[apel@egi-cloud ~]$ ll /var/spool/apel/outgoing/
total 8
drwxrwxr-x 2 apel apel 4096 May 27 12:17 5384643c
drwxrwxr-x 2 apel apel 4096 May 27 12:34 538469dc
- execute the command for sending the accounting data to GOCDB. After that, enable it as cron job in /var/lib/osssm/cron with the desired periodicity
[apel@egi-cloud ~]$ ssmsend
2014-05-27 12:34:29,730 - ssmsend - INFO - ========================================
2014-05-27 12:34:29,731 - ssmsend - INFO - Starting sending SSM version 2.1.1.
2014-05-27 12:34:29,731 - ssmsend - INFO - Retrieving broker details from ldap://lcg-bdii.cern.ch:2170 ...
2014-05-27 12:34:30,024 - ssmsend - INFO - Found 2 brokers.
2014-05-27 12:34:30,024 - ssmsend - INFO - No server certificate supplied. Will not encrypt messages.
2014-05-27 12:34:30,066 - stomp.py - INFO - Established connection to host mq.cro-ngi.hr, port 6163
2014-05-27 12:34:30,094 - ssm.ssm2 - INFO - Connected.
2014-05-27 12:34:30,094 - ssm.ssm2 - INFO - Will send messages to: /queue/global.accounting.test.cloud.central
2014-05-27 12:34:30,098 - ssm.ssm2 - INFO - Found 1 messages.
2014-05-27 12:34:30,098 - ssm.ssm2 - INFO - Sending message: 538469dc/538469fdc8aea0
2014-05-27 12:34:30,108 - ssm.ssm2 - INFO - Waiting for broker to accept message.
2014-05-27 12:34:30,157 - ssm.ssm2 - INFO - Broker received message: 538469dc/538469fdc8aea0
2014-05-27 12:34:30,209 - ssmsend - INFO - SSM run has finished.
2014-05-27 12:34:30,209 - ssm.ssm2 - INFO - SSM connection ended.
2014-05-27 12:34:30,209 - ssmsend - INFO - SSM has shut down.
2014-05-27 12:34:30,209 - ssmsend - INFO - ========================================
#
[root@egi-cloud ~]# chkconfig osssm on
[root@egi-cloud ~]# service osssm start
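The lines to look for in the ssmsend output are the "Sending message" / "Broker received message" pairs: every sent message should be acknowledged. This can be checked mechanically with awk; a sketch, demonstrated on two lines taken from the log excerpt above (in production, feed it the cron log instead of the here-doc):

```shell
# Compare sent vs. acknowledged message counts in an ssmsend log.
awk '/Sending message:/ {sent++}
     /Broker received message:/ {recv++}
     END {print sent+0, recv+0}' <<'EOF'
2014-05-27 12:34:30,098 - ssm.ssm2 - INFO - Sending message: 538469dc/538469fdc8aea0
2014-05-27 12:34:30,157 - ssm.ssm2 - INFO - Broker received message: 538469dc/538469fdc8aea0
EOF
```

If the two counts differ, some records were not accepted by the broker and will need to be resent.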
Install the new accounting system (cASO)
[root@egi-cloud ~]# yum groupinstall "Development tools"
[root@egi-cloud ~]# yum install zlib-devel bzip2-devel openssl-devel ncurses-devel sqlite-devel readline-devel tk-devel gdbm-devel db4-devel libpcap-devel xz-devel libffi-devel
[root@egi-cloud ~]# echo "/usr/local/lib" >> /etc/ld.so.conf; /sbin/ldconfig
[root@egi-cloud ~]# wget http://python.org/ftp/python/2.7.9/Python-2.7.9.tar.xz
[root@egi-cloud ~]# tar xf Python-2.7.9.tar.xz; cd Python-2.7.9
[root@egi-cloud ~]# ./configure --prefix=/usr/local --enable-unicode=ucs4 --enable-shared LDFLAGS="-Wl,-rpath /usr/local/lib"
[root@egi-cloud ~]# make && make altinstall
- Download and install Setuptools + pip:
[root@egi-cloud ~]# wget https://bitbucket.org/pypa/setuptools/raw/bootstrap/ez_setup.py
[root@egi-cloud ~]# python2.7 ez_setup.py
[root@egi-cloud ~]# easy_install-2.7 pip
- Now install virtualenvwrapper for Python 2.7:
[root@egi-cloud ~]$ pip2.7 install virtualenvwrapper
[root@egi-cloud ~]$ cat >> .bashrc << EOF
export VIRTUALENVWRAPPER_PYTHON=/usr/local/bin/python2.7
export USR_BIN=$(dirname $(which virtualenv))
if [ -f $USR_BIN/virtualenvwrapper.sh ]; then
    source $USR_BIN/virtualenvwrapper.sh
elif [ -f /usr/bin/virtualenvwrapper.sh ]; then
    source /usr/bin/virtualenvwrapper.sh
else
    echo "Can't find a virtualenv wrapper installation"
fi
EOF
[root@egi-cloud ~]$ source .bashrc
- And now install cASO:
[root@egi-cloud ~]$ mkvirtualenv caso
[root@egi-cloud ~]$ pip install caso
- Copy the CA certificate bundle into the right place:
[root@egi-cloud ~]# cd /root/.virtualenvs/caso/lib/python2.7/site-packages/requests/
[root@egi-cloud ~]# cp /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem . ; cp /etc/pki/ca-trust/extracted/openssl/ca-bundle.trust.crt .
[root@egi-cloud ~]# mv cacert.pem cacert.pem.bak; ln -s tls-ca-bundle.pem cacert.pem
- Configure /etc/caso/caso.conf according to the documentation and test if everything works:
[root@egi-cloud ~]$ mkdir /var/spool/caso; mkdir /var/spool/apel/outgoing/openstack
[root@egi-cloud ~]$ workon caso
[root@egi-cloud ~]$ caso-extract -v -d
- Create the cron job
[root@egi-cloud ~]# cat /etc/cron.d/caso
# extract and send usage records to APEL/SSM
10 * * * * root /root/.virtualenvs/caso/bin/caso-extract; chown -R apel.apel /var/spool/apel/outgoing/openstack/
# send buffered usage records to GOC
30 */24 * * * apel /usr/bin/ssmsend
Troubleshooting
- In order to allow cld-nagios to access egi-cloud.local, cloud-01.local, cloud-05.local and gilda-11.local, add the following routing rules in all servers:
[root@egi-cloud ~]# echo "192.168.60.32 via 192.168.115.1" >> /etc/sysconfig/network-scripts/route-eth0.19
[root@cloud-01,05,gilda-11 ~]# echo "192.168.60.32 via 192.168.115.1" >> /etc/sysconfig/network-scripts/route-eth1
- Also allow passwordless ssh access to egi-cloud from cld-nagios:
[root@cld-nagios ~]# ssh-keygen -t rsa
[root@cld-nagios ~]# ssh-copy-id egi-cloud.local
- In case of Nagios alarms, try the following:
$ ssh root@egi-cloud
[root@egi-cloud ~]# ./restart-services-ctlnet.sh
[root@egi-cloud ~]# for i in cloud-01.local cloud-05.local gilda-11.local; do ssh $i ./restart-service-cmp.sh; done
- Resubmit the Nagios probe and check if it works again