
Newton-CentOS7 Testbed

Fully integrated Resource Provider INFN-PADOVA-STACK in production since 3 August 2017.

EGI Monitoring/Accounting

Local Monitoring/Accounting

Local dashboard

Layout

  • Controller + Network node + Storage node + Telemetry service: egi-cloud.pd.infn.it
  • Compute nodes: cloud-01:06.pn.pd.infn.it
  • NoSQL database: egi-cloud-ha.pn.pd.infn.it
  • OneData provider: one-data-01.pd.infn.it
  • Cloudkeeper and Cloudkeeper-OS: egi-cloud-ha.pd.infn.it
  • Network layout available here (authorized users only)

OpenStack configuration

  • Controller/Network node and Compute nodes were installed according to OpenStack official documentation
  • We created one tenant for each supported EGI FedCloud VO, a router, and various nets and subnets, obtaining the following network topology:

  • We mount the partitions for the glance and cinder services from 192.168.61.100 with the NFS driver
yum install -y nfs-utils
mkdir -p /var/lib/glance/images
cat<<EOF>>/etc/fstab
192.168.61.100:/glance-egi /var/lib/glance/images     nfs defaults      
EOF
mount -a
  • We use some specific configurations for the cinder services, following the documentation on cinder with NFS backend (a configuration sketch is shown after this list).
  • The telemetry service uses a NoSQL database, so we install MongoDB on cld-mongo-egi.cloud.pd.infn.it
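  • A minimal sketch of the cinder NFS backend configuration referred to above, assuming the cinder-egi export on the same NFS server (mentioned in the Troubleshooting section); adapt the share and backend names to your setup:
echo "192.168.61.100:/cinder-egi" > /etc/cinder/nfs_shares
chown root:cinder /etc/cinder/nfs_shares
chmod 640 /etc/cinder/nfs_shares
openstack-config --set /etc/cinder/cinder.conf DEFAULT enabled_backends nfs
openstack-config --set /etc/cinder/cinder.conf nfs volume_driver cinder.volume.drivers.nfs.NfsDriver
openstack-config --set /etc/cinder/cinder.conf nfs nfs_shares_config /etc/cinder/nfs_shares
openstack-config --set /etc/cinder/cinder.conf nfs nfs_mount_point_base /var/lib/cinder/mnt
systemctl restart openstack-cinder-volume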

EGI FedCloud specific configuration

(see EGI Doc)

  • Install the CA certificates and the software for fetching the CRLs on both the Controller (egi-cloud) and the Compute (cloud-01:06) nodes:
systemctl stop httpd
curl -L http://repository.egi.eu/sw/production/cas/1/current/repo-files/EGI-trustanchors.repo | sudo tee /etc/yum.repos.d/EGI-trustanchors.repo
yum install -y ca-policy-egi-core fetch-crl
systemctl enable fetch-crl-cron.service
systemctl start fetch-crl-cron.service
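  • To verify that CRL downloading works, you can run fetch-crl once by hand and check that the CRL files appear (a quick, hedged check):
/usr/sbin/fetch-crl
ls /etc/grid-security/certificates/*.r0 | head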

Install OpenStack Keystone-VOMS module

(see Keystone-voms doc)

  • Prepare to run keystone as WSGI app in SSL
yum install -y voms mod_ssl
 
APACHE_LOG_DIR=/var/log/httpd
 
cat <<EOF>/etc/httpd/conf.d/wsgi-keystone.conf
Listen 5000
WSGIDaemonProcess keystone user=keystone group=keystone processes=8 threads=1
<VirtualHost _default_:5000>
    LogLevel     warn
    ErrorLog    /var/log/httpd/error.log
    CustomLog   /var/log/httpd/ssl_access.log combined
 
    SSLEngine               on
    SSLCertificateFile      /etc/grid-security/hostcert.pem
    SSLCertificateKeyFile   /etc/grid-security/hostkey.pem
    SSLCACertificatePath    /etc/grid-security/certificates
    SSLCARevocationPath     /etc/grid-security/certificates
    SSLVerifyClient         optional
    SSLVerifyDepth          10
    SSLProtocol             all -SSLv2
    SSLCipherSuite          ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM:+LOW
    SSLOptions              +StdEnvVars +ExportCertData
 
    WSGIScriptAlias /  /var/www/cgi-bin/keystone/main
    WSGIProcessGroup keystone
</VirtualHost>
 
Listen 35357
WSGIDaemonProcess   keystoneapi user=keystone group=keystone processes=8 threads=1
<VirtualHost _default_:35357>
    LogLevel    warn
    ErrorLog    /var/log/httpd/error.log
    CustomLog   /var/log/httpd/ssl_access.log combined
 
    SSLEngine               on
    SSLCertificateFile      /etc/grid-security/hostcert.pem
    SSLCertificateKeyFile   /etc/grid-security/hostkey.pem
    SSLCACertificatePath    /etc/grid-security/certificates
    SSLCARevocationPath     /etc/grid-security/certificates
    SSLVerifyClient         optional
    SSLVerifyDepth          10
    SSLProtocol             all -SSLv2
    SSLCipherSuite          ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM:+LOW
    SSLOptions              +StdEnvVars +ExportCertData
 
    WSGIScriptAlias     / /var/www/cgi-bin/keystone/admin
    WSGIProcessGroup    keystoneapi
</VirtualHost>
EOF
  • Check that the host certificate for your server is installed in the /etc/grid-security/ directory, and install it if needed:
[root@egi-cloud]# ls -l /etc/grid-security/host*
-rw-r--r--.  1 root root  2021 Sep  8 18:35 hostcert.pem
-rw-------.  1 root root  1675 Sep  8 18:35 hostkey.pem
  • Take the keystone.py file, copy it to /var/www/cgi-bin/keystone/keystone.py and create the following links:
echo "OPENSSL_ALLOW_PROXY_CERTS=1" >> /etc/sysconfig/httpd
mkdir -p /var/www/cgi-bin/keystone
curl http://git.openstack.org/cgit/openstack/keystone/plain/httpd/keystone.py?h=stable/newton | tee /var/www/cgi-bin/keystone/keystone.py
ln /var/www/cgi-bin/keystone/keystone.py /var/www/cgi-bin/keystone/main
ln /var/www/cgi-bin/keystone/keystone.py /var/www/cgi-bin/keystone/admin
chown -R keystone:keystone /var/www/cgi-bin/keystone
  • Install the Keystone-VOMS module:
yum localinstall -y http://repository.egi.eu/community/software/keystone.voms/stable-newton/releases/centos/7/x86_64/RPMS/python-keystone_voms-10.0.0-1.el7.centos.noarch.rpm
  • Enable the Keystone VOMS module
cat<<EOF>>/etc/keystone/keystone-paste.ini
 
[filter:voms]
paste.filter_factory = keystone_voms.core:VomsAuthNMiddleware.factory
EOF
 
sed -i 's|ec2_extension public_service|voms ec2_extension public_service|' /etc/keystone/keystone-paste.ini
  • Configure the Keystone VOMS module
cat<<EOF >> /etc/keystone/keystone.conf
 
[voms]
vomsdir_path = /etc/grid-security/vomsdir
ca_path = /etc/grid-security/certificates
voms_policy = /etc/keystone/voms.json
vomsapi_lib = libvomsapi.so.1
autocreate_users = True
add_roles = False
user_roles = _member_
enable_pusp = False
EOF
mkdir -p /etc/grid-security/vomsdir/fedcloud.egi.eu
cat > /etc/grid-security/vomsdir/fedcloud.egi.eu/voms1.egee.cesnet.cz.lsc << EOF
/DC=org/DC=terena/DC=tcs/OU=Domain Control Validated/CN=voms1.egee.cesnet.cz
/C=NL/O=TERENA/CN=TERENA eScience SSL CA
EOF
cat > /etc/grid-security/vomsdir/fedcloud.egi.eu/voms2.grid.cesnet.cz.lsc << EOF
/DC=org/DC=terena/DC=tcs/OU=Domain Control Validated/CN=voms2.grid.cesnet.cz
/C=NL/ST=Noord-Holland/L=Amsterdam/O=TERENA/CN=TERENA eScience SSL CA 2
EOF
mkdir -p /etc/grid-security/vomsdir/dteam
cat > /etc/grid-security/vomsdir/dteam/voms.hellasgrid.gr.lsc << EOF
/C=GR/O=HellasGrid/OU=hellasgrid.gr/CN=voms.hellasgrid.gr
/C=GR/O=HellasGrid/OU=Certification Authorities/CN=HellasGrid CA 2016
EOF
cat > /etc/grid-security/vomsdir/dteam/voms2.hellasgrid.gr.lsc << EOF
/C=GR/O=HellasGrid/OU=hellasgrid.gr/CN=voms2.hellasgrid.gr
/C=GR/O=HellasGrid/OU=Certification Authorities/CN=HellasGrid CA 2016
EOF
mkdir -p /etc/grid-security/vomsdir/enmr.eu
cat > /etc/grid-security/vomsdir/enmr.eu/voms2.cnaf.infn.it.lsc <<EOF
/C=IT/O=INFN/OU=Host/L=CNAF/CN=voms2.cnaf.infn.it
/C=IT/O=INFN/CN=INFN Certification Authority
EOF
cat > /etc/grid-security/vomsdir/enmr.eu/voms-02.pd.infn.it.lsc <<EOF
/C=IT/O=INFN/OU=Host/L=Padova/CN=voms-02.pd.infn.it
/C=IT/O=INFN/CN=INFN Certification Authority
EOF
mkdir -p /etc/grid-security/vomsdir/vo.indigo-datacloud.eu
cat > /etc/grid-security/vomsdir/vo.indigo-datacloud.eu/voms01.ncg.ingrid.pt.lsc <<EOF
/C=PT/O=LIPCA/O=LIP/OU=Lisboa/CN=voms01.ncg.ingrid.pt
/C=PT/O=LIPCA/CN=LIP Certification Authority
EOF
mkdir -p /etc/grid-security/vomsdir/emsodev
cat > /etc/grid-security/vomsdir/emsodev/voms.hellasgrid.gr.lsc << EOF
/C=GR/O=HellasGrid/OU=hellasgrid.gr/CN=voms.hellasgrid.gr
/C=GR/O=HellasGrid/OU=Certification Authorities/CN=HellasGrid CA 2016
EOF
cat > /etc/grid-security/vomsdir/emsodev/voms2.hellasgrid.gr.lsc << EOF
/C=GR/O=HellasGrid/OU=hellasgrid.gr/CN=voms2.hellasgrid.gr
/C=GR/O=HellasGrid/OU=Certification Authorities/CN=HellasGrid CA 2016
EOF
for i in ops atlas lhcb cms
do
mkdir -p /etc/grid-security/vomsdir/$i
cat > /etc/grid-security/vomsdir/$i/lcg-voms2.cern.ch.lsc << EOF
/DC=ch/DC=cern/OU=computers/CN=lcg-voms2.cern.ch
/DC=ch/DC=cern/CN=CERN Grid Certification Authority
EOF
cat > /etc/grid-security/vomsdir/$i/voms2.cern.ch.lsc << EOF
/DC=ch/DC=cern/OU=computers/CN=voms2.cern.ch
/DC=ch/DC=cern/CN=CERN Grid Certification Authority
EOF
done
cat <<EOF>/etc/keystone/voms.json
{
 "vo.indigo-datacloud.eu": { 
 "tenant": "indigo"
 },
 "fedcloud.egi.eu": {
 "tenant": "fctf"
 },
 "ops": {
 "tenant": "ops"
 },
 "enmr.eu": {
 "tenant": "wenmr"
 },
 "dteam": {
 "tenant": "dteam"
 },
 "atlas": {
 "tenant": "atlas"
 },
 "lhcb": {
 "tenant": "lhcb"
 },
 "cms": {
 "tenant": "cms"
 },
 "vo.emsodev.eu": {
 "tenant": "emsodev"
 }
}
EOF
  • Switch the Keystone endpoint URLs registered in the database to HTTPS:
mysql> use keystone;
mysql> update endpoint set url="https://egi-cloud.pd.infn.it:5000/v2.0" where url="http://egi-cloud.pd.infn.it:5000/v2.0";
mysql> update endpoint set url="https://egi-cloud.pd.infn.it:35357/v2.0" where url="http://egi-cloud.pd.infn.it:35357/v2.0";
mysql> select id,url from endpoint;
should show lines with the above URLs.
  • Replace http with https in auth_[protocol,uri,url] variables and IP address with egi-cloud.pd.infn.it in auth_[host,uri,url] in /etc/nova/nova.conf, /etc/nova/api-paste.ini, /etc/neutron/neutron.conf, /etc/neutron/api-paste.ini, /etc/neutron/metadata_agent.ini, /etc/cinder/cinder.conf, /etc/cinder/api-paste.ini, /etc/glance/glance-api.conf, /etc/glance/glance-registry.conf, /etc/glance/glance-cache.conf and any other service that needs to check keystone tokens, and then restart the services of the Controller node
  • Replace http with https in auth_[protocol,uri,url] variables and IP address with egi-cloud.pd.infn.it in auth_[host,uri,url] in /etc/nova/nova.conf and /etc/neutron/neutron.conf and restart the services openstack-nova-compute and neutron-openvswitch-agent of the Compute nodes.
  • Also check that the "cafile" variable points to INFN-CA-2015.pem in all service configuration files and in the admin-openrc.sh file.
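  • Once the services have been restarted, VOMS authentication can be tested from a machine holding a valid VOMS proxy; a hedged sketch (the VO and the proxy path are just examples):
voms-proxy-init --voms fedcloud.egi.eu --rfc
curl -s --cert /tmp/x509up_u$(id -u) --key /tmp/x509up_u$(id -u) \
     --cacert /etc/grid-security/certificates/INFN-CA-2015.pem \
     -H "Content-Type: application/json" -d '{"auth": {"voms": true}}' \
     https://egi-cloud.pd.infn.it:5000/v2.0/tokens | python -m json.tool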

Install the OOI API

yum localinstall -y http://repository.egi.eu/community/software/ooi/occi-1.2/releases/centos/7/x86_64/RPMS/python-ooi-1.1.2-1.el7.centos.noarch.rpm
  • Edit the /etc/nova/api-paste.ini file
cat <<EOF >>/etc/nova/api-paste.ini
 
#######
# OOI #
#######
 
[composite:ooi]
use = call:nova.api.openstack.urlmap:urlmap_factory
/occi1.2: occi_api_12
/occi1.1: occi_api_12
 
[filter:occi]
paste.filter_factory = ooi.wsgi:OCCIMiddleware.factory
openstack_version = /v2.1
 
[composite:occi_api_12]
use = call:nova.api.auth:pipeline_factory_v21
noauth2 = cors http_proxy_to_wsgi compute_req_id faultwrap sizelimit noauth2 occi osapi_compute_app_v21
keystone = cors http_proxy_to_wsgi compute_req_id faultwrap sizelimit authtoken keystonecontext occi osapi_compute_app_v21
EOF
  • Make sure the ooi API is enabled in the /etc/nova/nova.conf configuration file:
openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata,ooi
openstack-config --set /etc/nova/nova.conf DEFAULT ooi_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf DEFAULT ooi_listen_port 9000
openstack-config --set /etc/nova/nova.conf DEFAULT default_floating_pool ext-net
  • Restart the nova services:
systemctl restart openstack-nova-api openstack-nova-consoleauth openstack-nova-scheduler openstack-nova-conductor openstack-nova-novncproxy
  • Register service in Keystone:
openstack service create --name occi --description "OCCI Interface" occi
openstack endpoint create --region RegionOne occi public https://egi-cloud.pd.infn.it:8787/occi1.1
openstack endpoint create --region RegionOne occi internal https://egi-cloud.pd.infn.it:8787/occi1.1
openstack endpoint create --region RegionOne occi admin https://egi-cloud.pd.infn.it:8787/occi1.1
  • Enable SSL connection on port 8787, by creating the file /etc/httpd/conf.d/ooi.conf
cat <<EOF > /etc/httpd/conf.d/ooi.conf
#LoadModule proxy_http_module modules/mod_proxy_http.so
#
# Proxy Server directives. Uncomment the following lines to
# enable the proxy server:
#LoadModule proxy_module /usr/lib64/httpd/modules/mod_proxy.so
#LoadModule proxy_http_module /usr/lib64/httpd/modules/mod_proxy_http.so
#LoadModule substitute_module /usr/lib64/httpd/modules/mod_substitute.so
 
 
Listen 8787 
<VirtualHost _default_:8787>
 LogLevel debug
 ErrorLog /var/log/httpd/ooi-error.log 
 CustomLog /var/log/httpd/ooi-ssl_access.log combined 
 
 SSLEngine                  on 
 SSLCertificateFile         /etc/grid-security/hostcert.pem 
 SSLCertificateKeyFile      /etc/grid-security/hostkey.pem 
 SSLCACertificatePath       /etc/grid-security/certificates
 SSLCARevocationPath        /etc/grid-security/certificates
 SSLVerifyClient            optional 
 SSLVerifyDepth             10 
 SSLProtocol                all -SSLv2
 SSLCipherSuite             ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM:+LOW 
 SSLOptions                 +StdEnvVars +ExportCertData 
<IfModule mod_proxy.c> 
# Do not enable proxying with ProxyRequests until you have secured 
# your server. 
# Open proxy servers are dangerous both to your network and to the 
# Internet at large. 
 ProxyRequests Off 
 
 <Proxy *> 
 Order deny,allow 
 Deny from all
 </Proxy> 
 
 ProxyPass / http://egi-cloud.pd.infn.it:9000/
 ProxyPassReverse / http://egi-cloud.pd.infn.it:9000/
 <Location /> 
   AddOutputFilterByType SUBSTITUTE text/plain text text/uri-list
   Substitute s|http://egi-cloud.pd.infn.it:9000/|https://egi-cloud.pd.infn.it:8787/|n 
   Order allow,deny
   Allow from all
 </Location> 
 
</IfModule> 
</VirtualHost>
EOF
  • Restart the httpd service
systemctl restart httpd
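  • A quick, hedged check that the OCCI service endpoints are registered in Keystone:
openstack endpoint list | grep occi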

Install rOCCI Client

For a complete guide about the rOCCI Client see How to use the rOCCI Client.
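A hedged smoke test of the OCCI endpoint with the rOCCI client (the VOMS proxy path and the resource type are just examples):

occi --endpoint https://egi-cloud.pd.infn.it:8787/occi1.1/ \
     --auth x509 --user-cred /tmp/x509up_u$(id -u) --voms \
     --action list --resource compute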

Install FedCloud BDII

(See EGI guide and BDII configuration guide)

  • Install the resource BDII and the cloud-info-provider:
yum install bdii -y
yum -y localinstall http://repository.egi.eu/community/software/cloud.info.provider/0.x/releases/centos/7/x86_64/RPMS/cloud-info-provider-0.8.3-1.el7.centos.noarch.rpm
  • Customize the configuration file with the local site's information:
cp /etc/cloud-info-provider/sample.openstack.yaml /etc/cloud-info-provider/bdii.yaml
sed -i 's|#name: SITE_NAME|name: INFN-PADOVA-STACK|g' /etc/cloud-info-provider/bdii.yaml
sed -i 's|#production_level: production|production_level: production|g' /etc/cloud-info-provider/bdii.yaml
sed -i 's|#url: http://site.url.example.org/|url: http://www.pd.infn.it|g' /etc/cloud-info-provider/bdii.yaml
sed -i 's|#country: ES|country: IT|g' /etc/cloud-info-provider/bdii.yaml
sed -i 's|#ngi: NGI_FOO|ngi: NGI_IT|g' /etc/cloud-info-provider/bdii.yaml
sed -i 's|#latitude: 0.0|latitude: 45.41|g' /etc/cloud-info-provider/bdii.yaml
sed -i 's|#longitude: 0.0|longitude: 11.89|g' /etc/cloud-info-provider/bdii.yaml
sed -i 's|#general_contact: general-support@example.org|general_contact: cloud-prod@lists.pd.infn.it|g' /etc/cloud-info-provider/bdii.yaml
sed -i 's|#security_contact: security-support@example.org|security_contact:  grid-sec@pd.infn.it|g' /etc/cloud-info-provider/bdii.yaml
sed -i 's|#user_support_contact: user-support@example.org|user_support_contact: cloud-prod@lists.pd.infn.it|g' /etc/cloud-info-provider/bdii.yaml
sed -i 's|total_cores: 0|total_cores: 120|g' /etc/cloud-info-provider/bdii.yaml
sed -i 's|total_ram: 0|total_ram: 240|g' /etc/cloud-info-provider/bdii.yaml
sed -i 's|hypervisor: Foo Hypervisor|hypervisor: KVM Hypervisor|g' /etc/cloud-info-provider/bdii.yaml
sed -i 's|hypervisor_version: 0.0.0|hypervisor_version: 2.0.0|g' /etc/cloud-info-provider/bdii.yaml
sed -i 's|middleware_version: havana|middleware_version: Newton|g' /etc/cloud-info-provider/bdii.yaml
  • Be sure that Keystone contains the OOI endpoints, otherwise they will not be published by the BDII.
  • Create the file /var/lib/bdii/gip/provider/cloud-info-provider that calls the provider with the correct options for your site, for example:
cat<<EOF>/var/lib/bdii/gip/provider/cloud-info-provider
#!/bin/sh
cloud-info-provider-service --yaml /etc/cloud-info-provider/bdii.yaml \
                            --middleware openstack \
                            --os-username admin --os-password ADMIN_PASS \
                            --os-tenant-name admin --os-auth-url https://egi-cloud.pd.infn.it:35357/v2.0 \
                            --os-cacert /etc/grid-security/certificates/INFN-CA-2015.pem
EOF
  • Run the cloud-info-provider script manually and check that the output returns the complete LDIF. To do so, execute:
chmod +x /var/lib/bdii/gip/provider/cloud-info-provider
/var/lib/bdii/gip/provider/cloud-info-provider
/sbin/chkconfig bdii on
  • Now you can start the bdii service:
systemctl start bdii
  • Use the command below to see if the information is being published:
ldapsearch -x -h localhost -p 2170 -b o=glue
  • Do not forget to open port 2170:
firewall-cmd --add-port=2170/tcp
firewall-cmd --permanent --add-port=2170/tcp
systemctl restart firewalld
  • Information on how to set up the site-BDII in egi-cloud-sbdii.pd.infn.it is available here
  • Add your cloud-info-provider to your site-BDII egi-cloud-sbdii.pd.infn.it by adding new lines in the site.def like this:
BDII_REGIONS="CLOUD BDII"
BDII_CLOUD_URL="ldap://egi-cloud.pd.infn.it:2170/GLUE2GroupID=cloud,o=glue"
BDII_BDII_URL="ldap://egi-cloud-sbdii.pd.infn.it:2170/mds-vo-name=resource,o=grid"
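  • Once both BDIIs are running, a hedged way to check that the cloud information is visible from the site-BDII:
ldapsearch -x -h egi-cloud-sbdii.pd.infn.it -p 2170 -b o=glue | grep -i INFN-PADOVA-STACK | head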

Use the same APEL/SSM of grid site

  • Cloud usage records are sent to APEL through the ssmsend program installed in cert-37.pd.infn.it:
[root@cert-37 ~]# cat /etc/cron.d/ssm-cloud 
# send buffered usage records to APEL
30 */24 * * * root /usr/bin/ssmsend -c /etc/apel/sender-cloud.cfg
  • It is therefore necessary to install and configure NFS on egi-cloud:
[root@egi-cloud ~]# mkdir -p /var/spool/apel/outgoing/openstack
[root@egi-cloud ~]# cat<<EOF>>/etc/exports 
/var/spool/apel/outgoing/openstack cert-37.pd.infn.it(rw,sync)
EOF
[root@egi-cloud ~]$ systemctl status nfs-server
  • In case of APEL nagios probe failure, check if /var/spool/apel/outgoing/openstack is properly mounted by cert-37
  • To check if accounting records are properly received by the APEL server, look at this site
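  • The corresponding client-side mount on cert-37 is not shown above; a minimal sketch, assuming a plain fstab entry is used:
[root@cert-37 ~]# mkdir -p /var/spool/apel/outgoing/openstack
[root@cert-37 ~]# cat <<EOF >> /etc/fstab
egi-cloud.pd.infn.it:/var/spool/apel/outgoing/openstack /var/spool/apel/outgoing/openstack nfs defaults 0 0
EOF
[root@cert-37 ~]# mount -a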

Install the new accounting system (CASO)

(see CASO installation guide )

yum -y install libffi-devel openssl-devel gcc
yum -y localinstall http://repository.egi.eu/community/software/caso/1.x/releases/centos/7/x86_64/RPMS/caso-1.1.1-1.el7.centos.noarch.rpm
  • Create role and user
openstack user create --domain default --password ACCOUNTING_PASS accounting
openstack role create accounting 
  • For each of the tenants, add the user with the accounting role
for i in fctf wenmr atlas ops dteam lhcb cms indigo emsodev
do
openstack role add --project $i --user accounting accounting 
done
  • Edit the /etc/caso/caso.conf file
openstack-config --set /etc/caso/caso.conf DEFAULT extractor nova
openstack-config --set /etc/caso/caso.conf DEFAULT site_name INFN-PADOVA-STACK
openstack-config --set /etc/caso/caso.conf DEFAULT projects fctf,wenmr,atlas,ops,dteam,lhcb,cms,indigo,emsodev,biomed
openstack-config --set /etc/caso/caso.conf DEFAULT messengers caso.messenger.ssm.SsmMessager
openstack-config --set /etc/caso/caso.conf DEFAULT log_dir /var/log/caso
openstack-config --set /etc/caso/caso.conf DEFAULT log_file caso.log
openstack-config --set /etc/caso/caso.conf DEFAULT mapping_file /etc/keystone/voms.json
openstack-config --set /etc/caso/caso.conf keystone_auth auth_type password
openstack-config --set /etc/caso/caso.conf keystone_auth username accounting
openstack-config --set /etc/caso/caso.conf keystone_auth password ACCOUNTING_PASS
openstack-config --set /etc/caso/caso.conf keystone_auth auth_url https://egi-cloud.pd.infn.it:35357/v2.0
openstack-config --set /etc/caso/caso.conf keystone_auth cafile /etc/grid-security/certificates/INFN-CA-2015.pem
openstack-config --set /etc/caso/caso.conf ssm output_path /var/spool/apel/outgoing/openstack
openstack-config --set /etc/caso/caso.conf logstash host egi-cloud.pd.infn.it
openstack-config --set /etc/caso/caso.conf logstash port 5000
  • Edit the /etc/keystone/policy.json file
sed -i 's|\"admin_required\": \"role:admin or is_admin:1\",|\"admin_required\": \"role:admin or is_admin:1 or role:accounting\",|g' /etc/keystone/policy.json
mkdir /var/spool/caso /var/log/caso
  • Test it
caso-extract -v -d
  • Create the cron job
cat <<EOF>/etc/cron.d/caso 
# extract and send usage records to APEL/SSM 
10 * * * * root /usr/bin/caso-extract >> /var/log/caso/caso.log 2>&1 ; chmod go+w -R /var/spool/apel/outgoing/openstack/
EOF
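  • After the first cron run, a quick hedged check that usage records are actually produced:
ls -l /var/spool/apel/outgoing/openstack/
tail -n 20 /var/log/caso/caso.log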

Install Cloudkeeper and Cloudkeeper-OS

Cloudkeeper and Cloudkeeper-OS are installed on a dedicated server (egi-cloud-ha.pn.pd.infn.it).

Install Cloudkeeper

yum localinstall -y http://repository.egi.eu/community/software/cloudkeeper/1.5.x/releases/centos/7/x86_64/RPMS/cloudkeeper-1.5.0+20170710170557-1.el7.x86_64.rpm

Edit /etc/cloudkeeper/cloudkeeper.yml with the list of VO image lists and the controller IP:

  - https://PERSONAL_ID:x-oauth-basic@vmcaster.appdb.egi.eu/store/vo/fedcloud.egi.eu/image.list
  - https://PERSONAL_ID:x-oauth-basic@vmcaster.appdb.egi.eu/store/vo/vo.indigo-datacloud.eu/image.list
  - https://PERSONAL_ID:x-oauth-basic@vmcaster.appdb.egi.eu/store/vo/ops/image.list
  - https://PERSONAL_ID:x-oauth-basic@vmcaster.appdb.egi.eu/store/vo/enmr.eu/image.list
  - https://PERSONAL_ID:x-oauth-basic@vmcaster.appdb.egi.eu/store/vo/atlas/image.list
  - https://PERSONAL_ID:x-oauth-basic@vmcaster.appdb.egi.eu/store/vo/lhcb/image.list
  - https://PERSONAL_ID:x-oauth-basic@vmcaster.appdb.egi.eu/store/vo/cms/image.list
  - https://PERSONAL_ID:x-oauth-basic@vmcaster.appdb.egi.eu/store/vo/vo.emsodev.eu/image.list
  - https://PERSONAL_ID:x-oauth-basic@vmcaster.appdb.egi.eu/store/vo/biomed/image.list
 
    ip-address: CONTROLLER_IP # IP address NGINX can listen on

Enable and start the service

systemctl enable cloudkeeper-cron
systemctl start cloudkeeper-cron

Install Cloudkeeper-OS

cd /etc/yum.repos.d/
wget http://grand-est.fr/resources/software/cloudkeeper-os/repofiles/centos7/cloudkeeper-os.repo
cd
yum update
yum -y install cloudkeeper-os

Create a cloudkeeper user in keystone

openstack user create --domain default --password CLOUDKEEPER_PASS cloudkeeper

and, for each of the tenants, add the cloudkeeper user with the user role

for i in fctf wenmr atlas ops dteam lhcb cms indigo emsodev biomed
do
openstack role add --project $i --user cloudkeeper user 
done

Edit the /etc/cloudkeeper-os/cloudkeeper-os.conf file

openstack-config --set /etc/cloudkeeper-os/cloudkeeper-os.conf keystone_authtoken auth_url https://egi-cloud.pd.infn.it:35357
openstack-config --set /etc/cloudkeeper-os/cloudkeeper-os.conf keystone_authtoken username cloudkeeper
openstack-config --set /etc/cloudkeeper-os/cloudkeeper-os.conf keystone_authtoken password CLOUDKEEPER_PASS

Edit the /etc/cloudkeeper-os/voms.json file so that it matches the /etc/keystone/voms.json file.

Enable and start the service

systemctl enable cloudkeeper-os
systemctl start cloudkeeper-os
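A hedged check that the synchronization works: on egi-cloud-ha look at the cloudkeeper-os logs, and on the controller verify that the AppDB images start showing up in Glance:

journalctl -u cloudkeeper-os --since today | tail
openstack image list | head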

Install Indigo IAM

(official guide)

First you need to register your site on the Indigo IAM service, then you have to configure Keystone to use IAM authentication.

  • Install mod_auth_openidc (releases are available at https://github.com/pingidentity/mod_auth_openidc/releases)
  • Configure mod_auth_openidc

Edit /etc/httpd/conf.d/wsgi-keystone.conf file

(...)
    <VirtualHost *:5000>
 
        (...)
 
        OIDCClaimPrefix                 "OIDC-"
        OIDCResponseType                "code"
        OIDCScope                       "openid email profile"
        OIDCProviderMetadataURL         https://iam-test.indigo-datacloud.eu/.well-known/openid-configuration
        OIDCClientID                    <CLIENT ID>
        OIDCClientSecret                <CLIENT SECRET>
        OIDCProviderTokenEndpointAuth   client_secret_basic
        OIDCCryptoPassphrase            <PASSPHRASE>
        OIDCRedirectURI                 https://<KEYSTONE HOST>:5000/v3/auth/OS-FEDERATION/websso/oidc/redirect
 
        # The JWKs URL on which the Authorization publishes the keys used to sign its JWT access tokens.
        # When not defined local validation of JWTs can still be done using statically configured keys,
        # by setting OIDCOAuthVerifyCertFiles and/or OIDCOAuthVerifySharedKeys.
        OIDCOAuthVerifyJwksUri "https://iam-test.indigo-datacloud.eu/jwk"
 
        <Location ~ "/v3/auth/OS-FEDERATION/websso/oidc">
            AuthType  openid-connect
            Require   valid-user
            LogLevel  debug
        </Location>
 
        <Location ~ "/v3/OS-FEDERATION/identity_providers/indigo-dc/protocols/oidc/auth">
            AuthType  oauth20
            Require   valid-user
            LogLevel  debug
        </Location>
 
        (...)
 
    </VirtualHost>

Substitute the following values:

    <CLIENT ID>: Client ID as obtained from the IAM.
    <CLIENT SECRET>: Client Secret as obtained from the IAM.
    <PASSPHRASE>: A password used for crypto purposes. Put something of your choice here.
    <KEYSTONE HOST>: Your Keystone host.
  • Edit /etc/keystone/keystone.conf
[auth]
methods = external,password,token,oauth1,oidc
oidc = keystone.auth.plugins.mapped.Mapped
 
[oidc]
remote_id_attribute = HTTP_OIDC_ISS
 
[federation]
remote_id_attribute = HTTP_OIDC_ISS
trusted_dashboard = https://<HORIZON ENDPOINT>/dashboard/auth/websso/
sso_callback_template = /etc/keystone/sso_callback_template.html
  • Ensure that /etc/keystone/sso_callback_template.html exists on your system.
  • Keystone Groups, Projects and Mapping setup
openstack group create indigo_group --description "INDIGO Federated users group"
openstack project create indigo --description "INDIGO project"
openstack role add user --group indigo_group --project indigo
openstack role add user --group indigo_group --domain default

Now the federation plugin needs to be set up

  • Load the mapping as follows
openstack identity provider create indigo-dc --remote-id https://iam-test.indigo-datacloud.eu/
openstack federation protocol create oidc --identity-provider indigo-dc --mapping indigo_mapping
openstack mapping set --rules indigo_mapping.json indigo_mapping
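  • The content of indigo_mapping.json is not shown above; a minimal sketch, assuming the OIDC sub claim (exposed as HTTP_OIDC_SUB by mod_auth_openidc with the OIDCClaimPrefix configured earlier) is mapped to a federated user placed in the indigo_group created above:
cat <<EOF > indigo_mapping.json
[
  {
    "local": [
      { "user":  { "name": "{0}" } },
      { "group": { "name": "indigo_group", "domain": { "name": "Default" } } }
    ],
    "remote": [
      { "type": "HTTP_OIDC_SUB" }
    ]
  }
]
EOF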
  • OpenStack Dashboard (Horizon) Configuration

Edit /etc/openstack-dashboard/local_settings file

WEBSSO_ENABLED = True
WEBSSO_INITIAL_CHOICE = "credentials"
 
WEBSSO_CHOICES = (
    ("credentials", _("Keystone Credentials")),
    ("oidc", _("INDIGO-DataCloud IAM"))
)

Local Monitoring

Ganglia

  • Install ganglia-gmond on all servers
  • Configure the cluster and host fields in /etc/ganglia/gmond.conf to point to the cld-ganglia.cloud.pd.infn.it server (see the fragment after this list)
  • Finally: systemctl enable gmond.service; systemctl start gmond.service
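  • The relevant stanzas of /etc/ganglia/gmond.conf might look like the hedged fragment below (the cluster name is just an example; 8649 is the default gmond port):
cluster {
  name = "EGI-Cloud"
}
udp_send_channel {
  host = cld-ganglia.cloud.pd.infn.it
  port = 8649
}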

Nagios

  • Install on the compute nodes nsca-client, nagios, nagios-plugins-disk, nagios-plugins-procs, nagios-plugins, nagios-common, nagios-plugins-load
  • Copy the file cld-nagios:/var/spool/nagios/.ssh/id_rsa.pub into a file named /home/nagios/.ssh/authorized_keys on the controller and all compute nodes, and into a file named /root/.ssh/authorized_keys on the controller. Also be sure that /home/nagios is the home directory of the nagios user in the /etc/passwd file.
  • Then do in all compute nodes:
$ echo encryption_method=1 > /etc/nagios/send_nsca.cfg
$ usermod -a -G libvirtd nagios
$ sed -i 's|#password=|password=NSCA_PASSWORD|g' /etc/nagios/send_nsca.cfg
# then be sure the files below are in /usr/local/bin:
$ ls /usr/local/bin/
check_kvm  check_kvm_wrapper.sh
$ cat <<EOF > crontab.txt 
# Puppet Name: nagios_check_kvm
0 */1 * * * /usr/local/bin/check_kvm_wrapper.sh
EOF
$ crontab crontab.txt
$ crontab -l
  • On the controller node check if $ sed -i 's|"compute:create:forced_host": "is_admin:True"|"compute:create:forced_host": ""|g' /etc/nova/policy.json is needed
  • On the cld-nagios server check/modify the content of /var/spool/nagios/egi-cloud-dteam-openrc.sh, of the files /etc/nagios/objects/egi* and /usr/lib64/nagios/plugins/*egi*, and of the files owned by the nagios user found in /var/spool/nagios when doing "su - nagios"
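  • To verify the NSCA path configured above from a compute node to the Nagios server, a hedged test (the cld-nagios FQDN and the service name are assumptions; adapt them to the actual Nagios configuration):
echo -e "cloud-01.pn.pd.infn.it\tcheck_kvm\t0\tOK - manual test" | /usr/sbin/send_nsca -H cld-nagios.cloud.pd.infn.it -c /etc/nagios/send_nsca.cfg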

Security incidents and IP traceability

See here for the description of the full process. On egi-cloud install the CNRS tools: they allow tracking the usage of floating IPs, as in the example below:

[root@egi-cloud ~]# os-ip-trace 90.147.77.229
+--------------------------------------+-----------+---------------------+---------------------+
|              device id               | user name |   associating date  | disassociating date |
+--------------------------------------+-----------+---------------------+---------------------+
| 3002b1f1-bca3-4e4f-b21e-8de12c0b926e |   admin   | 2016-11-30 14:01:38 | 2016-11-30 14:03:02 |
+--------------------------------------+-----------+---------------------+---------------------+

Save and archive important log files:

  • On egi-cloud and each compute node cloud-0%, add the line "*.* @@192.168.60.31:514" to the file /etc/rsyslog.conf and restart the rsyslog service with "systemctl restart rsyslog". This logs the /var/log/secure and /var/log/messages files to cld-foreman:/var/mpathd/log/egi-cloud,cloud-0%.
  • In cld-foreman, check that the file /etc/cron.daily/vm-log.sh logs the /var/log/libvirt/qemu/*.log files of egi-cloud and each cloud-0% compute node (passwordless ssh must be enabled from cld-foreman to each node)

Install ulogd in the controller node

yum install -y libnetfilter_log
yum localinstall -y http://repo.iotti.biz/CentOS/7/x86_64/ulogd-2.0.5-2.el7.lux.x86_64.rpm
yum localinstall -y http://repo.iotti.biz/CentOS/7/x86_64/libnetfilter_acct-1.0.2-3.el7.lux.1.x86_64.rpm

and configure /etc/ulogd.conf by properly setting the accept_src_filter variable (e.g. accept_src_filter=10.0.0.0/16), starting from the one in cld-ctrl-01:/etc/ulogd.conf. Then copy cld-ctrl-01:/root/ulogd/start-ulogd to egi-cloud:/root/ulogd/start-ulogd, replace the qrouter ID and execute /root/ulogd/start-ulogd. Then add the line "/root/ulogd/start-ulogd &" to /etc/rc.d/rc.local and make rc.local executable. Start the service

systemctl enable ulogd
systemctl start ulogd

Finally, be sure that the /etc/rsyslog.conf file contains the lines "local6.* /var/log/ulogd.log" and "*.info;mail.none;authpriv.none;cron.none;local6.none /var/log/messages", and restart the rsyslog service.

Troubleshooting

  • Passwordless ssh access to egi-cloud from cld-nagios and from egi-cloud to cloud-0* has already been configured
  • If cld-nagios does not ping egi-cloud, be sure that the rule "route add -net 192.168.60.0 netmask 255.255.255.0 gw 192.168.114.1" has been added in egi-cloud (/etc/sysconfig/network-script/route-em1 file should contain the line: 192.168.60.0/24 via 192.168.114.1)
  • In case of Nagios alarms, try to restart all cloud services doing the following:
$ ssh root@egi-cloud
[root@egi-cloud ~]# ./StartStopServices/complete.sh restart
[root@egi-cloud ~]# for i in $(seq 1 6); do ssh cloud-0$i.pn.pd.infn.it ./StartStopServices/complete.sh restart; done
  • Resubmit the Nagios probe and check if it works again
  • In case the problem persists, check the consistency of the DB by executing the following (this also fixes the issue where the quota overview in the dashboard is not consistent with the VMs actually active):
[root@egi-cloud ~]# python nova-quota-sync.py
  • In case of an EGI Nagios alarm, check that the user running the Nagios probes does not also belong to tenants other than "ops". Also check that the right image and flavour are set in the URL of the service published in the GOCDB.
  • In case of reboot of the egi-cloud server:
    • check its network configuration (use IPMI if not reachable): all 4 interfaces must be up and the default gateway must be 90.147.77.254.
    • check DNS in /etc/resolv.conf and GATEWAY in /etc/sysconfig/network
    • check routing with $route -n, if needed do: $ip route replace default via 90.147.77.254. Also be sure to have a route for 90.147.77.0 network.
    • check if storage mountpoints 192.168.61.100:/glance-egi and cinder-egi are properly mounted (do: $ df -h)
  • In case of reboot of a cloud-0* server (use IPMI if not reachable): all 3 interfaces must be up and the default destination must have both 192.168.114.1 and 192.168.115.1 gateways
    • check its network configuration
    • check if all partitions in /etc/fstab are properly mounted (do: $ df -h)
  • In case of network instabilities, check that GRO is off for all interfaces, e.g.:
[root@egi-cloud ~]# /sbin/ethtool -k em3 | grep -i generic-receive-offload
generic-receive-offload: off
  • Also check if /sbin/ifup-local is there:
[root@egi-cloud ~]# cat /sbin/ifup-local 
#!/bin/bash
case "$1" in
em1)
/sbin/ethtool -K $1 gro off
;;
em2)
/sbin/ethtool -K $1 gro off
;;
em3)
/sbin/ethtool -K $1 gro off
;;
em4)
/sbin/ethtool -K $1 gro off
;;
esac
exit 0
  • If you need to change the project quotas, do not forget to apply the change to both the tenantId and the tenantName, due to a known bug, e.g.:
[root@egi-cloud ~]# source admin-openrc.sh
[root@egi-cloud ~]# tenantId=$(openstack project list | grep fctf | awk '{print $2}')
[root@egi-cloud ~]# nova quota-update --instances 40 --cores 40 --ram 81840 $tenantId
[root@egi-cloud ~]# nova quota-update --instances 40 --cores 40 --ram 81840 fctf
[root@egi-cloud ~]# neutron quota-update --floatingip 1 --tenant-id $tenantId
[root@egi-cloud ~]# neutron quota-update --floatingip 1 --tenant-id fctf