AAI integrations in OpenStack Mitaka
Author: Paolo Andreetto (INFN Padova)
Requirements
- CentOS Linux release 7.1
- OpenStack from the "Kilo" to the "Mitaka" release
Shibboleth installation
Installing repositories
wget -O /etc/yum.repos.d/EGI-trustanchors.repo http://repository.egi.eu/sw/production/cas/1/current/repo-files/EGI-trustanchors.repo
wget -O /etc/yum.repos.d/shibboleth.repo http://download.opensuse.org/repositories/security://shibboleth/CentOS_7/security:shibboleth.repo
Installing required modules
yum -y install ca-policy-egi-core fetch-crl shibboleth httpd mod_ssl
Starting cron service "fetch-crl-cron"
systemctl enable fetch-crl-cron && systemctl start fetch-crl-cron
Installing the certificate and key and setting their permissions
Deploy the service certificate file in /etc/shibboleth/sp-cert.pem and the related service key file in /etc/shibboleth/sp-key.pem. Change the ownership and permissions for those files:
chmod 400 /etc/shibboleth/sp-key.pem
chmod 600 /etc/shibboleth/sp-cert.pem
chown shibd.shibd /etc/shibboleth/sp-key.pem
chown shibd.shibd /etc/shibboleth/sp-cert.pem
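As an optional sanity check (a sketch, assuming OpenSSL is installed and the service key is an RSA key) you can verify that the deployed certificate and key actually belong together by comparing the digests of their moduli:
openssl x509 -noout -modulus -in /etc/shibboleth/sp-cert.pem | openssl md5
openssl rsa -noout -modulus -in /etc/shibboleth/sp-key.pem | openssl md5
The two digests must be identical.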
Shibboleth service’s configuration
Downloading the attribute-map file
wget -O /etc/shibboleth/attribute-map.xml http://wiki.infn.it/_media/cn/ccr/aai/howto/attribute-map.xml
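For reference, each entry of attribute-map.xml maps a SAML attribute name to a local identifier. The following eppn mapping is the standard one shipped with Shibboleth and is shown here only as an illustration; the downloaded file remains authoritative:
<Attribute name="urn:oid:1.3.6.1.4.1.5923.1.1.1.6" id="eppn">
    <AttributeDecoder xsi:type="ScopedAttributeDecoder"/>
</Attribute>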
Configuring the shibboleth daemon
The file /etc/shibboleth/shibboleth2.xml must contain the following definitions:
- shibboleth2.xml
<SPConfig xmlns="urn:mace:shibboleth:2.0:native:sp:config"
          xmlns:conf="urn:mace:shibboleth:2.0:native:sp:config"
          xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
          xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
          xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata"
          clockSkew="180">

    <ApplicationDefaults id="default"
                         entityID="https://cloud-areapd.pd.infn.it/dashboard-shib"
                         REMOTE_USER="eppn persistent-id targeted-id">

        <Sessions lifetime="28800" timeout="3600" relayState="ss:mem"
                  checkAddress="false" handlerSSL="true" cookieProps="https">

            <SSO target="https://cloud-areapd.pd.infn.it/dashboard-shib/auth/login/"
                 entityID="https://idp.infn.it/saml2/idp/metadata.php">
                SAML2 SAML1
            </SSO>

            <Logout>SAML2 Local</Logout>

            <Handler type="MetadataGenerator" Location="/Metadata"
                     template="/etc/openstack-auth-shib/idem-template-metadata.xml"
                     signing="true"/>
            <Handler type="Status" Location="/Status" acl="127.0.0.1 ::1"/>
            <Handler type="Session" Location="/Session" showAttributeValues="false"/>
            <Handler type="DiscoveryFeed" Location="/DiscoFeed"/>

        </Sessions>

        <Errors redirectErrors="https://cloud-areapd.pd.infn.it/dashboard/auth/auth_error/"/>

        <MetadataProvider type="Chaining">
            <MetadataProvider type="XML"
                              uri="https://www.garr.it/idem-metadata/idem-metadata-sha256.xml"
                              backingFilePath="/var/cache/shibboleth/idem-metadata.xml"
                              reloadInterval="7200"/>
        </MetadataProvider>

        <AttributeExtractor type="XML" validate="true" reloadChanges="false" path="attribute-map.xml"/>
        <AttributeResolver type="Query" subjectMatch="true"/>
        <AttributeFilter type="XML" validate="true" path="attribute-policy.xml"/>

        <CredentialResolver type="File">
            <Key>
                <Path>/etc/shibboleth/sp-key.pem</Path>
            </Key>
            <Certificate>
                <Path>/etc/shibboleth/sp-cert.pem</Path>
                <Path>/etc/grid-security/certificates/INFN-CA-2015.pem</Path>
            </Certificate>
            <CRL>
                <Path>/etc/grid-security/certificates/49f18420.r0</Path>
            </CRL>
        </CredentialResolver>

    </ApplicationDefaults>

    <SecurityPolicyProvider type="XML" validate="true" path="security-policy.xml"/>
    <ProtocolProvider type="XML" validate="true" reloadChanges="false" path="protocols.xml"/>

</SPConfig>
Verifying the configuration procedure
Use the command:
LD_LIBRARY_PATH=/opt/shibboleth/lib64 runuser -s /bin/bash -c 'shibd -t' -- shibd
(see note below)
Starting the shibboleth daemon
systemctl enable shibd && systemctl start shibd
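To confirm that the daemon is answering requests, the Status handler configured above can be queried from the host itself (a sketch, assuming the default /Shibboleth.sso handler base path; the acl restricts access to localhost):
curl -k https://localhost/Shibboleth.sso/Status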
Troubleshooting

If the shibd -t check reports the error:

CRIT Shibboleth.Application : error building CredentialResolver: Unable to load CRL(s) from file

it is necessary to run the fetch-crl command manually:

cd /etc/grid-security/certificates && fetch-crl -v

If the check reports the following errors:

CRIT XMLTooling.Config : libcurl lacks OpenSSL-specific options, this will greatly limit functionality
ERROR XMLTooling.libcurl.InputStream : error while fetching https://www.idem.garr.it/docs/conf/idem-metadata.xml: (59) Unknown cipher in list
ERROR XMLTooling.libcurl.InputStream : on Red Hat 6+, make sure libcurl used is built with OpenSSL
ERROR XMLTooling.ParserPool : fatal error on line 0, column 0, message: internal error in NetAccessor
ERROR OpenSAML.MetadataProvider.XML : error while loading resource (https://www.idem.garr.it/docs/conf/idem-metadata.xml): XML error(s) during parsing
ERROR OpenSAML.MetadataProvider.XML : metadata instance was invalid at time of acquisition
CRIT OpenSAML.Metadata.Chaining : failure initializing MetadataProvider: Metadata instance was invalid at time of acquisition.

they can be ignored, even though some are marked as critical. On many RedHat/Fedora installations a different version of libcurl is required; that library is located in /opt/shibboleth/lib64, and the shibboleth daemon sources the configuration script /etc/sysconfig/shibd to override the system one. The errors usually disappear when the check is run as:

LD_LIBRARY_PATH=/opt/shibboleth/lib64 shibd -t
HTTP service’s configuration
Configuration of the "mod_ssl" module
If the module isn't already configured, define the following directives in the configuration file of the Apache SSL module (in general /etc/httpd/conf.d/ssl.conf; if the dashboard has been deployed using packstack the file is /etc/httpd/conf.d/15-horizon_ssl_vhost.conf):
SSLCertificateFile /etc/grid-security/hostcert.pem
SSLCertificateKeyFile /etc/grid-security/hostkey.pem
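A possible check that Apache is serving the expected host certificate (a sketch, using the host name of this guide and the EGI trust anchors installed above):
echo | openssl s_client -connect cloud-areapd.pd.infn.it:443 -CApath /etc/grid-security/certificates 2>/dev/null | grep -E 'subject=|Verify return code'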
Configuration of the "shibboleth" service for the OpenStack Dashboard
In the file /etc/httpd/conf.d/shib.conf insert the following section:
<Location /dashboard-shib>
    AuthType shibboleth
    ShibRequestSetting requireSession 1
    require shib-session
</Location>
Configuration file of the Dashboard
Add the following instructions, if not already present, in the file /etc/httpd/conf.d/openstack-dashboard.conf (or /etc/httpd/conf.d/15-horizon_ssl_vhost.conf if packstack has been used):
WSGIScriptAlias /dashboard-shib /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi
In order to avoid security holes it is necessary to activate the UseCanonicalName option for any virtual host or location protected by Shibboleth:
ServerName https://cloud-areapd.pd.infn.it
UseCanonicalName On
Restarting the "httpd" service
systemctl restart httpd
Installation of the AAI integration
Download the repository containing the integration packages
Repository for Kilo:
wget -O /etc/yum.repos.d/openstack-security-integrations.repo http://igi-01.pd.infn.it/mrepo/CAP/openstack-security-integrations_centos7_kilo.repo
Repository for Mitaka:
wget -O /etc/yum.repos.d/openstack-security-integrations.repo http://igi-01.pd.infn.it/mrepo/CAP/openstack-security-integrations_centos7_mitaka.repo
Installation of the keystone plugin
yum -y install keystone-skey-auth
Installation of the dashboard wrappers
For the project Cloud Area Padovana:
yum -y install openstack-auth-cap
For the project Cloud Veneto:
yum -y install openstack-auth-cedc
Generating the secret key
The secret key is the shared secret between Horizon and Keystone. It must be deployed by hand on both sides and can be generated with the following command:
python -c "from horizon.utils import secret_key; print secret_key.generate_key(32)"
The generated key must be specified, between double quotes, as the parameter "KEYSTONE_SECRET_KEY" in the file /etc/openstack-dashboard/local_settings:
KEYSTONE_SECRET_KEY = "PeYjxbEzJZ4ZLj1AJCBvUar5fSfJOAkq"
Setting up the database
In the file /etc/openstack-dashboard/local_settings define the database parameters according to the Django requirements. This snippet is an example for a MySQL-based installation:
DATABASES = {
    'default': {
        'ENGINE'   : 'django.db.backends.mysql',
        'NAME'     : 'horizon_aai',
        'USER'     : 'horizonaai',
        'PASSWORD' : '*********',
        'HOST'     : 'cloud-areapd.pd.infn.it',
        'PORT'     : '3306'
    }
}
The database must be created manually and all permissions granted before performing any further action:
CREATE DATABASE horizon_aai;
GRANT ALL ON horizon_aai.* TO 'horizonaai'@'cloud-areapd.pd.infn.it' IDENTIFIED BY '*********';
GRANT ALL ON horizon_aai.* TO 'horizonaai'@'localhost' IDENTIFIED BY '*********';
The database can be populated with the command:
runuser -s /bin/bash -c 'python /usr/share/openstack-dashboard/manage.py migrate' -- apache
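To verify that the migration created the expected tables, the database can be inspected directly (a sketch based on the MySQL example above):
mysql -u horizonaai -p -h cloud-areapd.pd.infn.it horizon_aai -e 'SHOW TABLES;'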
The creation of an admin user in the database is not required.
Setting up the notification system
The notification system must be configured according to the Django requirements. The file to be modified is /etc/openstack-dashboard/local_settings. Several notifications are sent directly to the site administrators; their addresses must be defined in the MANAGERS variable.
This snippet is an example of configuration for accessing a protected SMTP server:
EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
EMAIL_HOST = '****.pd.infn.it'
EMAIL_PORT = 587
EMAIL_HOST_USER = '******'
EMAIL_HOST_PASSWORD = '******'
SERVER_EMAIL = 'cloud@lists.pd.infn.it'
MANAGERS = (('Cloud Support', 'cloud-support@lists.pd.infn.it'),)
Install notification templates
In the file /etc/openstack-dashboard/local_settings the following definition must be present:
NOTIFICATION_TEMPLATE_DIR = '/etc/openstack-auth-shib/notifications'
Other changes
It is necessary to force version 3 of the Keystone API. In the file /etc/openstack-dashboard/local_settings the following definitions must be present:
OPENSTACK_API_VERSIONS = {
    "identity": 3
}
OPENSTACK_HOST = "cloud-areapd.pd.infn.it"

# Keystone accessible in plaintext
#OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

# Keystone protected with SSL/TLS
OPENSTACK_KEYSTONE_URL = "https://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_SSL_CACERT = "/etc/grid-security/certificates/INFN-CA-2006.pem"
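The Keystone endpoint can be tested from the dashboard node: if the URL and the CA certificate are correct, a JSON document describing the identity API version is returned (a sketch using the values above):
curl --cacert /etc/grid-security/certificates/INFN-CA-2006.pem https://cloud-areapd.pd.infn.it:5000/v3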
In the configuration file /etc/openstack-dashboard/local_settings the URL for help must be defined. For the project Cloud Area Padovana:
HORIZON_CONFIG = {
    # other definitions
    'help_url': "http://www.pd.infn.it/cloud/Users_Guide/html-desktop/",
}
For the project Cloud Veneto:
HORIZON_CONFIG = {
    # other definitions
    'help_url': "https://cloud.cedc.csia.unipd.it/User_Guide/index.html",
}
Since the configuration file of the dashboard contains sensitive parameters it is necessary to change its permissions:
chmod 640 /etc/openstack-dashboard/local_settings && chown root.apache /etc/openstack-dashboard/local_settings
Restarting the "httpd" service
systemctl restart httpd
Troubleshooting

If the log in the file /var/log/horizon/horizon.log reports the error:

You have offline compression enabled but key "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" is missing from offline manifest. You may need to run "python manage.py compress"

it is necessary to run the stylesheet compression utility manually:

yum -y install python-lesscpy
cd /usr/share/openstack-dashboard/ && python manage.py compress
systemctl restart httpd
Tips

The log for Horizon can be enabled by defining a new handler, a new formatter and a new logger in the LOGGING table of the file /etc/openstack-dashboard/local_settings:

LOGGING = {
    'formatters': {
        'verbose': {
            'format': '%(asctime)s %(process)d %(levelname)s %(name)s %(message)s'
        },
    },
    # other definitions
    'handlers': {
        # other definitions
        'file': {
            'level': 'DEBUG',
            'class': 'logging.FileHandler',
            'filename': '/var/log/horizon/horizon.log',
            'formatter': 'verbose',
        },
    },
    'loggers': {
        # other definitions
        'openstack_auth_shib': {
            'handlers': ['file'],
            'level': 'DEBUG',
            'propagate': False,
        },
    },
}

The file handler can then be used in other loggers as well, for example:

'horizon': {
    'handlers': ['file'],
    'level': 'DEBUG',
    'propagate': False,
},

For further details about logging see the Python documentation.

If you're configuring SSL support for the connections between Horizon and Keystone, don't use the IP address for OPENSTACK_HOST; use the fully qualified name as reported in the host certificate.

It's strongly recommended to use memcached for storing session attributes, instead of signed cookies: the login cannot be performed correctly if too much data is stored in a cookie. The cache definition is specified in the file /etc/openstack-dashboard/local_settings:

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',
    }
}

The memcached daemon must be running:

systemctl status memcached

For further details refer to the Django session guide.

It is possible to manually restore the standard OpenStack logos with the following commands:

cp /usr/share/openstack-dashboard/openstack_dashboard/static/dashboard/img/logo-orig.png \
   /usr/share/openstack-dashboard/openstack_dashboard/static/dashboard/img/logo.png
cp /usr/share/openstack-dashboard/openstack_dashboard/static/dashboard/img/logo-splash-orig.png \
   /usr/share/openstack-dashboard/openstack_dashboard/static/dashboard/img/logo-splash.png
cp /usr/share/openstack-dashboard/openstack_dashboard/static/dashboard/img/favicon-orig.ico \
   /usr/share/openstack-dashboard/openstack_dashboard/static/dashboard/img/favicon.ico

If necessary, the service metadata (service description, information URLs, etc.) can be customized by editing the file /etc/openstack-auth-shib/idem-template-metadata.xml.
Configuration of the Keystone service
Configuration of the authentication plugin "skey"
Change the "auth" section in the file /etc/keystone/keystone.conf in the following way:
[auth]
methods = sKey,password,token
password = keystone.auth.plugins.password.Password
token = keystone.auth.plugins.token.Token
sKey = keystone_skey_auth.skey.SecretKeyAuth
Create a new section in the file /etc/keystone/keystone.conf:
[skey]
secret_key = "PeYjxbEzJZ4ZLj1AJCBvUar5fSfJOAkq"
The secret key defined in the Keystone configuration file must be the same key specified by the parameter "KEYSTONE_SECRET_KEY" in the file /etc/openstack-dashboard/local_settings.
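A quick way to double-check that the two definitions are aligned (a sketch):
grep KEYSTONE_SECRET_KEY /etc/openstack-dashboard/local_settings
grep secret_key /etc/keystone/keystone.conf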
Configure fernet tokens support in the service with the following definitions in /etc/keystone/keystone.conf:
[token]
provider = keystone.token.providers.fernet.Provider

[fernet_tokens]
key_repository = /etc/keystone/fernet-keys
Create the fernet key repository:
mkdir -p /etc/keystone/fernet-keys && chown keystone.keystone /etc/keystone/fernet-keys && chmod 750 /etc/keystone/fernet-keys
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
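After the setup the repository should contain the initial keys created by keystone-manage (an optional check; with the default setup these are the staged key "0" and the primary key "1"):
ls -l /etc/keystone/fernet-keys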
Restarting the keystone service
If the keystone service is running in stand-alone mode:
systemctl restart openstack-keystone
If the keystone service is running as a WSGI application in Apache:
systemctl restart httpd
Configuration of the cron scripts
Create the configuration file /etc/openstack-auth-shib/actions.conf with the following definitions:
USERNAME=admin
TENANTNAME=admin
PASSWD=****
AUTHURL=https://cloud-areapd.pd.infn.it:35357/v3/
CAFILE=/etc/grid-security/certificates/INFN-CA-2015.pem
NOTIFICATION_PLAN=5,10,20
The configuration file must be readable only by root:
chmod 600 /etc/openstack-auth-shib/actions.conf
Create the cron file /etc/cron.d/openstack-auth-shib-cron with the following content:
5 0 * * * root python /usr/share/openstack-dashboard/manage.py checkexpiration --config /etc/openstack-auth-shib/actions.conf --logconf /etc/openstack-auth-shib/logging.conf 2>/dev/null
10 0 * * * root python /usr/share/openstack-dashboard/manage.py notifyexpiration --config /etc/openstack-auth-shib/actions.conf --logconf /etc/openstack-auth-shib/logging.conf 2>/dev/null
0 9 * * 1 root python /usr/share/openstack-dashboard/manage.py pendingsubscr --config /etc/openstack-auth-shib/actions.conf --logconf /etc/openstack-auth-shib/logging.conf 2>/dev/null
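Each job can also be run by hand to verify the configuration before relying on cron, for example the expiration check (the same command as in the cron file, with standard error kept visible):
python /usr/share/openstack-dashboard/manage.py checkexpiration --config /etc/openstack-auth-shib/actions.conf --logconf /etc/openstack-auth-shib/logging.conf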
Tips
- Since the scripts access the database, for installations on multiple nodes sharing the same backend it's recommended to use different crontab configurations on different nodes.
- The configuration file for the logging system of all the scripts is /etc/openstack-auth-shib/logging.conf.
The guest project
The guest project can be created by the cloud administrator directly from the dashboard: from the "Projects" page create a new project and set the "Guest Project" flag. Only one guest project can be created.
Setup for INFN-AAI testing
In the file /etc/shibboleth/shibboleth2.xml a new metadata provider in the chain must be defined:
<MetadataProvider type="Chaining">
    <MetadataProvider type="XML"
                      uri="https://idp.infn.it/testing/saml2/idp/metadata.php"
                      backingFilePath="/var/cache/shibboleth/idp.infn.it-testing-metadata.xml"/>
</MetadataProvider>
and the entityID must point to the corresponding URL:
<SSO target="https://cloud-areapd.pd.infn.it/dashboard-shib/auth/login/"
     entityID="https://idp.infn.it/testing/saml2/idp/metadata.php">
    SAML2 SAML1
</SSO>
Setup for UniPD-IdP (production)
In the file /etc/httpd/conf.d/shib.conf a new location must be added:
<Location /dashboard-unipd>
    AuthType shibboleth
    ShibRequestSetting requireSession 1
    ShibRequestSetting applicationId default
    ShibRequestSetting target https://cloud-areapd.pd.infn.it/dashboard-unipd/auth/login/
    ShibRequestSetting entityID https://shibidp.cca.unipd.it/idp/shibboleth
    require shib-session
</Location>
A new alias must be created in the file /etc/httpd/conf.d/openstack-dashboard.conf (or /etc/httpd/conf.d/15-horizon_ssl_vhost.conf if packstack has been used):
WSGIScriptAlias /dashboard-unipd /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi
In the file /etc/openstack-dashboard/local_settings the following parameter must be defined:
HORIZON_CONFIG['identity_providers'] = [
    {'id'          : 'unipd',
     'type'        : 'production',
     'path'        : '/dashboard-unipd/auth/login/',
     'description' : 'UniPD IdP',
     'logo'        : '/dashboard/static/dashboard/img/logoUniPD.png'}
]
Restart the daemons:
systemctl restart shibd
systemctl restart httpd
Setup for IDEM (testing)
The public key must be downloaded from the IDEM site:
wget https://www.idem.garr.it/documenti/doc_download/321-idem-metadata-signer-2019 -O /etc/shibboleth/idem_signer_2019.pem
chmod 444 /etc/shibboleth/idem_signer_2019.pem
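The downloaded signing certificate can be inspected before use (an optional check):
openssl x509 -in /etc/shibboleth/idem_signer_2019.pem -noout -subject -dates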
In the file /etc/shibboleth/shibboleth2.xml the following definitions must be specified:
<RequestMapper type="Native">
    <RequestMap applicationId="default">
        <Host scheme="https" name="cloud-areapd.pd.infn.it">
            <Path name="dashboard-idem" applicationId="idem-app"/>
        </Host>
    </RequestMap>
</RequestMapper>

<ApplicationDefaults id="default"
                     entityID="https://cloud-areapd.pd.infn.it/dashboard-shib"
                     REMOTE_USER="eppn persistent-id targeted-id">

    <!-- previous definitions -->

    <MetadataProvider type="Chaining">
        <!-- previous definitions -->
        <MetadataProvider type="XML"
                          uri="http://www.garr.it/idem-metadata/idem-test-metadata-sha256.xml"
                          backingFilePath="/var/cache/shibboleth/idem-test-metadata-sha256.xml">
            <MetadataFilter type="Signature" certificate="/etc/shibboleth/idem_signer_2019.pem"/>
        </MetadataProvider>
    </MetadataProvider>

    <ApplicationOverride id="idem-app">
        <Sessions lifetime="28800" timeout="3600" relayState="ss:mem"
                  checkAddress="false" handlerSSL="true" cookieProps="https">
            <SSO target="https://cloud-areapd.pd.infn.it/dashboard-idem/auth/login/"
                 discoveryProtocol="WAYF" discoveryURL="https://wayf.idem-test.garr.it/WAYF/">
                SAML2 SAML1
            </SSO>
        </Sessions>
    </ApplicationOverride>

</ApplicationDefaults>
In the file /etc/httpd/conf.d/shib.conf a new location must be added:
<Location /dashboard-idem>
    AuthType shibboleth
    ShibRequestSetting requireSession 1
    require shib-session
</Location>
A new alias must be created in the file /etc/httpd/conf.d/openstack-dashboard.conf (or /etc/httpd/conf.d/15-horizon_ssl_vhost.conf if packstack has been used):
WSGIScriptAlias /dashboard-idem /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi
In the file /etc/openstack-dashboard/local_settings the following parameter must be defined:
HORIZON_CONFIG['identity_providers'] = [
    {'id'          : 'idem',
     'type'        : 'experimental',
     'path'        : '/dashboard-idem/auth/login/',
     'description' : 'IDEM federation',
     'logo'        : '/dashboard/static/dashboard/img/logoIDEM.png'}
]
References
- INFN AAI Support: aai-support@lists.infn.it
- UniPD SSO Support: supporto.sso@unipd.it