+ | ====== Juno-Ubuntu1404 Testbed ====== | ||
+ | Fully integrated Resource Provider [[https:// | ||
+ | === EGI Monitoring === | ||
+ | * [[http:// | ||
+ | * [[https:// | ||
+ | * [[http:// | ||
+ | === Local monitoring === | ||
+ | * [[http:// | ||
+ | * [[http:// | ||
+ | * [[http:// | ||
+ | === Local dashboard === | ||
+ | * [[http:// | ||
+ | ===== Layout ===== | ||
+ | |||
+ | * Controller + Network node: **egi-cloud.pd.infn.it** | ||
+ | |||
+ | * Compute nodes: **cloud-01: | ||
+ | |||
+ | * Network layout available [[http:// | ||
+ | |||
+ | ===== GlusterFS Configuration (To be updated) ===== | ||
+ | * see [[https:// | ||
  * we assume that the partitions are created by preseed, so on the compute nodes do the following (a complete hedged sketch of the GlusterFS setup is given after this block):
+ | <code bash> | ||
+ | #on cloud-01: | ||
+ | echo -e " | ||
# then it should look like this:
+ | df -h | ||
+ | Filesystem | ||
+ | / | ||
+ | none 4.0K | ||
+ | udev | ||
+ | tmpfs | ||
+ | none 5.0M | ||
+ | none | ||
+ | none 100M | ||
+ | / | ||
+ | / | ||
+ | / | ||
+ | / | ||
+ | # | ||
+ | mkdir / | ||
+ | mkdir -p / | ||
+ | # | ||
+ | apt-get -y install glusterfs-server | ||
+ | # | ||
+ | # now on cloud-01 only (see [[https:// | ||
+ | for i in `seq 11 16`; do gluster peer probe 192.168.115.$i; | ||
+ | for i in cinder; do gluster volume create ${i}volume replica 2 transport tcp 192.168.115.11:/ | ||
+ | gluster volume info | ||
+ | cat <<EOF >> /etc/fstab | ||
+ | 192.168.115.11:/ | ||
+ | EOF | ||
+ | mount -a | ||
# do the same on the other cloud-* nodes, using 192.168.115.12-16 in /etc/fstab, and:
+ | for i in `seq 11 16`; do gluster peer probe 192.168.115.$i; | ||
+ | </ | ||
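  * Several of the commands above are truncated in this revision of the page; the following is a minimal hedged sketch of the whole GlusterFS setup on the compute nodes. The brick path (/brick/cinder) and the mount point (/gluster/cinder) are illustrative assumptions, not necessarily the paths used on this testbed:
<code bash>
# on every compute node: install the server and create the brick directory (assumed path)
apt-get -y install glusterfs-server
mkdir -p /brick/cinder

# on cloud-01 only: probe the peers and create a 2-way replicated volume across the six nodes
for i in `seq 11 16`; do gluster peer probe 192.168.115.$i; done
gluster volume create cindervolume replica 2 transport tcp \
  192.168.115.11:/brick/cinder 192.168.115.12:/brick/cinder \
  192.168.115.13:/brick/cinder 192.168.115.14:/brick/cinder \
  192.168.115.15:/brick/cinder 192.168.115.16:/brick/cinder
gluster volume start cindervolume
gluster volume info

# on every compute node: mount the volume at boot (assumed mount point)
mkdir -p /gluster/cinder
echo "192.168.115.11:/cindervolume /gluster/cinder glusterfs defaults,_netdev 0 0" >> /etc/fstab
mount -a
</code>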
+ | * In the Controller/ | ||
+ | <code bash> | ||
+ | apt-get -y install glusterfs-client | ||
+ | mkdir -p / | ||
+ | cat<< | ||
+ | 192.168.115.11:/ | ||
+ | EOF | ||
+ | mount -a | ||
+ | </ | ||
+ | * Example of df output on **cloud-01**: | ||
+ | <code bash> | ||
+ | [root@cloud-01 ~]# df -h | ||
+ | Filesystem | ||
+ | / | ||
+ | none 4.0K | ||
+ | udev | ||
+ | tmpfs | ||
+ | none 5.0M | ||
+ | none | ||
+ | none 100M | ||
+ | / | ||
+ | / | ||
+ | / | ||
+ | / | ||
+ | 192.168.115.11:/ | ||
+ | 192.168.115.11:/ | ||
+ | </ | ||
+ | ===== OpenStack configuration ===== | ||
+ | * Controller/ | ||
  * We created one tenant for each supported EGI FedCloud VO, plus a router and various networks and subnets, obtaining the following network topology (a hedged example of the commands used for one VO is sketched below):
+ | {{: | ||
+ | |||
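  * The exact networks are site specific; as an illustration, a per-VO network like the ones in the figure could be created with the Juno neutron CLI roughly as follows (the VO name, CIDR, DNS server and external network name are illustrative assumptions):
<code bash>
# router for the VO tenant, attached to the (assumed) external network
neutron router-create dteam-router
neutron router-gateway-set dteam-router ext-net

# tenant network and subnet (illustrative CIDR and DNS server)
neutron net-create dteam-net
neutron subnet-create --name dteam-subnet --dns-nameserver 8.8.8.8 dteam-net 10.0.10.0/24

# attach the subnet to the router
neutron router-interface-add dteam-router dteam-subnet
</code>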
+ | ===== EGI FedCloud specific configuration ===== | ||
+ | |||
+ | (see [[https:// | ||
+ | |||
  * Install the CA certificates and the CRL-fetching software on both the Controller (egi-cloud) and the Compute nodes (cloud-01:
+ | <code bash> | ||
+ | wget -q -O - https:// | ||
+ | echo "deb http:// | ||
+ | aptitude update | ||
+ | apt-get install ca-policy-egi-core | ||
+ | wget http:// | ||
+ | dpkg -i fetch-crl_2.8.5-2_all.deb | ||
+ | fetch-crl | ||
+ | </ | ||
+ | |||
+ | ==== Install OpenStack Keystone-VOMS module ==== | ||
  * Prepare to run Keystone as a WSGI application behind SSL (a hedged sketch of a complete virtual host definition is given after this block)
+ | <code bash> | ||
+ | apt-get install python-m2crypto python-setuptools libvomsapi1 -y | ||
+ | apt-get install apache2 libapache2-mod-wsgi -y | ||
+ | a2enmod ssl | ||
+ | cp / | ||
+ | cp / | ||
+ | cat << | ||
+ | Listen 5000 | ||
+ | WSGIDaemonProcess keystone user=keystone group=nogroup processes=8 threads=1 | ||
+ | < | ||
+ | LogLevel | ||
+ | ErrorLog | ||
+ | CustomLog | ||
+ | |||
+ | SSLEngine | ||
+ | SSLCertificateFile | ||
+ | SSLCertificateKeyFile | ||
+ | SSLCACertificatePath | ||
+ | SSLCARevocationPath | ||
+ | SSLVerifyClient | ||
+ | SSLVerifyDepth | ||
+ | SSLProtocol | ||
+ | SSLCipherSuite | ||
+ | SSLOptions | ||
+ | |||
+ | WSGIScriptAlias / / | ||
+ | WSGIProcessGroup keystone | ||
+ | </ | ||
+ | |||
+ | Listen 35357 | ||
+ | WSGIDaemonProcess | ||
+ | < | ||
+ | LogLevel | ||
+ | ErrorLog | ||
+ | CustomLog | ||
+ | |||
+ | SSLEngine | ||
+ | SSLCertificateFile | ||
+ | SSLCertificateKeyFile | ||
+ | SSLCACertificatePath | ||
+ | SSLCARevocationPath | ||
+ | SSLVerifyClient | ||
+ | SSLVerifyDepth | ||
+ | SSLProtocol | ||
+ | SSLCipherSuite | ||
+ | SSLOptions | ||
+ | |||
+ | WSGIScriptAlias | ||
+ | WSGIProcessGroup | ||
+ | </ | ||
+ | EOF | ||
+ | </ | ||
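  * The virtual host definitions above are truncated in this revision; for reference, a minimal hedged sketch of the public (port 5000) definition is shown below, the admin port 35357 block being analogous. The certificate paths, the site file name and the WSGI script location are assumptions taken from typical Keystone-VOMS deployments, not necessarily the values used here:
<code bash>
cat <<'EOF' > /etc/apache2/sites-available/keystone-5000.conf
Listen 5000
WSGIDaemonProcess keystone user=keystone group=nogroup processes=8 threads=1
<VirtualHost _default_:5000>
  LogLevel  warn
  ErrorLog  ${APACHE_LOG_DIR}/error.log
  CustomLog ${APACHE_LOG_DIR}/ssl_access.log combined

  SSLEngine             on
  SSLCertificateFile    /etc/grid-security/hostcert.pem
  SSLCertificateKeyFile /etc/grid-security/hostkey.pem
  SSLCACertificatePath  /etc/grid-security/certificates
  SSLCARevocationPath   /etc/grid-security/certificates
  SSLVerifyClient       optional
  SSLVerifyDepth        10
  SSLProtocol           all -SSLv2 -SSLv3
  SSLOptions            +StdEnvVars +ExportCertData

  WSGIScriptAlias  / /usr/lib/cgi-bin/keystone/main
  WSGIProcessGroup keystone
</VirtualHost>
EOF
a2ensite keystone-5000
</code>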
+ | * take the file [[https:// | ||
+ | * copy it to / | ||
+ | <code bash> | ||
+ | mkdir -p / | ||
+ | wget https:// | ||
+ | mv keystone.py / | ||
+ | ln / | ||
+ | ln / | ||
+ | echo export OPENSSL_ALLOW_PROXY_CERTS=1 >>/ | ||
+ | service apache2 restart | ||
+ | </ | ||
+ | * Installing the Keystone-VOMS module: | ||
+ | <code bash> | ||
+ | git clone git:// | ||
+ | cd keystone-voms | ||
+ | pip install . | ||
+ | </ | ||
  * Enable the Keystone VOMS module (the intended result in keystone-paste.ini is sketched after this block)
+ | <code bash> | ||
+ | sed -i ' | ||
+ | cat << | ||
+ | [filter: | ||
+ | paste.filter_factory = keystone_voms.core: | ||
+ | EOF | ||
+ | sed -i ' | ||
+ | </ | ||
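  * The sed commands above are truncated; the intended end state in keystone-paste.ini is, under the assumption that only the public pipeline gets the filter, something like the following (the exact pipeline contents depend on the keystone-paste.ini shipped with Juno):
<code bash>
[filter:voms]
paste.filter_factory = keystone_voms.core:VomsAuthNMiddleware.factory

# the voms filter is inserted into the public pipeline just before public_service
[pipeline:public_api]
pipeline = sizelimit url_normalize build_auth_context token_auth admin_token_auth json_body ec2_extension user_crud_extension voms public_service
</code>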
  * Configure the Keystone VOMS module (a hedged example of a complete voms.json is given after this block)
+ | <code bash> | ||
+ | cat<< | ||
+ | [voms] | ||
+ | vomsdir_path = / | ||
+ | ca_path = / | ||
+ | voms_policy = / | ||
+ | vomsapi_lib = libvomsapi.so.1 | ||
+ | autocreate_users = True | ||
+ | add_roles = False | ||
+ | user_roles = _member_ | ||
+ | EOF | ||
+ | # | ||
+ | mkdir -p / | ||
+ | cat > / | ||
+ | / | ||
+ | / | ||
+ | EOF | ||
+ | cat > / | ||
+ | / | ||
+ | / | ||
+ | EOF | ||
+ | mkdir -p / | ||
+ | cat > / | ||
+ | / | ||
+ | / | ||
+ | EOF | ||
+ | cat > / | ||
+ | / | ||
+ | / | ||
+ | EOF | ||
+ | mkdir -p / | ||
+ | cat > / | ||
+ | / | ||
+ | / | ||
+ | EOF | ||
+ | cat > / | ||
+ | / | ||
+ | / | ||
+ | EOF | ||
+ | for i in ops atlas lhcb cms | ||
+ | do | ||
+ | mkdir -p / | ||
+ | cat > / | ||
+ | / | ||
+ | / | ||
+ | EOF | ||
+ | cat > / | ||
+ | / | ||
+ | / | ||
+ | EOF | ||
+ | done | ||
+ | # | ||
+ | cat << | ||
+ | { | ||
+ | " | ||
+ | " | ||
+ | | ||
+ | " | ||
+ | " | ||
+ | | ||
+ | " | ||
+ | " | ||
+ | }, | ||
+ | " | ||
+ | " | ||
+ | }, | ||
+ | " | ||
+ | " | ||
+ | }, | ||
+ | " | ||
+ | " | ||
+ | }, | ||
+ | " | ||
+ | " | ||
+ | } | ||
+ | } | ||
+ | EOF | ||
+ | # | ||
+ | service apache2 restart | ||
+ | </ | ||
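  * The JSON above is truncated in this revision; voms.json simply maps each supported VO to the local tenant it should land in. A hedged example (the VO-to-tenant mapping and the file path are illustrative) is:
<code bash>
cat <<EOF > /etc/keystone/voms.json
{
    "dteam": {
        "tenant": "dteam"
    },
    "ops": {
        "tenant": "ops"
    },
    "atlas": {
        "tenant": "atlas"
    }
}
EOF
</code>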
  * Manually adjust the Keystone catalog so that the identity endpoints point to the correct URLs:
+ | * public URL: https:// | ||
+ | * admin URL: https:// | ||
+ | * internal URL: https:// | ||
+ | <code bash> | ||
+ | mysql> use keystone; | ||
+ | mysql> update endpoint set url=" | ||
+ | mysql> update endpoint set url=" | ||
+ | mysql> select id,url from endpoint; | ||
# this should show rows with the above URLs
+ | </ | ||
+ | * Replace http with https in auth_[protocol, | ||
+ | * Replace http with https in auth_[protocol, | ||
  * Do the following on both the Controller and the Compute nodes (see [[http://
+ | <code bash> | ||
+ | cp / | ||
+ | update-ca-certificates | ||
+ | </ | ||
+ | * Also check if " | ||
+ | |||
+ | ==== Install the OCCI API ==== | ||
+ | |||
+ | (only on Controller node) | ||
+ | <code bash> | ||
+ | pip install pyssf | ||
+ | git clone https:// | ||
+ | cd occi-os | ||
+ | python setup.py install | ||
+ | cat <<EOF >>/ | ||
+ | ######## | ||
+ | # OCCI # | ||
+ | ######## | ||
+ | |||
+ | [composite: | ||
+ | use = egg: | ||
+ | /: occiapppipe | ||
+ | |||
+ | [pipeline: | ||
+ | pipeline = authtoken keystonecontext occiapp | ||
+ | # with request body size limiting and rate limiting | ||
+ | # pipeline = sizelimit authtoken keystonecontext ratelimit occiapp | ||
+ | |||
+ | [app: | ||
+ | use = egg: | ||
+ | EOF | ||
+ | </ | ||
  * Make sure the occiapi API is enabled in the nova configuration (a hedged example of the resulting setting is given after this block):
+ | <code bash> | ||
+ | crudini --set / | ||
+ | crudini --set / | ||
+ | </ | ||
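  * The crudini arguments above are truncated; assuming the file being edited is nova.conf, the intent is to add occiapi to the list of enabled APIs, for example:
<code bash>
crudini --set /etc/nova/nova.conf DEFAULT enabled_apis ec2,occiapi,osapi_compute,metadata
</code>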
+ | * Add this line in / | ||
+ | <code bash> | ||
+ | crudini --set / | ||
+ | </ | ||
+ | * modify the / | ||
+ | <code bash> | ||
+ | sed -i ' | ||
+ | sed -i ' | ||
+ | sed -i ' | ||
+ | </ | ||
+ | * and restart the nova-* services: | ||
+ | <code bash> | ||
+ | for i in nova-api nova-cert nova-consoleauth nova-scheduler nova-conductor nova-novncproxy; | ||
+ | </ | ||
+ | * Register service in Keystone: | ||
+ | <code bash> | ||
+ | keystone service-create --name occi_api --type occi --description 'Nova OCCI Service' | ||
+ | keystone endpoint-create --service-id $(keystone service-list | awk '/ OCCI / {print $2}') --region regionOne --publicurl https:// | ||
+ | </ | ||
  * Enable SSL connections on port 8787 by creating the Apache site file below (a minimal hedged sketch of such a proxy virtual host is also given after this block):
+ | <code bash> | ||
+ | cat << | ||
+ | # | ||
+ | # Proxy Server directives. Uncomment the following lines to | ||
+ | # enable the proxy server: | ||
+ | #LoadModule proxy_module / | ||
+ | #LoadModule proxy_http_module / | ||
+ | #LoadModule substitute_module / | ||
+ | |||
+ | Listen 8787 | ||
+ | < | ||
+ | < | ||
+ | Require all granted | ||
+ | </ | ||
+ | | ||
+ | | ||
+ | | ||
+ | | ||
+ | |||
+ | | ||
+ | | ||
+ | | ||
+ | | ||
+ | | ||
+ | # SSLCARevocationCheck chain | ||
+ | | ||
+ | | ||
+ | | ||
+ | | ||
+ | | ||
+ | < | ||
+ | # Do not enable proxying with ProxyRequests until you have secured | ||
+ | # your server. | ||
+ | # Open proxy servers are dangerous both to your network and to the | ||
+ | # Internet at large. | ||
+ | | ||
+ | |||
+ | < | ||
+ | #Order deny, | ||
+ | #Deny from all | ||
+ | | ||
+ | #Allow from .example.com | ||
+ | </ | ||
+ | |||
+ | | ||
+ | | ||
+ | | ||
+ | | ||
+ | | ||
+ | # FilterProvider OCCIFILTER SUBSTITUTE resp=Content-Type $text/ | ||
+ | # FilterProvider OCCIFILTER SUBSTITUTE resp=Content-Type $application/ | ||
+ | < | ||
+ | # | ||
+ | | ||
+ | | ||
+ | | ||
+ | </ | ||
+ | |||
+ | </ | ||
+ | </ | ||
+ | EOF | ||
+ | cat<< | ||
+ | nova ALL = (root) NOPASSWD: / | ||
+ | EOF | ||
+ | a2enmod proxy | ||
+ | a2enmod proxy_http | ||
+ | a2enmod substitute | ||
+ | a2ensite occi | ||
+ | service apache2 reload | ||
+ | service apache2 restart | ||
+ | </ | ||
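  * Most of the proxy configuration above is truncated in this revision; the following is a minimal hedged sketch of such a virtual host. It assumes the plain-HTTP OCCI API has been moved to 127.0.0.1:9000 (an assumption of this sketch, not necessarily the port used on this testbed) and it omits the mod_substitute filters used above to rewrite http into https in the responses:
<code bash>
cat <<'EOF' > /etc/apache2/sites-available/occi.conf
Listen 8787
<VirtualHost _default_:8787>
  SSLEngine             on
  SSLCertificateFile    /etc/grid-security/hostcert.pem
  SSLCertificateKeyFile /etc/grid-security/hostkey.pem
  SSLCACertificatePath  /etc/grid-security/certificates
  SSLCARevocationPath   /etc/grid-security/certificates
  SSLVerifyClient       optional
  SSLVerifyDepth        10
  SSLOptions            +StdEnvVars +ExportCertData

  # terminate SSL here and forward plain HTTP to the OCCI API (assumed backend address)
  ProxyRequests    off
  ProxyPass        / http://127.0.0.1:9000/
  ProxyPassReverse / http://127.0.0.1:9000/
</VirtualHost>
EOF
</code>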
+ | |||
+ | ==== Install rOCCI Client ==== | ||
  * We installed the rOCCI client on top of an EMI UI, with small changes, following this [[https://
+ | <code bash> | ||
+ | [root@prod-ui-02]# | ||
+ | [root@prod-ui-02]# | ||
+ | [root@prod-ui-02]# | ||
+ | [root@prod-ui-02]# | ||
+ | </ | ||
  * As a normal user, basic usage looks like the following (a complete hedged VM-creation example is also sketched after this block):
+ | <code bash> | ||
+ | # create ssh-key for accessing VM as cloudadm: | ||
+ | [prod-ui-02]# | ||
+ | [prod-ui-02]# | ||
+ | # | ||
+ | users: | ||
+ | - name: cloudadm | ||
+ | sudo: ALL=(ALL) NOPASSWD: | ||
+ | lock-passwd: | ||
+ | ssh-import-id: | ||
+ | ssh-authorized-keys: | ||
+ | - `cat tmpfedcloud.pub` | ||
+ | EOF | ||
+ | |||
+ | # create your VOMS proxy: | ||
+ | [prod-ui-02]# | ||
+ | ... | ||
+ | |||
+ | # query the Cloud provider to see what is available (flavors and images): | ||
+ | [prod-ui-02]# | ||
+ | ##################################################################################################################### | ||
+ | [[ http:// | ||
+ | title: | ||
+ | term: | ||
+ | location: | ||
+ | ##################################################################################################################### | ||
+ | [[ http:// | ||
+ | title: | ||
+ | term: | ||
+ | location: | ||
+ | ##################################################################################################################### | ||
+ | [[ http:// | ||
+ | title: | ||
+ | term: hpc | ||
+ | location: | ||
+ | ##################################################################################################################### | ||
+ | [[ http:// | ||
+ | title: | ||
+ | term: | ||
+ | location: | ||
+ | ##################################################################################################################### | ||
+ | [[ http:// | ||
+ | title: | ||
+ | term: | ||
+ | location: | ||
+ | ##################################################################################################################### | ||
+ | [[ http:// | ||
+ | title: | ||
+ | term: | ||
+ | location: | ||
+ | ##################################################################################################################### | ||
+ | [[ http:// | ||
+ | title: | ||
+ | term: | ||
+ | location: | ||
+ | ##################################################################################################################### | ||
+ | [[ http:// | ||
+ | title: | ||
+ | term: | ||
+ | location: | ||
+ | ##################################################################################################################### | ||
+ | # | ||
+ | # | ||
+ | # | ||
+ | [prod-ui-02 ~]$ occi --endpoint https:// | ||
+ | ##################################################################################################################### | ||
+ | [[ http:// | ||
+ | title: | ||
+ | term: | ||
+ | location: | ||
+ | ##################################################################################################################### | ||
+ | [[ http:// | ||
+ | title: | ||
+ | term: | ||
+ | location: | ||
+ | ##################################################################################################################### | ||
+ | [[ http:// | ||
+ | title: | ||
+ | term: | ||
+ | location: | ||
+ | ##################################################################################################################### | ||
+ | [[ http:// | ||
+ | title: | ||
+ | term: | ||
+ | location: | ||
+ | ##################################################################################################################### | ||
+ | [[ http:// | ||
+ | title: | ||
+ | term: | ||
+ | location: | ||
+ | ##################################################################################################################### | ||
+ | [[ http:// | ||
+ | title: | ||
+ | term: | ||
+ | location: | ||
+ | ##################################################################################################################### | ||
+ | [[ http:// | ||
+ | title: | ||
+ | term: | ||
+ | location: | ||
+ | ##################################################################################################################### | ||
+ | [[ http:// | ||
+ | title: | ||
+ | term: | ||
+ | location: | ||
+ | ##################################################################################################################### | ||
+ | [[ http:// | ||
+ | title: | ||
+ | term: | ||
+ | location: | ||
+ | ##################################################################################################################### | ||
+ | [[ http:// | ||
+ | title: | ||
+ | term: | ||
+ | location: | ||
+ | ##################################################################################################################### | ||
+ | [[ http:// | ||
+ | title: | ||
+ | term: | ||
+ | location: | ||
+ | ##################################################################################################################### | ||
+ | [[ http:// | ||
+ | title: | ||
+ | term: | ||
+ | location: | ||
+ | ##################################################################################################################### | ||
+ | [[ http:// | ||
+ | title: | ||
+ | term: | ||
+ | location: | ||
+ | ##################################################################################################################### | ||
+ | [[ http:// | ||
+ | title: | ||
+ | term: | ||
+ | location: | ||
+ | ##################################################################################################################### | ||
+ | # | ||
+ | # create a VM of flavor " | ||
+ | [prod-ui-02]# | ||
+ | https:// | ||
+ | # | ||
+ | # assign a floating-ip to the VM: | ||
+ | [prod-ui-02]# | ||
+ | # | ||
+ | # discover the floating-ip assigned: | ||
+ | [prod-ui-02]# | ||
+ | ... | ||
+ | occi.networkinterface.address = 90.147.77.226 | ||
+ | occi.core.target = / | ||
+ | occi.core.source = / | ||
+ | occi.core.id = / | ||
+ | ... | ||
+ | # | ||
+ | # access the VM via ssh: | ||
+ | [prod-ui-02]# | ||
+ | Enter passphrase for key ' | ||
+ | Welcome to Ubuntu 14.04 | ||
+ | ... | ||
+ | </ | ||
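  * Many of the occi invocations above are truncated in this revision; a hedged end-to-end example of creating and deleting a VM with the rOCCI client looks like the following (the template identifiers, the name of the cloud-init user-data file and the returned compute URL are illustrative placeholders):
<code bash>
ENDPOINT=https://egi-cloud.pd.infn.it:8787/

# list the available OS and resource templates
occi --endpoint $ENDPOINT --auth x509 --user-cred $X509_USER_PROXY --voms \
     --action describe --resource os_tpl
occi --endpoint $ENDPOINT --auth x509 --user-cred $X509_USER_PROXY --voms \
     --action describe --resource resource_tpl

# create a VM from an OS template and a flavour, passing the cloud-init user data created above
occi --endpoint $ENDPOINT --auth x509 --user-cred $X509_USER_PROXY --voms \
     --action create --resource compute \
     --mixin os_tpl#<image_id> --mixin resource_tpl#small \
     --attribute occi.core.title="test-vm" \
     --context user_data="file://$PWD/tmpfedcloud.login"

# describe and finally delete the VM using the URL returned by the create action
occi --endpoint $ENDPOINT --auth x509 --user-cred $X509_USER_PROXY --voms \
     --action describe --resource <compute_url>
occi --endpoint $ENDPOINT --auth x509 --user-cred $X509_USER_PROXY --voms \
     --action delete --resource <compute_url>
</code>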
+ | ==== Install FedCloud BDII ==== | ||
+ | |||
+ | * See the guide [[https:// | ||
+ | * Installing the resource bdii and the cloud-info-provider: | ||
+ | <code bash> | ||
+ | apt-get install bdii | ||
+ | git clone https:// | ||
+ | cd BDIIscripts | ||
+ | pip install . | ||
+ | </ | ||
  * Customize the configuration file with the local site's information
+ | <code bash> | ||
+ | cp BDIIscripts/ | ||
+ | sed -i ' | ||
+ | sed -i ' | ||
+ | sed -i ' | ||
+ | sed -i ' | ||
+ | sed -i ' | ||
+ | sed -i ' | ||
+ | sed -i ' | ||
+ | sed -i ' | ||
+ | sed -i ' | ||
+ | sed -i ' | ||
+ | sed -i ' | ||
+ | sed -i ' | ||
+ | sed -i ' | ||
+ | sed -i ' | ||
+ | sed -i ' | ||
+ | </ | ||
+ | * Be sure that keystone contains the OCCI endpoint, otherwise it will not be published by the BDII: | ||
+ | <code bash> | ||
+ | [root@egi-cloud ~]# keystone service-list | ||
+ | [root@egi-cloud ~]# keystone service-create --name nova-occi --type occi --description 'Nova OCCI Service' | ||
+ | [root@egi-cloud ~]# keystone endpoint-create --service_id <the one obtained above> --region RegionOne --publicurl https:// | ||
+ | </ | ||
  * By default, the provider script filters out images that do not have a marketplace URI defined in the marketplace or vmcatcher_event_ad_mpuri property. If you want to list all image templates (including local snapshots), set the variable '
  * Create the wrapper file below (a hedged completed version is also sketched after this block):
+ | <code bash> | ||
+ | #!/bin/sh | ||
+ | |||
+ | cloud-info-provider-service --yaml / | ||
+ | --middleware openstack \ | ||
+ | --os-username < | ||
+ | --os-tenant-name < | ||
+ | </ | ||
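  * The wrapper above is truncated in this revision; a hedged completed version could look like the following, where the YAML path, the credential placeholders and the Keystone URL are assumptions of this sketch:
<code bash>
#!/bin/sh
# publish the cloud resources of this OpenStack installation in GLUE format
cloud-info-provider-service --yaml /etc/cloud-info-provider/openstack.yaml \
                            --middleware openstack \
                            --os-username <admin_user> --os-password <admin_password> \
                            --os-tenant-name <admin_tenant> \
                            --os-auth-url https://egi-cloud.pd.infn.it:35357/v2.0
</code>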
  * Run the cloud-info-provider script manually and check that the output returns the complete LDIF. To do so, execute:
+ | <code bash> | ||
+ | [root@egi-cloud ~]# chmod +x / | ||
+ | [root@egi-cloud ~]# / | ||
+ | </ | ||
+ | * Now you can start the bdii service: | ||
+ | <code bash> | ||
+ | [root@egi-cloud ~]# service bdii start | ||
+ | </ | ||
+ | * Use the command below to see if the information is being published: | ||
+ | <code bash> | ||
+ | [root@egi-cloud ~]# ldapsearch -x -h localhost -p 2170 -b o=glue | ||
+ | </ | ||
+ | * Information on how to set up the site-BDII in egi-cloud-sbdii.pd.infn.it is available [[https:// | ||
+ | * Add your cloud-info-provider to your site-BDII egi-cloud-sbdii.pd.infn.it by adding new lines in the site.def like this: | ||
+ | <code bash> | ||
+ | BDII_REGIONS=" | ||
+ | BDII_CLOUD_URL=" | ||
+ | BDII_BDII_URL=" | ||
+ | </ | ||
+ | |||
+ | ==== Install vmcatcher/ | ||
+ | |||
  * VMcatcher allows users to subscribe to Virtual Machine image lists, cache the images referenced in those lists, validate the image lists with X.509-based public key cryptography,
+ | |||
+ | <code bash> | ||
+ | useradd -m -b /opt stack | ||
+ | STACKHOME=/ | ||
+ | apt-get install python-m2crypto python-setuptools qemu-utils -y | ||
+ | pip install nose | ||
+ | git clone https:// | ||
+ | hepixvmitrust-0.0.18 | ||
+ | git clone https:// | ||
+ | git clone https:// | ||
+ | wget http:// | ||
+ | wget http:// | ||
+ | tar -zxvf python-glancepush-0.0.6.tar.gz -C $STACKHOME/ | ||
+ | tar -zxvf gpvcmupdate-0.0.7.tar.gz -C $STACKHOME/ | ||
+ | for i in hepixvmitrust smimeX509validation vmcatcher $STACKHOME/ | ||
+ | do | ||
+ | cd $i | ||
+ | python setup.py install | ||
+ | echo exit code=$? | ||
+ | cd | ||
+ | done | ||
+ | mkdir -p / | ||
+ | ln -fs / | ||
+ | mkdir -p $STACKHOME/ | ||
+ | mkdir -p / | ||
+ | cp / | ||
+ | </ | ||
+ | * Now for each VO/tenant you have in voms.json write a file like this: | ||
+ | <code bash> | ||
+ | [root@egi-cloud ~]# su - stack | ||
+ | [stack@egi-cloud ~]# cat << EOF > / | ||
+ | [general] | ||
+ | # Tenant for this VO. Must match the tenant defined in voms.json file | ||
+ | testing_tenant=dteam | ||
+ | # Identity service endpoint (Keystone) | ||
+ | endpoint_url=https:// | ||
+ | # User Password | ||
+ | password=ADMIN_PASS | ||
+ | # User | ||
+ | username=admin | ||
+ | # Set this to true if you're NOT using self-signed certificates | ||
+ | is_secure=True | ||
+ | # SSH private key that will be used to perform policy checks (to be done) | ||
+ | ssh_key=/ | ||
+ | # WARNING: Only define the next variable if you're going to need it. Otherwise you may encounter problems | ||
+ | # | ||
+ | EOF | ||
+ | </ | ||
+ | * and for images not belonging to any VO use the admin tenant | ||
+ | <code bash> | ||
+ | [stack@egi-cloud ~]# cat << EOF > / | ||
+ | [general] | ||
+ | # Tenant for this VO. Must match the tenant defined in voms.json file | ||
+ | testing_tenant=admin | ||
+ | # Identity service endpoint (Keystone) | ||
+ | endpoint_url=https:// | ||
+ | # User Password | ||
+ | password=ADMIN_PASS | ||
+ | # User | ||
+ | username=admin | ||
+ | # Set this to true if you're NOT using self-signed certificates | ||
+ | is_secure=True | ||
+ | # SSH private key that will be used to perform policy checks (to be done) | ||
+ | ssh_key=/ | ||
+ | # WARNING: Only define the next variable if you're going to need it. Otherwise you may encounter problems | ||
+ | # | ||
+ | EOF | ||
+ | </ | ||
+ | * Check that vmcatcher is running properly by listing and subscribing to an image list | ||
+ | <code bash> | ||
+ | [stack@egi-cloud ~]# cat << | ||
+ | export VMCATCHER_RDBMS=" | ||
+ | export VMCATCHER_CACHE_DIR_CACHE=" | ||
+ | export VMCATCHER_CACHE_DIR_DOWNLOAD=" | ||
+ | export VMCATCHER_CACHE_DIR_EXPIRE=" | ||
+ | EOF | ||
+ | [stack@egi-cloud ~]# export VMCATCHER_RDBMS=" | ||
+ | [stack@egi-cloud ~]# vmcatcher_subscribe -l | ||
+ | [stack@egi-cloud ~]# vmcatcher_subscribe -e -s https://< | ||
+ | [stack@ocp-ctrl ~]$ vmcatcher_subscribe -l | ||
+ | 76fdee70-8119-5d33-9f40-3c57e1c60df1 | ||
+ | </ | ||
+ | * Create a CRON wrapper for vmcatcher, named $STACKHOME/ | ||
+ | <code bash> | ||
+ | #!/bin/bash | ||
+ | #Cron handler for VMCatcher image syncronization script for OpenStack | ||
+ | |||
+ | #Vmcatcher configuration variables | ||
+ | export VMCATCHER_RDBMS=" | ||
+ | export VMCATCHER_CACHE_DIR_CACHE=" | ||
+ | export VMCATCHER_CACHE_DIR_DOWNLOAD=" | ||
+ | export VMCATCHER_CACHE_DIR_EXPIRE=" | ||
+ | export VMCATCHER_CACHE_EVENT=" | ||
+ | |||
+ | #Update vmcatcher image lists | ||
+ | / | ||
+ | |||
+ | #Add all the new images to the cache | ||
+ | for a in $(/ | ||
+ | / | ||
+ | done | ||
+ | |||
+ | #Update the cache | ||
+ | / | ||
+ | |||
+ | #Run glancepush | ||
+ | python / | ||
+ | </ | ||
  * Add the admin user to the tenants and set the right ownership on the directories
+ | <code bash> | ||
+ | [root@egi-cloud ~]# for vo in atlas cms lhcb dteam ops wenmr fctf; do keystone user-role-add --user admin --tenant $vo --role _member_; done | ||
+ | [root@egi-cloud ~]# chown -R stack:stack $STACKHOME | ||
+ | </ | ||
+ | * Test that the vmcatcher handler is working correctly by running: | ||
+ | <code bash> | ||
+ | [stack@egi-cloud ~]# chmod +x $STACKHOME/ | ||
+ | [stack@egi-cloud ~]# $STACKHOME/ | ||
+ | </ | ||
+ | * Add the following line to the stack user crontab: | ||
+ | <code bash> | ||
+ | 50 */6 * * * $STACKHOME/ | ||
+ | </ | ||
+ | * Useful links for getting VO-wide image lists that need authentication to AppDB: [[https:// | ||
+ | |||
+ | ==== Use the same APEL/SSM of grid site ==== | ||
  * Cloud usage records are sent to APEL through the ssmsend program installed on cert-37.pd.infn.it:
+ | <code bash> | ||
+ | [root@cert-37 ~]# cat / | ||
+ | # send buffered usage records to APEL | ||
+ | 30 */24 * * * root / | ||
+ | </ | ||
  * It is therefore necessary to install and configure NFS on egi-cloud (a hedged example of the export and of the corresponding client mount is given after this block):
+ | <code bash> | ||
+ | [root@egi-cloud ~]# mkdir -p / | ||
+ | [root@egi-cloud ~]# apt-get install nfs-kernel-server -y | ||
+ | [root@egi-cloud ~]# cat<< | ||
+ | / | ||
+ | EOF | ||
+ | [root@egi-cloud ~]$ service nfs-kernel-server restart | ||
+ | </ | ||
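  * The export line above is truncated; a hedged example of the intended setup, assuming the APEL spool directory is /var/spool/apel/outgoing on both machines (an assumption of this sketch), is:
<code bash>
# on egi-cloud: export the spool directory to the machine running ssmsend
cat <<EOF >> /etc/exports
/var/spool/apel/outgoing cert-37.pd.infn.it(rw,sync,no_root_squash)
EOF
exportfs -ra

# on cert-37: mount it persistently
echo "egi-cloud.pd.infn.it:/var/spool/apel/outgoing /var/spool/apel/outgoing nfs defaults 0 0" >> /etc/fstab
mount -a
</code>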
+ | * In case of APEL nagios probe failure, check if / | ||
  * To check whether accounting records are properly received by the APEL server, look at [[http://
+ | |||
+ | ==== Install the new accounting system (CASO) ==== | ||
+ | |||
  * Following the instructions at [[https://
+ | <code bash> | ||
+ | [root@egi-cloud ~]$ pip install caso | ||
+ | </ | ||
+ | * Copy the CA certs bundle in the right place | ||
+ | <code bash> | ||
+ | [root@egi-cloud ~]# cd / | ||
+ | [root@egi-cloud ~]# cp / | ||
+ | [root@egi-cloud ~]# mv cacert.pem cacert.pem.bak; | ||
+ | </ | ||
  * Configure / (a heavily hedged example of this configuration file is sketched after this block)
+ | <code bash> | ||
+ | [root@egi-cloud ~]$ mkdir / | ||
+ | [root@egi-cloud ~]$ caso-extract -v -d | ||
+ | </ | ||
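  * The cASO configuration file itself is not shown above; a heavily hedged sketch of a possible /etc/caso/caso.conf is given below. The option names should be checked against the cASO documentation for the installed version, and the site name, tenant list, credentials and messenger class are illustrative assumptions:
<code bash>
[DEFAULT]
# GOCDB site name and list of tenants to extract records for (illustrative values)
site_name = INFN-PADOVA-STACK
tenants = dteam, ops, atlas

# where to spool the extracted records and which messenger publishes them to APEL/SSM
# (class name to be verified against the installed cASO version)
spooldir = /var/spool/caso
messengers = caso.messenger.ssm.SsmMessager

# credentials of the user allowed to query usage in all the tenants above
user = admin
password = ADMIN_PASS

# VO <-> tenant mapping, reusing the keystone-voms file
mapping_file = /etc/keystone/voms.json
</code>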
+ | * Create the cron job | ||
+ | <code bash> | ||
+ | [root@egi-cloud ~]# cat / | ||
+ | # extract and send usage records to APEL/ | ||
+ | 10 * * * * root / | ||
+ | </ | ||
+ | |||
+ | ==== Troubleshooting ==== | ||
+ | |||
  * Passwordless ssh access to egi-cloud from cld-nagios, and from egi-cloud to cloud-0*, has already been configured
  * If cld-nagios cannot ping egi-cloud, make sure that the route "route add -net 192.168.60.0 netmask 255.255.255.0 gw 192.168.114.1"
+ | * In case of Nagios alarms, try to restart all cloud services doing the following: | ||
+ | <code bash> | ||
+ | $ ssh root@egi-cloud | ||
+ | [root@egi-cloud ~]# ./ | ||
+ | [root@egi-cloud ~]# for i in $(seq 1 6); do ssh cloud-0$i.pn.pd.infn.it ./ | ||
+ | </ | ||
+ | * Resubmit the Nagios probe and check if it works again | ||
  * In case the problem persists, check the consistency of the DB by executing:
+ | <code bash> | ||
+ | [root@egi-cloud ~]# python nova-quota-sync.py | ||
+ | </ | ||
  * In case of an EGI Nagios alarm, check that the user running the Nagios probes does not also belong to tenants other than "
  * in case of a reboot of the egi-cloud server:
    * check its network configuration (use IPMI if it is not reachable): all three interfaces must be up and the default gateway must be 192.168.114.1
    * check DNS in /
    * check routing with $ route -n; if needed do: $ route add default gw 192.168.114.1 dev em1 and $ route del default gw 90.147.77.254 dev em3. Also be sure not to have a route for the 90.147.77.0 network.
    * disable the native keystone service (do: $ service keystone stop) and restart all cloud services
    * check that the GlusterFS mount points are properly mounted
  * in case of a reboot of a cloud-0* server (use IPMI if it is not reachable): both interfaces must be up and the default gateway must be 192.168.114.1
    * check its network configuration
    * check that all partitions in /etc/fstab are properly mounted (do: $ df -h)