progetti:cloud-areapd:egi_federated_cloud:rocky-centos7_testbed [2019/12/04 14:54]
verlato@infn.it
progetti:cloud-areapd:egi_federated_cloud:rocky-centos7_testbed [2020/02/03 16:53] (current)
verlato@infn.it [Local Monitoring]
Line 37:
  
===== OpenStack configuration =====

Controller/Network node and Compute nodes were installed according to the [[http://docs.openstack.org/rocky|OpenStack official documentation]].
  
Line 332:
  
For publicly exposing some OpenStack services over https, do not forget to create the files /etc/httpd/conf.d/wsgi-{nova,neutron,glance,cinder}.conf and set the corresponding endpoints before restarting everything.
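As an illustration, one of those files might look like the sketch below. The port, certificate paths, process counts and the WSGI script location are assumptions to be adapted to the local setup, not the exact configuration used on this testbed:

<code apache>
# Hypothetical /etc/httpd/conf.d/wsgi-nova.conf: expose the Nova API over https
# (port, certificate paths and WSGI script path are examples — adjust locally)
Listen 8774
<VirtualHost *:8774>
  SSLEngine on
  SSLCertificateFile    /etc/grid-security/hostcert.pem
  SSLCertificateKeyFile /etc/grid-security/hostkey.pem
  WSGIDaemonProcess nova-api processes=2 threads=4 user=nova group=nova
  WSGIProcessGroup nova-api
  WSGIScriptAlias / /usr/bin/nova-api-wsgi
  <Directory /usr/bin>
    Require all granted
  </Directory>
  ErrorLog  /var/log/httpd/nova_api_error.log
  CustomLog /var/log/httpd/nova_api_access.log combined
</VirtualHost>
</code>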
==== Install FedCloud BDII ====

(See the [[https://egi-federated-cloud-integration.readthedocs.io/en/latest/openstack.html#egi-information-system|EGI integration guide]] and the [[https://github.com/EGI-Foundation/cloud-info-provider|BDII configuration guide]].)

Installing the resource BDII and the cloud-info-provider in **egi-cloud-ha** (with the CMD-OS repo already installed):
Line 397:
BDII_BDII_URL="ldap://egi-cloud-sbdii.pd.infn.it:2170/mds-vo-name=resource,o=grid"
</code>
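To verify that the resource BDII answers, it can be queried with ldapsearch, using the host, port and base DN from BDII_BDII_URL above (a sketch; it assumes the openldap-clients package is installed on the querying host):

<code bash>
# Anonymous query against the resource BDII: prints the first published
# entries if the service is up and the firewall allows port 2170
ldapsearch -x -LLL -H ldap://egi-cloud-sbdii.pd.infn.it:2170 \
           -b "mds-vo-name=resource,o=grid" | head -20
</code>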
==== Use the same APEL/SSM of the grid site ====

Cloud usage records are sent to APEL through the ssmsend program installed on **cert-37.pd.infn.it**:
<code bash>
Line 418:
  
To check whether accounting records are properly received by the APEL server, look at [[http://goc-accounting.grid-support.ac.uk/cloudtest/cloudsites2.html|this site]].
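The sender can also be tested by hand on cert-37.pd.infn.it (a sketch; the config and log paths are the apel-ssm defaults and may differ on this host):

<code bash>
# Run the APEL SSM sender once and inspect the end of its log
# (/etc/apel/sender.cfg and /var/log/apel/ssmsend.log are assumed defaults)
ssmsend -c /etc/apel/sender.cfg
tail -20 /var/log/apel/ssmsend.log
</code>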
==== Install the accounting system (cASO) ====
  
Line 469:
EOF
</code>
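A quick way to check the cASO setup is to run an extraction by hand (a sketch; caso-extract is the command shipped with cASO, and the spool directory shown is an assumed default that may differ in the local configuration):

<code bash>
# Run one extraction with the local configuration, then look for the
# generated usage records (spool path is an assumption)
caso-extract --config-file /etc/caso/caso.conf
ls -l /var/spool/apel/outgoing/openstack/
</code>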
==== Install Cloudkeeper and Cloudkeeper-OS ====

On **egi-cloud.pd.infn.it** create a cloudkeeper user in keystone:
<code bash>
Line 525:
systemctl start cloudkeeper.timer
</code>
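To confirm the timer is active and see what the last run did (a sketch; cloudkeeper.timer is the unit started above, while the matching cloudkeeper.service unit name is an assumption):

<code bash>
# Is the timer scheduled, and what did the last triggered run log?
systemctl status cloudkeeper.timer
journalctl -u cloudkeeper.service --since today | tail -20
</code>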
==== Installing Squid for CVMFS (optional) ====

Install and configure squid on cloud-01 and cloud-02 for use from VMs (see https://cvmfs.readthedocs.io/en/stable/cpt-squid.html):
<code bash>
Line 570:
In practice, it is better to use the already existing squids:
CVMFS_HTTP_PROXY="http://squid-01.pd.infn.it:3128|http://squid-02.pd.infn.it:3128"
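To verify that a squid answers, one can fetch a small CVMFS file through the proxy from any VM (a sketch; the stratum-1 host and repository are examples, any reachable CVMFS stratum 1 will do):

<code bash>
# Fetch a repository manifest through the squid proxy; an HTTP 200
# means the proxy forwards CVMFS traffic (stratum-1 URL is an example)
curl -s -o /dev/null -w "%{http_code}\n" \
     -x http://squid-01.pd.infn.it:3128 \
     http://cvmfs-stratum-one.cern.ch/cvmfs/atlas.cern.ch/.cvmfspublished
</code>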
==== Local Accounting ====

A local accounting system based on Grafana, InfluxDB and Collectd has been set up following the instructions [[https://docs.google.com/document/d/1f-JcVShAhveYrgATdtLXcPFkQnVYgx_Eunqdy43kk48/edit?usp=sharing|here]].

==== Local Monitoring ====

=== Ganglia ===
Line 580:
  * Finally: systemctl enable gmond.service; systemctl start gmond.service

=== Nagios ===

  * Install on compute nodes nsca-client, nagios, nagios-plugins-disk, nagios-plugins-procs, nagios-plugins, nagios-common, nagios-plugins-load

  * Copy the file **cld-nagios:/var/spool/nagios/.ssh/id_rsa.pub** into a file named **/home/nagios/.ssh/authorized_keys** on the controller and on all compute nodes, and into a file named **/root/.ssh/authorized_keys** on the controller. Also make sure that /home/nagios is the home directory of the nagios user in the /etc/passwd file.
Line 586:
  * Then do on all compute nodes:
<code bash>
$ echo encryption_method=1 >> /etc/nagios/send_nsca.cfg
$ usermod -a -G libvirt nagios
$ sed -i 's|#password=|password=NSCA_PASSWORD|g' /etc/nagios/send_nsca.cfg
# then be sure the files below are in /usr/local/bin:
Line 663:
    * check if port 8472 is open on the local firewall (it is used by linuxbridge vxlan networks)
  
  * in case of reboot of a cloud-0* server (use IPMI if not reachable): all 3 interfaces must be up and the default destination must have 192.168.114.1 as gateway
    * check its network configuration
    * check if all partitions in /etc/fstab are properly mounted (do: $ df -h)
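The post-reboot checks above can be sketched as a few commands to run on the recovered cloud-0* node (the gateway and port are the ones stated on this page; iptables is assumed as the local firewall, substitute firewall-cmd if firewalld is used):

<code bash>
# Quick post-reboot sanity checks on a compute node
ip -br link                   # all 3 interfaces should show UP
ip route | grep default       # expect: default via 192.168.114.1
df -h                         # are all /etc/fstab partitions mounted?
iptables -L -n | grep 8472    # is the vxlan port open on the firewall?
</code>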