====== Foreman - tool install & config management ======

Page containing our experiences regarding testing, and eventually a production deployment, of an installation and configuration management service based on [[http://theforeman.org/ | Foreman]] and [[http://www.puppetlabs.com/ | Puppet]].

===== Advice & (external) experiences regarding deployment =====

  * from **Steve Traylen (CERN)**:

  We run separate nodes for Puppet and Foreman, and I would recommend this simply because it's simpler: they scale at different rates.
  We are currently running 6 or so Puppet masters and 6 or so Foreman nodes. Both are accessed behind two load-balanced Apache mod_proxy_balancer nodes.
  We can hopefully add new nodes to the backend and frontend as required.
  Having said that, we did originally run everything on one node. It works; we just wanted greater resilience.
  I think we got to about 1000 hosts running puppet once per hour before we started to hit problems. The node was a 4-core, 2-year-old batch server.
  Memory is not a huge problem; you run out of CPU before anything else.
  We have always run the Foreman MySQL on separate hardware, since CERN offers a service for this.
  We also run 3 types of Foreman nodes, where we split traffic based on the URL. In particular we had a problem where puppet reports being uploaded
  to Foreman killed the Foreman service, so the web interface became unresponsive to users.
  Splitting up the Foremans by function has resolved this.
  Clearly what we have above is pretty complicated and comes after nearly 2 years and probably 4 evolutions of the service.
  I would start with two boxes, one for Foreman and one for Puppet, and then as quickly as possible plan to get an Apache mod_proxy_balancer (or HAProxy)
  in front, or even start with this. All our Puppet and Foreman backends run Passenger to make them multi-threaded.
  In addition we have two very high quality, brand new hardware boxes for PuppetDB and PostgreSQL. PuppetDB is where we are currently struggling
  with scaling, for sure. We are actively working with Puppet Labs here to reduce the load that PuppetDB generates. This is making progress, but
  getting this to more than one node currently has no obvious solution.
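
As a rough illustration of the load-balanced front end described above, the fragment below is a minimal sketch of an Apache mod_proxy_balancer virtual host spreading Puppet agent traffic over two masters. Host names are hypothetical, certificate handling is omitted, and this is not CERN's actual configuration.

<code>
# Sketch only: write a hypothetical balancer vhost for port 8140 (Puppet agents).
# Assumes mod_ssl, mod_proxy and mod_proxy_balancer are installed and loaded,
# and that "Listen 8140" is added to the httpd configuration.
cat > /etc/httpd/conf.d/puppet-balancer.conf <<'EOF'
<Proxy balancer://puppetmasters>
    BalancerMember https://puppetmaster-01.example.org:8140
    BalancerMember https://puppetmaster-02.example.org:8140
</Proxy>
<VirtualHost *:8140>
    SSLEngine on
    # certificate/key directives for the front end omitted here
    SSLProxyEngine on
    ProxyPass        / balancer://puppetmasters/
    ProxyPassReverse / balancer://puppetmasters/
</VirtualHost>
EOF
service httpd reload
</code>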


  * from **Frederic Shaer (CEA.FR)** (see complete details in {{:progetti:cloud-areapd:schaer_foreman.pdf|}}):

  Our config is:
  - One node, SL6.2, serving as
  o   Puppetmaster running under Passenger (with a recompiled ruby version, see another mail of mine in this thread about that)
  o   Foreman server
  o   Git (gitolite) server
  o   CA server (the Puppet CA)
  o   Puppet fileserver
  o   HTTP/Yum mirror for our OSes, EPEL and any other repositories we might use (and one apt mirror, for our single Debian-managed host)
  o   Disk is only relevant for mirrors in our case: the MySQL DB is 500MB big after a few weeks, because of reports, but these can be / are cleaned up
  - The node's specs (now):
  o   Xeon E5-2640 0 @ 2.50GHz (12 physical cores)
  o   32GB mem., 27GB free
  o   2TB (RAID6) data filesystem, 185G used.

  Foreman acts as a web frontend and connects to “foreman proxy” daemons to run its actions, so in case of overload you can probably split the
  puppetmaster/tftp/dhcp/foreman server (the proxy must run on the puppetmaster), but these things are a bit sensitive to set up...
  But for a beginning, I would advise running both/all on the same host, as this surely rules out “same host” assumptions in the code/config.
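
This setup (and the DESY one below) runs the puppet master under Passenger. A hedged sketch of the usual steps on SL6 follows; package names and the config.ru path are assumptions and vary with the Puppet and Passenger versions in use.

<code>
# Sketch only: serve the puppet master through Apache + Passenger on SL6.
# Package names and paths are assumptions, not the exact ones used here.
yum install -y httpd mod_ssl mod_passenger puppet-server

# Rack application directory expected by the Passenger vhost
mkdir -p /usr/share/puppet/rack/puppetmasterd/{public,tmp}
cp /usr/share/puppet/ext/rack/config.ru /usr/share/puppet/rack/puppetmasterd/   # location differs between Puppet versions
chown puppet:puppet /usr/share/puppet/rack/puppetmasterd/config.ru

# An SSL vhost on port 8140 with "PassengerEnabled On" and DocumentRoot set to
# .../puppetmasterd/public then replaces the standalone WEBrick puppetmaster:
service puppetmaster stop; chkconfig puppetmaster off
service httpd restart
</code>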


  * from **Jan Engels (DESY)** ([[https://indico.cern.ch/getFile.py/access?contribId=17&sessionId=6&resId=0&materialId=slides&confId=247864 | hepix_talk]]):

  At DESY we have 2 dedicated puppet masters and 2 infrastructure servers which we share between PuppetDB, Foreman and GitLab.
  As a database backend we use a 2-node PostgreSQL cluster. You can find more info in my talk (link above) from the last HEPiX.
  With our current setup we are managing ~500 clients and the load on the puppet master servers is still very low (load average of ~3).
  We are also using Passenger, which seems to be the highly recommended way of using Puppet in production environments.



===== First Tests =====


==== Testbed description ====

  * one VM-node hosting Foreman + PuppetMaster
    * cert-23 (Foreman/PuppetMaster)
      * VM on cream-mstr-031
      * SL6.4/x86_64
      * 2 vNIC: eth0 (pub.), eth1 (priv.)
      * 2 CPU, 3G RAM, VDisk: 50GB
  * three VM-nodes hosting OpenStack all-in-one tests
    * cert-30 (OpenStack all-in-one through Foreman)
      * VM on cert-38
      * SL6.4/x86_64
      * 1 vNIC: eth0 (pub.)
      * 2 CPU, 3G RAM, VDisk: 25GB
    * cert-33 (OpenStack all-in-one through Foreman (1st test) & Foreman/PuppetMaster (2nd test))
      * VM on cert-03
      * SL6.4/x86_64
      * 1 vNIC: eth0 (pub.)
      * 2 CPU, 3G RAM, VDisk: 25GB
    * cert-40 (OpenStack all-in-one through PackStack)
      * VM on cert-38
      * SL6.4/x86_64
      * 1 vNIC: eth0 (pub.)
      * 2 CPU, 4G RAM, VDisk: 60GB

==== Testbed Installation & Configuration ====

=== First scenario - test Foreman non-provisioning & all-in-one OpenStack ===

  * Installation & Configuration of **Foreman** - **cert-23**:
    * scenario - **non-provisioning mode** - the nodes managed through Foreman have to be already up & running (OS-level)
    * using **openstack-foreman-installer**, following [[http://openstack.redhat.com/Deploying_RDO_Using_Foreman | Deploying_RDO_Using_Foreman]] (see the sketch after this list)
    * Foreman "dashboard": http://cert-23.pd.infn.it (user: admin, pass: PD "normal" password, or user "all", pass: ... (available on request :-)). Accessible from the "PD" area
    * the installation & configuration log for the Foreman/PuppetMaster server can be found at: {{:progetti:cloud-areapd:foreman_log.txt|}}
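
A hedged sketch of the server-side steps from the Deploying_RDO_Using_Foreman guide; the package name and the FOREMAN_PROVISIONING variable are quoted from memory and may differ between RDO releases - the attached foreman_log.txt records the commands actually used.

<code>
# Sketch only - install and run the RDO Foreman installer on cert-23
# (assumes the RDO release repository is already configured).
yum install -y openstack-foreman-installer

cd /usr/share/openstack-foreman-installer/bin
export FOREMAN_PROVISIONING=false   # non-provisioning mode: hosts are installed beforehand
sh foreman_server.sh
</code>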

  * Configuration of an **OpenStack all-in-one** instance through **Foreman** - **cert-30**:
    * dashboard available at: http://cert-30.pd.infn.it
    * log file available at: {{:progetti:cloud-areapd:cert-30_log.txt|}}, old test -> {{:progetti:cloud-areapd:cert-33_log.txt|}}
    * after correctly setting the network parameters (see the log above), the configuration completed successfully
    * checks on the node: {{:progetti:cloud-areapd:cert-30_checks.txt|}}
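
On the client side the steps are roughly the following - a sketch assuming cert-23 acts as both puppet master and Foreman; the cert-30 log above shows the actual sequence used.

<code>
# Sketch only - register cert-30 against the puppet master / Foreman on cert-23.
# The first agent run generates a certificate request:
puppet agent --test --server cert-23.pd.infn.it

# On cert-23, sign the new certificate:
puppet cert sign cert-30.pd.infn.it

# Assign the host to the desired host group in the Foreman UI, then re-run the
# agent on cert-30 to apply the OpenStack all-in-one configuration:
puppet agent --test --server cert-23.pd.infn.it
</code>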

  * Configuration of an **OpenStack all-in-one** instance through **PackStack** - **cert-40**:
    * dashboard available at: http://cert-40.pd.infn.it, user: admin, pass: (see the keystonerc_admin file)
    * log file available at: {{:progetti:cloud-areapd:cert-40_log.txt|}}
    * to be used for comparing the results and the Puppet configurations produced by PackStack and by Foreman
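
For reference, the PackStack all-in-one deployment amounts to roughly the following - a sketch that assumes the RDO release repository is already configured, as recorded in the cert-40 log.

<code>
# Sketch only - all-in-one OpenStack install with PackStack on cert-40
yum install -y openstack-packstack
packstack --allinone

# Admin credentials end up in the keystonerc_admin file of the installing user
source /root/keystonerc_admin
</code>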

=== Second scenario - test Foreman provisioning & 2-node OpenStack ===

  * Installation & Configuration of **Foreman** - **cert-33**:
    * using **foreman-installer**, following the [[http://theforeman.org/manuals/1.3/index.html#3.InstallingForeman | Foreman 1.3 Manual]] (see the sketch after this list)
      * log file available at {{TOADD}}
    * TODO: install openstack-controller on XXXX & openstack-compute on cert-31
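
A hedged sketch of the foreman-installer steps from the Foreman 1.3 manual; the repository URL and prerequisites are quoted from memory, the manual linked above is authoritative.

<code>
# Sketch only - Foreman 1.3 via foreman-installer on SL6
# (the manual also lists EPEL and the Ruby SCL repository as prerequisites)
yum install -y http://yum.theforeman.org/releases/1.3/el6/x86_64/foreman-release.rpm
yum install -y foreman-installer

# Installs Foreman, a Puppet master and a smart proxy on this host with default options
foreman-installer
</code>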

===== Pre-Production =====

==== Testbed description ====
  * cert-01 - Foreman + PuppetMaster with MySQL backend: [[https://cert-01.pd.infn.it/|]] - no longer available; it has become cld-foreman, the production node.
    * bare-metal
    * SL6/x86_64
    * 2 NIC: eth0 (pub, 193.206.210.137), eth1 (priv)
    * 2 Quad-Core AMD Opteron(tm) 2360 SE, 2.5GHz, 2 disks 80GB in RAID1
  * cert-06 (....)
    * bare-metal
    * SL6/x86_64
    * 2 NIC: eth0 (pub.), eth1 (priv., not configured)
    * 2 Xeon 2.80GHz, 2G RAM, VDisk: 50GB

==== Testbed Installation & Configuration ====

  * Log of installation & configuration of **cert-01** - **Foreman server + PuppetMaster + TFTP + DHCP**: {{:progetti:cloud-areapd:installcert-01.txt|}} (a sketch of the relevant installer options follows below)
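
Since this setup adds TFTP and DHCP for provisioning, the foreman-installer options involved look roughly like the following; the option names, the eth1 interface and the DHCP range are assumptions, and the attached installcert-01.txt log shows the exact invocation used.

<code>
# Sketch only - enable the TFTP and DHCP smart-proxy features at install time
foreman-installer \
  --foreman-proxy-tftp=true \
  --foreman-proxy-dhcp=true \
  --foreman-proxy-dhcp-interface=eth1 \
  --foreman-proxy-dhcp-range="192.168.60.100 192.168.60.200"   # hypothetical range on the private network
</code>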



===== Production =====

  * Setup: **Foreman + Puppet server - cld-foreman.cloud.pd.infn.it**

  nslookup cld-foreman.cloud.pd.infn.it
  Server:        127.0.0.1
  Address:       127.0.0.1#53

  Name:    cld-foreman.cloud.pd.infn.it
  Address: 192.168.60.31


  * Installation & configuration log
    * see {{:progetti:cloud-areapd:cld-foreman_configuration.txt|}}
    * user: admin, passwd: the normal grid root-passwd
  * **NO** direct connection to the web interface is possible:

    https://cld-foreman.cloud.pd.infn.it
    Unable to connect
    Firefox can't establish a connection to the server at cld-foreman.cloud.pd.infn.it.

    * connecting through a tunnel - from a desktop **"inside"** the PD network:
  <code>
  - on cld-foreman:
  # ssh -R 6333:localhost:80 <user>@<inside_desktop>.pd.infn.it
  - on <inside_desktop> start firefox with "localhost:6333"
  </code>
    * connecting through a tunnel - from a computer **"outside"** the PD network:
  <code>
  - on cld-foreman:
  # ssh -R 6333:localhost:80 <user>@gate.pd.infn.it
  - on external computer:
  # ssh -L 5900:localhost:6333 <user>@gate.pd.infn.it
  - on external computer start firefox with "localhost:5900"
  </code>
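
A quick way to verify the chain from the external computer - a hypothetical check, assuming both tunnels above are up:

<code>
# Should return an HTTP response (or a redirect to HTTPS) from Foreman on cld-foreman
curl -I http://localhost:5900
</code>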

----

//[[Cristina.Aiftimiei@pd.infn.it|Doina Cristina Aiftimiei]], [[Sergio.Traldi@pd.infn.it|Sergio Traldi]] 2013/11/20 08:50//
