Foreman - tool install & config management

This page documents our experience with testing, and eventually deploying to production, an installation and configuration management service based on Foreman and Puppet.

Advice & (external) experiences regarding deployment

  • from Steve Traylen (CERN):
We run separate nodes for Puppet and Foreman, and I would recommend this simply because it's simpler; they scale at different rates.
We are currently running 6 or so Puppet masters and 6 or so Foreman nodes. Both are accessed behind two load-balanced Apache mod_proxy_balancer nodes.
We can hopefully add new nodes to the backend and frontend as required.
Having said that, we did originally run everything on one node. It works; we just wanted greater resilience.
I think we got to about 1000 hosts running puppet once per hour before we started to hit problems. The node was a 4-core, 2-year-old batch server.
Memory is not a huge problem; you run out of CPU before anything else.
We have always run the Foreman MySQL database on separate hardware, since CERN offers a service for this.
We also run 3 types of Foreman nodes, where we split traffic based on the URL. In particular, we had a problem where puppet reports being uploaded
to Foreman killed the Foreman service, so the web interface became unresponsive to users.
Splitting up the Foreman nodes by function has resolved this.
Clearly what we have above is pretty complicated and comes after nearly 2 years and probably 4 evolutions of the service.
I would start with two boxes, one for Foreman and one for Puppet, and then as quickly as possible plan to get an Apache mod_proxy_balancer (or HAProxy)
in front, or even start with this. All our Puppet and Foreman backends run Passenger to make them multi-threaded.
In addition we have two very high-quality, brand-new hardware boxes for PuppetDB and PostgreSQL. PuppetDB is where we are currently struggling
with scaling, for sure. We are actively working with Puppet Labs here to reduce the load that PuppetDB generates... This is making progress, but
getting this to more than one node currently has no obvious solution.
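The load-balanced front end described above can be sketched as an Apache mod_proxy_balancer fragment. This is only an illustrative sketch: the host names, port, and balancer name are placeholders, not the actual CERN configuration, and a real Puppet deployment additionally needs SSL client-certificate handling on the balancer, which is omitted here:

```apache
# Hypothetical balancer in front of several Puppet masters.
# Host names and ports are placeholders.
<Proxy balancer://puppetmasters>
    BalancerMember https://puppetmaster1.example.org:8140
    BalancerMember https://puppetmaster2.example.org:8140
</Proxy>

Listen 8140
<VirtualHost *:8140>
    SSLProxyEngine On
    ProxyPass        / balancer://puppetmasters/
    ProxyPassReverse / balancer://puppetmasters/
</VirtualHost>
```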
Our config is:
- One node, SL6.2, serving as
o   Puppetmaster running under passenger (with a recompiled ruby version, see another mail of mine in this thread about that)
o   Foreman server
o   Git (gitolite) server
o   CA server (the puppet CA)
o   Puppet fileserver
o   HTTP/Yum mirror for our OSes, EPEL, and any other repositories we might use (and one apt mirror, for our single Debian-managed host)
o   Disk is only relevant for the mirrors in our case: the MySQL DB is 500 MB after a few weeks, because of reports, but these can be (and are) cleaned up
- The node's specs (now):
o   Xeon E5-2640 0 @ 2.50 GHz (12 physical cores)
o   32 GB RAM, 27 GB free…
o   2 TB (RAID6) data filesystem, 185 GB used.
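The report cleanup mentioned above is typically done with Foreman's bundled rake task, run periodically from cron. The task name is as documented in the Foreman manual; the retention period here is just an example:

```shell
# Expire Foreman host reports older than 7 days (e.g. run nightly from cron).
foreman-rake reports:expire days=7
```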

Foreman acts as a web frontend and connects to "foreman proxy" daemons to run its actions, so in case of overload you can probably split the
puppetmaster/tftp/dhcp/foreman servers (the proxy must run on the puppetmaster), but these things are a bit sensitive to set up...
But to begin with, I would advise running both/all on the same host, as this surely rules out "same host" assumptions in the code/config.
At DESY we have 2 dedicated Puppet masters and 2 infrastructure servers which we share between PuppetDB, Foreman and GitLab.
As a database backend we use a 2-node PostgreSQL cluster. You can find more info in my talk (link above) at the last HEPiX.
With our current setup we are managing ~500 clients, and the load on the Puppet master servers is still very low (load average of ~3).
We are also using Passenger, which seems to be the highly recommended way of running Puppet in production environments.
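Since all the replies above recommend running the Puppet master under Passenger, a minimal Apache vhost in the style of the stock Puppet 3.x Passenger setup may serve as a reference point. The certificate paths and Rack directory are typical defaults on an SL6-era install and must be adapted to the local system:

```apache
# Sketch of a Puppet master vhost under Passenger; paths are typical
# Puppet 3.x defaults and may differ on your system.
Listen 8140
<VirtualHost *:8140>
    SSLEngine on
    SSLCertificateFile    /var/lib/puppet/ssl/certs/puppet.example.org.pem
    SSLCertificateKeyFile /var/lib/puppet/ssl/private_keys/puppet.example.org.pem
    SSLCACertificateFile  /var/lib/puppet/ssl/certs/ca.pem
    SSLVerifyClient optional
    SSLOptions +StdEnvVars +ExportCertData

    # Forward client-certificate information to the master.
    RequestHeader set X-Client-DN     %{SSL_CLIENT_S_DN}e
    RequestHeader set X-Client-Verify %{SSL_CLIENT_VERIFY}e

    DocumentRoot /usr/share/puppet/rack/puppetmasterd/public/
    RackBaseURI /
</VirtualHost>
```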

First Tests

Testbed description

  • one VM-node hosting Foreman+PuppetMaster
    • cert-23 (Foreman/PuppetMaster)
      • VM on cream-mstr-031
      • SL6.4/x86_64
      • 2 vNIC: eth0 (pub.), eth1 (priv.)
      • 2 CPU, 3G RAM, VDisk: 50GB
  • three VM-nodes hosting OpenStack all-in-one tests
    • cert-30 (OpenStack all-in-one through Foreman)
      • VM on cert-38
      • SL6.4/x86_64
      • 1 vNIC: eth0 (pub.)
      • 2 CPU, 3G RAM, VDisk: 25GB
    • cert-33 (OpenStack all-in-one through Foreman (1st test) & Foreman/PuppetMaster (2nd test))
      • VM on cert-03
      • SL6.4/x86_64
      • 1 vNIC: eth0 (pub.)
      • 2 CPU, 3G RAM, VDisk: 25GB
    • cert-40 (OpenStack all-in-one through PackStack)
      • VM on cert-38
      • SL6.4/x86_64
      • 1 vNIC: eth0 (pub.)
      • 2 CPU, 4G RAM, VDisk: 60GB

Testbed Installation & Configuration

First scenario - test Foreman non-provisioning & all-in-one OpenStack

  • Installation & Configuration Foreman - cert-23:
    • scenario - non-provisioning mode - the nodes installed through Foreman have to already be up & running (OS-level)
    • using openstack-foreman-installer, following Deploying_RDO_Using_Foreman
    • foreman "dashboard": http://cert-23.pd.infn.it (user: admin, pass: PD "normal" password, or user "all", pass: … (available on request :-)). Accessible from the "PD" network area
    • the installation & configuration log for the Foreman/PuppetMaster server can be found at: foreman_log.txt
  • Configuration Openstack all-in-one instance through PackStack - cert-40:
    • dashboard available at: http://cert-40.pd.infn.it, user: admin, pass: (see keystonerc_admin file)
    • log file available at: cert-40_log.txt
    • to be used to compare results and Puppet configurations between PackStack and Foreman.
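For reference, the PackStack all-in-one deployment performed on cert-40 boils down to a couple of commands, assuming the RDO repository is already configured on the host (package and option names are as documented by RDO at the time):

```shell
# Install PackStack from the RDO repository and run an all-in-one deployment.
yum install -y openstack-packstack
packstack --allinone

# Dashboard credentials end up in the generated keystonerc_admin file.
source ~/keystonerc_admin
```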

Second scenario - test Foreman provisioning & 2-nodes OpenStack

  • Installation & Configuration Foreman - cert-33:
    • using foreman-installer, following Foreman 1.3 Manual
      • log file available at toadd
    • TODO: install openstack-controller on XXXX & openstack-compute on cert-31
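The foreman-installer run on cert-33 can be sketched as follows; the release RPM URL and invocation follow the Foreman 1.3 manual and should be checked against it before use:

```shell
# Enable the Foreman 1.3 repository for EL6 and run the installer
# with its default (all-on-one-host) configuration.
yum install -y http://yum.theforeman.org/releases/1.3/el6/x86_64/foreman-release.rpm
yum install -y foreman-installer
foreman-installer
```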

Pre-Production

Testbed description

  • cert-01 - Foreman + PuppetMaster with MySQL backend: https://cert-01.pd.infn.it/ - no longer available: it has become the new cld-foreman production node.
    • bare-metal
    • SL6/x86_64
    • 2 NIC: eth0 (pub, 193.206.210.137), eth1 (priv)
    • 2 Quad-Core AMD Opteron™ 2360 SE @ 2.5 GHz, 2 × 80 GB disks in RAID1
  • cert-06 (….)
    • bare-metal
    • SL6/x86_64
    • 2 NIC: eth0 (pub.), eth1 (priv., not configured)
    • 2 Xeon 2.80GHz, 2G RAM, VDisk: 50GB

Testbed Installation & Configuration

  • Log of installation & configuration of cert-01 - Foreman server + PuppetMaster + TFTP + DHCP: installcert-01.txt

Production

  • Setup: Foreman + Puppet server - cld-foreman.cloud.pd.infn.it
nslookup cld-foreman.cloud.pd.infn.it
Server:        127.0.0.1
Address:    127.0.0.1#53

Name:    cld-foreman.cloud.pd.infn.it
Address: 192.168.60.31 
  • Installation & configuration log
  • NO direct connection to the web interface:
  https://cld-foreman.cloud.pd.infn.it
  Unable to connect
  Firefox can't establish a connection to the server at cld-foreman.cloud.pd.infn.it.
  • connecting through a tunnel - from a desktop "inside" the PD network
  - on cld-foreman:
  # ssh -R 6333:localhost:80 <user>@<inside_desktop>.pd.infn.it
  - on <inside_desktop>, start Firefox and open "localhost:6333"
  
  • connecting through a tunnel - from a computer "outside" the PD network
  - on cld-foreman:
  # ssh -R 6333:localhost:80 <user>@gate.pd.infn.it
  - on the external computer:
  # ssh -L 5900:localhost:6333 <user>@gate.pd.infn.it
  - on the external computer, start Firefox and open "localhost:5900"
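If gate.pd.infn.it can itself resolve and reach cld-foreman (an assumption; the double tunnel above suggests it may not), the same result can be obtained with a single local forward run from the external computer:

```shell
# Single-hop alternative, assuming gate can reach cld-foreman directly:
ssh -L 6333:cld-foreman.cloud.pd.infn.it:80 <user>@gate.pd.infn.it
# then open http://localhost:6333 in a local browser
```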
  

Doina Cristina Aiftimiei, Sergio Traldi 2013/11/20 08:50

progetti/cloud-areapd/foreman_-_tool_install_and_config_management.txt · Last modified: 2013/12/09 08:57 by aiftim@infn.it