===== MariaDB Galera Cluster Installation =====

{{:strutture:lnf:dr:calcolo:sistemi:screenshot_2023-07-19_at_11.20.59.png?direct&600|}}

=== Cluster details ===
We use three freshly deployed VMs running a minimal install of **CentOS 7** x86_64:

  * Cluster node 1 has hostname **HOST_G** and IP address **IP_HOST_G**
  * Cluster node 2 has hostname **HOST_L** and IP address **IP_HOST_L**
  * Cluster node 3 has hostname **HOST_F** and IP address **IP_HOST_F**

The cluster is reachable through the load balancer miracle.lnf.infn.it with IP address 193.206.84.215.
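
Galera uses several TCP ports besides the standard MySQL client port 3306: 4567 for cluster replication traffic, 4568 for Incremental State Transfer (IST) and 4444 for State Snapshot Transfer (SST). If a host firewall is active on the nodes (a point not covered by the steps below), these ports need to be open between cluster members; a hedged firewalld sketch:

<code>
# firewall-cmd --permanent --add-port={3306,4444,4567,4568}/tcp
# firewall-cmd --reload
</code>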

=== Add MariaDB Repositories ===
Create a MariaDB repository file ''/etc/yum.repos.d/mariadb.repo'' using the following command:

<code>
# cat > /etc/yum.repos.d/mariadb.repo
</code>

Insert the following directives:

<code>
# MariaDB 10.1 CentOS repository list - created 2015-10-14 13:45 UTC
# http://mariadb.org/mariadb/repositories/
[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/10.1/centos7-amd64
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1
</code>

Terminate the last line with the [Enter] key, then press [CTRL]+[D] to save and exit.
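
The same file can also be created in a single non-interactive step with a shell here-document; a sketch that writes to a temporary file for illustration (on the real nodes the target is ''/etc/yum.repos.d/mariadb.repo''):

```shell
# Non-interactive alternative to the interactive "cat >" above.
# REPO_FILE points at a temporary path for illustration only; on a cluster
# node it would be /etc/yum.repos.d/mariadb.repo (run as root).
REPO_FILE="$(mktemp)"
cat > "$REPO_FILE" <<'EOF'
# MariaDB 10.1 CentOS repository list - created 2015-10-14 13:45 UTC
# http://mariadb.org/mariadb/repositories/
[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/10.1/centos7-amd64
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1
EOF
grep -c '^baseurl' "$REPO_FILE"   # prints 1 when the file was written
```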

=== Install MariaDB Galera Cluster ===

Install MariaDB Galera Cluster by executing the following command on all nodes:

<code>
# yum install MariaDB-server MariaDB-client rsync galera
</code>

Start the mariadb service and enable it to start on boot:

<code>
# systemctl start mariadb
# systemctl enable mariadb
</code>

Run the ''mysql_secure_installation'' script to improve security. Run the following command on all nodes:

<code>
# /usr/bin/mysql_secure_installation
</code>

We do not set the root password and we accept all defaults.

=== Create MariaDB Galera Cluster users ===

Now we create the database user. ''sst_user'' is the user that a database node uses to authenticate to another database node during the State Snapshot Transfer (SST) phase.

On the **HOST_G** node run the following commands:
<code>
# mysql
mysql> GRANT USAGE ON *.* TO sst_user@'<IP_HOST_L>' IDENTIFIED BY '[PASSWORD]';
mysql> GRANT USAGE ON *.* TO sst_user@'<IP_HOST_F>' IDENTIFIED BY '[PASSWORD]';
mysql> GRANT ALL PRIVILEGES ON *.* TO sst_user;
mysql> FLUSH PRIVILEGES;
mysql> quit
</code>

On the **HOST_L** node run the following commands:
<code>
# mysql
mysql> GRANT USAGE ON *.* TO sst_user@'<IP_HOST_G>' IDENTIFIED BY '[PASSWORD]';
mysql> GRANT USAGE ON *.* TO sst_user@'<IP_HOST_F>' IDENTIFIED BY '[PASSWORD]';
mysql> GRANT ALL PRIVILEGES ON *.* TO sst_user;
mysql> FLUSH PRIVILEGES;
mysql> quit
</code>

Finally, on the **HOST_F** node run the following commands:
<code>
# mysql
mysql> GRANT USAGE ON *.* TO sst_user@'<IP_HOST_G>' IDENTIFIED BY '[PASSWORD]';
mysql> GRANT USAGE ON *.* TO sst_user@'<IP_HOST_L>' IDENTIFIED BY '[PASSWORD]';
mysql> GRANT ALL PRIVILEGES ON *.* TO sst_user;
mysql> FLUSH PRIVILEGES;
mysql> quit
</code>
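
To check that the grants landed as intended, the user table can be inspected from the mysql client on each node (illustrative; the Host column is expected to list the peer IP addresses):

<code>
mysql> SELECT User, Host FROM mysql.user WHERE User = 'sst_user';
</code>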

=== Create the MariaDB Galera Cluster config ===
First, stop the mariadb service on all nodes:

<code>
# systemctl stop mariadb
</code>

Next, create the MariaDB Galera Cluster configuration by editing the following file on all nodes:

<code>
# vi /etc/my.cnf.d/server.cnf
</code>

Insert the following directives in the ''[mariadb-10.1]'' section:

<code>
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
innodb_locks_unsafe_for_binlog=1
query_cache_size=1M
query_cache_type=OFF
bind-address=0.0.0.0
datadir=/var/lib/mysql
innodb_log_file_size=100M
innodb_file_per_table
innodb_flush_log_at_trx_commit=2
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address="gcomm://<IP_HOST_G>,<IP_HOST_L>,<IP_HOST_F>"
wsrep_cluster_name='miracle'
wsrep_node_address='<IP_HOST_G>'
wsrep_node_name='<HOST_G>'
wsrep_sst_method=rsync
wsrep_sst_auth=sst_user:[PASSWORD]
enforce_storage_engine='InnoDB'
wsrep_on=ON
sql_mode=''
</code>

Adjust the ''wsrep_node_address'' and ''wsrep_node_name'' variables on the HOST_L and HOST_F nodes:

On the **HOST_L** node:
<code>
wsrep_node_address='<IP_HOST_L>'
wsrep_node_name='<HOST_L>'
</code>

On the **HOST_F** node:
<code>
wsrep_node_address='<IP_HOST_F>'
wsrep_node_name='<HOST_F>'
</code>

=== Initialize the first cluster node HOST_G ===

Start MariaDB with the special ''--wsrep-new-cluster'' option only on the HOST_G node:

<code>
# systemctl set-environment MYSQLD_OPTS="--wsrep-new-cluster"
# systemctl start mariadb
# systemctl unset-environment MYSQLD_OPTS
</code>

Check the cluster status by running the following command on the node:

<code>
# mysql
mysql> SHOW STATUS LIKE 'wsrep%';
</code>
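
On a freshly bootstrapped single-node cluster, the status variables most worth checking are the following (the values shown are what is normally expected at this stage; exact output varies by version):

<code>
mysql> SHOW STATUS LIKE 'wsrep_cluster_size';        -- expected value: 1
mysql> SHOW STATUS LIKE 'wsrep_cluster_status';      -- expected value: Primary
mysql> SHOW STATUS LIKE 'wsrep_local_state_comment'; -- expected value: Synced
</code>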

=== Add the other cluster nodes ===
Start MariaDB on the other cluster nodes by executing the following command:

<code>
# systemctl start mariadb
</code>
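
After the remaining nodes have joined, re-checking the status on any node is expected to show the full member count:

<code>
mysql> SHOW STATUS LIKE 'wsrep_cluster_size';  -- expected value: 3
</code>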

=== Add disk and create logical partitions ===
Using oVirt, add a disk to each node of the cluster with the following size:
  * HOST_G: 150GB (100GB for databases and 50GB for backup)
  * HOST_L: 150GB (100GB for databases and 50GB for backup)
  * HOST_F: 220GB (100GB for databases and 120GB for snapshot and backup)

You can find the target partition name by typing:

<code>
# cat /proc/partitions
</code>

(in this case ''/dev/vdb'')

First, create the physical volume that will be used for LVM:

<code>
# pvcreate /dev/vdb
</code>

Then create a volume group named ''data'':
<code>
# vgcreate data /dev/vdb
</code>

On all nodes, create a logical volume named ''db'' in the ''data'' volume group with a size of 100GB:

<code>
# lvcreate -L 100G -n db data
</code>

Build an XFS filesystem on ''/dev/data/db'':

<code>
# mkfs.xfs /dev/data/db
</code>

Stop the mariadb service:

<code>
# systemctl stop mariadb
</code>

Mount ''/dev/data/db'':

<code>
# mount /dev/data/db /mnt
</code>

Move all files and folders in ''/var/lib/mysql/'' to ''/mnt/'':

<code>
# mv /var/lib/mysql/* /mnt/
</code>

Make user ''mysql'' and group ''mysql'' the owning user and group of the ''/mnt/'' directory:

<code>
# chown mysql:mysql /mnt/
</code>

Unmount the ''/mnt/'' directory:

<code>
# umount /mnt/
</code>

Edit the ''/etc/fstab'' file:

<code>
# vi /etc/fstab
</code>

and append the following line:

<code>
/dev/data/db /var/lib/mysql xfs noatime 0 0
</code>

NOTE: the ''noatime'' option matters because it stops the system from having to write to the filesystem for files that are merely being read.

Mount all filesystems mentioned in fstab:

<code>
# mount -a
</code>

Start the mariadb service:

<code>
# systemctl start mariadb
</code>

== Logical partitions for backup and snapshot ==

HOST_F is the node designated for backups. Its 220GB are assigned as follows:
  * 100GB for databases
  * 100GB for database backups
  * 10GB for the snapshot
  * 10GB that can be used to extend the previous logical volumes

The first 100GB are assigned to the databases as shown in the previous section.

If HOST_F is down, HOST_L (or HOST_G) can take over as the backup node, so we provision 50GB for database backups on HOST_G and HOST_L and 100GB on HOST_F.

Create a logical volume named **backup** in the **data** volume group with a size of 100GB on HOST_F:

<code>
# lvcreate -L 100G -n backup data
</code>

and with the maximum available free space on HOST_G and HOST_L:

<code>
# lvcreate -l 100%FREE -n backup data
</code>

Build an XFS filesystem on ''/dev/data/backup'':

<code>
# mkfs.xfs /dev/data/backup
</code>

Create the ''/backup/mysqldumps'' directory:

<code>
# mkdir -p /backup/mysqldumps
</code>

Only on HOST_F, create a snapshot logical volume 10GB in size named ''/dev/data/db-snap'' of the origin logical volume ''/dev/data/db'':

<code>
# lvcreate -L 10G -n db-snap -s /dev/data/db
</code>
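
Since an LVM snapshot is a fixed point-in-time copy, it is typically dropped and recreated just before each backup run so that the backup sees current data; a hedged sketch using the device and mount names from this page:

<code>
# umount /backup/db-snap
# lvremove -f /dev/data/db-snap
# lvcreate -L 10G -n db-snap -s /dev/data/db
# mount /backup/db-snap
</code>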

Create the ''/backup/db-snap'' directory:

<code>
# mkdir /backup/db-snap
</code>

Append the following line to the ''/etc/fstab'' file:

<code>
/dev/data/backup /backup xfs defaults 0 0
</code>

On HOST_F, also append:

<code>
/dev/data/db-snap /backup/db-snap xfs ro,nouuid 0 0
</code>

Mount all filesystems mentioned in fstab:

<code>
# mount -a
</code>


=== Backup ===
