Galera Cluster MariaDB Installation
Cluster details
We are using three freshly deployed VMs running a minimal install of CentOS 7 x86_64:
- Cluster node 1 has hostname HOST_G and IP address IP_HOST_G
- Cluster node 2 has hostname HOST_L and IP address IP_HOST_L
- Cluster node 3 has hostname HOST_F and IP address IP_HOST_F
The cluster is reachable through the load balancer miracle.lnf.infn.it with IP address 193.206.84.215.
Add MariaDB Repositories
Create a MariaDB repository file /etc/yum.repos.d/mariadb.repo using the following command:
# cat > /etc/yum.repos.d/mariadb.repo
Insert the following directives:
# MariaDB 10.1 CentOS repository list - created 2015-10-14 13:45 UTC
# http://mariadb.org/mariadb/repositories/
[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/10.1/centos7-amd64
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1
Terminate the last line with the [Enter] key. To save and exit, press the [CTRL]+[D] keys.
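For reference, the same file can be written non-interactively with a here-document. This is a sketch: it writes to a temporary file so it can be run safely anywhere; on the cluster nodes the target path is /etc/yum.repos.d/mariadb.repo.

```shell
# Sketch: create the repo file with a here-document instead of interactive cat.
# A temporary file is used here for safety; on the real nodes the target
# path is /etc/yum.repos.d/mariadb.repo.
repo_file=$(mktemp)
cat > "$repo_file" <<'EOF'
# MariaDB 10.1 CentOS repository list
# http://mariadb.org/mariadb/repositories/
[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/10.1/centos7-amd64
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1
EOF
grep '^baseurl' "$repo_file"
```

The quoted 'EOF' delimiter prevents the shell from expanding anything inside the document, so the file content lands byte-for-byte as written.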
Install MariaDB Galera Cluster
Install the MariaDB Galera Cluster by executing the following command on all nodes:
# yum install MariaDB-server MariaDB-client rsync galera
Start the mariadb service and enable it to start on boot:
# systemctl start mariadb
# systemctl enable mariadb
Run the mysql_secure_installation script to improve security. Run the following command on all nodes:
# /usr/bin/mysql_secure_installation
We do not set a root password and accept all defaults.
Create MariaDB Galera Cluster users
Now, we create the database user. The sst_user is the user that a database node uses to authenticate to another database node during the State Snapshot Transfer (SST) phase.
On the HOST_G node run the following command:
# mysql
mysql> GRANT USAGE ON *.* TO sst_user@'<IP_HOST_L>' IDENTIFIED BY '[PASSWORD]';
mysql> GRANT USAGE ON *.* TO sst_user@'<IP_HOST_F>' IDENTIFIED BY '[PASSWORD]';
mysql> GRANT ALL PRIVILEGES ON *.* TO sst_user;
mysql> FLUSH PRIVILEGES;
mysql> quit
On the HOST_L node run the following command:
# mysql
mysql> GRANT USAGE ON *.* TO sst_user@'<IP_HOST_G>' IDENTIFIED BY '[PASSWORD]';
mysql> GRANT USAGE ON *.* TO sst_user@'<IP_HOST_F>' IDENTIFIED BY '[PASSWORD]';
mysql> GRANT ALL PRIVILEGES ON *.* TO sst_user;
mysql> FLUSH PRIVILEGES;
mysql> quit
Finally, on the HOST_F node run the following command:
# mysql
mysql> GRANT USAGE ON *.* TO sst_user@'<IP_HOST_G>' IDENTIFIED BY '[PASSWORD]';
mysql> GRANT USAGE ON *.* TO sst_user@'<IP_HOST_L>' IDENTIFIED BY '[PASSWORD]';
mysql> GRANT ALL PRIVILEGES ON *.* TO sst_user;
mysql> FLUSH PRIVILEGES;
mysql> quit
Create the MariaDB Galera Cluster config
First, stop the mariadb service on all nodes:
# systemctl stop mariadb
Next, create the MariaDB Galera Cluster configuration by editing the following file on all nodes:
# vi /etc/my.cnf.d/server.cnf
Insert in the [mariadb-10.1] section the following directives:
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
innodb_locks_unsafe_for_binlog=1
query_cache_size=1M
query_cache_type=OFF
bind-address=0.0.0.0
datadir=/var/lib/mysql
innodb_log_file_size=100M
innodb_file_per_table
innodb_flush_log_at_trx_commit=2
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address="gcomm://<IP_HOST_G>,<IP_HOST_L>,<IP_HOST_F>"
wsrep_cluster_name='miracle'
wsrep_node_address='<IP_HOST_G>'
wsrep_node_name='<HOST_G>'
wsrep_sst_method=rsync
wsrep_sst_auth=sst_user:[PASSWORD]
enforce_storage_engine='InnoDB'
wsrep_on=ON
sql_mode=''
Adjust the wsrep_node_address and wsrep_node_name variables on the HOST_L and HOST_F nodes.
On HOST_L node:
wsrep_node_address='<IP_HOST_L>'
wsrep_node_name='<HOST_L>'
On HOST_F node:
wsrep_node_address='<IP_HOST_F>'
wsrep_node_name='<HOST_F>'
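Stamping the per-node values can also be scripted. The sketch below uses sed on a two-line template; <NODE_IP> and <NODE_NAME> are hypothetical template markers, and the substituted values are the document's own placeholders for HOST_L.

```shell
# Sketch: generate the per-node wsrep lines from a template with sed.
# <NODE_IP> and <NODE_NAME> are hypothetical template markers, not
# variables used anywhere else in this guide.
template="wsrep_node_address='<NODE_IP>'
wsrep_node_name='<NODE_NAME>'"

printf '%s\n' "$template" \
  | sed -e "s/<NODE_IP>/IP_HOST_L/" -e "s/<NODE_NAME>/HOST_L/"
```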
Initialize the first cluster node HOST_G
Start MariaDB with the special --wsrep-new-cluster option only on the HOST_G node:
# systemctl set-environment MYSQLD_OPTS="--wsrep-new-cluster"
# systemctl start mariadb
# systemctl unset-environment MYSQLD_OPTS
Check the cluster status by running the following command on the node:
# mysql
mysql> SHOW STATUS LIKE 'wsrep%';
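The two values worth checking first are wsrep_cluster_size and wsrep_cluster_status (which should be Primary). The check can be scripted; the sketch below parses illustrative sample output with awk, and on a live node you would feed it the output of mysql -N -e "SHOW STATUS LIKE 'wsrep%'" instead.

```shell
# Sketch: verify cluster health from 'wsrep%' status output.
# The sample below is illustrative; on a live node use:
#   mysql -N -e "SHOW STATUS LIKE 'wsrep%'"
status="wsrep_cluster_size 3
wsrep_cluster_status Primary
wsrep_ready ON"

size=$(printf '%s\n' "$status" | awk '$1=="wsrep_cluster_size"{print $2}')
state=$(printf '%s\n' "$status" | awk '$1=="wsrep_cluster_status"{print $2}')

if [ "$size" = "3" ] && [ "$state" = "Primary" ]; then
  echo "cluster healthy: $size nodes, $state"
else
  echo "cluster NOT healthy" >&2
fi
```

Once all three nodes have joined, wsrep_cluster_size reports 3; while only HOST_G is up it reports 1.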
Add the other cluster nodes
Start MariaDB on the other cluster nodes by executing the following command:
# systemctl start mariadb
Add disks and create logical volumes
Using oVirt, add a disk on each node of the cluster with the following sizes:
- HOST_G: 150GB (100GB for databases and 50GB for backups)
- HOST_L: 150GB (100GB for databases and 50GB for backups)
- HOST_F: 220GB (100GB for databases and 120GB for snapshots and backups)
You can find the target partition name by typing:
# cat /proc/partitions
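The newly added disk is the whole-disk entry without a trailing partition number. The awk sketch below demonstrates the idea on sample /proc/partitions content; on a real node you would read the file itself.

```shell
# Sketch: list whole-disk device names (no trailing partition number)
# from /proc/partitions-style output. The sample content is illustrative;
# on a real node use:
#   awk 'NR>2 && $4 !~ /[0-9]$/ {print $4}' /proc/partitions
sample="major minor  #blocks  name

 252        0   52428800 vda
 252        1    1048576 vda1
 252       16  157286400 vdb"

printf '%s\n' "$sample" | awk 'NR>2 && $4 !~ /[0-9]$/ {print $4}'
```

In the sample, vda1 is filtered out as a partition of vda, leaving the two whole disks vda and vdb; the new, still-unpartitioned disk is vdb.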
(in this case /dev/vdb).
First, create a physical volume that will be used for LVM:
# pvcreate /dev/vdb
Then create a volume group named data:
# vgcreate data /dev/vdb
On all nodes, create a 100GB logical volume named db in the data volume group:
# lvcreate -L 100GB -n db data
Build an XFS filesystem on /dev/data/db:
# mkfs.xfs /dev/data/db
Stop the mariadb service:
# systemctl stop mariadb
Mount /dev/data/db on /mnt:
# mount /dev/data/db /mnt
Move all files and folders from /var/lib/mysql/ to /mnt/:
# mv /var/lib/mysql/* /mnt/
Make the user mysql and the group mysql the owning user and owning group of the /mnt/ directory:
# chown mysql:mysql /mnt/
Unmount the /mnt/ directory:
# umount /mnt/
Edit the /etc/fstab file:
# vi /etc/fstab
Append the following content:
/dev/data/db /var/lib/mysql xfs noatime 0 0
NOTE: The noatime option is important because it eliminates the writes the system would otherwise make to the filesystem for files that are simply being read.
Mount all filesystems mentioned in fstab:
# mount -a
Start mariadb service:
# systemctl start mariadb
Logical volumes for backup and snapshot
HOST_F is the node designated for backups. Its 220GB are assigned as follows:
- 100GB to database
- 100GB to databases backup
- 10GB to snapshot
- 10GB can be used to extend previous logical volumes
The first 100GB are assigned to the databases, as shown in the previous section.
If HOST_F is down, HOST_L (or HOST_G) can take over as the backup node. So we prepare 50GB for database backups on HOST_G and HOST_L, and 100GB on HOST_F.
Create a logical volume named backup in the data volume group, with 100GB size on HOST_F:
# lvcreate -L 100GB -n backup data
and with the maximum available free size on HOST_G and HOST_L:
# lvcreate -l 100%FREE -n backup data
Build an XFS filesystem on /dev/data/backup:
# mkfs.xfs /dev/data/backup
Create the /backup/mysqldumps directory:
# mkdir -p /backup/mysqldumps
Only on HOST_F, create a 10GB snapshot logical volume named /dev/data/db-snap of the origin logical volume /dev/data/db:
# lvcreate -L 10G -n db-snap -s /dev/data/db
Create the /backup/db-snap directory:
# mkdir /backup/db-snap
Append the following content to the /etc/fstab file:
/dev/data/backup /backup xfs defaults 0 0
On HOST_F append also:
/dev/data/db-snap /backup/db-snap xfs ro,nouuid 0 0
Mount all filesystems mentioned in fstab:
# mount -a
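With the backup and snapshot filesystems mounted, a backup run on HOST_F can archive the mounted snapshot into the backup area. The sketch below simulates this flow under temporary directories so the logic can be shown without touching LVM; on the real node the source would be /backup/db-snap and the destination /backup/mysqldumps.

```shell
# Sketch of the snapshot-archive flow, simulated under temporary paths.
# On HOST_F the real source would be /backup/db-snap (the mounted snapshot)
# and the real destination /backup/mysqldumps.
snap_dir=$(mktemp -d)   # stands in for /backup/db-snap
dest_dir=$(mktemp -d)   # stands in for /backup/mysqldumps

echo "dummy table data" > "$snap_dir/ibdata1"   # fake database file

# Archive the snapshot contents under a date-stamped name.
tar -C "$snap_dir" -czf "$dest_dir/db-$(date +%F).tar.gz" .

ls "$dest_dir"
```

Because the snapshot is mounted read-only (ro,nouuid in fstab), archiving it does not disturb the running database on /var/lib/mysql.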