TestPlan

System Tests

Basic functionality tests:

Gluster server or NFS server tests:

1.1 Bricks or export

Gluster

The brick is the storage filesystem that has been assigned to a volume. To check the bricks on each server host, ensure there is a device properly mounted on a path with an xfs or ext4 filesystem.

<code_bash>

 df -hT

</code> There should be one or more lines like the following (one for each brick to be added to the gluster volume): <code_bash>

 ...
 /dev/mapper/mpathc                 xfs             2.2T  1G  2.2T   1% /brick-nova
 ...

</code>

NFS

The export is the directory that will be exported to the remote hosts. Check that the directory is mounted and that it is listed in /etc/exports (see the additional check after the df output below). <code_bash>

 df -hT

</code> There should be one or more lines like the following (one for each directory to be exported): <code_bash>

 ...
 /dev/mapper/mpathc                 xfs             2.2T  1G  2.2T   1% /nfs-path
...

</code>
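
To verify that the directory is actually listed in /etc/exports, a quick additional check (the exportfs output format may vary slightly between distributions): <code_bash>

 cat /etc/exports
 exportfs -v

</code>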

1.2 Glusterd or NFS status

Gluster

Check the glusterd status, also with a ps -ef or ps aux command, and ensure glusterd is running. <code_bash>

 service glusterd status
 or
 /bin/systemctl status  glusterd.service
 
 ps -ef | grep gluster

</code>

NFS

Check the status of all the NFS cluster services, also with a ps -ef or ps aux command, and ensure NFS is running. <code_bash>

 service nfs status
 service pcsd status
 ...
 or
 /bin/systemctl status  nfs.service
 /bin/systemctl status  pcsd.service
 ....

ps -ef | grep nfs

 ps -ef | grep pcsd
 ps -ef | grep iscsi
 ...

</code>

1.3 Peers or Cluster node status

Gluster

Check the peer status of the gluster servers in the cluster. <code_bash>

 gluster peer status

</code>

NFS

Check the cluster status of the NFS servers in the cluster (see the sketch below).
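
A minimal sketch of this check, assuming the NFS cluster is managed with pacemaker/corosync via pcs (as the pcsd service used elsewhere in this plan suggests): <code_bash>

 pcs status
 pcs cluster status

</code>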

1.4 Volume or LVM status

Gluster

After the volume has been created and started, check the volume status with the gluster command. <code_bash>

 gluster volume status <volume_name>

</code>

NFS

After the LVM volumes have been created, check the volume group and logical volume status with the LVM commands. <code_bash>

 lvm vgdisplay
 lvm lvdisplay

</code>

1.5 Volume or LVM info

Gluster

After the volume has been created and started, check the volume info with the gluster command. <code_bash>

 gluster volume info <volume_name>

</code>

NFS

After the LVM volumes have been created, check the volume group and logical volume info with the LVM commands. <code_bash>

 lvm vgdisplay
 lvm lvdisplay

</code>

Tests from clients:

2.1 Mount of a volume

2.1.1 Mount the gluster or NFS volume on a mount point, using one of the servers in the cluster.

Gluster

<code_bash>

 mount -t glusterfs <ip_gluster_server_1>:<volume_name> /mnt

</code>

NFS

<code_bash>

 mount -t nfs <ip_vip_cluster>:<exports> /mnt

</code>

2.1.2 Unmount the volume from the client. Stop the NFS or glusterd process on server_1, then mount the NFS or gluster volume on a mount point using gluster_server_1 or the VIP. The unmount and service-stop steps are sketched after the mount commands below.

Gluster

<code_bash>

 mount -t glusterfs <gluster_server_1>:<volume_name> /mnt

</code>

NFS

<code_bash>

 mount -t nfs <ip_vip_cluster>:<exports> /mnt

</code>
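
The unmount and service-stop steps are not spelled out above; a minimal sketch, assuming the volume is mounted on /mnt on the client and that server_1 runs the services (adapt the service names to your init system): <code_bash>

 # on the client
 umount /mnt

 # on server_1 (gluster)
 service glusterd stop

 # on server_1 (NFS cluster)
 service nfs stop
 service pcsd stop

</code>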

2.1.3 Start the glusterd process on gluster_server_1 and stop the glusterd process on gluster_server_2, then mount the gluster volume on a mount point using gluster_server_2.

<code_bash>

 mount -t glusterfs <gluster_server_2>:<volume_name> /mnt

</code>

This test is not needed for NFS when using the VIP. Restart all the glusterd or NFS processes and mount the gluster or NFS volume on a path like /mnt (a restart sketch follows).
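
A minimal restart sketch, assuming the SysV-style service names used elsewhere in this plan; run it on every server in the cluster: <code_bash>

 # gluster servers
 service glusterd start

 # NFS cluster servers
 service nfs start
 service pcsd start

</code>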

2.2 Write a single file

Gluster

Write a file to the gluster volume just mounted (the file created will be 1 GB) and compute its checksum: <code_bash>

 dd if=/dev/zero of=/mnt/testFile1 bs=1M count=1024 conv=fdatasync
 md5sum /mnt/testFile1

</code>

NFS

Write a file to the NFS volume just mounted (the file created will be 1 GB) and compute its checksum: <code_bash>

 dd if=/dev/zero of=/mnt/testFile1 bs=1M count=1024 conv=fdatasync
 md5sum /mnt/testFile1

</code>

2.3 Read a single file

Gluster

Read a file in the gluster volume just mounted. <code_bash>

 head /mnt/testFile1
 md5sum /mnt/testFile1

</code>

NFS

Read a file in the NFS volume just mounted. <code_bash>

 head /mnt/testFile1
 md5sum /mnt/testFile1

</code>

2.4 Write on a volume concurrently from more clients

Mount the NFS or gluster volume from 2 clients and write two files concurrently to the volume. Change the X value to 1 or 2 in <client_X> in the commands below. From each client run the following (the file created will be about 100 GB): <code_bash>

 dd if=/dev/zero of=/mnt/testFile_<client_X>_1 bs=100M count=1024 conv=fdatasync
 md5sum /mnt/testFile_<client_X>_1

</code>

2.5 Read on a volume concurrently from more clients

Mount the NFS or gluster volume from 2 clients and read the same file concurrently from the volume. From each client run: <code_bash>

 cat /mnt/testFile_client_1_1

</code>

2.6 Write and read on a volume concurrently from more clients

Mount the NFS or gluster volume from 2 clients and write two files concurrently to the volume, reading each file while it is being written. Change the X value to 1 or 2 in <client_X> in the commands below (the file created will be about 100 GB). On each client open two shells; in the first one run: <code_bash>

 dd if=/dev/zero of=/mnt/testFile_<client_X>_1 bs=100M count=1024 conv=fdatasync 

</code> In the other shell: <code_bash>

 tail -f /mnt/testFile_<client_X>_1 

</code>

2.7 From one client write a big file and stop NFS/pcsd or glusterd process from one server

Mount the NFS or gluster volume from 1 client and write 1 big file (250 GB) to the volume. While the file is being written, stop the NFS/pcsd or glusterd processes on one server. Check with ps -ef that there are no NFS/pcsd or gluster processes left on that server (if there are glusterfsd processes, first try leaving them alive, then try killing them).

From client

Create the file in /root, or wherever there is enough space for the big file, check the md5 of the file, copy it to the mount point where the gluster or NFS volume has been mounted, and at the end of the copy check the md5 again. <code_bash>

 dd if=/dev/zero of=/root/BigFile_1 bs=1G count=254
 md5sum  /root/BigFile_1
 cp /root/BigFile_1 /mnt/
 md5sum /mnt/BigFile_1

</code>

From the gluster or NFS server

While the file copy is running:

Gluster

<code_bash>

service glusterd stop
ps -ef | grep -i gluster

</code>

NFS

<code_bash>

service nfs stop
service pcsd stop
ps -ef | grep -i nfs
ps -ef | grep -i pcsd

</code>

If there are NFS/pcsd or gluster processes still running, try to:

2.7.1 leave the NFS/pcsd or glusterfsd processes alive
2.7.2 kill the NFS/pcsd or glusterfsd processes (see the sketch below)
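
A minimal sketch for step 2.7.2, assuming pkill is available and that only the remaining daemons need to be killed; adapt the process names to your setup: <code_bash>

 # gluster brick daemons
 pkill -9 glusterfsd

 # pacemaker/corosync daemon used by the NFS cluster
 pkill -9 pcsd

</code>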

2.8 From one client write a big file and stop NFS/pcsd or glusterd process from other server

Same as test 2.7, but stopping the NFS/pcsd or glusterd processes on server_2.

2.9 From one client write a big file and kill all NFS/pcsd or glusterd process from one server

Mount the NFS or gluster volume from 1 client and write 1 big file (250 GB) to the volume. While the file is being written, kill (with kill -9) the NFS/pcsd or glusterd processes on one server.

From client

Create the file in /root, or wherever there is enough space for the big file, check the md5 of the file, copy it to the mount point where the gluster or NFS volume has been mounted, and at the end of the copy check the md5 again. <code_bash>

 dd if=/dev/zero of=/root/BigFile_1 bs=1G count=254
 md5sum  /root/BigFile_1
 cp /root/BigFile_1 /mnt/
 md5sum /mnt/BigFile_1

</code>

From the gluster or NFS server

While the file copy is running:

<code_bash>

for PID in `ps -ef | grep -i -e nfs -e pcsd -e gluster | grep -v grep | awk '{ print $2 }'`; do
   kill -9 $PID;
 done

</code>

2.10 From one client write a big file and kill all service processes on the other server

Same as test 2.9, but killing the NFS/pcsd or gluster processes on server_2.

2.11 From one client write a big file and shutdown one server

Mount the NFS or gluster volume from 1 client and write 1 big file (250 GB) to the volume. While the file is being written, power off server_1.

From client

Create the file in /root, or wherever there is enough space for the big file, check the md5 of the file, copy it to the mount point where the gluster or NFS volume has been mounted, and at the end of the copy check the md5 again. <code_bash>

 dd if=/dev/zero of=/root/BigFile_1 bs=1G count=254
 md5sum  /root/BigFile_1
 cp /root/BigFile_1 /mnt/
 md5sum /mnt/BigFile_1

</code>

From the NFS or gluster server

While the file copy is running: <code_bash> poweroff </code>

2.12 From one client write a big file and shutdown other server

Same as test 2.11, but power off server_2.

2.13 From one client write a big file and disable the network interface

Mount the NFS or gluster volume from 1 client and write 1 big file (250 GB) to the volume. While the file is being written, bring down (ifdown) the network interface that NFS or gluster is using on server_1. Check with ps -ef that there are no gluster or NFS processes left on the server (if there are NFS/pcsd or glusterfsd processes, first try leaving them alive, then try killing them).

From client

Create the file in /root, or wherever there is enough space for the big file, check the md5 of the file, copy it to the mount point where the gluster or NFS volume has been mounted, and at the end of the copy check the md5 again. <code_bash>

 dd if=/dev/zero of=/root/BigFile_1 bs=1G count=254
 md5sum  /root/BigFile_1
 cp /root/BigFile_1 /mnt/
 md5sum /mnt/BigFile_1

</code>

From the NFS or gluster server

While the file copy is running (we assume eth1 is the interface NFS or gluster is using): <code_bash>

 ifdown eth1
 ps -ef | grep -i gluster
 ps -ef | grep -i nfs
 ps -ef | grep -i pcsd

</code>

If there are NFS/pcsd or gluster processes still running, try to:

2.13.1 leave the NFS/pcsd or glusterfsd processes alive
2.13.2 kill the NFS/pcsd or glusterfsd processes

2.14 From one client write a big file and disable the network interface on the other server

Same as test 2.13, but bring down eth1 on server_2.

2.15 From one client write a big file and stop NFS/pcsd or glusterd process from all servers

Mount the NFS or gluster volume from 1 client and write 1 big file (250 GB) to the volume. While the file is being written, stop the NFS/pcsd or glusterd processes on all servers. Check with ps -ef that there are no NFS/pcsd or gluster processes left on the servers (if there are NFS/pcsd or glusterfsd processes, first try leaving them alive, then try killing them).

From client

Create the file in /root, or wherever there is enough space for the big file, check the md5 of the file, copy it to the mount point where the gluster or NFS volume has been mounted, and at the end of the copy check the md5 again. <code_bash>

 dd if=/dev/zero of=/root/BigFile_1 bs=1G count=254
 md5sum  /root/BigFile_1
 cp /root/BigFile_1 /mnt/
 md5sum /mnt/BigFile_1

</code>

From all NFS or gluster servers

While the file copy is running: <code_bash>

service glusterd stop
service nfs stop
service pcsd stop
ps -ef | grep -i gluster
ps -ef | grep -i nfs
ps -ef | grep -i pcsd

</code>

If there are NFS/pcsd or gluster processes still running, try to:

2.15.1 leave the NFS/pcsd or glusterfsd processes alive
2.15.2 kill the NFS/pcsd or glusterfsd processes

2.16 From one client write a big file and disable the network interface on each server. After 3 minutes enable the network interfaces again

Mount the NFS or gluster volume from 1 client and write 1 big file (250 GB) to the volume. While the file is being written, bring down the network interface on all servers. After 3 minutes, bring the network interfaces back up.

From client

Create the file in /root, or wherever there is enough space for the big file, check the md5 of the file, copy it to the mount point where the gluster or NFS volume has been mounted, and at the end of the copy check the md5 again. <code_bash>

 dd if=/dev/zero of=/root/BigFile_1 bs=1G count=254
 md5sum  /root/BigFile_1
 cp /root/BigFile_1 /mnt/
 md5sum /mnt/BigFile_1

</code>

From all servers

While the file copy is running (we assume eth1 is the interface NFS or gluster is using): <code_bash>

 ifdown eth1
 ps -ef | grep -i gluster
 ps -ef | grep -i pcsd
 ps -ef | grep -i nfs

 sleep 180; ifup eth1

</code> If there are NFS/pcsd or gluster processes still running, leave them running.

ONLY FOR GLUSTER (if needed): all the tests above have been run with a simple mount. If some tests fail, try to mount the volume with a backup volfile server and run all the tests above again: <code_bash>

 mount -t glusterfs -o backup-volfile-servers=<ip_gluster_server_2> <ip_gluster_server_1>:<volume_name> /mnt

</code>

Tests from Openstack dashboard

Prerequisite

From the compute node

Use only a compute node that has the nova /var/lib/nova/instances path mounted on the NFS or gluster volume (a verification check follows the mount commands below).

Gluster

<code_bash> mount -t glusterfs -obackup-volfile-servers=<ip_gluster_server_2> <ip_gluster_server_1>:<volume_name> /var/lib/nova/instances </code>

NFS

<code_bash> mount -t nfs <vip_nfs_server>:<volume_nfs> /var/lib/nova/instances </code>
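
After mounting, a quick check that /var/lib/nova/instances is really backed by the shared volume (the filesystem type shown should be fuse.glusterfs or nfs, depending on the setup): <code_bash>

 df -hT /var/lib/nova/instances
 mount | grep /var/lib/nova/instances

</code>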

From the NFS or gluster server

Ensure the volume has the right permissions for the nova user (usually uid 162 and gid 162 by default); a verification sketch follows the commands below. From one of the servers:

Gluster

<code_bash>

 gluster volume set <volume_name> storage.owner-uid 162
 gluster volume set <volume_name> storage.owner-gid 162

</code>

NFS

<code_bash> chown 162:162 <lvm_name_path> </code>
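
A minimal sketch to verify the ownership from the compute node after mounting (162:162, or nova:nova if the uid/gid are resolved, should be reported): <code_bash>

 stat -c '%u:%g %U:%G' /var/lib/nova/instances

</code>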

Create 5 virtual machines on the volume mounted from gluster or NFS, and at every step of the test plan below check that the virtual machines are running and that their filesystem is not in read-only mode (a sketch of the check follows):
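
A minimal sketch of the per-step check, assuming the nova client is available and that you can log into one of the test VMs: <code_bash>

 # from a host with the nova client
 nova list

 # inside one of the test VMs: a write must succeed if the root
 # filesystem has not been remounted read-only
 touch /tmp/rw_test && echo "filesystem is writable"
 mount | grep ' / '

</code>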

3.1 From first server stop NFS/pcsd or glusterd process. Check the virtual hosts.

From gluster server 1

<code_bash>

service glusterd stop

</code>

From NFS server 1

<code_bash>

service nfs stop
service pcsd stop

</code>

From the dashboard or from the nova client

Check that the VMs are running and working fine.

3.2 Start NFS/pcsd or glusterd process from first server and stop the NFS/pcsd or glusterd process from second server. Check the virtual hosts.

From server 1

Gluster

<code_bash>

service glusterd start

</code>

NFS

<code_bash>

service nfs start
service pcsd start

</code>

From server 2

Gluster

<code_bash>

service glusterd stop

</code>

NFS

<code_bash>

service nfs stop
service pcsd stop

</code>

From the dashboard or from the nova client

Check that the VMs are running and working fine.

Start all the NFS/pcsd or glusterd processes on all the servers and confirm the cluster is healthy again (a check sketch follows).
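
A minimal sketch of the health check, using the status commands already shown in the basic functionality tests: <code_bash>

 # gluster servers
 gluster peer status
 gluster volume status

 # NFS cluster servers
 pcs status

</code>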

3.3 Shutdown first server. Check the virtual hosts.

From NFS or gluster server 1

<code_bash> poweroff </code>

From the dashboard or from the nova client

Check that the VMs are running and working fine.

3.4 Power on the first server and ensure the Basic functionality tests pass. Shut down the second server. Check the virtual hosts.

From NFS or gluster server 1

Power on the server and check that the NFS/pcsd or gluster processes are running. Check the integrity of the volume with the gluster volume status and info commands, or with the lvm commands.

Gluster

<code_bash>

 service glusterd status
 gluster volume status
 gluster volume info

</code>

NFS

<code_bash>

 service nfs status
 service pcsd status
 lvm lvdisplay

</code>

From NFS or gluster server 2

<code_bash> poweroff </code>

From the dashboard or from the nova client

Check that the VMs are running and working fine.

3.5 Power on the second server and ensure the Basic functionality tests pass.

From NFS or gluster server 2

Power on the server, check that the NFS/pcsd or gluster processes are running, and check the integrity of the volume with the volume status and info commands (or with the lvm commands).

Gluster

<code_bash>

 service glusterd status
 gluster volume status
 gluster volume info

</code>

NFS

<code_bash>

 service nfs status
 service pcsd status
 lvm lvdisplay

</code>

From the dashboard or from the nova client

Check that the VMs are running and working fine.

3.6 Kill all NFS/pcsd or gluster process from first server. Check the virtual hosts.

From NFS or gluster server 1 <code_bash>

for PID in `ps -ef | grep -i -e nfs -e pcsd -e gluster | grep -v grep | awk '{ print $2 }'`; do
   kill -9 $PID;
 done

</code>

From the dashboard or from the nova client

Check that the VMs are running and working fine.

3.7 Restart all NFS/pcsd or gluster process on server 1 and kill all NFS/pcsd or gluster process from second server.

From NFS or gluster server 1 <code_bash>

 service glusterd start
 service nfs start
 service pcsd start

</code>

From NFS or gluster server 2 <code_bash>

for PID in `ps -ef | grep -i -e nfs -e pcsd -e gluster | grep -v grep | awk '{ print $2 }'`; do
   kill -9 $PID;
 done

</code>

3.8 Disable the network interface NFS or gluster is using from first server. Check the virtual hosts.

From NFS or gluster server 1 (we assume eth1 is the interface NFS or gluster is using):

<code_bash> ifdown eth1 </code>

From the dashboard or from the nova client

Check that the VMs are running and working fine.

3.9 Enable the network interface NFS or gluster is using on the first server, check the NFS/pcsd or gluster services, and disable the network interface on the second server. Check the virtual hosts.

From NFS or gluster server 1 (we assume eth1 is the interface NFS or gluster is using):

Gluster

<code_bash>

 ifup eth1
 service glusterd status
 gluster volume status
 gluster volume info

</code>

NFS

<code_bash>

 ifup eth1
 service nfs status
 service pcsd status
 lvm lvdisplay

</code>

From NFS or gluster server 2 (we assume eth1 is the interface NFS or gluster is using):

<code_bash> ifdown eth1 </code>

From the dashboard or from the nova client

Check that the VMs are running and working fine.

3.10 Disable the network interface NFS or gluster is using on each server. Check the virtual hosts. After 2 minutes, enable the network interfaces again. Check the virtual hosts.

From NFS or gluster server 1 (we assume eth1 is the interface NFS or gluster is using):

<code_bash>

 ifdown eth1
 sleep 120; ifup eth1

</code>

From NFS or gluster server 2 (we assume eth1 is the interface NFS or gluster is using):

<code_bash>

 ifdown eth1
 sleep 120; ifup eth1

</code>

From the dashboard or from the nova client

Check that the VMs are running and working fine.

3.11 Disable the switch port where the network interface of the first server is connected.

Find the switch port to which the network interface of the first server (the one the NFS or gluster process is using) is connected, and block the port, simulating a cut cable. A quick link check from the server side is sketched below.
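
A minimal sketch to confirm from the server that the link actually went down (we assume eth1 is the interface connected to the blocked port): <code_bash>

 ip link show eth1
 ethtool eth1 | grep 'Link detected'

</code>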

From the dashboard or from the nova client

Check that the VMs are running and working fine.

3.12 Enable the switch port where the network interface of the first server is connected and disable the switch port where the network interface of the second server is connected.

Find the switch port to which the network interface of the second server (the one the NFS or gluster process is using) is connected, and block the port, simulating a cut cable on the second server.

From the dashboard or from the nova client

Check that the VMs are running and working fine.

3.13 Disable the switch ports where the network interfaces of the first and second server are connected. Wait 3 minutes and enable them again.

From the dashboard or from the nova client

Check that the VMs are running and working fine before, during and after the port blocking.
