Cluster bootstrap for benchmarking
The operations must be performed on the ceph-ctrl-00 machine (with ssh root@mgmt-ceph-ctrl-00), or on another machine belonging to the control plane.
Bootstrap and initial configuration of the cluster
cephadm bootstrap --cluster-network 192.168.32.0/24 --mon-ip 192.168.33.10 --ssh-user cephadm --allow-fqdn-hostname
ceph orch apply mon --unmanaged
ceph orch apply mgr --unmanaged
ssh-copy-id -f -i /etc/ceph/ceph.pub cephadm@ceph-data-01
ssh-copy-id -f -i /etc/ceph/ceph.pub cephadm@ceph-data-00
ssh-copy-id -f -i /etc/ceph/ceph.pub cephadm@ceph-data-02
ssh-copy-id -f -i /etc/ceph/ceph.pub cephadm@ceph-ctrl-02
ssh-copy-id -f -i /etc/ceph/ceph.pub cephadm@ceph-ctrl-01
ssh-copy-id -f -i /etc/ceph/ceph.pub cephadm@ceph-ctrl-00
ceph orch host add ceph-ctrl-01.lnf.infn.it 192.168.33.11 _admin mon mgr mds
ceph orch host add ceph-ctrl-02.lnf.infn.it 192.168.33.12 _admin mon mgr mds
ceph orch host add ceph-data-02.lnf.infn.it 192.168.33.22
ceph orch host add ceph-data-01.lnf.infn.it 192.168.33.21
ceph orch host add ceph-data-00.lnf.infn.it 192.168.33.20
ceph orch host label add ceph-ctrl-00.lnf.infn.it mon
ceph orch host label add ceph-ctrl-00.lnf.infn.it mgr
ceph orch host label add ceph-ctrl-00.lnf.infn.it mds
ceph orch apply mon --placement="label:mon"
ceph orch apply mgr --placement="label:mgr"
ceph orch apply mds benchmark --placement="2 label:mds"
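After the hosts have been enrolled, it is worth checking that the orchestrator sees them with the expected labels and that the mon/mgr/mds placement matches. A minimal set of checks (not part of the original procedure), run from an admin node:
ceph orch host ls   # hosts with their labels (_admin, mon, mgr, mds)
ceph orch ls        # services and their placement/counts
ceph -s             # overall cluster health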
Dry run to generate the OSD deployment scheme
ceph orch apply -i osd_service.yaml --dry-run
After a few seconds, verify that the generated configuration is the desired one by repeating the command.
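If the dry run does not propose the expected devices, the inventory cephadm sees on the data hosts can be listed first; this is a suggested check, not part of the original procedure:
ceph orch device ls   # available and rejected devices per host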
To install the OSDs using the SSDs as DB devices, use osd_service.yaml with the following content:
service_type: osd
service_id: hdd_plus_db_on_ssd
placement:
  hosts:
    - ceph-data-00.lnf.infn.it
    - ceph-data-01.lnf.infn.it
    - ceph-data-02.lnf.infn.it
spec:
  data_devices:
    paths:
      - /dev/sdb
      - /dev/sdc
      - /dev/sdd
      - /dev/sde
      - /dev/sdf
      - /dev/sdg
      - /dev/sdh
      - /dev/sdi
      - /dev/sdj
      - /dev/sdk
      - /dev/sdl
      - /dev/sdm
      - /dev/sdn
      - /dev/sdo
      - /dev/sdp
      - /dev/sdq
  db_devices:
    paths:
      - /dev/sdr
      - /dev/sds
      - /dev/sdt
      - /dev/sdu
      - /dev/sdv
      - /dev/sdw
      - /dev/sdx
      - /dev/sdy
  db_slots: 2
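For reference, the same layout can usually be expressed more compactly by filtering on the rotational flag instead of listing every path; this is a sketch of an equivalent spec, not the one actually applied on this cluster:
service_type: osd
service_id: hdd_plus_db_on_ssd
placement:
  hosts:
    - ceph-data-00.lnf.infn.it
    - ceph-data-01.lnf.infn.it
    - ceph-data-02.lnf.infn.it
spec:
  data_devices:
    rotational: 1   # all spinning disks become data devices
  db_devices:
    rotational: 0   # all SSDs host the BlueStore DBs
  db_slots: 2       # two DB slots per SSD: 8 SSDs cover the 16 HDDs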
To use the SSDs and HDDs as separate OSDs, use instead:
service_type: osd
service_id: hdd_and_ssd_apart
placement:
  hosts:
    - ceph-data-00.lnf.infn.it
    - ceph-data-01.lnf.infn.it
    - ceph-data-02.lnf.infn.it
spec:
  data_devices:
    paths:
      - /dev/sdb
      - /dev/sdc
      - /dev/sdd
      - /dev/sde
      - /dev/sdf
      - /dev/sdg
      - /dev/sdh
      - /dev/sdi
      - /dev/sdj
      - /dev/sdk
      - /dev/sdl
      - /dev/sdm
      - /dev/sdn
      - /dev/sdo
      - /dev/sdp
      - /dev/sdq
      - /dev/sdr
      - /dev/sds
      - /dev/sdt
      - /dev/sdu
      - /dev/sdv
      - /dev/sdw
      - /dev/sdx
      - /dev/sdy
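Since this variant simply turns every listed disk into a standalone OSD, an equivalent compact form (a sketch, not the spec used here) selects all available devices:
service_type: osd
service_id: hdd_and_ssd_apart
placement:
  hosts:
    - ceph-data-00.lnf.infn.it
    - ceph-data-01.lnf.infn.it
    - ceph-data-02.lnf.infn.it
spec:
  data_devices:
    all: true   # every free device on the host becomes its own OSD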
OSD installation
ceph orch apply -i osd_service.yaml
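The OSDs come up progressively after the apply; a quick way to follow the deployment (suggested checks, not part of the original procedure):
ceph orch ps      # daemons being deployed on each host
ceph osd tree     # CRUSH tree with the detected device classes (hdd/ssd)
ceph -s           # wait for the expected OSD count and HEALTH_OK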
Creating the pools and images for the benchmark
ceph fs volume create benchmark
ceph fs authorize benchmark client.benchmark / rw
ceph osd pool create rbd 128 128 replicated
rbd pool init rbd
ceph auth caps client.benchmark mds "allow rw fsname=benchmark" mon "allow r fsname=benchmark, profile rbd" osd "allow rw tag cephfs data=benchmark, profile rbd pool=rbd" mgr 'profile rbd pool=rbd'
ceph osd pool set cephfs.benchmark.data pg_num 128
ceph osd pool set cephfs.benchmark.data pgp_num 128
ceph osd pool set cephfs.benchmark.meta pg_num 128
ceph osd pool set cephfs.benchmark.meta pgp_num 128
ceph osd pool set rbd pg_num 128
ceph osd pool set rbd pgp_num 128
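To confirm that the file system and the rbd pool were created with the intended PG counts (suggested check):
ceph fs status benchmark   # MDS state and data/metadata pools of the fs
ceph osd pool ls detail    # pg_num/pgp_num and crush rule of every pool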
Information to be used on the client machine
ceph config generate-minimal-conf   # -> /etc/ceph/ceph.conf
ceph auth get client.benchmark      # -> /etc/ceph/ceph.client.benchmark.keyring
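The output of these two commands must end up in the indicated files on the client. One possible way to transfer it, assuming root ssh access to the client (<client> is a placeholder for the benchmark client hostname):
ceph config generate-minimal-conf > /tmp/ceph.conf
ceph auth get client.benchmark > /tmp/ceph.client.benchmark.keyring
scp /tmp/ceph.conf root@<client>:/etc/ceph/ceph.conf
scp /tmp/ceph.client.benchmark.keyring root@<client>:/etc/ceph/ceph.client.benchmark.keyring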
Client configuration
On the client machine:
mkdir /mnt/benchmark_rbd
mkdir /mnt/benchmark_cephfs
echo '/dev/rbd/rbd/benchmark /mnt/benchmark_rbd xfs noauto 0 0' >> /etc/fstab
echo 'benchmark@.benchmark=/ /mnt/benchmark_cephfs ceph defaults 0 0' >> /etc/fstab
rbd create --id benchmark --size 550G benchmark
rbd --id benchmark map benchmark
dd if=/dev/urandom of=/dev/rbd0 bs=4M count=137000
mkfs.xfs /dev/rbd0
rbd --id benchmark unmap benchmark
echo 'benchmark id=benchmark' >> /etc/ceph/rbdmap
systemctl start rbdmap.service
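Once rbdmap has mapped the image, both mounts can be brought up and verified (suggested checks; the CephFS mount relies on the fstab entry above and on the conf/keyring installed in /etc/ceph):
rbd showmapped                   # the benchmark image should appear as /dev/rbd0
mount /mnt/benchmark_rbd         # uses the fstab entry (noauto)
mount /mnt/benchmark_cephfs
df -h /mnt/benchmark_rbd /mnt/benchmark_cephfs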
SSD-only / HDD-only pool configuration
ceph osd crush rule create-replicated ssd-only default host ssd
ceph osd crush rule create-replicated hdd-only default host hdd
ceph osd pool set cephfs.benchmark.data crush_rule hdd-only   # (or ssd-only)
ceph osd pool set cephfs.benchmark.meta crush_rule hdd-only   # (or ssd-only)
ceph osd pool set rbd crush_rule hdd-only                     # (or ssd-only)
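To confirm which rule each pool ends up using after the switch (suggested check):
ceph osd crush rule ls
ceph osd pool get cephfs.benchmark.data crush_rule
ceph osd pool get cephfs.benchmark.meta crush_rule
ceph osd pool get rbd crush_rule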