 </code>
  
==== Submitting MPI Jobs via mpirun.lsf (obsolete) ====
  
Currently only **OpenMPI** jobs have been tested on the HPC cluster. \\
1) Automatically create the machine file to be used by mpirun:
  
    echo $LSB_HOSTS | awk '{split($0,array," ")} END {for (i in array) printf ("%s\n",array[i])}' | awk '{count[$0]++} END {for (word in count) print word,"slots=" count[word]}' > /home/HPC/username/mymachine.txt
  
2) Use this command to launch mpirun:
  
    mpirun --machinefile /home/HPC/username/mymachine.txt -x PSM_SHAREDCONTEXTS_MAX=8 -np $LSB_DJOB_NUMPROC /home/HPC/username/executablename
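As a sanity check, the machine-file pipeline from step 1 can be tried outside LSF with a hand-made host list (the hostnames below are placeholders, not real CNAF nodes). Given one hostname per allocated slot in ''$LSB_HOSTS'', it emits one ''host slots=N'' line per node:

```shell
# Placeholder allocation: LSB_HOSTS holds one hostname per slot (hypothetical names).
LSB_HOSTS="hpc-node-01 hpc-node-01 hpc-node-01 hpc-node-02 hpc-node-02"

# Same pipeline as step 1: split the slot list into one host per line,
# then count the occurrences of each host to get its slot count.
echo $LSB_HOSTS \
  | awk '{split($0,array," ")} END {for (i in array) printf ("%s\n",array[i])}' \
  | awk '{count[$0]++} END {for (word in count) print word,"slots=" count[word]}'
# Emits (order may vary):
#   hpc-node-01 slots=3
#   hpc-node-02 slots=2
```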
  
A possible bsub submission is:
  
    bsub -q hpc_inf_SL7 -n 16 -R "span[ptile=8]" -o testmpimy.out -e testmpimy.err /home/HPC/username/run_this_example.sh
  
where the run_this_example.sh script runs the previous commands:
  
----run_this_example.sh----
    #!/bin/bash
  
    echo $LSB_HOSTS | awk '{split($0,array," ")} END {for (i in array) printf ("%s\n",array[i])}' | awk '{count[$0]++} END {for (word in count) print word,"slots=" count[word]}' > /home/HPC/username/mymachine.txt
  
    mpirun --machinefile /home/HPC/username/mymachine.txt -x PSM_SHAREDCONTEXTS_MAX=8 -np $LSB_DJOB_NUMPROC /home/HPC/username/executablename
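An equivalent, arguably simpler way to build the machine file is a sketch based on the standard LSF variable ''$LSB_DJOB_HOSTFILE'' (assumed available inside the job), which points to a file listing one hostname per allocated slot:

```shell
# Sketch: derive "host slots=N" lines from the LSF job host file.
# Assumes LSF sets $LSB_DJOB_HOSTFILE inside the job; the output path is a placeholder.
sort "$LSB_DJOB_HOSTFILE" | uniq -c \
  | awk '{print $2, "slots=" $1}' > /home/HPC/username/mymachine.txt
```

This avoids parsing ''$LSB_HOSTS'', which can be truncated for very large allocations.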
==== Submitting GPU Jobs ====
  
strutture/cnaf/clusterhpc/using_the_cnaf_hpc_cluster.1613485048.txt.gz ยท Last modified: 2021/02/16 14:17 by dcesini@infn.it