To view a usage summary of the accounts associated with your username, run:
login> saldo -b
To list the resources available on each node (memory, CPU speed, number of MICs and GPUs), run:
login> pbsnodes -a | egrep '(Mom|available.mem|available.cpuspeed|available.nmics|available.ngpus)'
The job management facility adopted by CINECA is the PBS batch scheduler.
Available Queues:
Script example (script.pbs)
#PBS -q debug
#PBS -l select=2:ncpus=16:mem=15GB:cpuspeed=3GHz
#PBS -A INFN_EURORA
...
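For reference, a minimal complete script might look as follows; the job name, walltime, and executable name are illustrative placeholders:

#!/bin/bash
#PBS -N my_job                                   # illustrative job name
#PBS -q debug
#PBS -l select=2:ncpus=16:mem=15GB:cpuspeed=3GHz
#PBS -l walltime=00:10:00                        # illustrative walltime
#PBS -A INFN_EURORA

cd $PBS_O_WORKDIR                                # run from the submission directory
./my_app.x                                       # illustrative executable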
Submit your job:
qsub script.pbs
Monitor your job:
qstat [-u username]
Cancel your job:
qdel <job_id>
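The whole cycle can also be scripted; a minimal sketch that captures the job identifier printed by qsub and reuses it:

# capture the job id printed by qsub and reuse it
JOBID=$(qsub script.pbs)
qstat $JOBID     # check the job state (Q = queued, R = running)
qdel $JOBID      # cancel the job if it is no longer needed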
Interactive example (option -I):
qsub -q debug -l nodes=node021:ncpus=1 -A CON13_INFN -I
cat $PBS_NODEFILE
exit
Requesting more memory, e.g. to allow demanding compilations:
qsub -q debug -l nodes=node021:ncpus=16:mem=15gb -A CON13_INFN -I
$HOME (/eurora/home/userexternal/<username>): permanent, backed up
$CINECA_SCRATCH (/gpfs/scratch/userexternal/<username>): temporary
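A common workflow (a sketch; file and directory names are illustrative) keeps sources in the permanent $HOME and runs jobs from the scratch area:

# keep sources in $HOME, run jobs in the (temporary) scratch area
mkdir -p $CINECA_SCRATCH/myrun
cp $HOME/mycode/app.x $CINECA_SCRATCH/myrun/
cd $CINECA_SCRATCH/myrun
qsub script.pbs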
Use the local command "cindata" to query disk usage and quota ("cindata -h" for help):
cindata
http://www.hpc.cineca.it/content/eurora-user-guide#programming
NOTE: The MIC system libraries are distributed through shared directories; see the MIC_PATH settings in the MPI example below.
Basic set of examples for the different programming models (CPU only, CPU+GPU, CPU+MIC)
Example of a PBS file:
#!/bin/bash
#PBS -l select=2:mpiprocs=2:ncpus=16:mem=15GB:cpuspeed=3GHz
#PBS -N d2d_bdir-remote
#PBS -l walltime=00:10:00
#PBS -q debug
#PBS -A CON13_INFN
Interactive example with GPUs:
qsub -A CON13_INFN -I -l select=1:ncpus=16:ngpus=2 -q debug
module load gnu/4.6.3
module load cuda/5.0.35
...
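Inside the interactive session, a CUDA source can then be compiled and run; a minimal sketch, with illustrative file names:

nvcc -o my_gpu_app.x my_gpu_app.cu    # compile with the CUDA toolkit
./my_gpu_app.x                        # run on one of the allocated GPUs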
Example of a PBS file:
#!/bin/bash
#PBS -l select=2:mpiprocs=2:ncpus=16:ngpus=2
#PBS -N d2d_bdir-remote
#PBS -l walltime=00:10:00
#PBS -q debug
#PBS -A CON13_INFN

# load required modules
module load gnu
module load cuda

mpirun .....
Interactive example with a MIC:
qsub -A INFNG_test -I -l select=1:ncpus=16:nmics=1
module load intel intelmpi mkl
source $INTEL_HOME/bin/compilervars.sh intel64
export I_MPI_MIC=enable
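With this environment, an MPI code can be built both for the host and, natively, for the coprocessor; a minimal sketch, assuming an illustrative source file mpi_hello.c:

mpiicc -o mpi_hello.x mpi_hello.c            # host binary
mpiicc -mmic -o mpi_hello.mic mpi_hello.c    # MIC-native binary (-mmic)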
Offload example (interactive, two MICs):
qsub -A CON13_INFN -I -l select=1:ncpus=16:nmics=2 -q debug
module load intel
module load intelmpi
source $INTEL_HOME/bin/compilervars.sh intel64
./exe-offload.x
Example of a PBS file:
#!/bin/bash
#PBS -l select=1:ncpus=16:nmics=2
#PBS -l walltime=00:20:00
#PBS -q debug
#PBS -A CON13_INFN

# load required modules
module load intel intelmpi mkl
source $INTEL_HOME/bin/compilervars.sh intel64
export I_MPI_MIC=enable

# build the MIC host names from the first allocated node
# (e.g. node021.<domain> -> node021-mic0.<domain>)
export MIC0=$(head -n 1 $PBS_NODEFILE | sed "s/\([^.]*\)\./\1-mic0./")
export MIC1=$(head -n 1 $PBS_NODEFILE | sed "s/\([^.]*\)\./\1-mic1./")

cd <workdir>

# MIC runtime libraries (see the NOTE above)
export MIC_PATH=
export MIC_PATH=$MIC_PATH:/eurora/prod/compilers/intel/cs-xe-2013/binary/composer_xe_2013/mkl/lib/mic/
export MIC_PATH=$MIC_PATH:/eurora/prod/compilers/intel/cs-xe-2013/binary/composer_xe_2013/lib/mic

mpirun -genv LD_LIBRARY_PATH $MIC_PATH -host ${MIC0},${MIC1} -perhost 1 ./imb/3.2.4/bin/IMB-MPI1.mic pingpong
http://www.prace-ri.eu/Best-Practice-Guide-Intel-Xeon-Phi-HTML?lang=en#id-1.7.3
Network fabrics available for the Intel Xeon Phi coprocessor: shm, tcp, ofa, dapl
The Intel MPI library automatically selects the best network fabric detected, usually shm for intra-node communication and InfiniBand (dapl, ofa) for inter-node communication.
The default can be changed by setting the I_MPI_FABRICS environment variable to I_MPI_FABRICS=<fabric> or I_MPI_FABRICS=<intra-node fabric>:<inter-node fabric>.
The availability is checked in the following order: shm:dapl, shm:ofa, shm:tcp.
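For example, to force shared memory inside a node and DAPL between nodes (a sketch; the appropriate choice depends on the fabrics actually installed):

export I_MPI_FABRICS=shm:dapl
mpirun -np 32 ./my_app.x    # illustrative executable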
2013/08/28