Transition to SLURM – University of Alabama Research Computing | The University of Alabama

SLURM is a queue management system replacing the commercial LSF scheduler as the job manager on UAHPC.

SLURM is similar to LSF; below is a quick reference from HPC Wales comparing commands between the two. If you are coming from an environment with a different scheduler, or want more detail, see this PDF comparing commands across PBS/Torque, SLURM, LSF, SGE, and LoadLeveler: Scheduler Commands Cheatsheet

More documentation can be found at the SLURM website.

LSF to Slurm Quick Reference

Commands

| LSF                | Slurm               | Description                                                              |
|--------------------|---------------------|--------------------------------------------------------------------------|
| bsub < script_file | sbatch script_file  | Submit a job from script_file                                            |
| bkill 123          | scancel 123         | Cancel job 123                                                           |
| bjobs              | squeue or slurmtop  | List user's pending and running jobs                                     |
| bqueues            | sinfo               | Cluster status with partition (queue) list                               |
| bqueues            | sinfo -s            | With '-s', a summarised partition list, shorter and simpler to interpret |
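As a worked example, here is a typical LSF session translated to Slurm. The script name `run_job.sh` and job ID `123` are illustrative; these commands only do anything useful on a cluster running Slurm:

```shell
# LSF: bsub < run_job.sh   →   Slurm:
sbatch run_job.sh     # submit the batch script; prints "Submitted batch job <id>"
squeue -u "$USER"     # list your own pending and running jobs
sinfo -s              # summarised partition (queue) list
scancel 123           # cancel job 123
```

Note that `sbatch` takes the script as an argument rather than on standard input, although `sbatch < run_job.sh` also works.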

Job Specification

| LSF                       | Slurm                                                  | Description                      |
|---------------------------|--------------------------------------------------------|----------------------------------|
| #BSUB                     | #SBATCH                                                | Scheduler directive              |
| -q queue_name             | -p main --qos queue_name or -p owners --qos queue_name | Set queue to 'queue_name'        |
| -n 64                     | -n 64                                                  | Processor count of 64            |
| -W [hh:mm:ss]             | -t [minutes] or -t [days-hh:mm:ss]                     | Max wall run time                |
| -o file_name              | -o file_name                                           | STDOUT output file               |
| -e file_name              | -e file_name                                           | STDERR output file               |
| -oo file_name             | -o file_name --open-mode=append                        | Append to output file            |
| -J job_name               | --job-name=job_name                                    | Job name                         |
| -M 128                    | --mem-per-cpu=128M or --mem-per-cpu=2G                 | Memory requirement               |
| -R "span[ptile=16]"       | --ntasks-per-node=16                                   | Processes per node               |
| -P proj_code              | --account=proj_code                                    | Project account to charge job to |
| -J "job_name[array_spec]" | --array=array_spec                                     | Job array declaration            |
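Putting these directives together, a minimal Slurm batch script might look like the following. The job name, queue/QOS name, account, and resource figures are illustrative; substitute the values appropriate for your allocation:

```shell
#!/bin/bash
#SBATCH --job-name=my_job        # LSF: -J my_job
#SBATCH -p main                  # partition (queue)
#SBATCH --qos=queue_name         # LSF: -q queue_name
#SBATCH -n 64                    # LSF: -n 64 (64 processors)
#SBATCH --ntasks-per-node=16     # LSF: -R "span[ptile=16]"
#SBATCH -t 1-12:00:00            # max wall time: 1 day 12 hours
#SBATCH --mem-per-cpu=2G         # memory per processor
#SBATCH -o my_job.%j.out         # STDOUT (%j expands to the job ID)
#SBATCH -e my_job.%j.err         # STDERR
#SBATCH --account=proj_code      # LSF: -P proj_code

mpirun ./my_program              # task count is picked up from Slurm
```

Submit it with `sbatch script_file`; unlike LSF's `#BSUB` lines, `#SBATCH` directives must appear before the first executable line of the script.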

Job Environment Variables

| LSF                        | Slurm                 | Description                                                                                               |
|----------------------------|-----------------------|-----------------------------------------------------------------------------------------------------------|
| $LSB_JOBID                 | $SLURM_JOBID          | Job ID                                                                                                    |
| $LSB_SUBCWD                | $SLURM_SUBMIT_DIR     | Submit directory                                                                                          |
| $LSB_JOBID                 | $SLURM_ARRAY_JOB_ID   | Job array parent ID                                                                                       |
| $LSB_JOBINDEX              | $SLURM_ARRAY_TASK_ID  | Job array index                                                                                           |
| $LSB_SUB_HOST              | $SLURM_SUBMIT_HOST    | Submission host                                                                                           |
| $LSB_HOSTS, $LSB_MCPU_HOST | $SLURM_JOB_NODELIST   | Allocated compute nodes                                                                                   |
| $LSB_DJOB_NUMPROC          | $SLURM_NTASKS         | Number of processors allocated (mpirun can pick this up from Slurm automatically; it need not be specified) |
|                            | $SLURM_JOB_PARTITION  | Queue (partition)                                                                                         |
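A sketch of how these variables might be used inside a job script, for example to log the job's context at startup. The `${VAR:-default}` fallbacks only matter when the script is run outside a Slurm job:

```shell
#!/bin/bash
# Report job context from Slurm's environment variables.
job_info() {
  echo "Job ${SLURM_JOBID:-unknown} in partition ${SLURM_JOB_PARTITION:-unknown}"
  echo "Submitted from ${SLURM_SUBMIT_DIR:-$PWD} on ${SLURM_SUBMIT_HOST:-$(hostname)}"
  echo "Running ${SLURM_NTASKS:-1} task(s) on: ${SLURM_JOB_NODELIST:-$(hostname)}"
}
job_info
```

Inside a running job, Slurm sets all of these automatically, so the fallbacks are never used.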