Sample batch scripts are available on TRACE-HPC both for some popular software packages and for common, general-purpose job types.
For more information on how to run a job on TRACE-HPC, what partitions are available, and how to submit a job, see the Running Jobs section of this user guide.

Sample batch scripts for popular software packages

Sample scripts for some popular software packages are available on TRACE-HPC in the directory /opt/packages/examples. Each package has its own subdirectory, which contains the script along with any required input data and typical output.
See the documentation for a particular package for more information on using it and how to test any sample scripts that may be available.
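
You might, for example, list the available examples and copy one into your own project space before editing and submitting it. A minimal sketch (the package subdirectory name is hypothetical; substitute a real one, along with your groupname and username):

# list the packages that have example directories
ls /opt/packages/examples

# copy a package's example directory into your project space (name is illustrative)
cp -r /opt/packages/examples/some-package /ocean/projects/groupname/username/some-package

# review the batch script and input files before editing and submitting
ls /ocean/projects/groupname/username/some-package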

Sample batch scripts for common types of jobs

Sample TRACE-HPC batch scripts for common job types are given below.
Note that in each sample script:

  • The bash shell is used, as indicated by the first line, '#!/bin/bash'. If you use a different shell, some Unix commands will differ.
  • Substitute your own username and your appropriate Unix group for username and groupname.

Sample scripts are provided below for jobs in the RM, RM-shared, RM-512, EM, GPU, and GPU-shared partitions.

Sample batch script for a job in the RM partition

#!/bin/bash
#SBATCH -N 1
#SBATCH -p RM
#SBATCH -t 5:00:00
#SBATCH --ntasks-per-node=128
# type 'man sbatch' for more information and options
# this job will ask for 1 full RM node (128 cores) for 5 hours
# this job would potentially charge 640 RM SUs
#echo commands to stdout
set -x
# move to working directory
# this job assumes:
# - all input data is stored in this directory
# - all output should be stored in this directory
# - please note that groupname should be replaced by your groupname
# - username should be replaced by your username
# - path-to-directory should be replaced by the path to your directory where the executable is
cd /ocean/projects/groupname/username/path-to-directory
# run a pre-compiled program which is already in your project space
./a.out
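
To run this or any of the following examples, you would typically save the script to a file, submit it with sbatch, and monitor it with standard Slurm commands. A minimal sketch (the script file name is illustrative):

# submit the batch script to the scheduler
sbatch rm_job.sh

# check the status of your queued and running jobs
squeue -u username

# after the job completes, the output appears in the submission directory
# (by default in a file named slurm-<jobid>.out)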

Sample batch script for a job in the RM-shared partition

#!/bin/bash
#SBATCH -N 1
#SBATCH -p RM-shared
#SBATCH -t 5:00:00
#SBATCH --ntasks-per-node=64
# type 'man sbatch' for more information and options
# this job will ask for 64 cores in RM-shared and 5 hours of runtime
# this job would potentially charge 320 RM SUs
#echo commands to stdout
set -x
# move to working directory
# this job assumes:
# - all input data is stored in this directory
# - all output should be stored in this directory
# - please note that groupname should be replaced by your groupname
# - username should be replaced by your username
# - path-to-directory should be replaced by the path to your directory where the executable is
cd /ocean/projects/groupname/username/path-to-directory
# run a pre-compiled program which is already in your project space
./a.out
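
In both the RM and RM-shared examples, the quoted charge is consistent with cores multiplied by wall-clock hours: 128 cores × 5 hours = 640 SUs for the full RM node, and 64 cores × 5 hours = 320 SUs here. Assuming that is how RM SUs are computed, requesting only the cores you actually need in RM-shared reduces the charge proportionally.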

Sample batch script for a job in the RM-512 partition

#!/bin/bash
#SBATCH -N 1
#SBATCH -p RM-512
#SBATCH -t 5:00:00
#SBATCH --ntasks-per-node=128
# type 'man sbatch' for more information and options
# this job will ask for 1 full RM 512GB node (128 cores) for 5 hours
# this job would potentially charge 640 RM SUs
#echo commands to stdout
set -x
# move to working directory
# this job assumes:
# - all input data is stored in this directory
# - all output should be stored in this directory
# - please note that groupname should be replaced by your groupname
# - username should be replaced by your username
# - path-to-directory should be replaced by the path to your directory where the executable is
cd /ocean/projects/groupname/username/path-to-directory
# run a pre-compiled program which is already in your project space
./a.out
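
As the comments indicate, this script differs from the full-node RM example only in the partition name: an RM-512 node provides the same 128 cores but 512GB of memory, so RM-512 is presumably the partition to request when a full-node job needs more memory than a standard RM node provides.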

Sample batch script for a job in the EM partition

#!/bin/bash
#SBATCH -N 1
#SBATCH -p EM
#SBATCH -t 5:00:00
#SBATCH -n 96
# type 'man sbatch' for more information and options
# this job will ask for 1 full EM node (96 cores) and 5 hours of runtime
# this job would potentially charge 480 EM SUs
# echo commands to stdout
set -x
# move to working directory
# this job assumes:
# - all input data is stored in this directory
# - all output should be stored in this directory
# - please note that groupname should be replaced by your groupname
# - username should be replaced by your username
# - path-to-directory should be replaced by the path to your directory where the executable is
cd /ocean/projects/groupname/username/path-to-directory
# run a pre-compiled program which is already in your project space
./a.out
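
This script requests cores with -n 96 (the total number of tasks) rather than --ntasks-per-node; with a single node (-N 1) the two forms are equivalent here. The quoted charge again matches cores multiplied by hours: 96 cores × 5 hours = 480 EM SUs.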

Sample batch script for a job in the GPU partition

#!/bin/bash
#SBATCH -N 1
#SBATCH -p GPU
#SBATCH -t 5:00:00
#SBATCH --gpus=v100-32:8
# type 'man sbatch' for more information and options
# this job will ask for 1 full v100-32 GPU node (8 V100 GPUs) for 5 hours
# this job would potentially charge 40 GPU SUs
#echo commands to stdout
set -x
# move to working directory
# this job assumes:
# - all input data is stored in this directory
# - all output should be stored in this directory
# - please note that groupname should be replaced by your groupname
# - username should be replaced by your username
# - path-to-directory should be replaced by the path to your directory where the executable is
cd /ocean/projects/groupname/username/path-to-directory
# run a pre-compiled program which is already in your project space
./gpua.out
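
On GPU nodes the charge is quoted per GPU rather than per core: 8 GPUs × 5 hours = 40 GPU SUs, consistent with the comment above. If you want to confirm which GPUs Slurm has assigned to your job, you could print them near the top of the script; a minimal sketch, assuming NVIDIA's nvidia-smi utility is on the default path inside the job:

# show the GPUs visible to this job
nvidia-smi

# Slurm typically restricts the job to its assigned devices via this variable
echo $CUDA_VISIBLE_DEVICES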

Sample batch script for a job in the GPU-shared partition

#!/bin/bash
#SBATCH -N 1
#SBATCH -p GPU-shared
#SBATCH -t 5:00:00
#SBATCH --gpus=v100-32:4
# type 'man sbatch' for more information and options
# this job will ask for 4 V100 GPUs on a v100-32 node in GPU-shared for 5 hours
# this job would potentially charge 20 GPU SUs
#echo commands to stdout
set -x
# move to working directory
# this job assumes:
# - all input data is stored in this directory
# - all output should be stored in this directory
# - please note that groupname should be replaced by your groupname
# - username should be replaced by your username
# - path-to-directory should be replaced by the path to your directory where the executable is
cd /ocean/projects/groupname/username/path-to-directory
# run a pre-compiled program which is already in your project space
./gpua.out
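
To request fewer GPUs in GPU-shared, you would change only the --gpus line; by the same per-GPU accounting, one V100 GPU for 5 hours would charge about 5 GPU SUs. For example (a single directive, not a complete script):

#SBATCH --gpus=v100-32:1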