If you would like to use the NVIDIA GPUs on the cluster for your compute job, the following tips will help.

  • Make sure you ask the scheduler for a GPU in your job request (submit script). Append the GPU request to the same #PBS directive in which you request CPUs, for example:
#PBS -l nodes=1:ppn=1:gpus=1,mem=16GB


  • Unless your code has built-in GPU support (for example, Matlab), you may want to load one of the available CUDA Toolkit modules; use the command “module avail cuda” to see the current list. Load one by adding a “module load …” line to your submit script. You can also issue a “module list” command to display what modules are currently loaded. The CUDA binaries (such as nvcc) and libraries should then be available to your compute job:
module load cuda/8.0

module list
Currently Loaded Modulefiles:
  1) pymods/2.7.5    2) perlmods/5.16.3   3) cuda/8.0

nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2016 NVIDIA Corporation
Built on Tue_Jan_10_13:22:03_CST_2017
Cuda compilation tools, release 8.0, V8.0.61
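With a cuda module loaded, you can sanity-check the toolkit by compiling a trivial CUDA source file. The file name and contents below are illustrative, and the snippet skips compilation gracefully if nvcc is not on your path:

```shell
# Write a minimal CUDA source file (illustrative; not from the cluster docs).
cat > hello.cu <<'EOF'
#include <cstdio>

__global__ void hello() { }   // empty kernel; just proves nvcc can compile device code

int main() {
    hello<<<1, 1>>>();
    cudaDeviceSynchronize();
    printf("CUDA toolkit OK\n");
    return 0;
}
EOF

# Compile and run only if nvcc is available (i.e., a cuda module is loaded).
if command -v nvcc >/dev/null 2>&1; then
    nvcc -o hello hello.cu && ./hello
else
    echo "nvcc not found -- load a cuda module first"
fi
```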


  • If your code depends on The NVIDIA CUDA Deep Neural Network (cuDNN) GPU-accelerated library, you must load an available cuDNN module to set up your $LD_LIBRARY_PATH. There are several cudnn modules to choose from, depending on what cudnn version *and* what CUDA Toolkit version you require. Please use the command “module avail cudnn” to see what’s available.
module load cudnn/6.0-cuda8

module list
Currently Loaded Modulefiles:
  1) pymods/2.7.5    2) perlmods/5.16.3   3) cuda/8.0   4) cudnn/6.0-cuda8
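You can confirm that the cudnn module set up your $LD_LIBRARY_PATH by listing its entries and looking for a cuDNN directory. This is just a quick sanity check; the exact path will vary with the cudnn version you loaded:

```shell
# Print each LD_LIBRARY_PATH entry on its own line and look for a cudnn entry;
# fall back to a message if no cudnn module is loaded.
echo "$LD_LIBRARY_PATH" | tr ':' '\n' | grep -i cudnn || echo "no cudnn entry found"
```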


  • If you would like to target a specific model of GPU, you can add a "feature" tag to your request. There is a "gtx1080ti" tag for requesting a GTX-1080Ti GPU and a "k80" tag for requesting one of the existing Tesla K80 GPUs. Each of the following directives requests one node with one traditional computing core and one GPU of the named model:
### If you prefer an NVIDIA GeForce GTX-1080Ti, specify the "gtx1080ti" feature tag:
#PBS -l nodes=1:ppn=1:gpus=1:gtx1080ti

### If you prefer an NVIDIA Tesla K80, specify the "k80" feature tag:
#PBS -l nodes=1:ppn=1:gpus=1:k80
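Putting the tips above together, a complete GPU submit script might look something like the following sketch. The job name, program name, and CUDA module version are illustrative placeholders; adjust them to your needs:

```shell
#! /bin/bash

### Set the job name
#PBS -N MyGPUJob

### Request 1 node with 1 CPU core, 1 GPU, and 16GB of memory
#PBS -l nodes=1:ppn=1:gpus=1,mem=16GB

# ==== Main ======

### Load a CUDA Toolkit module so nvcc and the CUDA libraries are available
module load cuda/8.0

### Run the (hypothetical) GPU program
./my_cuda_program
```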

What are Environment Modules?

The environment modules package is a tool that allows you to quickly and easily modify your shell environment to access different software packages. Research Computing offers a large (and growing) number of software packages to users, and each package may contain several tools, manual pages and libraries, or it may require special setup to work properly. Some software packages come in several versions or flavors, many of which conflict with each other. Modules allows you to tailor your shell to access exactly the packages you need by setting up the relevant environment variables for you, and automatically avoiding many possible conflicts between packages.

Command Summary

module avail              List available modules
module load <module>      Load the named module
module unload <module>    Unload the named module
module whatis <module>    Give a description of the named module
module list               List modules that are loaded in your environment
module purge              Unload all currently loaded modules from your environment
module display <module>   Show the rules (environment changes) the named module applies


Example Usage

$ module avail

------------------------- /usr/share/Modules/modulefiles ----------------------------
dot module-git module-info modules null use.own

---------------------------------- /apps/usr/pbs/modules/compilers -----------------------------------
anaconda2/2019.07        bazel/0.21.0             intel/16.0.0             pymods/2.7.5
anaconda2/2019.07-cuda10 bazel/0.22.0             intel/19.0.0(default)    pypy/5.3.1
anaconda2/5.0.1(default) bazel/0.25.2(default)    julia/1.0.1              scala/2.10.4(default)
anaconda2/5.0.1-cuda92   gcc/6.4.0                lua/5.3.5                scala/2.11.7
anaconda3/2019.07        gcc/7.3.0(default)       openjdk/11               yasm/1.3.0
anaconda3/2019.07-cuda10 gcc/8.2.0                openjdk/13(default)
anaconda3/5.0.1(default) ghc/7.10.3               perlmods/5.16.3
anaconda3/5.0.1-cuda92   intel/14.0.3             pgi/18.7

------------------------------------- /apps/usr/pbs/modules/lib --------------------------------------
armadillo/8.400.0                 htslib/1.6(default)
armadillo/9.800.1(default)        htslib/1.9
arpack/2.1                        intel-rtl/16.0.0
arpack-ng/3.5.0                   intel-rtl/19.0.0(default)
boost/1.65.1                      netcdf/4.6.3
ceres-solver/1.14.0(default)      netcdf/4.6.3-mpi
ceres-solver/1.14.0-cuda          netcdf/4.7.2(default)
clblas/1.10                       netcdf/4.7.2-mpi
cudnn/6.0-cuda8                   openblas/0.2.18
cudnn/7.0-cuda8                   openblas/0.2.20(default)
cudnn/7.0-cuda9                   opencv/3.1.0(default)
cudnn/7.2.1-cuda9                 opencv/3.1.0-cuda
cudnn/7.2.1-cuda9.2               opencv/4.1.0
cudnn/7.4.2-cuda10                opencv/4.1.0-cuda
cudnn/7.4.2-cuda9.2(default)      opengv/1.0
cudnn/7.6.5-cuda10                root/6.12.04
cudnn/7.6.5-cuda10.2              suitesparse/5.4.0(default)
eigen/3.3.6                       suitesparse/5.4.0-cuda
eigen/3.3.7(default)              superlu/5.2.1
gatb-core/1.4.1                   superlu_dist/5.3.0
google-code/2015                  tensorrt/
google-code/2019(default)         tensorrt/
hdf5/1.10.2                       tensorrt/
hdf5/1.10.2-mpi                   tensorrt/
hdf5/1.10.5(default)              tensorrt/
hdf5/1.10.5-mpi                   trilinos/12.14.1-mpi
hdf5/1.8.16                       trilinos/12.6.4-mpi(default)
htslib/1.4.1                      zeromq/4.1.4

------------------------------------- /apps/usr/pbs/modules/mpi --------------------------------------
mpich/3.2.1(default)   openmpi/2.1.5-pgi      openmpi/4.0.1          openmpi/4.0.3-intel
mpich/3.3.1            openmpi/3.1.2(default) openmpi/4.0.1-intel    openmpi/4.0.3-pgi
openmpi/2.1.5          openmpi/3.1.2-intel    openmpi/4.0.1-pgi
openmpi/2.1.5-intel    openmpi/3.1.2-pgi      openmpi/4.0.3

------------------------------------- /apps/usr/pbs/modules/apps -------------------------------------
abaqus/2017                                 miniasm/0.2
abaqus/6.10-2                               minimap2/2.10
abaqus/6.13-4(default)                      minimap2/2.17(default)
abyss/2.1.1(default)                        mirdeep2/0.1.2
abyss/2.1.5                                 mirdp2/1.1.4
agwg-merge/180114                           mir-prefer/0.24
allennlp/0.8.3                              mosek/8.1
allpathslg/52488                            mpiblast/1.6.0
ambertools/18(default)                      mpp-dyna/10.2.0
ambertools/18-mpi                           mpp-dyna/11.0.0(default)
angsd/0.918                                 mpp-dyna/9.3.0
angsd/0.930(default)                        mrbayes/3.2.2
ansa/13.1.3                                 namd/2.11-mcore
anvio/5.3                                   namd/2.11-mcore-cuda
augustus/3.3                                namd/2.11-mpi(default)
augustus/3.3.2(default)                     namd/2.13-mcore
augustus/3.3.3                              namd/2.13-mcore-cuda
bamtools/2.4.1                              namd/2.13-mpi
bamtools/2.5.1                              nasm/2.14
bbtools/38.76                               nco/4.4.8
bcftools/1.3.1                              netbeans/8.0.2
bcftools/1.6(default)                       netbeans/8.1(default)
bcftools/1.9                                netlogo/5.0.4
bedtools2/2.26.0                            netlogo/5.1.0
bedtools2/2.29.0(default)                   netlogo/6.1.1(default)
bismark/0.18.0                              node.js/4.4.0
blasr/5.3                                   openbabel/2.3.2
blast/2.3.0+                                openfoam/1706
blast/2.5.0+(default)                       openfoam/1806(default)
blast/2.9.0+                                openfoam/1906
blatsuite/36                                opensfm/0.2.0
bowtie2/2.2.9                               orthofinder/2.2.7
bowtie2/2.4.1(default)                      pacbio/2018.8
braker/2.0.6                                pacbio/2019.8(default)
braker/2.1.2(default)                       parallel/20190322(default)
braker/2.1.5                                parallel/20200122
busco/3.0.2                                 paraview/5.0.0-mpi
bwa/0.7.12                                  paraview/5.7.1-mpi(default)
bwa/0.7.17(default)                         pbsuite/15.8.24
c2x/2.26b                                   pdl/2.015
caffe/1.0(default)                          peridigm/1.4.1-mpi(default)
caffe/1.0.0rc3                              peridigm/1.5.0-mpi
caffe/1.0.0rc3-cuda8                        picard/2.18.29(default)
caffe/1.0-cuda8                             picard/2.9.2
canu/1.8                                    platanus/1.2.4
cmake/3.10.2                                plink/1.90b3.32
cmake/3.12.3(default)                       plink/2.00a2LM(default)
cna/2.0                                     poy/5.0.1
cnvnator/0.3.3                              poy/5.1.2(default)
crossmap/0.2.9(default)                     psmc/0.6.5
crossmap/0.3.3                              pytorch/0.4.0-anaconda3-cuda9.2-sm3.7
cuda/10.0                                   pytorch/0.4.0-anaconda3-cuda9.2-sm6.1
cuda/10.2                                   pytorch/1.0.1-anaconda3-cuda10.0(default)
cuda/8.0                                    pytorch/1.2.0-anaconda3-cuda10.0
cuda/9.0                                    qiime/1.9.1
cuda/9.2(default)                           qiime2/2018.6
cufflinks/2.2.1                             qiime2/2018.8
cutadapt/1.15(default)                      qiime2/2019.1(default)
cutadapt/1.18                               R/3.4.3
ddocent/2.7.8                               R/3.5.0(default)
eclipse/4.3.2                               R/3.6.0
emboss/6.6.0                                racon/1.3.1
espresso/5.3-intel-mpi                      raxml/7.4.2
espresso/6.3-intel-mpi(default)             raxml/7.4.2-mpi
examl/3.0.17                                raxml/8.2.12(default)
examl/3.0.21(default)                       raxml/8.2.12-mpi
exonerate/2.4.0                             raxml/8.2.4
face-py-faster-rcnn/1.0-cuda8               raxml/8.2.4-mpi
fastqc/0.11.5                               repdenovo/0.0
fastx_toolkit/0.0.14                        repeatmasker/4.0.8
fds/6.7.0                                   repeatmodeler/1.0.11
fds/6.7.3(default)                          repeatscout/1.0.5
ffmpeg/2.8.13                               rmblast/2.6.0
ffmpeg/3.2.14                               rosetta/2015.02
ffmpeg/4.2.1(default)                       rosetta/2016.10
ffmpeg/4.2.1-cuda10                         rosetta/2019.07(default)
firefox/58.0.2                              rstudio/1.1.442
fmlrc/0.1.2                                 rstudio/1.2.5001(default)
freebayes/1.3.1                             samtools/0.1.18
garli/0.942                                 samtools/1.3.1
garli/0.942-mpi                             samtools/1.6(default)
garli/2.01(default)                         samtools/1.9
garli/2.01-mpi                              sas/9.4
gatk/3.8                                    seer/1.1.1-intel
genemark/4.32                               segnet/1.0.0
genemark/4.38(default)                      segnet/1.0.0-cuda8(default)
genemark/4.48                               segnet/1.0.0-cuda8-nodnn
git/2.19.2                                  seqgl/1.1.4
grass/7.0.3                                 seqkit/0.11.0
gromacs/2016.3(default)                     seqtk/1.2
gromacs/2016.3-cuda                         seqtk/1.3(default)
gromacs/2016.3-mpi                          shortbred/0.9.5
gromacs/2016.3-mpi-cuda                     siesta/3.2
gromacs/2018                                siesta/4.0.2(default)
gromacs/2018-cuda                           slim/2.5
gromacs/2018-mpi                            smog/2.0.2
gromacs/2018-mpi-cuda                       sowhat/0.36
gromacs/2019.3                              spades/3.11.1(default)
gromacs/2019.3-cuda                         spades/3.13.1
gromacs/2019.3-mpi                          spades/3.7.1
gromacs/2019.3-mpi-cuda                     sra-tools/2.8.2-1
gurobi/7.5.1                                sra-tools/2.9.4(default)
gurobi/8.0.1(default)                       stacks/1.4.7
gurobi/8.1.1                                star/2.7.0c
hecaton/2020-02                             starccm/13.04
hic-pro/2.11.1                              starccm/14.02
hisat2/2.2.0                                starccm/2019.1
hmmer/3.2.1                                 starccm/2019.3(default)
humann2/0.11.2                              stata/11
interproscan/5.31-70.0                      subread/1.5.2
interproscan/5.34-73.0(default)             tensorflow/1.13-anaconda2-cuda10.0
interproscan/5.38-76.0                      tensorflow/1.13-anaconda3-cuda10.0
i-tasser/5.1                                tensorflow/1.14-anaconda2-cuda10.0
itk/4.9.0                                   tensorflow/1.14-anaconda3-cuda10.0(default)
jcvi/0.8.12                                 tensorflow/2.0-anaconda2-cuda10.0
jellyfish/2.2.6                             tensorflow/2.0-anaconda3-cuda10.0
kneaddata/0.7.2                             tophat/2.1.1
kraken/1.1                                  trim_galore/0.4.4
kwip/0.2.0                                  trimmomatic/0.38
lachesis/201701                             trinity/2.4.0
lammps/12Dec18-intel-gpu                    trinity/2.8.5(default)
lammps/12Dec18-intel-mpi(default)           tritex/2020-02
lammps/18Jun19-intel-gpu                    udunits/2.2.19
lammps/18Jun19-intel-mpi                    valgrind/3.13.0
lammps/31Mar17                              vcftools/0.1.15
lastz/1.04.00                               vcftools/0.1.16(default)
liggghts/3.7.0                              velvet/1.2.10
liggghts/3.8.0(default)                     vep/95.1
lordec/0.8                                  viennarna/2.4.13
ls-dyna/10.1.0                              visit/2.10.3
ls-dyna/11.0.0(default)                     visit/2.10.3-ib
ls-dyna/9.3.0                               visit/2.13.1
mafft/7.055woe                              visit/2.13.1-ib(default)
mafft/7.273woe(default)                     vmd/1.9.3-cuda75-text-egl
masurca/3.2.4                               vmd/1.9.3-cuda8-opengl(default)
masurca/3.2.7(default)                      vmd/1.9.3-text
masurca/3.3.4                               vscode/1.33
mathematica/11.2.0(default)                 vtk/6.2.0
mathematica/11.3.0                          vtk/7.0.0
mathematica/8.0                             vtk/7.0.0-mpi(default)
matlab/R2018a                               vtk/8.2.0
matlab/R2018b(default)                      vtk/8.2.0-mpi
matlab/R2019b                               wham/2.0.9
mauve/2015.02                               wise2/2.4.1
mcr/R2018a                                  wrf/3.9.1-intel-mpi
mcr/R2018b(default)                         wrf/3.9.1-intel-serial
mcr/R2019b                                  wrf/4.0.1-intel-mpi(default)
meme/4.12.0                                 wrf/4.0.1-intel-serial
meme/5.0.1(default)                         wtdbg/1.1
meme/5.0.5                                  wtdbg/2.3(default)
meraculous/2.2.5                            xcrysden/1.5.60
metabat/2.13                                xerces-c/3.2.1
metaphlan/2.7.2                             yade/2017.01a
metaphlan/2.7.7(default)                    yade/2020.01a(default)
methyldackel/0.2.1                          yices/2.4.2
minialign/0.4.4(default)                    z3/4.4.1

$ module avail matlab
------------------------------------- /apps/usr/pbs/modules/apps -------------------------------------
matlab/R2018a          matlab/R2018b(default) matlab/R2019b

$ module display matlab/R2018b

module-whatis	 MATLAB is a high-level language and interactive environment for numerical computation, visualization, and programming.
conflict	 matlab
module		 load gcc/7.3.0
setenv		 MATLAB_BASE /apps/pkg/matlab
setenv		 MATLAB_HOME /apps/pkg/matlab/R2018b
setenv		 MATLAB_DIR /apps/pkg/matlab/R2018b
prepend-path	 MATLABPATH /apps/pkg/matlab/toolbox_urc/xlwrite
prepend-path	 CLASSPATH /apps/pkg/matlab/toolbox_urc/xlwrite/jxl.jar:/apps/pkg/matlab/toolbox_urc/xlwrite/MXL.jar
prepend-path	 PATH /apps/pkg/matlab/R2018b/bin
prepend-path	 LD_LIBRARY_PATH /apps/pkg/matlab/R2018b/bin/glnxa64:/apps/pkg/matlab/R2018b/runtime/glnxa64

$ module load matlab/R2018b

$ module list
Currently Loaded Modulefiles:
  1) pymods/2.7.5      2) perlmods/5.16.3   3) matlab/R2018b


How the Modules are Organized and Grouped

The modules are organized into “categories”: mpi, compilers, apps, and lib (the subdirectories of /apps/usr/pbs/modules). Under each category, you will see “groups” of applications: openmpi, intel, pgi, to name a few. Within each group, there may be several versions to choose from. The group and version are separated with a slash (/).

Default Modules

You probably noticed some modules listed above are suffixed with a “(default)”. The “default” module is the module that will get loaded if you do not specify a version number. For example, we can load the “intel/19.0.0” module by omitting the version number:

$ module load intel

$ module list
Currently Loaded Modulefiles:
  1) pymods/2.7.5      2) perlmods/5.16.3   3) intel/19.0.0
Note: If you plan to load a version of a module that is not the default, 
then you must specify the version in the module load command.

Conflicts and Prerequisites

Some modules conflict with others, and some modules are prerequisites of others. Environment Modules handles both scenarios.

The following is an example of trying to load a module that is dependent upon another:

$ module display braker/2.1.5

module-whatis BRAKER2 is an unsupervised RNA-Seq-based genome annotation with GeneMark-ET and AUGUSTUS
conflict  braker
prereq	  augustus
prereq    bamtools
prereq    genemark
prereq    blast
prereq    samtools
setenv        BRAKER /apps/pkg/braker/2.1.5
prepend-path  PATH /apps/pkg/braker/2.1.5/scripts
$ module load braker/2.1.5
braker/2.1.5(13):ERROR:151: Module 'braker/2.1.5' depends on one of the module(s) 'augustus/3.3.3 augustus/3.3.2 augustus/3.3'
braker/2.1.5(13):ERROR:102: Tcl command execution failed: prereq augustus

You must first load one of the listed augustus modules, as well as one of each of the other prerequisite modules. You can do this in a single command, and you can exclude the versions if you are fine with loading the default versions of each prerequisite module:

$ module load augustus bamtools genemark blast samtools
$ module list
Currently Loaded Modulefiles:
  1) perlmods/5.16.3   3) bamtools/2.5.1    5) augustus/3.3.2    7) blast/2.5.0+
  2) pymods/2.7.5      4) samtools/1.6      6) genemark/4.38

Now you should be able to "module load" braker/2.1.5 without error.

More information

You can find more information about Environment Modules on the project's website.


Torque is an open-source scheduler based on the old PBS scheduler code. The following is a set of directions to assist a user in learning to use Torque to submit jobs to the URC cluster(s). It is tailored specifically to the URC environment and is by no means comprehensive.

Details not covered here can be found in the online Torque documentation.

Some of the sample scripts displayed in the text are not complete, so that the reader can focus specifically on the item being discussed. Full, working examples of scripts and commands are provided in the Examples section at the end of this document.

Submitting a Job

To submit a job to the Copperhead cluster, you must first SSH into the Research Computing submit host. Scheduling a job in Torque requires creating a file that describes the job (in this case a shell script); that file is then given as an argument to the Torque command “qsub” to execute the job.

First of all, here is a sample shell script describing a simple job to be submitted:

#! /bin/bash

# ==== Main ======
date

This script simply runs the ‘date’ command.  To submit it to the scheduler for execution, we use the Torque qsub command:

$ qsub -N "MyJob" -q "copperhead" -l procs=1

This will cause the script (and hence the date command) to be scheduled on the cluster. In this example, the “-N” switch gives the job a name, the “-q” switch is used to route the job to the “copperhead” queue, and the “-l” switch is used to tell Torque (PBS) how many processors your job requests.
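Because qsub prints the new job's ID on stdout, you can capture it in a shell variable for later monitoring or deletion. A sketch (the script name myjob.bash is illustrative, and the guard lets the snippet run harmlessly on a host without Torque):

```shell
# Submit the job and keep its ID for later qstat/qdel calls.
if command -v qsub >/dev/null 2>&1; then
    jobid=$(qsub -N "MyJob" -q "copperhead" -l procs=1 myjob.bash)
    echo "submitted: $jobid"
else
    echo "qsub not available on this host"
fi
```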

Many of the command line options to qsub can also be specified in the shell script itself using Torque (PBS) directives. Using the previous example, our script could look like the following:


#! /bin/bash

# ===== PBS OPTIONS =====
### Set the job name
#PBS -N "MyJob"

### Specify queue to run in
#PBS -q "copperhead"

### Specify number of CPUs for job
#PBS -l procs=1

# ==== Main ======
date

This reduces the number of command line options needed to pass to qsub. Running the command is now simply:

$ qsub

For the entire list of options, see the qsub man page, i.e.

$ man qsub

Standard Output and Standard Error
In Torque, any output that would normally print to stdout or stderr is collected into two files. By default these files are placed in the initial working directory from which you submitted the job and are named:

scriptname.oNNN for stdout
scriptname.eNNN for stderr

where NNN is the job ID number returned by qsub. If we did not specify a job name with -N, the files are named after the submit script. If we named the job with -N (as above) and it was assigned job id 801, the files would be:

MyJob.o801
MyJob.e801

Logs are written to the job’s working directory ($PBS_O_WORKDIR) unless the user specifies otherwise.
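These defaults can be overridden with qsub's output-handling options, either on the command line or as directives in the submit script. A fragment (the file names are illustrative; use either the explicit names or the join option, not both):

```shell
### Write stdout and stderr to explicitly named files...
#PBS -o myjob.out
#PBS -e myjob.err

### ...or merge stderr into the stdout stream with -j:
#PBS -j oe
```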

Monitoring a Job

Monitoring a Torque job is done primarily using the Torque command “qstat.” For instance, to see a list of available queues:

$ qstat -q

To see the status of a specific queue:

$ qstat "queuename"

To see the full status of a specific job:

$ qstat -f  jobid

where jobid is the unique identifier for the job returned by the qsub command.

Deleting a Job

To delete a Torque job after it has been submitted,  use the qdel command:

$ qdel jobid

where jobid is the unique identifier for the job returned by the qsub command.

Monitoring Compute Nodes

To see the status of the nodes associated with a specific queue, use the Torque command pbsnodes(1) (also referred to as qnodes):

$ pbsnodes :queue_name

where queue_name is the name of the queue prefixed by a colon (:). For example:

$ pbsnodes :copperhead

would display information about all of the nodes associated with the “copperhead” queue.  The output includes (for each node) the number of cores available (np= ).  If there are jobs running on the node, each one is listed in the (jobs= ) field.  This shows how many of the available cores are actually in use.

Parallel (MPI) Jobs

Parallel jobs are submitted to Torque in the manner described above, except that you must first ask Torque to reserve the number of processors (cores) you are requesting for your job. This is accomplished using the -l switch to the qsub command:

For example:

$ qsub -q copperhead -l procs=16

would submit my script requesting 16 processors (cores) from the “copperhead” queue. The script would look something like the following:

#! /bin/bash
module load openmpi
mpirun -hostfile $PBS_NODEFILE my_mpi_program

If you need to specify a specific number of processors (cores) per compute host, you can append a colon (:) and the processors-per-node count (ppn) to the number of requested nodes. For example, to request 16 total processors (cores) with only 4 per compute host, the syntax would be:

$ qsub  -q copperhead -l nodes=4:ppn=4

As described previously, options to qsub can be specified directly in the script file. For the example above, the script would look similar to the following:

#! /bin/bash

### Set the job name
#PBS -N MyJob

### Run in the queue named "copperhead"
#PBS -q copperhead
### Specify the number of cpus for your job.
#PBS -l nodes=4:ppn=4

### Load OpenMPI environment module.
module load openmpi

### execute mpirun
mpirun my_mpi_program

Examples of Torque Submit Scripts

NOTE: Additional sample scripts can be found online in /apps/torque/examples.

[1] Simple Job (1 CPU)

#! /bin/bash

#PBS -N MyJob
#PBS -q copperhead
#PBS -l procs=1

# Run program

[2] Parallel Job – 16 Processors (Using OpenMPI)

#! /bin/bash

#PBS -N MyJob
#PBS -q copperhead
#PBS -l procs=16

### load env for Infiniband OpenMPI
module load openmpi/1.10.0-ib

# Run the program "simplempi" with an argument of "30"
mpirun /users/joe/simplempi 30