The Mamba Cluster is a new HPC resource dedicated to supporting student work on class assignments. Mamba is available to students in a designated set of courses.
Access to Mamba
Mamba can be accessed via SSH to "mamba.urc.uncc.edu" using NinerNET credentials (username and password).1,2 This connects the user to the Mamba Interactive/Submit host, which should be used for tasks such as transferring data via SCP or SFTP and for code development.
From this node, a user can submit jobs requesting the following resources:
- General Compute Nodes (8 nodes with 16 cores/node = 128 procs total)
- GPU Compute Nodes (4 nodes with 16 cores and 2 GPUs/node = 64 procs and 8 GPUs total)
Jobs should always be submitted to the “mamba” queue unless directed otherwise by your instructor.
1 Please note that eduroam is required for access on campus, and VPN is required for access from off campus.
2 Duo is required. (Setup Duo)
Each student is given a default storage quota of 150 GB for their home directory, located at /users/<username>. This volume is backed up nightly. Users can check their current quota usage with the command "urcquota".
Each class also has a shared folder located at /projects/class/<course id> which instructors may use to share information or data with class members.
Mamba uses environment modules to set up the user environment to use specific software packages. Additional details on modules can be found at https://urc.uncc.edu/faqs/applications
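The commands below illustrate a typical modules workflow. These are the standard environment-modules commands; the specific module names available on Mamba may differ, so check with "module avail":

```shell
# List all software packages available via modules
$ module avail

# Load a package into the current environment (e.g. the Intel compilers)
$ module load intel

# Show which modules are currently loaded
$ module list

# Remove a single module, or clear all loaded modules
$ module unload intel
$ module purge
```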
Mamba has access to three compiler suites: GNU, Intel, and PGI, each of which can be accessed via the corresponding environment module. For example,
$ module load intel
$ icc myprogram.c
Some instructors may require students to complete assignments using a specific compiler suite.
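The same program can be compiled with any of the three suites by loading the corresponding module and invoking that suite's C compiler driver. The module names below ("gcc" and "pgi") are assumptions; verify them with "module avail":

```shell
# GNU suite (module name assumed; gcc is the GNU C compiler driver)
$ module load gcc
$ gcc -O2 -o myprogram myprogram.c

# Intel suite (icc is the Intel C compiler driver)
$ module load intel
$ icc -O2 -o myprogram myprogram.c

# PGI suite (module name assumed; pgcc is the PGI C compiler driver)
$ module load pgi
$ pgcc -O2 -o myprogram myprogram.c
```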
Mamba uses a batch scheduling environment to manage access to the computational resources. To submit a job to the scheduler, users must prepare a “submit script”. At its simplest, a submit script (my_script.sh) would look like this:
#! /bin/bash
# ==== Main ======
/users/<username>/myprogram
And would be submitted to the cluster as follows:
qsub -N "MyJob" -q mamba my_script.sh
Submit scripts may also load any needed environment modules and set additional parameters specifying details of the desired execution environment (e.g., number of required processes, memory size, GPU access).
Additional example submit scripts are available for most applications in the folder /apps/torque/examples/ and further details may be found at:
Job Scheduling With Torque
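A more complete serial submit script using common Torque directives might look like the sketch below. The resource values (memory, walltime) are illustrative only; adjust them for your assignment:

```shell
#! /bin/bash
# ===== PBS OPTIONS =====
### Set the job name
#PBS -N "MyJob"
### Run in the queue named "mamba"
#PBS -q mamba
### One core on one node, 2 GB of memory, 1 hour of walltime
#PBS -l nodes=1:ppn=1
#PBS -l mem=2gb
#PBS -l walltime=01:00:00
### Merge stdout and stderr into a single output file
#PBS -j oe
# ==== Main ======
# Run from the directory the job was submitted from
cd $PBS_O_WORKDIR
/users/<username>/myprogram
```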
Parallel Processing with OpenMPI
Mamba supports parallel processing via message passing. To use OpenMPI, load the desired modules, e.g.:
$ module load intel openmpi
$ mpicc myprogram.c
And include a request for multiple processes in the submit script:
#! /bin/bash
# ===== PBS OPTIONS =====
### Set the job name
#PBS -N "MyJob"
### Run in the queue named "mamba"
#PBS -q mamba
### Specify the number of cpus for your job.
#PBS -l nodes=4:ppn=4
# ==== Main ======
module load intel openmpi
mpirun /users/<username>/myprogram
and submit with qsub:
$ qsub my_script.sh
The PBS options may also be set on the qsub command line, as follows:
$ qsub -N "MyJob" -q mamba -l nodes=4:ppn=4 my_script.sh
In this example, the resource request is for 4 cores (or processes) on each of 4 compute nodes for a total of 16 processes.
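Once a job has been submitted, the standard Torque commands can be used to monitor and manage it (the exact output format may vary):

```shell
# Show the status of your own jobs (Q = queued, R = running, C = complete)
$ qstat -u $USER

# Show full details for a specific job
$ qstat -f <jobid>

# Cancel a queued or running job
$ qdel <jobid>
```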
As listed above, the Mamba cluster includes four GPU compute nodes with NVIDIA K80 GPUs. These resources may also be requested on the qsub command line:
$ qsub -l nodes=2:ppn=1:gpus=1 my_script.sh
which would request one core and one GPU on each of two compute nodes.
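The same GPU request can be made inside a submit script. The sketch below is illustrative: the "cuda" module name and the program name my_gpu_program are assumptions, so check "module avail" and substitute your own executable:

```shell
#! /bin/bash
# ===== PBS OPTIONS =====
#PBS -N "MyGpuJob"
#PBS -q mamba
### One core and one GPU on each of two compute nodes
#PBS -l nodes=2:ppn=1:gpus=1
# ==== Main ======
### Module name assumed; check `module avail` for the CUDA toolkit module
module load cuda
cd $PBS_O_WORKDIR
/users/<username>/my_gpu_program
```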