Copperhead Cluster Announcement

Posted on Tuesday, August 2, 2016 at 1:50 pm

HPC Cluster Users:

This message announces the availability of a new cluster, Copperhead, which may be used for any faculty-sponsored research project.

Copperhead contains 68 compute nodes with a total of 1088 computing cores. Four of these nodes also contain an NVIDIA Tesla K80 GPU accelerator. Each compute node includes 128 GB of RAM and an EDR InfiniBand high-speed, low-latency interconnect, and one node has a larger memory configuration of 768 GB.

This cluster will replace the existing Viper cluster, which will be shut down at the beginning of September. In addition to replacing the hardware, we have moved to Red Hat 7, upgraded many of our applications, and made a number of changes to the queueing system to improve throughput and give users greater control over their job submissions.

As you migrate your work from Viper to Copperhead, there are a few changes that you will need to make in your job submissions.

1) The hostname for the Copperhead head node is hpc.uncc.edu. This is the name you will use for ssh or scp connections from both on and off campus. Unlike Viper, which had separate hosts for job submission and interactive use, the Copperhead head node combines these two functions. (A short connection example appears after this list.)

2) The default queue name that you should use for job submission is copperhead. Use this queue name in your submit scripts and/or on the qsub command line, as shown in the sample submit script after this list.

3) The syntax for requesting cores has changed slightly. To request cores without specifying the number of cores per node, use “procs=#” instead of “nodes=#”. To specify the number of cores per node, continue to use “nodes=#:ppn=#”.

4) The default maximum wall time has been reduced to 8 hours. Jobs that need to run longer than 8 hours must explicitly set the wall time request. Jobs that exceed their requested wall time will be terminated, but jobs that request shorter run times will be given higher priority.

5) The default maximum job memory is 2 GB per requested core/process. Jobs that need larger memory spaces must explicitly set a memory request. If you request more than 126 GB of memory, your job will be scheduled on the large-memory node. Jobs that exceed their requested memory will be terminated.

6) If your job requires one or more GPUs, you must include a specific GPU request in your submit script or on the qsub command line (see the note in the sample script below).
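
As an illustration of item 1, connecting to Copperhead and copying files looks like the following. This is only a sketch; "yourNinerNetID" and the file names are placeholders, not values from this announcement.

    # Log in to the Copperhead head node (replace yourNinerNetID with your own ID)
    ssh yourNinerNetID@hpc.uncc.edu

    # Copy an input file from your local machine to your home directory on Copperhead
    scp input.dat yourNinerNetID@hpc.uncc.edu:~/

    # Copy results back from Copperhead to your current local directory
    scp yourNinerNetID@hpc.uncc.edu:~/results.out .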
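
Items 2 through 6 can be pulled together in a single submit script. The script below is a minimal sketch with placeholder values (job name, core count, wall time, memory amount, and program name); the GPU request line and the mem keyword are common Torque forms that are not spelled out in this announcement, so confirm the exact syntax in the FAQs linked below.

    #!/bin/bash
    # Sample Copperhead submit script -- all resource values are illustrative.
    #PBS -N my_job                   # job name (placeholder)
    #PBS -q copperhead               # item 2: submit to the copperhead queue
    #PBS -l procs=16                 # item 3: 16 cores, no per-node layout specified
    ##PBS -l nodes=2:ppn=8           # item 3 alternative: 2 nodes with 8 cores each
    #PBS -l walltime=24:00:00        # item 4: request 24 hours (default maximum is 8 hours)
    #PBS -l mem=64gb                 # item 5: request 64 GB total (default is 2 GB per core;
                                     #         requests above 126 GB go to the large-memory node)
    ##PBS -l nodes=1:ppn=16:gpus=1   # item 6: explicit GPU request -- this attribute is an
                                     #         assumption; check the FAQs for the exact form

    cd $PBS_O_WORKDIR                # run from the directory the job was submitted from
    ./my_program                     # placeholder for your own executable

The same requests can also be given on the qsub command line, for example: qsub -q copperhead -l procs=16,walltime=24:00:00 myjob.sh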

More detailed instructions for each of these changes are available in the FAQs on our website at http://urc.uncc.edu/faqs/copperhead-user-notes/


Charles Price, Ph.D.
Director, University Research Computing
The University of North Carolina at Charlotte
9201 University City Blvd
Charlotte, NC  28223
(704) 687-5443          ceprice@uncc.edu