Category: Announcements

Copperhead Cluster Announcement

Posted on Tuesday, August 2, 2016 at 1:50 pm

HPC Cluster Users:

This message is to announce the availability of a new cluster, Copperhead, which may be used for any faculty-sponsored research project.

Copperhead contains 68 compute nodes with a total of 1088 computing cores. Four of these nodes also contain an NVIDIA Tesla K80 GPU accelerator. Each compute node includes 128 GB of RAM and an EDR InfiniBand high-speed, low-latency interconnect, and one node has a larger memory configuration of 768 GB.

This cluster will replace the existing Viper cluster, which will be shut down at the beginning of September. In addition to replacing the hardware, we have also moved to Red Hat 7, upgraded many of our applications, and made a number of changes to the queueing system to improve throughput and give users greater control over their job submissions.

As you migrate your work from Viper to Copperhead, there are a few changes you will need to make in your job submissions.

1) The hostname for the Copperhead head node is hpc.uncc.edu. This is the name to use for ssh or scp connections, whether from on campus or off campus. Unlike Viper, which had separate hosts for job submission and interactive use, the Copperhead head node combines these two functions. (See the connection example after this list.)

2) The default queue name that you should use for job submission is copperhead. Use this queue name in your submit scripts and/or on the qsub command line.

3) The syntax for requesting cores has changed slightly. To request cores without specifying the number of cores per node, use “procs=#” instead of “nodes=#”. To specify the number of cores per node, continue to use “nodes=#:ppn=#”.

4) The default maximum wall time has been reduced to 8 hours. Jobs that need to run longer than 8 hours must explicitly set the wall time request. Jobs that exceed their requested wall time will be terminated, but jobs that request shorter run times will be given higher priority.

5) The default maximum job memory space is 2 GB per core/process requested. Jobs that need larger memory spaces must explicitly set a memory request. If you request more than 126 GB of memory, your job will be scheduled on the large-memory node. Jobs that exceed their requested memory will be terminated.

6) If your job requires one or more GPUs, you must include a specific request in your submit script or on the qsub command line. (A sample submit script illustrating these settings is sketched after this list.)
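For example, an on-campus or off-campus connection to the new head node might look like the following, where yourNinerNetID is a placeholder for your own cluster username:

    # log in to the Copperhead head node for interactive use or job submission
    ssh yourNinerNetID@hpc.uncc.edu

    # copy a local file to your home directory on Copperhead
    scp input.dat yourNinerNetID@hpc.uncc.edu:~/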
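Putting items 2 through 6 together, a minimal Torque/PBS submit script might look like the sketch below. The job name, core count, wall time, memory size, and program name are placeholder values, and the GPU line shows common Torque syntax; see the FAQs linked below for the exact forms supported on Copperhead.

    #!/bin/bash
    #PBS -q copperhead               # submit to the default Copperhead queue (item 2)
    #PBS -N example_job              # placeholder job name
    #PBS -l procs=16                 # 16 cores with no per-node layout (item 3)
    ##PBS -l nodes=2:ppn=8           # alternative: specify cores per node instead
    #PBS -l walltime=24:00:00        # needed when running longer than the 8-hour default (item 4)
    #PBS -l mem=32gb                 # needed when exceeding the 2 GB-per-core default (item 5)
    ##PBS -l nodes=1:ppn=4:gpus=1    # uncomment and adjust if the job needs a GPU (item 6)

    cd $PBS_O_WORKDIR                # run from the directory the job was submitted from
    ./my_program                     # placeholder executable

The same options can also be given directly on the qsub command line, for example: qsub -q copperhead -l procs=16,walltime=24:00:00 submit.sh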

More detailed instructions for each of these changes are available in the FAQs on our website at http://urc.uncc.edu/faqs/copperhead-user-notes/


Charles Price, Ph.D.
Director, University Research Computing
The University of North Carolina at Charlotte
9201 University City Blvd
Charlotte, NC  28223
(704) 687-5443          ceprice@uncc.edu

Setting Up and Using Globus / GridFTP for File Transfer

Posted on Tuesday, December 8, 2015 at 10:46 am

MEES Cluster Migration

Posted on Wednesday, July 31, 2013 at 12:46 pm

MEES Cluster Users:

Now that the Viper Cluster has been successfully moved to our new Research Computing server room, we will finally be able to complete the integration of the MEES cluster into the Viper cluster environment. This will allow us to create a single pool of compute nodes and unify the user and project storage spaces, which should greatly improve the ease of use of our clusters.

We will begin this work at 8am on Thursday, August 8th and we expect to be finished by 5pm on Friday, August 9th. The MEES cluster will be unavailable during this time, but all of the other clusters, including Viper, will remain in production. Please plan your use of the MEES cluster with this outage in mind.

Once this work is completed, there will be several changes to the way in which you interact with the cluster. Additional details will be provided in a separate message next week.

We apologize for any inconvenience that this may cause,

Charles Price, Ph.D.

Summer Cluster Schedule

Posted on Tuesday, July 2, 2013 at 8:38 am

HPC Cluster Users:

Our new Research Computing server room has been completed, so we are planning to migrate several of our clusters to this location over the course of the summer. This will require an extended outage for each cluster as it is moved. Based on our current schedule, the Viper cluster will be unavailable beginning on Wednesday, July 24th at 8am.