HPC Clusters

URC provides a Red Hat Linux based high-performance computing environment that includes HPC clusters and systems of various capabilities, serving a variety of campus research communities.

 

COPPERHEAD

Copperhead is a general-use cluster that is available for use by any faculty-sponsored research project.

Copperhead Portal Login

Copperhead Statistics

  • 87 nodes / 1536 computing cores
  • 79 nodes with
    • dual Intel Xeon 3.2 GHz 8-core processors – E5-2667 v3 or v4
    • 128 GB RAM (8 GB/core)
    • EDR InfiniBand interconnect
  • 4 nodes with
    • dual Intel Xeon 3.2 GHz 8-core processors – E5-2667 v3
    • 128 GB RAM (8 GB/core)
    • EDR InfiniBand interconnect
    • Nvidia Tesla K80 GPU Accelerator
  • 1 node with
    • dual Intel Xeon 3.2 GHz 8-core processors – E5-2667 v3
    • 768 GB RAM (48 GB/core)
    • EDR InfiniBand interconnect
  • 2 nodes with
    • quad Intel Xeon 2.1 GHz 16-core processors – E7-4850 v4
    • 2 TB RAM (32 GB/core)
    • EDR InfiniBand interconnect
  • 1 node with
    • quad Intel Xeon 2.1 GHz 16-core processors – E7-4850 v4
    • 4 TB RAM (64 GB/core)
    • EDR InfiniBand interconnect

In addition, Copperhead provides infrastructure support for a number of resources that were purchased by individual faculty or research groups to meet their specific needs.  These resources include:

  • 31 nodes / 648 cores (Steelhead)
  • 25 nodes with
    • dual Intel Xeon 3.2 GHz 8-core processors – E5-2667 v3
    • 128 GB RAM (8 GB/core)
    • EDR InfiniBand interconnect
  • 2 nodes with
    • quad AMD 2.5 GHz 16-core processors – Opteron 6380
    • 512 GB RAM (8 GB/core)
  • 2 nodes with
    • dual Intel Xeon 3.2 GHz 8-core processors – E5-2667 v3
    • 768 GB RAM (48 GB/core)
  • 1 node with
    • quad Intel Xeon 2.5 GHz 16-core processors – E7-8867 v3
    • 512 GB RAM (8 GB/core)
  • 1 node with
    • dual Intel Xeon 2.2 GHz 12-core processors – E5-2650 v4
    • 1.5 TB RAM (64 GB/core)
  • 1 node / 16 cores (Titan)
    • dual Intel Xeon 3.2 GHz 8-core processors – E5-2667 v3
    • 128 GB RAM (8 GB/core)
    • 8 Titan X (Pascal) GPUs
  • 2 nodes / 64 cores
    • dual Intel Xeon 2.6 GHz 16-core processors – E5-2697A v4
    • 256 GB RAM (8 GB/core)
    • EDR InfiniBand interconnect
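
Most Copperhead nodes are connected by EDR InfiniBand, so jobs that span multiple nodes are typically coordinated with MPI. As a rough illustration only (not URC-specific documentation), the sketch below assumes an MPI installation with Python's mpi4py package available; the availability of those modules on Copperhead is an assumption.

    # hello_mpi.py -- minimal sketch of a multi-node MPI program in Python.
    # Assumes mpi4py and an MPI runtime are available; this page does not
    # state which MPI stacks or Python modules URC actually provides.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()           # this process's ID within the job
    size = comm.Get_size()           # total number of MPI processes
    name = MPI.Get_processor_name()  # hostname of the node running this rank

    # Each rank reports where it landed; ranks on different nodes exchange
    # messages over the cluster's InfiniBand fabric.
    print(f"Rank {rank} of {size} running on {name}")

Such a script would typically be launched with something like mpirun -n <ranks> python hello_mpi.py from inside a batch job; the exact scheduler and launch syntax used on Copperhead are not covered on this page.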

 

MAMBA

Mamba is a new cluster that will be dedicated to supporting the integration of HPC resources into the educational curriculum. It is currently planned to be available for a few pilot courses during the fall 2016 semester.

  • 12 nodes / 192 computing cores
  • 8 nodes with
    • dual Intel Xeon 3.2 GHz 8-core processors – E5-2667 v3 or v4
    • 128 GB RAM (8 GB/core)
    • EDR InfiniBand interconnect
  • 4 nodes with
    • dual Intel Xeon 3.2 GHz 8-core processors – E5-2667 v3 or v4
    • 128 GB RAM (8 GB/core)
    • EDR InfiniBand interconnect
    • Nvidia Tesla K80 GPU Accelerator
  • 73 TB of dedicated, usable RAID storage (192 TB raw)

Mamba User Notes

 

COBRA

Cobra is an NIH-funded cluster that is primarily dedicated to a group of research projects described in the original proposal; however, eight percent of its resources are reserved for general use by any faculty research project. General-use access also includes any CPU time that is not used by the named projects.

COBRA Portal Login

COBRA Statistics

  • 59 nodes / 708 computing cores
    • dual Intel Xeon 2.93 GHz 6-core processors – X5670
    • 36 GB RAM (3 GB/core)
  • Gigabit Ethernet interconnect
  • 1 node / 40 computing cores
    • quad Intel Xeon 2.40 GHz 10-core processors – E7-4870
    • 1 TB RAM (25 GB/core)
    • 7 TB local disk

 

PYTHON

Python is a GPGPU cluster available to all faculty research projects that can take advantage of the GPU architecture.

PYTHON Statistics

  • 15 nodes / 180 computing cores / 45 GPUs
    • dual Intel Xeon 2.67 GHz 6-core processors – X5650
    • 12 GB RAM (1 GB/core)
    • 3 Nvidia Fermi M2050 GPU cards
  • QDR InfiniBand interconnect
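
Codes use these GPUs by offloading data-parallel kernels to the device. The sketch below is a generic example using Numba's CUDA support; whether Numba (or PyCUDA, CUDA C, etc.) is installed on Python is an assumption, and the Fermi-generation cards would require correspondingly older toolkit versions.

    # vector_add.py -- minimal sketch of a GPU vector add with Numba CUDA.
    # Generic example; the availability of numba on this cluster is an assumption.
    import numpy as np
    from numba import cuda

    @cuda.jit
    def vector_add(a, b, out):
        i = cuda.grid(1)          # global thread index
        if i < out.size:
            out[i] = a[i] + b[i]

    n = 1_000_000
    a = np.random.rand(n).astype(np.float32)
    b = np.random.rand(n).astype(np.float32)

    # Copy inputs to the GPU, allocate the output there, launch, copy back.
    d_a = cuda.to_device(a)
    d_b = cuda.to_device(b)
    d_out = cuda.device_array_like(a)

    threads_per_block = 256
    blocks = (n + threads_per_block - 1) // threads_per_block
    vector_add[blocks, threads_per_block](d_a, d_b, d_out)

    out = d_out.copy_to_host()
    assert np.allclose(out, a + b)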

 

TAIPAN (HADOOP)

A 192-core Hadoop cluster (4 masters, 16 slaves) with an 87 TB Hadoop Distributed File System (HDFS), available for use by faculty and graduate student researchers.

TAIPAN Statistics

  • 16 nodes / 192 computing cores
    • dual Intel Xeon 2.93 GHz 6-core processors – X5670
    • 64 GB RAM (5.3 GB/core)
  • Gigabit Ethernet interconnect
  • QDR InfiniBand (IPoIB)
  • 87 TB local disk storage (~5.43 TB/node)
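
A common way to use a Hadoop cluster from Python is Hadoop Streaming, where the mapper and reducer are ordinary scripts that read stdin and write tab-separated key/value pairs to stdout. The sketch below is a generic word-count pair; the file names and any HDFS paths are placeholders, not Taipan-specific settings.

    # mapper.py -- emits (word, 1) pairs, one per line, for Hadoop Streaming.
    import sys

    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")

    # reducer.py -- sums counts per word; Hadoop delivers keys already sorted.
    import sys

    current_word, current_count = None, 0
    for line in sys.stdin:
        word, count = line.rstrip("\n").split("\t", 1)
        if word == current_word:
            current_count += int(count)
        else:
            if current_word is not None:
                print(f"{current_word}\t{current_count}")
            current_word, current_count = word, int(count)
    if current_word is not None:
        print(f"{current_word}\t{current_count}")

Scripts like these would typically be submitted through the Hadoop Streaming jar, passing them via -mapper and -reducer along with HDFS input and output paths; the exact jar location and Hadoop version on Taipan are not documented here.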

 

SIDEWINDER

A small cluster primarily dedicated to the research of a single faculty member’s group. This cluster is an example of our partnership program, in which individual faculty members or research groups may invest their funds in cluster resources that are then added to the URC HPC environment.

SIDEWINDER Statistics

  • 30 nodes / 504 computing cores
  • 18 nodes with
    • dual Intel Xeon 2.4 GHz 8-core processors – E5-2665
    • 128 GB RAM (8 GB/core)
    • QDR InfiniBand interconnect
  • 9 nodes with
    • dual Intel Xeon 2.7 GHz 8-core processors – E5-2680
    • 128 GB RAM (8 GB/core)
    • QDR InfiniBand interconnect
  • 3 nodes with
    • dual Intel Xeon 2.7 GHz 12-core processors – E5-2697
    • 256 GB RAM (10.6 GB/core)
    • QDR InfiniBand interconnect
  • 73 TB of dedicated, usable RAID storage (96 TB raw)

 

GEM

A small cluster dedicated to geospatial modelling in the Center for Applied Geographic Information Science (CAGIS).

GEM Statistics

  • 2 nodes / 64 computing cores
    • quad AMD Opteron 2.0 GHz 8-core processors – 6128 HE
    • 64 GB RAM (2 GB/core)

 

Storage

URC provides a unified storage environment that is shared across all of the research clusters.

  • 300 TB of general user storage space
  • 100 TB of scratch storage
  • 333 TB Lustre parallel file system, primarily used for large-volume storage needs

Every user is provided with a 500 GB personal quota that is backed up for disaster recovery.
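
As a rough way to see how much of that quota a directory tree is using, a user could sum file sizes from Python. This is a generic sketch, not a URC-provided tool; it simply walks the home directory, and the 500 GB figure is the quota stated above.

    # quota_check.py -- rough tally of disk usage under a directory tree.
    # Generic sketch; assumes the home directory is the quota'd area.
    import os

    def tree_size_bytes(root):
        total = 0
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    total += os.path.getsize(path)
                except OSError:
                    pass  # skip broken symlinks or unreadable files
        return total

    if __name__ == "__main__":
        used_gb = tree_size_bytes(os.path.expanduser("~")) / 1e9
        print(f"Approximately {used_gb:.1f} GB used of the 500 GB quota")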

Shared storage volumes are available for research groups upon request.