Research Clusters

The Research Computing group provides a Red Hat Linux-based high-performance computing environment that includes HPC research clusters and systems of varying capabilities, serving a variety of campus research communities.


COPPERHEAD

Copperhead is a general-use cluster that is available for use in any faculty-sponsored research project. For more information about submitting jobs to Copperhead, check out the Copperhead User Notes.
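Job submission details (the scheduler, queue and partition names, and the available software modules) are covered in the Copperhead User Notes. As a rough illustration only, the sketch below assumes a Slurm-style scheduler whose sbatch command is available on a login node; the partition name, resource requests, and executable are placeholders rather than Copperhead-specific values.

    # Hypothetical sketch of submitting a batch job from Python.
    # Assumes a Slurm-style scheduler ("sbatch" on the PATH); all names and
    # resource requests below are placeholders -- see the Copperhead User Notes
    # for the scheduler and settings actually in use.
    import subprocess
    from pathlib import Path

    batch_lines = [
        "#!/bin/bash",
        "#SBATCH --job-name=example_job",
        "#SBATCH --nodes=1",
        "#SBATCH --ntasks-per-node=16",           # placeholder core count
        "#SBATCH --time=01:00:00",
        "#SBATCH --partition=example_partition",  # placeholder partition name
        "",
        "srun ./my_application",                  # replace with your own program
    ]

    script_path = Path("example_job.sh")
    script_path.write_text("\n".join(batch_lines) + "\n")

    # Submit the script and print the scheduler's reply (e.g. the new job ID).
    result = subprocess.run(["sbatch", str(script_path)],
                            capture_output=True, text=True, check=True)
    print(result.stdout.strip())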

COPPERHEAD Statistics

  • 96 nodes / 2060 computing cores:
  • 63 nodes with
    • dual Intel 3.2GHz 8-core processors – Xeon E5-2667 v3 or v4
    • 256GB RAM (8GB / core)
    • EDR InfiniBand interconnect
  • 4 nodes with
    • dual Intel 3.2GHz 8-core processors – Xeon E5-2667 v3
    • 128GB RAM (8GB / core)
    • EDR InfiniBand interconnect
    • 2 NVIDIA Tesla K80 GPUs
  • 2 nodes with
    • dual Intel 2.60GHz 4-core processors – Xeon Silver 4112
    • 192GB RAM (24GB / core)
    • 8 NVIDIA GeForce GTX-1080Ti GPUs
  • 1 node with
    • dual Intel 3.2GHz 8-core processors – Xeon E5-2667 v3
    • 768GB RAM (48GB / core)
    • EDR InfiniBand interconnect
  • 1 node with
    • quad Intel 2.10GHz 16-core processors – Xeon E7-4850 v4
    • 4TB RAM (62.5GB / core)
    • EDR InfiniBand interconnect
  • 2 nodes with
    • dual Intel 2.6GHz 16-core processors – Xeon E5-2697A v4
    • 256GB RAM (8GB / core)
    • EDR InfiniBand interconnect
  • 23 nodes with
    • dual Intel 3.0GHz 18-core processors – Xeon Gold 6154
    • 388GB RAM (10.7GB / core)
    • EDR InfiniBand interconnect

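Several of the Copperhead nodes listed above carry NVIDIA GPUs (Tesla K80 and GeForce GTX-1080Ti). A quick way to confirm which devices a job can see is to query nvidia-smi, which ships with the NVIDIA driver; the minimal sketch below assumes only that nvidia-smi is on the PATH of the GPU node the job landed on.

    # Minimal sketch: list the GPUs visible on the current node via nvidia-smi.
    # Assumes only that the NVIDIA driver (and therefore nvidia-smi) is
    # installed, as it would be on the GPU nodes described above.
    import subprocess

    query = ["nvidia-smi",
             "--query-gpu=index,name,memory.total",
             "--format=csv,noheader"]
    result = subprocess.run(query, capture_output=True, text=True, check=True)

    for gpu in result.stdout.strip().splitlines():
        print(gpu)   # e.g. "0, Tesla K80, 11441 MiB" on a K80 node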

TAIPAN (HADOOP)

A 192-core Hadoop cluster (4 Masters, 16 Slaves) with an 87TB Hadoop Distributed File System (HDFS) available for use by faculty and graduate student researchers. We use Cloudera’s Distribution of Hadoop (CDH) to provide the following Hadoop services: HBase, Hive, Hue, Impala, Kudu, Oozie, Spark and Spark2, Sqoop 2, and YARN (w/ MapReduce2).
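Since Spark is among the services provided, a small PySpark script run against HDFS is a typical starting point. The sketch below is illustrative only: the HDFS input path is a placeholder, and how the script is launched (for example with spark-submit from an edge node) depends on the cluster's CDH configuration.

    # Illustrative PySpark word count against HDFS.
    # The input path is a placeholder, not a real Taipan path.
    from operator import add
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("wordcount-example").getOrCreate()

    # Read a text file from HDFS and split it into words.
    lines = spark.read.text("hdfs:///user/<username>/input.txt").rdd.map(lambda r: r[0])
    counts = (lines.flatMap(lambda line: line.split())
                   .map(lambda word: (word, 1))
                   .reduceByKey(add))

    # Show a small sample of (word, count) pairs.
    for word, count in counts.take(10):
        print(word, count)

    spark.stop()

A script like this would normally be handed to the cluster with spark-submit rather than run directly on a login node.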

TAIPAN Statistics

  • 16 nodes / 192 computing cores:
    • dual Intel 2.93GHz 6-core processors – Xeon X5670
    • 64GB RAM (5.3GB / core)
  • Gigabit Ethernet interconnect
  • 87TB local disk storage (~5.43TB / node)


In addition to Copperhead and Taipan, we provide infrastructure support through our partnership program for a number of resources that were purchased by individual faculty or research groups to meet their specific needs. These resources include:


HAMMERHEAD

HAMMERHEAD Statistics

  • 3 nodes / 192 cores:
  • 2 nodes with
    • quad Intel 2.1GHz 16-core processors – Xeon E7-4850 v4
    • 2TB RAM (32GB / core)
    • EDR InfiniBand interconnect
  • 1 node with
    • quad Intel 2.1GHz 16-core processors – Xeon E7-4850 v4
    • 4TB RAM (64GB / core)
    • EDR InfiniBand interconnect


STEELHEAD

STEELHEAD Statistics

  • 60 nodes / 1412 cores:
  • 40 nodes with
    • dual Intel 3.2GHz 8-core processors – Xeon E5-2667 v3 or v4
    • 128GB RAM (8GB / core)
    • QDR InfiniBand interconnect
  • 13 nodes with
    • dual Intel 3.0GHz 18-core processors – Xeon Gold 6154
    • 388GB RAM (10.7GB / core)
    • EDR InfiniBand interconnect
  • 2 (interactive) nodes with
    • quad AMD 2.5 GHz 16-core processors – Opteron 6380
    • 512GB RAM (8GB / core)
  • 2 (interactive) nodes with
    • dual Intel 3.2GHz 8-core processors – Xeon E5-2667 v3
    • 768GB RAM (48GB / core)
  • 1 (interactive) node with
    • quad Intel 2.5 GHz 16-core processors – Xeon E7-8867 v3
    • 512GB RAM (8GB / core)
  • 1 (interactive) node with
    • dual Intel 2.2GHz 12-core processors – Xeon E5-2650 v4
    • 1.5TB RAM (64GB / core)
  • 1 (interactive) node with
    • dual Intel 2.2GHz 10-core processors – Xeon E5-2630 v4
    • 1TB RAM (52GB / core)


TITAN

TITAN Statistics

  • 2 nodes / 24 cores:
  • 1 node with
    • dual Intel 3.2GHz 8-core processors – Xeon E5-2667 v3
    • 128GB RAM (8GB / core)
    • 8 NVIDIA Titan X (Pascal) GPUs
  • 1 node with
    • dual Intel 2.60GHz 4-core processors – Xeon Silver 4112
    • 192GB RAM (24GB / core)
    • 8 NVIDIA GeForce GTX-1080Ti GPUs


SIDEWINDER

SIDEWINDER Statistics

  • 29 nodes / 488 computing cores:
  • 17 nodes with
    • dual Intel 2.4 GHz 8-core processors – Xeon E5-2665
    • 128GB RAM (8GB / core)
    • QDR InfiniBand interconnect
  • 9 nodes with
    • dual Intel 2.7 GHz 8-core processors – Xeon E5-2680
    • 128GB RAM (8GB / core)
    • QDR InfiniBand interconnect
  • 3 nodes with
    • dual Intel 2.7 GHz 12-core processors – Xeon E5-2697
    • 256GB RAM (10.6GB / core)
    • QDR InfiniBand interconnect
  • 73TB dedicated, usable RAID storage (96TB raw)


STORAGE

URC provides a unified storage environment that is shared across all of the research clusters.

  • 940 TB of general user storage space
  • 2.7 PB of InfiniBand connected Lustre distributed file system storage space used for scratch and large volume storage needs

Each user is provided with:

  • a 500 GB home directory that is backed up for disaster recovery 
  • up to 10TB of temporary scratch storage space

Scratch space is intended only for temporary data needed by currently running jobs; it is not meant to hold critical data long term. Scratch is not backed up, and any failure will result in data loss. DO NOT store important data in scratch. If scratch fills, URC staff may delete older data. A common pattern that respects this policy is sketched below.
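The usual approach is to stage inputs from your home directory into scratch at the start of a job, compute against scratch, and copy back only the results worth keeping before the job ends. The sketch below assumes scratch directories live under /scratch/<username>; the actual path convention on these systems may differ, so check with URC before relying on it.

    # Sketch of the stage-in / compute / stage-out pattern for scratch space.
    # The /scratch/<username> layout is an assumption, not a documented path.
    import getpass
    import shutil
    from pathlib import Path

    home = Path.home()
    scratch = Path("/scratch") / getpass.getuser() / "my_job"   # assumed layout
    scratch.mkdir(parents=True, exist_ok=True)

    # Stage in: copy inputs from home (backed up) into scratch (fast, not backed up).
    shutil.copy2(home / "input.dat", scratch / "input.dat")

    # ... run the computation here, reading and writing under `scratch` ...

    # Stage out: copy only the results worth keeping back to the backed-up home directory.
    shutil.copy2(scratch / "results.dat", home / "results.dat")

    # Clean up so scratch does not fill with stale data.
    shutil.rmtree(scratch)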

Shared storage volumes are available for research groups upon request.

NEVER modify the permissions on your /home or /scratch directory.  If you need assistance, please contact us.