Research Clusters

The Research Computing group provides a Red Hat Linux-based high-performance computing (HPC) environment that includes research clusters and systems of varying capabilities, serving a variety of campus research communities.

ORION (Slurm)

Orion is a general-use Slurm partition available to any faculty-sponsored research project. For more information about submitting jobs to Orion, see the Orion (Slurm) User Notes.
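
As a quick illustration, a minimal Slurm batch script for an Orion job might look like the sketch below. The partition name (Orion), the module name, and the resource figures are assumptions for illustration only; the Orion (Slurm) User Notes are the authoritative reference.

    #!/bin/bash
    #SBATCH --job-name=example        # job name shown in squeue output
    #SBATCH --partition=Orion         # assumed partition name; confirm in the User Notes
    #SBATCH --nodes=1                 # run on a single node
    #SBATCH --ntasks=16               # number of tasks (cores) to allocate
    #SBATCH --mem=64G                 # total memory for the job
    #SBATCH --time=02:00:00           # wall-clock limit (HH:MM:SS)

    # module load openmpi             # site-specific; list available modules with "module avail"
    srun ./my_program                 # launch the program under Slurm

Submit the script with "sbatch myjob.slurm" and check its status with "squeue -u $USER".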

ORION Statistics

  • 57 nodes / 2484 computing cores:
  • 36 nodes with
    • Dual 24-Core Intel Xeon Gold 6248R CPU @ 3.00GHz (48 cores / node)
    • 384GB RAM (8GB / core)
    • 100Gbit EDR Infiniband Interconnect
  • 21 nodes with
    • Dual 18-Core Intel Xeon Gold 6154 CPU @ 3.00GHz (36 cores / node)
    • 388GB RAM (10.7GB / core)
    • 100Gbit EDR Infiniband Interconnect

GPU (Slurm)

GPU is a general-use Slurm partition made up of several GPU compute nodes and is available to any faculty-sponsored research project. For more information about submitting jobs to the GPU partition, see the "Submitting a GPU Job" section in the Orion & GPU (Slurm) User Notes.
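
A sketch of a GPU batch script is shown below, assuming the partition is named GPU and that GPUs are requested through Slurm's generic resource (--gres) option; the exact partition name and GPU type strings are documented in the Orion & GPU (Slurm) User Notes.

    #!/bin/bash
    #SBATCH --job-name=gpu-example    # job name
    #SBATCH --partition=GPU           # assumed partition name; confirm in the User Notes
    #SBATCH --nodes=1                 # single node
    #SBATCH --ntasks=1                # one task driving the GPU
    #SBATCH --gres=gpu:1              # request one GPU on the node
    #SBATCH --mem=24G                 # host memory for the job
    #SBATCH --time=04:00:00           # wall-clock limit (HH:MM:SS)

    # module load cuda                # site-specific; check available modules
    srun ./my_gpu_program             # program that uses the allocated GPU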

GPU Statistics

  • 5 nodes / 80 computing cores:
  • 1 node with
    • Dual 8-Core Intel Xeon Silver 4215R CPU @ 3.20GHz (16 cores total)
    • 192GB RAM (12GB / core)
    • 8 x Titan V GPUs (12GB HBM2 RAM per GPU)
    • 100Gbit EDR Infiniband Interconnect
  • 2 nodes with
    • Dual 8-Core Intel Xeon Silver 4215R CPU @ 3.20GHz (16 cores total)
    • 192GB RAM (12GB / core)
    • 4 x Titan RTX GPUs (24GB GDDR6 RAM per GPU)
    • 100Gbit EDR Infiniband Interconnect
  • 2 nodes with
    • Dual 8-Core Intel Xeon Silver 4215R CPU @ 3.20GHz (16 cores total)
    • 192GB RAM (12GB / core)
    • 4 x Tesla V100s GPUs (32GB HBM2 RAM per GPU)
    • 100Gbit EDR Infiniband Interconnect

COPPERHEAD (PBS)

Copperhead is a general-use cluster available to any faculty-sponsored research project. For more information about submitting jobs to Copperhead, see the Copperhead User Notes.
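
Because Copperhead is scheduled with PBS rather than Slurm, its job scripts use #PBS directives. The sketch below is illustrative only; the queue name and resource figures are assumptions, and the Copperhead User Notes are authoritative.

    #!/bin/bash
    #PBS -N example                   # job name
    #PBS -q copperhead                # assumed queue name; confirm in the User Notes
    #PBS -l nodes=1:ppn=16            # one node, 16 processors per node
    #PBS -l walltime=02:00:00         # wall-clock limit (HH:MM:SS)
    #PBS -l mem=64gb                  # total memory for the job

    cd $PBS_O_WORKDIR                 # start in the directory the job was submitted from
    ./my_program

Submit with "qsub myjob.pbs" and check status with "qstat -u $USER".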

COPPERHEAD Statistics

  • 35 nodes / 592 computing cores:
  • 27 nodes with
    • Dual Intel 3.2GHz 8-core processors – Xeon E5-2667 v3 or v4
    • 256GB RAM (8GB / core)
    • 100Gbit EDR Infiniband Interconnect
  • 4 nodes with
    • Dual Intel 3.2GHz 8-core processors – Xeon E5-2667 v3
    • 128GB RAM (8GB / core)
    • 100Gbit EDR Infiniband Interconnect
    • 2 Nvidia Tesla K80 GPUs
  • 2 nodes with
    • Dual Intel 2.60GHz 4-core processors – Xeon Silver 4112
    • 192GB RAM (24GB / core)
    • 8 NVIDIA GeForce GTX-1080Ti GPUs
  • 1 node with
    • Dual Intel 3.2GHz 8-core processors - Xeon E5-2667 v3
    • 768GB RAM (48GB / core)
    • 100Gbit EDR Infiniband Interconnect
  • 1 node with
    • Quad Intel 2.10GHz 16-core processors – Xeon E7-4850 v4
    • 4TB RAM (62.5GB / core)
    • 100Gbit EDR Infiniband Interconnect


CEPHEUS (Hadoop)

Cepheus is a 192-core Hadoop cluster (8 master/data nodes, 8 worker nodes) with a 290TB Hadoop Distributed File System (HDFS), available for use by faculty and graduate student researchers. We use Cloudera’s Distribution of Hadoop (CDH) to provide the following Hadoop services: HBase, Hive, Hue, Impala, Kudu, Oozie, Spark2, Sqoop 2, and YARN (with MapReduce2).
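
As an illustration of how the Spark2 and YARN services fit together, a submission from a cluster login node might resemble the sketch below. The spark2-submit wrapper is what CDH’s Spark2 parcel normally provides; the script name, HDFS path, and resource figures are hypothetical.

    # Run a PySpark application on YARN in cluster deploy mode.
    # Executor counts and memory sizes below are illustrative, not site policy.
    spark2-submit \
      --master yarn \
      --deploy-mode cluster \
      --num-executors 8 \
      --executor-cores 4 \
      --executor-memory 8G \
      my_analysis.py hdfs:///user/$USER/input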

CEPHEUS Statistics

  • 16 nodes / 192 computing cores (384 threads)
  • 8 master/data nodes with
    • Intel Xeon E5-2640 2.00GHz 8-core processors
    • 64GB RAM
    • 36TB Storage
  • 8 worker nodes with
    • Dual Intel Xeon E5-2667 3.20GHz 8-core processors (16 cores per node)
    • 128GB RAM
  • 100Gbit EDR Infiniband Interconnect
  • 290TB HDFS Storage

STORAGE (NFS & Lustre)

URC provides a unified storage environment that is shared across all of the research clusters.

  • 940 TB of general user (NFS) storage space
  • 2.7 PB of InfiniBand-connected Lustre distributed file system storage (not backed up), used for scratch and large-volume storage needs

Each user is provided with:

  • a 500GB home directory that is backed up for disaster recovery
  • up to 10TB of temporary scratch storage space

Please note that quota extensions are not available for home directories or scratch space.
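
To see how much of these allocations you are using, standard tools suffice; a short sketch follows. It assumes your home directory lives under /home and that /scratch sits on the Lustre file system, so the Lustre "lfs" utility applies there.

    # Disk usage of your home directory (500GB quota, backed up)
    du -sh /home/$USER

    # Lustre usage and quota for your scratch space (up to 10TB, not backed up)
    lfs quota -h -u $USER /scratch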

Scratch space is for temporary data needed by currently running jobs only; it is not meant to hold critical data long term. Scratch is not backed up, and any failure will result in data loss. DO NOT store important data in scratch. If scratch fills, URC staff may delete older data.

Shared storage volumes in /projects and /nobackup are available for research groups upon request and must be owned by a faculty member. (Subject to available space.)

Although URC backs up some file spaces, be sure to maintain an additional copy of critical data outside the cluster.

  • Home directories have 7-day and 4-week backups.
  • /projects have a 7-day backup.
  • /nobackup and /scratch are not backed up.

NEVER modify the permissions on your /home or /scratch directory.  If you need assistance, please contact us.


In addition to the general-use clusters described above, we manage and provide infrastructure support for a number of clusters purchased by individual faculty or research groups to meet their specific needs. These resources include:

SERPENS (Slurm)

SERPENS Statistics

  • 12 nodes / 576 computing cores, each with:
    • Dual 24-Core Intel Xeon Gold 6248R CPU @ 3.00GHz (48 cores / node)
    • 384GB RAM (8GB / core)
    • 100Gbit EDR Infiniband Interconnect
  • 1 (Interactive) node with:
    • Dual 18-Core Intel Xeon Gold 6154 CPU @ 3.00GHz (36 cores)
    • 384GB RAM (10.67GB / core)
  • 73TB dedicated, usable RAID storage (96TB raw)

PISCES (Slurm)

PISCES Statistics

  • 31 nodes / 616 cores:
  • 25 nodes with
    • Dual 8-Core Intel Xeon E5-2667 CPU @ 3.2GHz (16 cores / node)
    • 128GB RAM (8GB / core)
    • 100Gbit EDR Infiniband Interconnect
  • 6 nodes with
    • Dual 18-Core Intel Xeon Gold 6154 CPU @ 3.00GHz (36 cores / node)
    • 388GB RAM (10.7GB / core)
    • 100Gbit EDR Infiniband Interconnect

DRACO (Slurm)

DRACO Statistics

  • 13 nodes / 336 cores:
  • 8 nodes with
    • Dual Intel 3.2GHz 8-core processors – Xeon E5-2667 v3 or v4
    • 128GB RAM (8GB / core)
    • 100Gbit EDR Infiniband Interconnect
  • 4 nodes with
    • Dual Intel 3.0GHz 18-core processors – Xeon Gold 6154
    • 388GB RAM (10.7GB / core)
    • 100Gbit EDR Infiniband Interconnect
  • 1 node with
    • Quad Intel 2.1GHz 16-core processors – Xeon E7-4850 v4
    • 2TB RAM (32GB / core)
    • 100Gbit EDR Infiniband Interconnect

PEGASUS (Slurm)

PEGASUS Statistics

  • 3 nodes / 100 cores:
  • 1 node with
    • Dual Intel 3.0GHz 18-core processors – Xeon Gold 6154
    • 388GB RAM (10.7GB / core)
    • 100Gbit EDR Infiniband Interconnect
  • 2 nodes with
    • Dual Intel 2.6GHz 16-core processors – Xeon E5-2697A v4
    • 256GB RAM (8GB / core)
    • 100Gbit EDR Infiniband Interconnect

HERCULES (Slurm)

HERCULES Statistics

  • 3 nodes / 44 cores:
  • 1 node with
    • Dual 8-Core Intel Xeon E5-2667 CPU @ 3.00GHz (16 cores total)
    • 128GB RAM (8GB / core)
    • 8 NVIDIA Titan X (Pascal) GPUs
  • 1 node with
    • Dual 4-Core Intel Xeon Silver 4112 CPU @ 2.60GHz (8 cores total)
    • 192GB RAM (24GB / core)
    • 8 NVIDIA GeForce GTX-1080Ti GPUs
  • 1 node with
    • Dual 10-Core Intel Xeon Silver 4114 CPU @ 2.20GHz (20 cores total)
    • 192GB RAM (9.6GB / core)
    • 2 x Titan V GPUs (12GB HBM2 RAM per GPU)