Educational Cluster

The Research Computing group provides a Red Hat Linux-based high performance computing environment in support of the educational curriculum. There are two educational partitions: Centaurus and GPU. There are 73 TB of dedicated, usable RAID storage (192 TB raw). For more information on usage, please read the Centaurus/GPU User Notes.

Centaurus

The Centaurus partition is dedicated to supporting the integration of HPC resources into the educational curriculum. It is a traditional batch-scheduling environment based on Slurm and is a scaled-down replica of our research computing environment. An example job script is sketched after the hardware summary below.

  • 13 nodes / 208 computing cores
  • 12 general compute nodes with
    • dual Intel Xeon 3.2 GHz 8-core processors – E5-2667 v3 or v4
    • 128 GB RAM (8 GB / core)
    • EDR InfiniBand interconnect
  • 1 large-memory node with
    • dual Intel Xeon 3.2 GHz 8-core processors – E5-2667 v3 or v4
    • 768 GB RAM (48 GB / core)
    • EDR InfiniBand interconnect
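
As a quick illustration of the Slurm batch environment, the following is a minimal sketch of a job script for the Centaurus partition. The partition name comes from this page; the job name, memory request, time limit, and program name (./hello) are assumptions and should be adjusted to match your course instructions and the Centaurus/GPU User Notes.

  #!/bin/bash
  #SBATCH --job-name=hello        # job name shown in the queue
  #SBATCH --partition=Centaurus   # educational CPU partition (per this page)
  #SBATCH --nodes=1               # run on a single node
  #SBATCH --ntasks=1              # one task
  #SBATCH --mem=8G                # matches the 8 GB/core ratio listed above
  #SBATCH --time=00:10:00         # assumed 10-minute limit; adjust as needed

  # "./hello" is a placeholder for your own compiled program
  ./hello

A script like this would be submitted with sbatch (e.g. sbatch hello.sh), and job status can be checked with squeue -u $USER.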

GPU

The GPU partition is likewise dedicated to supporting the integration of HPC resources into the educational curriculum. It consists exclusively of GPU compute nodes and is intended for classes that require GPU computing resources. An example GPU job script is sketched after the hardware summary below.

  • 9 nodes / 136 computing cores / 24 GPUs
  • 8 GPU nodes with
    • dual Intel Xeon 3.2 GHz 8-core processors – E5-2667 v3 or v4
    • 128 GB RAM (8 GB / core)
    • EDR InfiniBand interconnect
    • 2 x NVIDIA Tesla K80 GPU accelerators
  • 1 GPU node with
    • dual Intel Xeon Silver 4112 2.60 GHz 4-core processors
    • 192 GB RAM (24 GB / core)
    • 8 x NVIDIA GeForce GTX 1080 Ti GPUs
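
The following is a minimal sketch of how a GPU job might be requested on this partition. The partition name GPU comes from this page; the --gres request, GPU count, time limit, and program name (./gpu_hello) are assumptions, and the exact GPU request syntax for this cluster should be confirmed in the Centaurus/GPU User Notes.

  #!/bin/bash
  #SBATCH --job-name=gpu-test     # job name shown in the queue
  #SBATCH --partition=GPU         # educational GPU partition (per this page)
  #SBATCH --nodes=1               # run on a single node
  #SBATCH --ntasks=1              # one task
  #SBATCH --gres=gpu:1            # request one GPU (assumed gres syntax)
  #SBATCH --time=00:10:00         # assumed 10-minute limit; adjust as needed

  # "./gpu_hello" is a placeholder for your own CUDA or GPU-enabled program
  ./gpu_hello

Running nvidia-smi inside the job is a simple way to confirm which GPU was allocated.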