Environment

Red Hat Linux

We have standardized on Red Hat Enterprise Linux to provide a stable platform on which many vendors have certified their software to run. Although a commercial company, Red Hat creates, maintains, and contributes to many free software projects. It has also acquired several proprietary software packages and released their source code, mostly under the GNU GPL, while retaining copyright as a single commercial entity and selling more permissive licenses.

Adaptive Computing

We use TORQUE to manage batch jobs and distributed computing resources. An advanced open-source product based on the original PBS project, TORQUE combines the best of community and professional development, offers significant advances in scalability, reliability, and functionality, and is currently in use at tens of thousands of leading government, academic, and commercial sites throughout the world. We have integrated TORQUE with Adaptive Computing’s Moab, a workload manager that intelligently places workloads and adapts resources to optimize application performance, increase system utilization, and achieve organizational objectives.
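As a sketch of how work is submitted to TORQUE, a minimal job script might look like the following. The job name, resource requests, and walltime here are hypothetical placeholders; consult your cluster's documentation for the queues and limits that actually apply:

```shell
#!/bin/bash
#PBS -N example_job            # job name (hypothetical)
#PBS -l nodes=1:ppn=4          # request 1 node with 4 processors per node
#PBS -l walltime=01:00:00      # one-hour wall-clock limit
#PBS -j oe                     # merge stdout and stderr into one output file

# TORQUE starts jobs in the home directory; change to the submit directory
cd "${PBS_O_WORKDIR:-.}"

msg="Job running on $(hostname)"
echo "$msg"
```

A script like this would typically be submitted with `qsub job.sh`, and its status checked with `qstat`.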

Hadoop

Apache Hadoop is a Java-based software framework, developed and maintained by the Apache Software Foundation, that supports data-intensive distributed applications under a free license. It is designed to scale up from a single server to thousands of machines, each offering local computation and storage. Hadoop was inspired by Google’s MapReduce and the Google File System (GFS).
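The MapReduce model that inspired Hadoop can be illustrated in a few lines of plain Python. This is a conceptual sketch of the map, shuffle, and reduce phases (here applied to the canonical word-count example), not Hadoop's actual API:

```python
from collections import defaultdict

def map_phase(records, mapper):
    # the map phase turns each input record into zero or more (key, value) pairs
    for record in records:
        yield from mapper(record)

def shuffle(pairs):
    # between phases, the framework groups all intermediate values by key
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups, reducer):
    # the reduce phase combines each key's values into a final result
    return {key: reducer(key, values) for key, values in groups.items()}

# word count: map each line to (word, 1) pairs, then sum the counts per word
lines = ["the quick brown fox", "the lazy dog"]
pairs = map_phase(lines, lambda line: ((word, 1) for word in line.split()))
counts = reduce_phase(shuffle(pairs), lambda key, values: sum(values))
# counts["the"] == 2
```

In real Hadoop deployments, the map and reduce functions run in parallel across the cluster's nodes, and the shuffle moves intermediate data between them over the network.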

Cloudera

We have standardized on using Cloudera’s Distribution of Hadoop (CDH) on our clusters, to provide the following Hadoop services: HDFS, HBase, Hive, Hue, Impala, Oozie, Spark and Spark2, Sqoop 2, and YARN (w/ MapReduce2).

Intel, Dell, and NVIDIA CUDA

Most of the Research Computing clusters are made up of Intel Xeon-based Dell servers. We have a mix of models and generations, from Dell PowerEdge R410s to R930s. We offer compute nodes with different compute capabilities, so if you need large-memory nodes or GPU nodes, we’ve got you covered. Our GPU nodes use NVIDIA Tesla K80s, and our large-memory nodes range from 1 TB up to 4 TB of RAM in a single system. For a more detailed overview of the types of systems that make up each individual cluster, please check out our “Research Clusters” and “Education Clusters” pages. Research Computing also provides an extensive set of applications and codes for use by our researchers on the clusters.

We believe in using Free/Open-Source Software (F/OSS) in our environment whenever possible.