


Cyberinfrastructure Training and InfoShares

  • Getting Started on Big Red II, Karst and Mason

Getting started on Big Red II

Big Red II is Indiana University’s main system for high-performance parallel computing. With a theoretical peak performance (Rpeak) of one thousand trillion floating-point operations per second (1 petaFLOPS), Big Red II is among the world’s fastest research supercomputers. Owned and operated solely by IU, Big Red II is designed to accelerate discovery in a wide variety of fields, including medicine, physics, fine arts, and global climate research, and enable effective analysis of large, complex data sets (i.e., big data).

Big Red II is a Cray XE6/XK7 supercomputer with a hybrid architecture providing a total of 1,020 compute nodes:

  • 344 CPU-only compute nodes, each containing two AMD Opteron 16-core Abu Dhabi x86_64 CPUs and 64 GB of RAM
  • 676 CPU/GPU compute nodes, each containing one AMD Opteron 16-core Interlagos x86_64 CPU, one NVIDIA Tesla K20 GPU accelerator with a single Kepler GK110 GPU, and 32 GB of RAM

Big Red II runs a proprietary variant of Linux called Cray Linux Environment (CLE). In CLE, compute elements run a lightweight kernel called Compute Node Linux (CNL), and the service nodes run SUSE Enterprise Linux Server (SLES). All compute nodes are connected through the Cray Gemini interconnect.
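
Because the compute nodes run the lightweight CNL kernel, executables are launched onto them from inside a batch job with Cray's aprun command rather than run directly on a login or service node. The sketch below is illustrative only: it assumes a TORQUE-style batch script (the scheduler used on Karst and Mason), and the queue name, core count, and program name are placeholders, so check the IU Knowledge Base for the exact options in use on Big Red II.

    #!/bin/bash
    #PBS -l nodes=1:ppn=32,walltime=00:30:00   # one CPU-only node (2 x 16-core Opterons)
    #PBS -q cpu                                # placeholder queue name
    #PBS -N bigred2_example

    cd $PBS_O_WORKDIR

    # On CLE, only aprun places work on the CNL compute nodes; a command run
    # without aprun executes on a service node instead.
    aprun -n 32 ./my_mpi_program               # placeholder executable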

A selection of IU Knowledge Base documents can help you get started using Big Red II; for additional documentation, search the Knowledge Base. For a printable summary of helpful Big Red II information, download the Big Red II cheat sheet (in PDF format). For slides and lab files from past high-performance computing workshops, see the Research Technologies CI Training page.


Getting help

Support for research computing systems at Indiana University is provided by various units within the Systems area of the Research Technologies division of UITS.

To ask any other question about Research Technologies systems and services, use the Request help or information form.


Getting started on Karst

Karst (karst.uits.iu.edu) is Indiana University’s newest high-throughput computing cluster. Designed to deliver large amounts of processing capacity over long periods of time, Karst’s system architecture provides IU researchers the advanced performance needed to accommodate high-end, data-intensive applications critical to scientific discovery and innovation. Karst also serves as a “condominium cluster” environment for IU researchers, research labs, departments, and schools.

Karst is equipped with 256 compute nodes, plus 16 dedicated data nodes for separate handling of data-intensive operations. All nodes are IBM NeXtScale nx360 M4 servers, each equipped with two Intel Xeon E5-2650 v2 8-core processors. Each compute node has 32 GB of RAM and 250 GB of local disk storage. Each data node has 64 GB of RAM and 24 TB of local storage. All nodes run Red Hat Enterprise Linux (RHEL) 6 and are connected via 10-gigabit Ethernet to the IU Science DMZ.

Karst provides batch processing and node-level co-location services that make it well suited for running high-throughput and data-intensive parallel computing jobs. Karst uses TORQUE integrated with Moab Workload Manager to coordinate resource management and job scheduling. The Data Capacitor II and Data Capacitor Wide Area Network (DC-WAN) parallel file systems are mounted for temporary storage of research data. The Modules environment management package on Karst allows users to dynamically customize their shell environments.
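
As a minimal sketch of how those pieces fit together (the module name, resource limits, and program name below are placeholders, not Karst defaults), a TORQUE job script that uses the Modules package might look like the following. It would be submitted with qsub and monitored with qstat, while Moab commands such as checkjob report scheduling details.

    #!/bin/bash
    #PBS -l nodes=1:ppn=16,walltime=01:00:00   # a full Karst compute node has 16 cores
    #PBS -N karst_example

    # Modules adjusts the shell environment per job; 'module avail' lists installed software
    module load gcc                            # placeholder module name

    cd $PBS_O_WORKDIR
    ./my_program input.dat > output.txt        # placeholder executable and files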

Several IU Knowledge Base documents can help you get started running compute jobs on Karst; for additional documentation, search the Knowledge Base.


Getting help

Support for research computing systems at Indiana University is provided by various units within the Systems area of the Research Technologies division of UITS.

To ask any other question about Research Technologies systems and services, use the Request help or information form.


Getting started on Mason

Mason (mason.indiana.edu) at Indiana University is a large-memory computer cluster configured to support data-intensive, high-performance computing tasks for researchers using genome assembly software (particularly software suited to assembling data from next-generation sequencers), large-scale phylogenetic software, or other genome analysis applications that require large amounts of computer memory. Mason accounts are available to IU faculty, postdoctoral fellows, research staff, and students involved in genome research; IU educators providing instruction on genome analysis software and developers of such software are also welcome to use Mason. IU has also made Mason available to genome researchers in the National Science Foundation's Extreme Science and Engineering Discovery Environment (XSEDE) project.

Mason consists of 18 Hewlett-Packard (HP) DL580 servers, each containing four Intel Xeon L7555 8-core processors and 512 GB of RAM, and two HP DL360 login nodes, each containing two Intel Xeon E5-2600 processors and 24 GB of RAM. The total RAM in the system is 9 TB. Each server chassis has a 10-gigabit Ethernet connection to the other research systems at IU and the XSEDE network (XSEDENet).

Mason nodes run Red Hat Enterprise Linux (RHEL 6.x). The system uses TORQUE integrated with Moab Workload Manager to coordinate resource management and job scheduling. The Data Capacitor II and Data Capacitor Wide Area Network (DC-WAN) parallel file systems are mounted for temporary storage of research data. The Modules environment management package on Mason allows users to dynamically customize their shell environments.
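
Because Mason exists for memory-hungry genome analyses, job scripts there typically state their memory needs explicitly. The sketch below is illustrative only: the vmem figure, walltime, module name, and command are placeholders, and the exact resource syntax should be confirmed against the IU Knowledge Base.

    #!/bin/bash
    #PBS -l nodes=1:ppn=8,vmem=200gb,walltime=24:00:00   # placeholder memory and walltime requests
    #PBS -N mason_example

    # The Modules package works the same way here as on Karst
    module load my_assembler                   # placeholder module name

    cd $PBS_O_WORKDIR
    my_assembler reads.fastq -o assembly_out   # placeholder command, input, and output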

A selection of IU Knowledge Base documents can help you get started using Mason; for additional documentation, search the Knowledge Base. For slides and lab files from past high-performance computing workshops, see the Research Technologies CI Training page.


Getting help

Support for research computing systems at Indiana University is provided by various units within the Systems area of the Research Technologies division of UITS.

To ask any other question about Research Technologies systems and services, use the Request help or information form.
