Turing

The Center maintains a supercomputer cluster known as Turing.
Faculty may request access for themselves or on behalf of their students by submitting the TSC Access Request form.
Operating System

The cluster runs on Rocky Linux, a robust and stable Linux distribution.

Head Nodes

The system includes a virtual head node to manage computing resources.

Storage Nodes

Two types of storage are available:

  • Flash Storage for high-speed data access.
  • Capacity Storage for larger data sets.
CPU Computing Nodes
  • 40 compute nodes dedicated to processing.
  • SLURM queues for job scheduling and workload distribution.
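
Jobs on a SLURM-managed cluster are normally submitted as batch scripts. The sketch below shows the common shape of such a script; the resource values are placeholders, and the actual queue (partition) names on Turing would come from running `sinfo`:

```bash
#!/bin/bash
# Minimal SLURM batch script sketch. All values here are illustrative;
# check `sinfo` and local documentation for Turing's real partitions.
#SBATCH --job-name=example
#SBATCH --nodes=1             # one compute node
#SBATCH --ntasks=8            # eight tasks (cores)
#SBATCH --time=01:00:00       # wall-clock limit, hh:mm:ss
#SBATCH --output=%x-%j.out    # log file named from job name and job ID

srun ./my_program             # launch the work under SLURM's control
```

A script like this is submitted with `sbatch job.sh`, and its place in the queue can be checked with `squeue -u $USER`.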
GPU Computing Nodes

The system includes multiple GPU configurations:

  • Tesla T4 for fundamental GPU computing.
  • A100 (single) for undergraduate research.
  • A100 (2-way NVLink) for research requiring enhanced GPU communication.
  • H200 (4-way NVLink) for very large memory applications, AI training, and intensive research.
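
On SLURM systems, GPUs are typically requested as generic resources (`--gres`). The type names below (`t4`, `a100`, `h200`) are assumptions based on the hardware listed above; the names actually configured on Turing would appear in `scontrol show node` output:

```bash
# Sketch of GPU requests in a SLURM batch script; gres type names
# are illustrative, not confirmed for Turing.
#SBATCH --gres=gpu:t4:1       # one Tesla T4 for basic GPU work
#SBATCH --gres=gpu:a100:1     # a single A100
#SBATCH --gres=gpu:h200:4     # all four NVLinked H200s on one node
```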
High-Speed Interconnect

The cluster is equipped with InfiniBand and 10Gbps networking for efficient data transfer between nodes.


Applications Available in Turing

  • Apache Subversion
  • CMake
  • Conformer–Rotamer Ensemble Sampling Tool (CREST)
  • Dalton
  • DeePMD-kit
  • DeepModeling
  • git
  • GNU Multiple Precision Arithmetic Library (GMP)
  • GNU nano 
  • GPAW (grid-based projector-augmented wave method)
  • Open MPI
  • Orca Slicer
  • python3
  • R
  • TensorFlow for Python
  • VASP
  • WebMo 
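
HPC clusters commonly expose installed applications through an environment-modules system (Lmod or similar); assuming Turing follows that convention, the tools above would be loaded into a session like this, with the exact module names taken from `module avail`:

```bash
module avail                  # list all installed software modules
module load openmpi python3   # load tools into the current shell
module list                   # confirm which modules are active
```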

 
