
Compute Resources

Explore the range of high-performance computing resources available for genomics, cancer research, and bioinformatics at NUS and beyond.


🏒 CSI Compute Resources​

GeDAC Cloudflow

TL;DR:

GeDAC Cloudflow lets CSI and NUS researchers run any Nextflow pipeline easily, with no advanced bioinformatics or IT skills needed. Focus on your research, not on setup.

Sign up →

GeDAC Cloudflow is a fully managed, cloud-based platform for analyzing large-scale sequencing data. It helps researchers process and manage genomics datasets quickly and securely, with simple scaling and easy collaboration.

GeDAC Server

TL;DR:

GeDAC Server is a dedicated, high-performance computer with powerful CPUs, lots of memory, and two advanced GPUs. It’s easy to use and ideal for any researcher who needs fast computing for data analysis, deep learning, or large genomics projects.

Request Project Consultation →

Overview:
GeDAC Server is a high-performance, single-node system designed for GPU-accelerated research at the Cancer Science Institute, NUS. It features:

  • CPU: 64-core/128-thread
  • RAM: 2TB
  • GPU: 2× NVIDIA H100
  • Storage: 240TB home, 49TB SSD scratch

Best suited for researchers needing a dedicated environment for GPU-intensive workflows, large datasets, or custom interactive analysis.
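
A quick way to confirm the server environment sees both cards is to list the visible GPUs and their memory. The snippet below is a minimal sanity-check sketch, assuming a PyTorch build with CUDA support is installed; it is not a GeDAC-specific tool.

```python
import torch

# Minimal hardware sanity check; assumes a PyTorch build with CUDA support.
if not torch.cuda.is_available():
    raise SystemExit("No CUDA devices visible - check drivers or your session.")

for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    # On GeDAC Server this should report two H100 cards.
    print(f"GPU {i}: {props.name}, {props.total_memory / 1e9:.0f} GB")
```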


🏛️ NUS Compute Resources

Access robust computational resources to accelerate discoveries in bioinformatics and cancer science research.

NUS Vanda – High-Throughput Computing (HTC) Cluster

TL;DR:

NUS Vanda is an in-house HTC cluster for general computational workloads. It has no InfiniBand interconnect, but offers substantial CPU-based computing power for parallel processing.

Book Vanda Cluster →

Key Specs:

  • CPU: 336× Intel Xeon Platinum 8452Y (36 cores each)
  • RAM per node: 256GB DDR5

Vanda supports parallel execution across 10–15 nodes, with each node running independent tasks; there are no high-speed interconnects (e.g., InfiniBand) between nodes.
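
Because nodes are not linked by a high-speed interconnect, Vanda suits workloads that parallelize within a node or run as independent per-node tasks. The sketch below shows the within-node pattern using Python's multiprocessing; the per-sample function and input layout are placeholders, and launching one such job per node through the cluster scheduler is not shown.

```python
from multiprocessing import Pool
from pathlib import Path

def process_sample(path: Path) -> str:
    """Placeholder per-sample task (e.g., QC or filtering on a single file)."""
    return f"{path.name}: done"

if __name__ == "__main__":
    samples = sorted(Path("data").glob("*.fastq.gz"))  # placeholder input layout
    # Fan independent tasks out across the node's cores; no inter-node communication is needed.
    with Pool(processes=36) as pool:  # one worker per core on a 36-core Vanda node
        for result in pool.imap_unordered(process_sample, samples):
            print(result)
```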


NUS Hopper – AI-Optimized High-Performance Cluster

TL;DR:

NUS Hopper provides cutting-edge hardware for AI-driven research, deep learning, and computational biology.

Book Hopper Cluster →

Key Specs:

  • GPU Nodes: 6, each with:
    • 8× NVIDIA H100-80GB GPUs
    • 2× Intel 8480+ CPUs (56 cores each)
    • 2TB RAM
    • 400Gb InfiniBand NDR & 100GbE storage fabric
  • Scratch Storage: 200TB

Supports containerized workflows (Singularity, Docker) for scalable, reproducible research.
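
One common way to use all eight GPUs on a Hopper node is to pin one worker process to each card. The sketch below does this by setting CUDA_VISIBLE_DEVICES per subprocess; train_shard.py and its arguments are hypothetical placeholders for your own per-shard script, which in practice would usually run inside the Singularity or Docker container mentioned above.

```python
import os
import subprocess

NUM_GPUS = 8  # one worker per H100 on a Hopper node

procs = []
for gpu in range(NUM_GPUS):
    # Restrict each worker to a single GPU.
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu))
    # train_shard.py is a placeholder for your own per-shard training/analysis script.
    procs.append(subprocess.Popen(
        ["python", "train_shard.py", "--shard", str(gpu), "--num-shards", str(NUM_GPUS)],
        env=env,
    ))

# Wait for all workers and surface any failures.
exit_codes = [p.wait() for p in procs]
if any(exit_codes):
    raise SystemExit(f"One or more workers failed: {exit_codes}")
```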


🌏 External Compute Resources

NSCC ASPIRE2A – National Supercomputing Resource

TL;DR:

ASPIRE2A offers state-of-the-art HPC with massive parallelism, fast interconnects, and large-scale storage, ideal for genomics, simulations, and data-intensive science.

ASPIRE2A is best suited for large-scale, CPU-intensive workloads such as whole-genome analysis, population-scale studies, and high-throughput simulations. It offers massive parallelism with over 100,000 CPU cores, fast interconnects, and a large shared file system, making it ideal for workflows requiring high memory and I/O performance across many nodes.

Apply for NSCC Project →

Key Specs:

  • System: AMD-based Cray EX supercomputer
  • Storage: 25PB GPFS, 10PB Lustre
  • Interconnect: Slingshot high-speed
  • GPU Nodes: 4× NVIDIA A100-40GB SXM per node

Pricing Table:

| Resource Type | RIE-Funded (S$) | Non-RIE (Standard) (S$) | Non-RIE (Premium) (S$) |
| --- | --- | --- | --- |
| CPU Core Hours | 0.006 | 0.022 | 0.033 |
| GPU Card Hours – A100 (40GB) | 0.79 | 2.43 | 3.11 |
| GPU Card Hours – H100 | 1.26 | 4.40 | 4.80 |
| HDD Storage (GB-month) | 0.021 | 0.029 | 0.029 |
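
To gauge budget impact before applying, the published rates can be plugged into a rough estimate. The sketch below hard-codes the RIE-funded rates from the table above; the workload sizes in the example are placeholder assumptions, not benchmarks.

```python
# ASPIRE2A RIE-funded rates from the pricing table above (S$ per unit).
RATES_RIE = {
    "cpu_core_hour": 0.006,
    "a100_card_hour": 0.79,
    "h100_card_hour": 1.26,
    "hdd_gb_month": 0.021,
}

def estimate_cost(cpu_core_hours=0.0, a100_hours=0.0, h100_hours=0.0,
                  storage_gb=0.0, months=1.0, rates=RATES_RIE):
    """Return an estimated cost in S$ for a mix of compute and storage usage."""
    return (cpu_core_hours * rates["cpu_core_hour"]
            + a100_hours * rates["a100_card_hour"]
            + h100_hours * rates["h100_card_hour"]
            + storage_gb * months * rates["hdd_gb_month"])

# Example: 50,000 CPU core-hours plus 5 TB stored for 6 months (placeholder numbers).
print(f"Estimated cost: S${estimate_cost(cpu_core_hours=50_000, storage_gb=5_000, months=6):,.2f}")
```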

NSCC ASPIRE2A+ AI – National Supercomputing Resource

TL;DR:

ASPIRE2A+ AI is optimized for GPU-accelerated tasks like deep learning, image analysis, and AI model training.

ASPIRE 2A+ AI is optimized for GPU-accelerated tasks such as deep learning, image analysis, and AI model training. With high-memory NVIDIA H100 GPUs and local NVMe storage, it is ideal for researchers running TensorFlow, PyTorch, or GPU-enabled genomics tools that benefit from fast compute and data access within a node.

Apply for NSCC Project →

Key Specs:

  • DGX SuperPOD: 40 DGX H100 systems (320 NVIDIA H100 GPUs)
  • Networking: 400 Gb/s NVIDIA InfiniBand
  • Memory: 2TB per DGX H100
  • Storage: 27.5PB home, 2.5PB scratch
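
The H100 nodes are built for mixed-precision training. Below is a minimal PyTorch sketch of that pattern, assuming PyTorch is available on the system; the model and data are toy placeholders standing in for a real genomics or imaging workload.

```python
import torch
from torch import nn

device = "cuda" if torch.cuda.is_available() else "cpu"
# Toy placeholder model; real jobs would load an actual architecture and dataset.
model = nn.Sequential(nn.Linear(1024, 512), nn.ReLU(), nn.Linear(512, 2)).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    x = torch.randn(256, 1024, device=device)
    y = torch.randint(0, 2, (256,), device=device)
    optimizer.zero_grad(set_to_none=True)
    # bfloat16 autocast exploits the H100 tensor cores for faster training.
    with torch.autocast(device_type=device, dtype=torch.bfloat16):
        loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```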

ASPIRE 2A vs ASPIRE 1:

| Feature | ASPIRE 2A vs ASPIRE 1 |
| --- | --- |
| Processing Cores | >3.5× more |
| Physical Footprint | 5× reduction (1.5× fewer nodes) |
| NVIDIA GPUs | 2× more |
| Computational Power | 7× more |

☁️ AWS Cloud Platform

TL;DR:

AWS is ideal for workloads needing rapid scaling, flexible compute, or short-term high-performance resources. Best for fast turnaround when shared HPC queues are slow, but costs rise with large datasets.

AWS Cloud Platform is well-suited for bioinformatics workloads that require rapid scaling, flexible compute configurations, or short-term access to high-performance resources. It is especially advantageous when fast turnaround is critical and queue times on shared HPC systems are a bottleneck. While AWS offers flexibility and speed, storage and data transfer costs increase with dataset size, making it most cost-effective for small to mid-sized workloads. Budget considerations are key, and effective cost management requires proper automation, monitoring, and workflow orchestration.
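
Because storage cost grows with dataset size, it helps to track a project's S3 footprint as pipelines run. The sketch below sums object sizes under a prefix using boto3, assuming AWS credentials are already configured; the bucket and prefix names are placeholders.

```python
import boto3

def bucket_size_gb(bucket: str, prefix: str = "") -> float:
    """Sum object sizes under a prefix to see how much data a project is storing."""
    s3 = boto3.client("s3")
    paginator = s3.get_paginator("list_objects_v2")
    total_bytes = 0
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            total_bytes += obj["Size"]
    return total_bytes / 1e9

# Placeholder names; substitute your own project bucket and run prefix.
print(f"Current footprint: {bucket_size_gb('my-genomics-bucket', prefix='runs/2024/'):.1f} GB")
```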


📊 Resource Comparison

| Attribute | GeDAC Server | GeDAC Cloudflow | NUS Vanda | NUS Hopper | NSCC ASPIRE2A | NSCC ASPIRE2A+ AI | AWS Cloud Platform |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Type | Dedicated GPU system | Managed cloud platform | HTC | HPC, AI-optimized | National HPC | National AI-optimized HPC | Cloud platform |
| Target Workload | Deep learning, GPU pipelines, large-memory genomics/cancer | Nextflow pipelines, genomics | General CPU | GPU-accelerated AI/ML, deep learning | Large-scale HPC | GPU-accelerated AI/ML, deep learning | Flexible, scalable compute |
| CPU | 1× 64-core/128-thread | Cloud-based, managed | 336× Xeon 8452Y (36 cores) | 2× Xeon 8480+ (56 cores) per node, 6 nodes | AMD Cray EX | 2× Xeon per DGX H100, 40 systems | Configurable |
| RAM/Node | 2TB | Cloud-based, managed | 256GB DDR5 | 2TB per node | Varies | 2TB per DGX H100 | Configurable |
| GPU | 2× NVIDIA H100 | Cloud-based, managed | None | 8× NVIDIA H100 80GB per node, 6 nodes | 4× NVIDIA A100 40GB SXM per node | 8× NVIDIA H100 per DGX H100, 40 systems | Configurable |
| Interconnect | Standard Ethernet | Cloud-based, managed | Standard Ethernet | 400Gb InfiniBand NDR + 100GbE | HPE Slingshot | 400Gb/s NVIDIA InfiniBand | 10/25/100GbE, EFA, etc. |
| Storage | 240TB home, 49TB SSD scratch | Cloud-based, managed | Not specified | 200TB high-performance scratch | 25PB GPFS + 10PB Lustre | 27.5PB home, 2.5PB scratch | EBS, S3, FSx, etc. |
| Location | CSI, NUS | Cloud (CSI, NUS) | NUS | NUS | Off-campus (NSCC) | Off-campus (NSCC) | Cloud (AWS region) |
| Best For | Dedicated, high-performance GPU workflows, large datasets, interactive analysis | Easy, scalable Nextflow pipelines | Parallel CPU tasks, moderate use | Deep learning, AI, large-memory | Large-scale simulations, genomics | Large-scale AI, deep learning, GPU-accelerated genomics | Rapid scaling, short-term compute, flexible workloads |
| Access | Project consultation | Self-signup | Internal NUS | Internal NUS | NSCC account | NSCC account | AWS account |

Need help choosing or optimizing resources?
Contact our GeDAC team for guidance →