
High Performance Computing (Schooner)

Highlights

  • 766 compute nodes
  • ~902.5 TFLOPS (trillions of calculations per second)
  • 18,408 CPU cores
  • ~57 TB RAM
  • ~425 TB usable storage
  • Two networks (Infiniband and Ethernet)

Detailed Hardware Specifications:

OSCER Supercomputer Compute Nodes (Current)

Qty   CPUs                                        RAM (GB)
285   dual Intel Xeon Haswell E5-2650 v3          32
142   dual Intel Xeon Haswell E5-2670 v3          64
1     dual Intel Xeon Haswell E5-2650 v3          96
5     dual Intel Xeon Haswell E5-2670 v3          128
72    dual Intel Xeon Haswell E5-2660 v3          32
28    dual Intel Xeon Broadwell E5-2650 v4        64
6     dual Intel Xeon Haswell E5-2650L v3         64
1     dual Intel Xeon Broadwell E5-2650 v4        32
6     dual Intel Xeon Haswell E5-2630 v3          128
7     dual Intel Xeon Haswell E5-2640 v3          96
12    dual Intel Xeon Skylake Gold 6140           96
1     dual Intel Xeon Skylake Gold 6152           384
6     dual Intel Xeon Cascade Lake Gold 6230R     96
5     dual Intel Xeon Skylake Gold 6132           96
30    dual Intel Xeon Cascade Lake Gold 6230      96
24    dual Intel Xeon Cascade Lake Gold 6230      192
3     dual Intel Xeon Ice Lake Gold 6330          128
1     dual Intel Xeon Ice Lake Gold 6330          256
32    dual Intel Xeon Ice Lake Gold 6338          128
1     quad Intel Xeon Haswell E7-4809 v3          3072
1     quad Intel Xeon Haswell E7-4809 v3          1024
1     quad Intel Xeon Haswell E7-4830 v4          2048
1     quad Intel Xeon Cascade Lake 6230           1536
18    dual AMD EPYC Rome 7452                     256
1     dual Intel Xeon Ice Lake 8352S              2048
2     dual AMD EPYC Milan 7543                    512
1     dual AMD EPYC Milan 7543                    1024
56    dual Sandy Bridge E5-2650                   32
15    dual Sandy Bridge E5-2650                   64
5     dual Intel Xeon Phi Knights Landing 7210    48
3     dual Intel Xeon Phi Knights Landing 7230    48
  • Additional capacity has been purchased and will soon be deployed: 34 nodes, 2,160 cores, 8.5 TB RAM and 151.9 TFLOPS peak, for a total of 800 compute nodes, 20,568 CPU cores, ~66 TB RAM and 1054.4 TFLOPS peak (that is, just over 1 PFLOPS).
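The quoted totals are simply the current system plus the new purchase; as a quick check against the figures in the Highlights list above:

\[
\begin{aligned}
766 + 34 &= 800 \ \text{compute nodes},\\
18{,}408 + 2{,}160 &= 20{,}568 \ \text{CPU cores},\\
57 + 8.5 &\approx 66 \ \text{TB RAM},\\
902.5 + 151.9 &= 1054.4 \ \text{TFLOPS} \approx 1.05 \ \text{PFLOPS}.
\end{aligned}
\]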

OSCER Supercomputer Compute Nodes (Purchased, to be deployed in 2022)

Qty   CPUs                              RAM (GB)
15    dual Intel Xeon Ice Lake 6338     128
5     dual Intel Xeon Ice Lake 6338     256
4     dual Intel Xeon Ice Lake 6338     512
3     dual AMD EPYC Rome 7452           128
4     dual AMD EPYC Rome 7452           256
2     dual AMD EPYC Milan 7513          512
1     dual AMD EPYC Rome 7352           1024
  • Accelerator-capable Compute nodes
    • PowerEdge R730
    • 6 × dual Haswell E5-2650v3 10-core 2.3 GHz, 32 GB RAM, dual NVIDIA K20M accelerator cards (3 OSCER, 3 condominium)
    • 12 × dual Haswell E5-2650v3 10-core 2.3 GHz, 32 GB RAM, dual Intel Xeon Phi MIC 31S1P accelerator cards (all OSCER)
    • 13 × dual Haswell E5-2650v3 10-core 2.3 GHz, 32 GB RAM, no accelerator cards (all OSCER)
    • 5 × dual Haswell E5-2670v3 12-core 2.3 GHz, 128 GB RAM, no accelerator cards (all OSCER)
    • 2 NVIDIA V100 GPU cards (owned by a researcher).
    • 18 NVIDIA A100 GPU cards (6 owned by OSCER, 12 owned by researchers).
    • 22 A100 GPU cards have been ordered (8 owned by OSCER, 14 owned by researchers), for a total of 40 A100 GPU cards (14 owned by OSCER, 26 owned by researchers)
  • Large Memory Nodes
    • PowerEdge R930
    • 1 × quad Haswell E7-4809v3 8-core 2.0 GHz, 1 TB RAM (OSCER)
  • Storage
    • High performance parallel filesystem, globally user-accessible: DataDirect Networks EXAScaler SFA7700X, 70 SATA 6 TB disk drives, ~309 TB usable
    • Lower performance servers full of disk drives, globally user-accessible: ~150 TB usable
  • Networks
    • Infiniband: Mellanox FDR10 40 Gbps, 3:1 oversubscribed (40 Gbps ÷ 3 ≈ 13.33 Gbps per node through the core when fully loaded)
      NOTE: 76 compute nodes don’t have Infiniband, at the owner’s discretion.
    • Ethernet: Gigabit Ethernet (GigE) to each compute node, uplinked to a top-of-rack GigE switch, and each GigE switch uplinked at 2 × 10 Gbps Ethernet (10GE) to a pair of 10GE core switches.
  • Operating system
    • CentOS 8
    • Batch scheduler is SLURM
    • Compiler families include Intel, Portland Group (now part of NVIDIA) and GNU, as well as the NAG Fortran compiler; a minimal build-and-submit sketch appears after this list.
  • Schooner is connected to Internet2 and to Internet2’s 100 Gbps national research backbone (Advanced Layer 2 Services)
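As a minimal sketch of the kind of job users build and run on Schooner's CPU nodes with the compilers and scheduler listed above, here is a plain MPI hello-world in C. The compiler wrapper name (mpicc) and any module or partition names are illustrative assumptions, not OSCER-specific settings.

```c
/* hello_mpi.c -- minimal MPI hello-world, sketched for a cluster like Schooner.
 * Build (assumed wrapper name; GNU or Intel toolchain): mpicc hello_mpi.c -o hello_mpi
 * Run under SLURM, e.g.: srun -n 4 ./hello_mpi  (inside an sbatch job) */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, name_len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);                   /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);     /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);     /* total number of ranks */
    MPI_Get_processor_name(name, &name_len);  /* compute node hostname */

    printf("Rank %d of %d running on %s\n", rank, size, name);

    MPI_Finalize();                           /* shut down cleanly */
    return 0;
}
```

A batch job would normally wrap the srun line in an sbatch script that requests nodes, cores, wall time and, for the accelerator-capable nodes, GPUs via SLURM's generic-resource option (e.g. --gres=gpu:1). The exact partition and resource names on Schooner are not given here; check the Support pages or ask support@oscer.ou.edu for the current ones.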

Interested In Using Schooner?

  • Request an OSCER account (new OSCER users only)
  • Contact us at support@oscer.ou.edu for an initial consultation, or if you have questions about your specific use of our HPC systems
  • Check out the help pages in our Support section for detailed information and tutorials