High Performance Computing (Schooner)
Highlights
- 766 compute nodes
- ~902.5 TFLOPS (trillions of calculations per second)
- 18,408 CPU cores
- ~57 TB RAM
- ~425 TB usable storage
- Two networks (Infiniband and Ethernet)
Detailed Hardware Specifications:
OSCER Supercomputer Compute Nodes (Current)
Qty | CPUs | RAM (GB) | Qty | CPUs | RAM (GB) |
---|---|---|---|---|---|
285 | dual Intel Xeon Haswell E5-2650 v3 | 32 | 142 | dual Intel Xeon Haswell E5-2670 v3 | 64 |
1 | dual Intel Xeon Haswell E5-2650 v3 | 96 | 5 | dual Intel Xeon Haswell E5-2670 v3 | 128 |
72 | dual Intel Xeon Haswell E5-2660 v3 | 32 | 28 | dual Intel Xeon Broadwell E5-2650 v4 | 64 |
6 | dual Intel Xeon Haswell E5-2650L v3 | 64 | 1 | dual Intel Xeon Broadwell E5-2650 v4 | 32 |
6 | dual Intel Xeon Haswell E5-2630 v3 | 128 | 7 | dual Intel Xeon Haswell E5-2640 v3 | 96 |
12 | dual Intel Xeon Skylake Gold 6140 | 96 | 1 | dual Intel Xeon Skylake Gold 6152 | 384 |
6 | dual Intel Xeon Cascade Lake Gold 6230R | 96 | 5 | dual Intel Xeon Skylake Gold 6132 | 96 |
30 | dual Intel Xeon Cascade Lake Gold 6230 | 96 | 24 | dual Intel Xeon Cascade Lake Gold 6230 | 192 |
3 | dual Intel Xeon Ice Lake Gold 6330 | 128 | 1 | dual Intel Xeon Ice Lake Gold 6330 | 256 |
32 | dual Intel Xeon Ice Lake Gold 6338 | 128 | | | |
1 | quad Intel Xeon Haswell E7-4809 v3 | 3072 | 1 | quad Intel Xeon Haswell E7-4809 v3 | 1024 |
1 | quad Intel Xeon Broadwell E7-4830 v4 | 2048 | 1 | quad Intel Xeon Cascade Lake Gold 6230 | 1536 |
18 | dual AMD EPYC Rome 7452 | 256 | 1 | dual Intel Xeon Ice Lake Platinum 8352S | 2048 |
2 | dual AMD EPYC Milan 7543 | 512 | 1 | dual AMD EPYC Milan 7543 | 1024 |
56 | dual Intel Xeon Sandy Bridge E5-2650 | 32 | 15 | dual Intel Xeon Sandy Bridge E5-2650 | 64 |
5 | dual Intel Xeon Phi Knights Landing 7210 | 48 | 3 | dual Intel Xeon Phi Knights Landing 7230 | 48 |
- Additional capacity has been purchased and will soon be deployed: 34 nodes, 2,160 cores, 8.5 TB RAM and 151.9 TFLOPS peak, for a total of 800 compute nodes, 20,568 CPU cores, ~66 TB RAM and 1,054.4 TFLOPS peak (that is, just over 1 PFLOPS); see the sanity check after the table below.
OSCER Supercomputer Compute Nodes (Purchased, to be deployed in 2022)
Qty | CPUs | RAM (GB) | Qty | CPUs | RAM (GB) |
---|---|---|---|---|---|
15 | dual Intel Xeon Ice Lake Gold 6338 | 128 | 5 | dual Intel Xeon Ice Lake Gold 6338 | 256 |
4 | dual Intel Xeon Ice Lake Gold 6338 | 512 | | | |
3 | dual AMD EPYC Rome 7452 | 128 | 4 | dual AMD EPYC Rome 7452 | 256 |
2 | dual AMD EPYC Milan 7513 | 512 | 1 | dual AMD EPYC Rome 7352 | 1024 |
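The combined figures quoted above are simple sums of the current and purchased capacity; a quick sanity check in plain Python, using only numbers stated on this page (nothing here is measured):

```python
# Sanity check of the aggregate figures quoted above (current + purchased).
# All inputs are taken directly from this page.

current = {"nodes": 766, "cores": 18_408, "tflops": 902.5}
addition = {"nodes": 34, "cores": 2_160, "tflops": 151.9}

total_nodes = current["nodes"] + addition["nodes"]     # 800
total_cores = current["cores"] + addition["cores"]     # 20,568
total_tflops = current["tflops"] + addition["tflops"]  # 1054.4

assert total_nodes == 800
assert total_cores == 20_568
assert round(total_tflops, 1) == 1054.4

print(f"{total_nodes} nodes, {total_cores} cores, {total_tflops:.1f} TFLOPS "
      f"({total_tflops / 1000:.3f} PFLOPS)")
```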
- Accelerator-Capable Compute Nodes
- PowerEdge R730
- 6 × dual Haswell E5-2650v3 10-core 2.3 GHz, 32 GB RAM, dual NVIDIA K20M accelerator cards (3 OSCER, 3 condominium)
- 12 × dual Haswell E5-2650v3 10-core 2.3 GHz, 32 GB RAM, dual Intel Xeon Phi MIC 31S1P accelerator cards (all OSCER)
- 13 × dual Haswell E5-2650v3 10-core 2.3 GHz, 32 GB RAM, no accelerator cards (all OSCER)
- 5 × dual Haswell E5-2670v3 12-core 2.3 GHz, 128 GB RAM, no accelerator cards (all OSCER)
- 2 NVIDIA V100 GPU cards (owned by a researcher)
- 18 NVIDIA A100 GPU cards (6 owned by OSCER, 12 owned by researchers)
- 22 A100 GPU cards have been ordered (8 owned by OSCER, 14 owned by researchers), for a total of 40 A100 GPU cards (14 owned by OSCER, 26 owned by researchers)
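Which accelerator a job lands on matters, since K20M, V100, and A100 cards differ widely in capability. As a minimal sketch, assuming NVIDIA's standard nvidia-smi utility is available on the GPU nodes (it ships with the NVIDIA driver; this is not an OSCER-specific tool), a job can log its assigned card(s) like this:

```python
# Minimal sketch: query the NVIDIA driver for the GPU(s) visible to this job.
# Assumes nvidia-smi is on PATH; illustrative, not an official OSCER tool.
import subprocess

def visible_gpus() -> list[str]:
    """Return one 'name, total memory' line per visible NVIDIA GPU."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip().splitlines()

if __name__ == "__main__":
    for line in visible_gpus():
        print(line)  # e.g. "NVIDIA A100-PCIE-40GB, 40960 MiB"
```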
- Large Memory Nodes
- PowerEdge R930
- 1 × quad Haswell E7-4809v3 8-core 2.0 GHz, 1 TB RAM (OSCER)
- Storage
- High performance parallel filesystem, globally user-accessible: DataDirect Networks Exascaler SFX7700X, 70 SATA 6 TB disk drives, ~309 TB usable
- Lower performance servers full of disk drives, globally user-accessible: ~150 TB usable
- Networks
- Infiniband: Mellanox FDR10 40 Gbps, 3:1 oversubscribed (13.33 Gbps)
  NOTE: 76 compute nodes don’t have Infiniband, at the owner’s discretion.
- Ethernet: Gigabit Ethernet (GigE) to each compute node, uplinked to a top-of-rack GigE switch, and each GigE switch uplinked at 2 × 10 Gbps Ethernet (10GE) to a pair of 10GE core switches
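The 13.33 Gbps figure is simply the 40 Gbps FDR10 link rate divided by the 3:1 oversubscription ratio, i.e. the worst-case per-node share when the fabric is fully loaded:

```python
# Worst-case per-node Infiniband bandwidth under full fabric load:
# link rate divided by the oversubscription ratio.
link_gbps = 40.0        # Mellanox FDR10 link rate
oversubscription = 3.0  # 3:1 oversubscribed
print(f"{link_gbps / oversubscription:.2f} Gbps")  # -> 13.33 Gbps
```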
- Operating system: Linux
- External connectivity: Schooner is connected to Internet2 and to Internet2’s 100 Gbps national research backbone (Advanced Layer 2 Services)
Interested In Using Schooner?
- Request an OSCER account (new OSCER users only)
- Contact us at support@oscer.ou.edu for an initial consultation, or if you have questions regarding your specific use of our HPC systems
- Check out the help pages in our Support section for detailed information and tutorials