High Performance Computing (Schooner)

Highlights

  • 919 compute nodes
  • ~1.77 PFLOPS (quadrillions of calculations per second)
  • 29,428 CPU cores
  • ~93 TB RAM
  • 500+ TB globally accessible public user storage
  • Two networks (Infiniband and Ethernet)
  • 89 NVIDIA GPUs (total machine learning performance equivalent to 63.5 H100 80 GB PCIe GPUs)

 

Detailed Hardware Specifications:

OSCER Supercomputer Compute Nodes (Current)

About half of these nodes are for general use by everyone, and the other half are "condominium" nodes owned by specific research groups.

Qty | CPUs                                      | Cores  | RAM (GB)
----|-------------------------------------------|--------|---------
  1 | dual Intel Xeon Ice Lake Platinum 8352S   | 2 x 32 |     512
 85 | dual Intel Xeon Ice Lake Gold 6338        | 2 x 32 |     128
 22 | dual Intel Xeon Ice Lake Gold 6338        | 2 x 32 |     256
 15 | dual Intel Xeon Ice Lake Gold 6338        | 2 x 32 |     512
  2 | dual Intel Xeon Ice Lake Gold 6338        | 2 x 32 |    2048
  8 | dual Intel Xeon Ice Lake Gold 6330        | 2 x 28 |     128
  1 | dual Intel Xeon Ice Lake Gold 6330        | 2 x 28 |     256
  2 | dual AMD EPYC Milan 7543                  | 2 x 32 |     512
  1 | dual AMD EPYC Milan 7543                  | 2 x 32 |    1024
  2 | dual AMD EPYC Milan 7513                  | 2 x 32 |     512
  1 | dual AMD EPYC Milan 7513                  | 2 x 32 |    1024
  3 | dual AMD EPYC Rome 7542                   | 2 x 32 |     256
 12 | dual AMD EPYC Rome 7452                   | 2 x 32 |     128
 38 | dual AMD EPYC Rome 7452                   | 2 x 32 |     256
  2 | dual AMD EPYC Rome 7452                   | 2 x 32 |     512
  1 | dual AMD EPYC Rome 7352                   | 2 x 24 |    1024
  1 | dual Intel Xeon Cascade Lake Gold 6230R   | 2 x 26 |      96
  1 | quad Intel Xeon Cascade Lake Gold 6230    | 4 x 20 |    1536
 41 | dual Intel Xeon Cascade Lake Gold 6230    | 2 x 20 |      96
 27 | dual Intel Xeon Cascade Lake Gold 6230    | 2 x 20 |     192
  1 | dual Intel Xeon Skylake Gold 6152         | 2 x 22 |     384
 12 | dual Intel Xeon Skylake Gold 6140         | 2 x 18 |      96
  5 | dual Intel Xeon Skylake Gold 6132         | 2 x 14 |      96
  1 | quad Intel Xeon Broadwell E7-4830 v4      | 4 x 14 |    2048
  1 | dual Intel Xeon Broadwell E5-2650 v4      | 2 x 12 |      32
 28 | dual Intel Xeon Broadwell E5-2650 v4      | 2 x 12 |      64
  1 | quad Intel Xeon Haswell E7-4809 v3        | 4 x 8  |    3072
  1 | quad Intel Xeon Haswell E7-4809 v3        | 4 x 8  |    1024
142 | dual Intel Xeon Haswell E5-2670 v3        | 2 x 12 |      64
  5 | dual Intel Xeon Haswell E5-2670 v3        | 2 x 12 |     128
  6 | dual Intel Xeon Haswell E5-2650L v3       | 2 x 12 |      64
 72 | dual Intel Xeon Haswell E5-2660 v3        | 2 x 10 |      32
285 | dual Intel Xeon Haswell E5-2650 v3        | 2 x 10 |      32
  1 | dual Intel Xeon Haswell E5-2650 v3        | 2 x 10 |      96
  7 | dual Intel Xeon Haswell E5-2640 v3        | 2 x 8  |      96
  6 | dual Intel Xeon Haswell E5-2630 v3        | 2 x 8  |     128
  5 | dual Intel Xeon Phi Knights Landing 7210  | 2 x 64 |      48
  3 | dual Intel Xeon Phi Knights Landing 7230  | 2 x 64 |      48
 56 | dual Sandy Bridge E5-2650                 | 2 x 8  |      32
 15 | dual Sandy Bridge E5-2650                 | 2 x 8  |      64

OSCER Supercomputer Compute Nodes (Purchased, to be deployed in 2024)

Qty | CPUs                                                        | RAM (GB)
----|-------------------------------------------------------------|---------
  3 | dual Intel Xeon Sapphire Rapids 6430 with 2x RTX 6000 Ada   |     256
  8 | dual Intel Xeon Sapphire Rapids 6430 with 2x H100 80 GB     |     512
  3 | dual Intel Xeon Sapphire Rapids 4410Y with 2x RTX 6000 Ada  |     256


Additional nodes will be ordered in late Jan 2024.

 

  • Accelerators (Graphics Processing Units)
    • Already Delivered: a subtotal of 59 GPUs (14 owned by OSCER, 45 owned by researchers), specifically:
      • 8 H100 80 GB GPU cards (owned by researchers)
      • 22 A100 80 GB GPU cards (8 owned by OSCER, 14 owned by researchers)
      • 27 A100 40 GB GPU cards (6 owned by OSCER, 21 owned by researchers)
        (49 A100 GPU cards in total: 14 owned by OSCER, 35 owned by researchers)
      • 2 NVIDIA V100 GPU cards (owned by a researcher)
    • Purchased and planned: a subtotal of 30 GPUs (18 owned by OSCER, 12 owned by researchers), specifically:
      • 16 H100 80 GB GPUs (12 owned by OSCER, 4 owned by researchers)
      • 14 NVIDIA RTX 6000 Ada Generation 48 GB GPUs (6 owned by OSCER, 8 owned by researchers)

 

OSCER has been awarded the following grant:

National Science Foundation grant # OAC-2201561

"CC* Compute: OneOklahoma Cyberinfrastructure Initiative Research Accelerator for Machine Learning (OneOCII-RAML)"

We anticipate that this grant will fund 15 - 25 NVIDIA H100 GPU cards, plus the servers they reside in.

  • Storage
    • High performance parallel filesystem, globally user-accessible: DataDirect Networks Exascaler SFA7700X, 70 SATA 6 TB disk drives, ~309 TB usable
    • Lower performance servers full of disk drives, globally user-accessible: ~150 TB usable
  • Networks
    • Infiniband: Mellanox FDR10 40 Gbps, 3:1 oversubscribed (13.33 Gbps)
      NOTE: 76 compute nodes don’t have Infiniband, at the owner’s discretion.
    • Ethernet: Gigabit Ethernet (GigE) to each compute node, uplinked to a top-of-rack GigE switch, and each GigE switch uplinked at 2 × 10 Gbps Ethernet (10GE) to a pair of 10GE core switches.
  • Operating system
    • CentOS 8
    • Batch scheduler is SLURM (an example batch script appears below)
    • Compiler families include Intel, Portland Group (now part of NVIDIA), and GNU, as well as the NAG Fortran compiler.
  • Schooner is connected to Internet2 and to Internet2’s 100 Gbps national research backbone (Advanced Layer 2 Services)
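
To give a concrete sense of how jobs are submitted on a SLURM-scheduled cluster like Schooner, here is a minimal sketch of a batch script. The partition name, core and memory requests, and module name below are placeholders for illustration, not Schooner's actual values; see the Support pages for the real ones.

    #!/bin/bash
    #SBATCH --job-name=example_job
    #SBATCH --partition=normal        # placeholder partition name (assumption)
    #SBATCH --nodes=1
    #SBATCH --ntasks=20               # e.g., one task per core on a 20-core node
    #SBATCH --mem=64G
    #SBATCH --time=01:00:00
    #SBATCH --output=example_%j.out

    # Load one of the available compiler families (exact module name is an assumption).
    module load intel

    # Run your program.
    ./my_program

You would submit this with "sbatch example_job.sh" and check its status with "squeue -u your_username".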
     

Interested In Using Schooner?

  • Request an OSCER account (new OSCER users only)
  • Contact us at support@oscer.ou.edu for an initial consult, or if you have questions regarding your specific use of our HPC systems
  • Check out the help pages in our Support section for detailed information and tutorials
     

Purchasing "Condominium" Compute Nodes for Your Research Team

Under OSCER's "condominium" compute node plan, you can purchase one or more compute node(s) of your own, at any time, to have added to OSCER's supercomputer.

(We use the term "condominium" as an analogy to a condominium apartment complex, where some company owns the complex, but each resident owns their own apartment.)

NOTE: If you're at an institution other than OU, we CANNOT guarantee to offer you the condominium option, or, if we can, there might be additional charges.
 

How to Purchase Condominium Compute Nodes

You MUST work with OSCER to get the quote(s) for any condominium compute node purchase(s), because your condominium compute node(s) MUST be compatible with the rest of OSCER's supercomputer, and MUST be shipped to the correct address.

You can buy any number of condominium compute nodes at any time, with OSCER's help:

support@oscer.ou.edu

OSCER will work with you on the details of the hardware configuration, and to get a formal quote from our current vendor, Dell.

OSCER offers a variety of CPU options within a few Intel and AMD x86 CPU families, and a variety of RAM capacities.

See Condominium Compute Node Options, below.

You have to buy the compute node (server computer) itself, plus a few network cables.
 

Who Can Use Your Condominium Compute Node(s)?

Once your purchase is complete and your condominium compute node(s) arrive and go into production, you decide who can run on them, typically via one or more batch queues that OSCER creates for you.

For example, it could be just your research team (or even a subset of your team), or your team and one or more other team(s) that you designate, etc.
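
As a rough sketch (the queue name here is hypothetical, not an actual Schooner partition), the people you designate would then direct their jobs onto your condominium node(s) by naming that queue when they submit:

    # Submit a job script to a hypothetical condominium queue created for your group.
    sbatch --partition=your_lab_condo example_job.sh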

No Additional Charges Beyond Hardware Purchase

You pay for your condominium compute node hardware, including cables. There is NO ADDITIONAL CHARGE beyond purchasing the compute node hardware.

OSCER deploys your condominium compute node(s) and maintains them as part of OSCER's supercomputer, at NO ADDITIONAL CHARGE.

How Long Will a Condominium Compute Node Stay in Production?

A condominium compute node will stay in production for the lifetime of the supercomputer you buy it for, PLUS the lifetime of the immediate subsequent supercomputer.

Currently, that means OSCER's emerging new supercomputer, Sooner, plus its immediate successor, Boomer.

So, probably 6 to 8 years total, give or take.

NOTE: Once your initial extended warranty expires, either

(a) you can buy annual support year by year for your condominium compute node(s),

OR

(b) you can buy replacement components when any components in your condominium compute node(s) fail,

OR

(c) OSCER will let your condominium compute node(s) die when they die.

Condominium Compute Node Options

(1) Condominium Compute Node

(1a) R650, Intel Xeon Ice Lake CPUs, DDR4-3200 RAM
— Several CPU options (6338 32-core recommended)
— 128 GB or 256 GB or 512 GB RAM
— Common configuration (below)

(1b) R6525, AMD EPYC Rome or Milan CPUs, DDR4-3200 RAM
— Several CPU options (7513 32-core recommended)
— 128 GB or 256 GB or 512 GB RAM
— Common configuration (below)

(1c) R660, Intel Xeon Sapphire Rapids CPUs, DDR5-4800 RAM
— Several CPU options (6430 32-core recommended)
— 256 GB or 512 GB RAM
— Common configuration (below)

(1d) R6625, AMD EPYC Genoa CPUs, DDR5-4800 RAM
— Several CPU options (9454 48-core recommended)
— 384 GB or 768 GB RAM
— Common configuration (below)

Common Configuration
— Disk: single small drive for operating system and local /lscratch
— Network, low latency: Infiniband HDR100 100 Gbps 1-port w/1 cable
— Network, management: Gigabit Ethernet 2-port w/1 cable
— Power supply: single non-redundant
— Warranty: Basic hardware replacement, 5 years recommended

(2) Condominium Large RAM node

(2a) R650, configured like (1a), above, EXCEPT:
— 1 TB or 2 TB or 4 TB or 8 TB RAM
— Common configuration (below)

(2b) R6525, configured like (1b), above, EXCEPT:
— 1 TB or 2 TB or 4 TB RAM
— Common configuration (below)

(2c) R660, configured like (1c), above, EXCEPT:
— 1 TB or 2 TB or 4 TB or 8 TB RAM
— Common configuration (below)

(2d) R6625, configured like (1d), above, EXCEPT:
— 1.5 TB or 3 TB or 6 TB RAM
— Common configuration (below)

Common configuration
— Disk: dual disk drives mirrored (RAID1)
— Network, low latency: Infiniband HDR100 100 Gbps 1-port w/1 cable
— Network, Ethernet: 25GE 2-port w/cables
— Network, management: GigE 2-port w/cables
— Power supplies: dual redundant
— Warranty: Basic hardware replacement, 5 years recommended

(3) Condominium Quad CPU Node

R860, configured like (2c), above, EXCEPT:
— 4 CPU chips (6430H 32-core recommended)
— 1 TB or 2 TB or 4 TB or 8 TB or 16 TB RAM

(4) Condominium GPU node

NVIDIA A100 and H100 GPU cards now have a delivery time of approximately a year, so OSCER currently DOESN'T recommend buying them.

Instead, please consider the following options:

(4a) Dual NVIDIA RTX 6000 Ada Generation WITHOUT NVLink
Precision 7960 rackmount workstation, configured like (2c), above, EXCEPT:
— 256 GB RAM
— dual NVIDIA RTX 6000 Ada Generation GPUs (48 GB)

(4b) Dual NVIDIA L40 WITHOUT NVLink
R7525, configured like (2b), above, EXCEPT:
— 256 GB RAM
— dual NVIDIA L40 GPUs (48 GB)

(4c) Dual NVIDIA L40S WITHOUT NVLink
R750xa, configured like (2a), above, EXCEPT:
— 256 GB RAM
— dual NVIDIA L40S GPUs (48 GB)

(4d) Dual NVIDIA A100 WITH NVLink (600 GB/sec GPU-to-GPU)
R750xa, configured like (2a), above, EXCEPT:
— 512 GB RAM (for A100 80 GB) or 256 GB RAM (for A100 40 GB)
— dual NVIDIA A100 GPUs (80 GB or 40 GB)
— NVLink (600 GB/sec GPU-to-GPU)

(4e) Dual NVIDIA A100 WITHOUT NVLink
R7525, configured like (2b), above, EXCEPT:
— 512 GB RAM (for A100 80 GB) or 256 GB RAM (for A100 40 GB)
— dual NVIDIA A100 GPUs (80 GB or 40 GB)

(4f) Dual NVIDIA H100 WITH NVLink (900 GB/sec GPU-to-GPU)
R760xa, configured like (2c), above, EXCEPT:
— 512 GB RAM
— dual NVIDIA H100 GPUs (80 GB)
— NVLink (900 GB/sec GPU-to-GPU)

(4g) Dual NVIDIA H100 WITHOUT NVLink
R760, configured like (2c), above, EXCEPT:
— 512 GB RAM
— dual NVIDIA H100 GPUs (80 GB)


How to Buy Condominium Compute Node(s)

You can buy any number of condominium compute nodes at any time, with OSCER's help.
Please contact OSCER at:

support@oscer.ou.edu