High Performance Computing (Supercomputer)

Highlights

  • 850+ compute nodes
  • ~2.1 PFLOPS (quadrillions of calculations per second)
  • 29,700 CPU cores
  • ~97 TB RAM
  • 1+ PB globally accessible public user storage
  • Two networks (Infiniband and Ethernet)
  • 83 GPU cards
     

Supercomputer Information

Interested In Using OSCER's Supercomputer?

  • Request an OSCER account (new OSCER users only)
  • Contact us at support@oscer.ou.edu for an initial consultation, or if you have questions about your specific use of our HPC systems
  • Check out the help pages in our Support section for detailed information and tutorials
     

Detailed Hardware Specifications:

OSCER Supercomputer Compute Nodes (Current)

About half of these nodes are for general use by everyone, and the other half are "condominium" nodes owned by specific research groups.

Qty | CPUs | Cores | RAM (GB)
2 | dual Intel Xeon Sapphire Rapids 6438Y+ | 2 x 32 | 512
4 | dual Intel Xeon Sapphire Rapids 6430 | 2 x 32 | 256
4 | dual Intel Xeon Sapphire Rapids 4410Y | 2 x 12 | 256
1 | dual Intel Xeon Ice Lake Platinum 8352S | 2 x 32 | 512
86 | dual Intel Xeon Ice Lake Gold 6338 | 2 x 32 | 128
29 | dual Intel Xeon Ice Lake Gold 6338 | 2 x 32 | 256
16 | dual Intel Xeon Ice Lake Gold 6338 | 2 x 32 | 512
2 | dual Intel Xeon Ice Lake Gold 6338 | 2 x 32 | 2048
12 | dual Intel Xeon Ice Lake Gold 6330 | 2 x 28 | 128
1 | dual Intel Xeon Ice Lake Gold 6330 | 2 x 28 | 256
2 | dual AMD EPYC Milan 7543 | 2 x 32 | 512
1 | dual AMD EPYC Milan 7543 | 2 x 32 | 1024
4 | dual AMD EPYC Milan 7513 | 2 x 32 | 512
1 | dual AMD EPYC Milan 7513 | 2 x 32 | 1024
7 | dual AMD EPYC Milan 7513 | 2 x 32 | 128
3 | dual AMD EPYC Rome 7542 | 2 x 32 | 256
12 | dual AMD EPYC Rome 7452 | 2 x 32 | 128
39 | dual AMD EPYC Rome 7452 | 2 x 32 | 256
2 | dual AMD EPYC Rome 7452 | 2 x 32 | 512
1 | dual AMD EPYC Rome 7352 | 2 x 24 | 1024
1 | dual Intel Xeon Cascade Lake Gold 6230R | 2 x 26 | 96
1 | quad Intel Xeon Cascade Lake Gold 6230 | 4 x 20 | 1536
41 | dual Intel Xeon Cascade Lake Gold 6230 | 2 x 20 | 96
27 | dual Intel Xeon Cascade Lake Gold 6230 | 2 x 20 | 192
1 | dual Intel Xeon Skylake Gold 6152 | 2 x 22 | 384
12 | dual Intel Xeon Skylake Gold 6140 | 2 x 18 | 96
5 | dual Intel Xeon Skylake Gold 6132 | 2 x 14 | 96
1 | quad Intel Xeon Broadwell E7-4830 v4 | 4 x 14 | 2048
1 | dual Intel Xeon Broadwell E5-2650 v4 | 2 x 12 | 32
28 | dual Intel Xeon Broadwell E5-2650 v4 | 2 x 12 | 64
1 | quad Intel Xeon Haswell E7-4809 v3 | 4 x 8 | 3072
1 | quad Intel Xeon Haswell E7-4809 v3 | 4 x 8 | 1024
142 | dual Intel Xeon Haswell E5-2670 v3 | 2 x 12 | 64
5 | dual Intel Xeon Haswell E5-2670 v3 | 2 x 12 | 128
6 | dual Intel Xeon Haswell E5-2650L v3 | 2 x 12 | 64
72 | dual Intel Xeon Haswell E5-2660 v3 | 2 x 10 | 32
285 | dual Intel Xeon Haswell E5-2650 v3 | 2 x 10 | 32
1 | dual Intel Xeon Haswell E5-2650 v3 | 2 x 10 | 96
7 | dual Intel Xeon Haswell E5-2640 v3 | 2 x 8 | 96
6 | dual Intel Xeon Haswell E5-2630 v3 | 2 x 8 | 128

OSCER Supercomputer Compute Nodes (Purchased, to be deployed in 2024)

Qty | CPUs | Cores | RAM (GB)
2 | dual Intel Xeon Sapphire Rapids 6348Y+ | 2 x 32 | 1024
6 | dual Intel Xeon Ice Lake Gold 6338 | 2 x 32 | 512

OSCER Supercomputer Graphics Processing Units (GPUs)

Model | Quantity In Production | Quantity Coming | Memory (GB) | Interface | GB/sec GPU-to-GPU
V100 | 2 |  | 32 | PCIe 3 | 51.75
RTX 6000 Ada | 16 |  | 48 | PCIe 4 | 31.5
L40S | 4 | 24 | 48 | PCIe 4 | 31.5
A100 | 11 |  | 40 | PCIe 4 | 31.5
A100 | 16 |  | 40 | SXM | 600
A100 | 18 |  | 80 | PCIe 4, NVlink | 600
A100 | 4 |  | 80 | SXM | 600
H100 | 12 |  | 80 | PCIe 5 | 63
H100 |  | 8 | 80 | SXM | 900
H100 NVL |  | 2 | 94 | PCIe 5, NVlink | 600
Total | 83 | 34 | 6804 |  |
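
Jobs that use these GPUs go through SLURM, the batch scheduler listed under the software details below. As an illustration only (the partition name and job script name below are placeholders, not OSCER's actual queue names; see the Support pages or ask support@oscer.ou.edu for the real ones), a single GPU card can be requested on the sbatch command line like this:

    # Placeholder names only: "gpu_example" and my_gpu_job.sh stand in for
    # OSCER's actual GPU partition and for your own job script.
    # --gres=gpu:1 asks Slurm for one GPU card on the assigned node.
    sbatch --partition=gpu_example --gres=gpu:1 --time=01:00:00 my_gpu_job.sh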

OSCER has been awarded the following grant:

National Science Foundation grant # OAC-2201561

"CC* Compute: OneOklahoma Cyberinfrastructure Initiative Research Accelerator for Machine Learning (OneOCII-RAML)"

We anticipate that this grant will fund 10 NVIDIA H100 GPU cards, plus the servers they reside in.

  • Storage
    • High performance parallel filesystem, globally user-accessible: Ceph, ~530 TB usable
    • Lower performance servers full of disk drives, globally user-accessible: ~352 TB usable
  • Networks
    • Infiniband: Mellanox FDR10 40 Gbps, 3:1 oversubscribed (13.33 Gbps), for the Haswell and Broadwell nodes; HDR100 100 Gbps, 4:1 oversubscribed (25 Gbps), for all other nodes; an EDR 100 Gbps super-core links the FDR10 and HDR100 fabrics
      NOTE: Some compute nodes don’t have Infiniband, at the owner’s discretion.
    • Ethernet:
      • 4 x 100 Gbps Ethernet (100GE) top core switches
      • 4 x 25GE core switches, uplinked to the 100GE top core switches
      • Gigabit Ethernet (GigE) to each compute node for management, uplinked to a top-of-rack GigE switch, and each GigE switch uplinked at 2 × 10 Gbps Ethernet (10GE) to the 25GE core switches
  • Operating system
    • Enterprise Linux 9 (EL9, mostly Rocky Linux 9)
      Some compute nodes are still on EL7, but the transition to EL9 is in progress.
    • The batch scheduler is SLURM (see the example job script after this list)
    • Compiler families include Intel, Portland Group (now part of NVIDIA), and GNU, as well as the NAG Fortran compiler.
  • Schooner is connected to Internet2 and to Internet2’s 100 Gbps national research backbone (Advanced Layer 2 Services)
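
Because the batch scheduler is SLURM, work is submitted as a batch script via sbatch. Below is a minimal sketch of such a script; the partition name and module name are placeholders we have assumed for illustration, and OSCER's actual queue names, module names, and resource limits are documented in the Support section.

    #!/bin/bash
    #
    # Minimal sketch of a Slurm batch job script; submit with:  sbatch myjob.sh
    # "normal_example" is a placeholder, NOT a real OSCER partition name.
    #SBATCH --job-name=myjob
    #SBATCH --partition=normal_example
    #SBATCH --nodes=1
    #SBATCH --ntasks=20               # CPU cores for the job
    #SBATCH --mem=64G                 # memory for the whole job
    #SBATCH --time=12:00:00           # wall-clock limit (hh:mm:ss)
    #SBATCH --output=myjob_%j.out     # %j expands to the Slurm job ID
    #SBATCH --error=myjob_%j.err

    # If environment modules are available, load what the job needs
    # (the module name here is hypothetical).
    module load intel

    # Launch the program under Slurm's control.
    srun ./my_program

Standard Slurm commands such as squeue -u $USER (list your jobs) and scancel <jobid> (cancel a job) work as usual.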
     

Purchasing "Condominium" Compute Nodes for Your Research Team

Under OSCER's "condominium" compute node plan, you can purchase one or more compute node(s) of your own, at any time, to have added to OSCER's supercomputer.

(We use the term "condominium" as an analogy to a condominium apartment complex, where some company owns the complex, but each resident owns their own apartment.)

NOTE: If you're at an institution other than OU, we CANNOT guarantee the condominium option, and even if we can offer it, there might be additional charges.
 

How to Purchase Condominium Compute Nodes

You MUST work with OSCER to get the quote(s) for any condominium compute node purchase(s), because your condominium compute node(s) MUST be compatible with the rest of OSCER's supercomputer and MUST be shipped to the correct address.

You can buy any number of condominium compute nodes at any time, with OSCER's help:

support@oscer.ou.edu

OSCER will work with you on the details of the hardware configuration, and to get a formal quote from our current vendor, Dell.

OSCER offers a variety of CPU options within a few Intel and AMD x86 CPU families, and a variety of RAM capacities.

See Condominium Compute Node Options, below.

You have to buy the compute node (server computer) itself, plus a few network cables.
 

Who Can Use Your Condominium Compute Node(s)?

Once your purchase is complete and your condominium compute node(s) arrive and are put into production, you decide who can run on them, typically via one or more batch queues that OSCER creates for you.

For example, it could be just your research team (or even a subset of your team), or your team and one or more other team(s) that you designate, etc.
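
For instance, once OSCER has created a batch queue (Slurm partition) for your condominium node(s), the people you designate would simply name that queue when they submit. The partition name below is purely hypothetical:

    # "smith_condo" is a made-up partition name standing in for whatever
    # queue OSCER creates for your condominium node(s).
    sbatch --partition=smith_condo myjob.sh

    # See what is running or waiting on that partition (standard Slurm).
    squeue --partition=smith_condo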

No Additional Charges Beyond Hardware Purchase

You pay for your condominium compute node hardware, including cables. There is NO ADDITIONAL CHARGE beyond purchasing the compute node hardware.

OSCER deploys your condominium compute node(s) and maintains them as part of OSCER's supercomputer, at NO ADDITIONAL CHARGE.

How Long Will a Condominium Compute Node Stay in Production?

A condominium compute node will stay in production for the lifetime of the supercomputer you buy it for, PLUS the lifetime of the immediate subsequent supercomputer.

Currently, that means OSCER's emerging new supercomputer, Sooner, plus its immediate successor, Boomer.

So, probably 6 to 8 years total, give or take.

NOTE: Once your initial extended warranty expires, either

(a) you can buy annual support year by year for your condominium compute node(s),

OR

(b) you can buy replacement components when any components in your condominium compute node(s) fail,

OR

(c) OSCER will let your condominium compute node(s) die when they die.

Condominium Compute Node Options

(1) Condominium Compute Node

(1a) R650, Intel Xeon Ice Lake CPUs, DDR4-3200 RAM
— Several CPU options (6338 32-core recommended)
— 128 GB or 256 GB or 512 GB RAM
— Common configuration (below)

(1b) R6525, AMD EPYC Rome or Milan CPUs, DDR4-3200 RAM
— Several CPU options (7513 32-core recommended)
— 128 GB or 256 GB or 512 GB RAM
— Common configuration (below)

(1c) R660, Intel Xeon Sapphire Rapids CPUs, DDR5-4800 RAM
— Several CPU options (6430 32-core recommended)
— 256 GB or 512 GB RAM
— Common configuration (below)

(1d) R6525, AMD EPYC Genoa CPUs, DDR5-4800 RAM
— Several CPU options (9454 48-core recommended)
— 384 GB or 768 GB RAM
— Common configuration (below)

Common Configuration
— Disk: single small drive for operating system and local /lscratch
— Network, low latency: Infiniband HDR100 100 Gbps 1-port w/1 cable
— Network, management: Gigabit Ethernet 2-port w/1 cable
— Power supply: single non-redundant
— Warranty: Basic hardware replacement, 5 years recommended

(2) Condominium Large RAM node

(2a) R650, configured like (1a), above, EXCEPT:
— 1 TB or 2 TB or 4 TB or 8 TB RAM
— Common configuration (below)

(2b) R6525, configured like (1b), above, EXCEPT:
— 1 TB or 2 TB or 4 TB RAM
— Common configuration (below)

(2c) R660, configured like (1c), above, EXCEPT:
— 1 TB or 2 TB or 4 TB or 8 TB RAM
— Common configuration (below)

(2d) R6625, configured like (1d), above, EXCEPT:
— 1.5 TB or 3 TB or 6 TB RAM
— Common configuration (below)

Common configuration
— Disk: dual disk drives mirrored (RAID1)
— Network, low latency: Infiniband HDR100 100 Gbps 1-port w/1 cable
— Network, Ethernet: 25GE 2-port w/cables
— Network, management: GigE 2-port w/cables
— Power supplies: dual redundant
— Warranty: Basic hardware replacement, 5 years recommended

(3) Condominium Quad CPU Node

R860, configured like (2c), above, EXCEPT:
— 4 CPU chips (6430H 32-core recommended)
— 1 TB or 2 TB or 4 TB or 8 TB or 16 TB RAM

(4) Condominium GPU node

(4a) Dual NVIDIA H100 NVL WITH NVlink (600 GB/sec GPU-to-GPU)
R760xa, configured like (2c), above, EXCEPT:
— 512 GB RAM
— dual NVIDIA H100 NVL GPUs (94 GB)
— NVlink (600 GB/sec GPU-to-GPU)

(4b) Dual NVIDIA L40S WITHOUT NVlink
R750xa with dual NVIDIA L40S WITHOUT NVlink, configured like (2a), above, EXCEPT:
— 256 GB RAM
— dual NVIDIA L40S GPUs (48 GB)

(4c) Dual NVIDIA L40 WITHOUT NVlink
R7525 with dual NVIDIA L40 WITHOUT NVlink, configured like (2b), above, EXCEPT:
— 256 GB RAM
— dual NVIDIA L40 GPUs (48 GB)

(4d) Dual NVIDIA RTX 6000 Ada Generation WITHOUT NVlink
Precision 7960 rackmount workstation, configured like (2c), above, EXCEPT:
— 256 GB RAM
— dual NVIDIA RTX 6000 Ada Generation GPUs (48 GB)

 

How to Buy Condominium Compute Node(s)

You can buy any number of condominium compute nodes at any time, with OSCER's help.
Please contact OSCER at:

support@oscer.ou.edu