3. Specs

3.1. Hardware configuration

HPC3 had an initial procurement phase through an RFP process. After evaluation, Hewlett Packard Enterprise was awarded the bid. Since the award, additional purchases have been made to bring the cluster to its current configuration.

The system started as a 4000-core system when first constructed in June 2020. It has since been expanded several times with nodes purchased by UCI and by faculty.

As of March 2025, the following describes the cluster:
  • 253 Batch-accessible nodes including:
    • 14 nodes with 4 Nvidia V100 (16GB) GPUs
    • 18 nodes with 4 Nvidia A30 (24GB) GPUs
    • 4 nodes with 2 Nvidia A100 (80GB) GPUs
    • 2 nodes with 4 Nvidia L40S (48GB) GPUs

  • 11,568 total cores (1,256 AMD EPYC and 10,312 Intel)

  • 73,132 GB Aggregate Memory

  • Three load-balanced login nodes

  • 96.4% of nodes (244/253) at 100 Gbit/s EDR InfiniBand

HPC3's heterogeneous hardware has several unique configurations, described in the sections below.

Note

Slurm matches your job request to physical nodes. It is possible to make a request that only a few, or even no, physical nodes can fulfill. For example, requesting 800GB of memory on a single node is only possible on 4 nodes.
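
As a concrete illustration, a single-node large-memory request might look like the minimal sketch below. The partition name and the application are placeholders for illustration, not actual HPC3 values; check the scheduler documentation for the partitions and limits that apply to your account.

    #!/bin/bash
    #SBATCH --job-name=bigmem_test      # job name shown in the queue
    #SBATCH --nodes=1                   # all memory must come from one physical node
    #SBATCH --ntasks=1                  # single task
    #SBATCH --mem=800G                  # per-node memory request; only a few nodes can satisfy this
    #SBATCH --partition=highmem         # placeholder partition name

    ./my_memory_hungry_app              # placeholder application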

Note

Features and GPU type and count (GRES, i.e. Generic RESources) are resource specifications that can be requested in Slurm GPU job submissions.
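
A minimal sketch of such a request is shown below. The GRES type (V100), node feature (avx512), and partition name are assumptions used for illustration; the actual strings are site-specific and can be checked with scontrol show node or in the scheduler documentation.

    #!/bin/bash
    #SBATCH --nodes=1                   # single node
    #SBATCH --ntasks=1                  # single task
    #SBATCH --gres=gpu:V100:1           # request one GPU of type V100 (type name assumed)
    #SBATCH --constraint=avx512         # request a node feature (feature name assumed)
    #SBATCH --partition=gpu             # placeholder partition name

    nvidia-smi                          # confirm which GPU was actually allocated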

3.2. Networking

HPC3 has the following networks attached to each node:
  • 10 Gbit/s Ethernet, the provisioning and control network, used to access Ethernet-only resources.

  • 100 Gbit/s ConnectX-5 EDR InfiniBand

See more info in Network type.
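
As a quick sanity check from a node, the following commands list the network interfaces and report the InfiniBand port rate. This is a generic sketch; ibstat comes from the infiniband-diags tools and may not be available in every image.

    ip -brief link    # list interfaces (Ethernet, and IPoIB if configured)
    ibstat            # show InfiniBand port state and rate (e.g. 100 Gb/sec for EDR)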

3.3. Node Type

HPC3 nodes have:
  • a minimum of 56 Gbit/s InfiniBand (most nodes are at 100 Gbit/s)

  • 4GB memory/core

  • AVX2 capability

For additional info see Hardware FAQ.

3.3.1. CPU only nodes

Most-common configurations:

Chassis:
  1. HPE Skylake
    HPE Apollo 2000 Gen 10 chassis, 2RU with 4 nodes/chassis.
    Dual-socket, Intel Skylake 6148 20-core CPU @ 2.4GHz. 40 cores total.
  2. Dell Cascade Lake
    Dual-socket, Intel Cascade Lake 6240R 24-core CPU @ 2.4GHz. 48 cores total.
  3. Dell Ice Lake
    Dual-socket, Intel Ice Lake 6336Y 24-core CPU @ 2.4GHz. 48 cores total.
    256GB DDR4, ECC memory.
Interconnect:

Each node is connected to Ethernet and InfiniBand networks. See Networking for details.

Memory:
All memory is DDR4, ECC; the most common capacity is 192GB.
Available memory in GB: 192, 256, 384, 512, 768, 1536, 2048, 3072
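
To see how much memory Slurm advertises for each node, a node-oriented sinfo query along these lines can be used (one line per node; memory values are in MB):

    sinfo --Node --format="%N %c %m"    # node name, CPU count, memory in MB as seen by Slurm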

3.3.2. GPU-Enabled Nodes

A node can have up to 4 GPUs of the same type. CPU, network, memory, and SSD are identical to the CPU-only nodes. Currently available GPU configurations use high-bandwidth memory and PCIe connections.

Chassis:
HPE DL380 Gen 10 chassis, 2RU, up to 4 GPUs/chassis.
GPU:
Qty 4 Nvidia V100 GPU, 16GB memory
Qty 4 Nvidia A30 GPU, 24GB memory
Qty 2 Nvidia A100 GPU, 80GB memory
Qty 4 Nvidia L40S GPU, 48GB memory
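
To see which nodes carry which GPU type and count, the GRES column of sinfo can be listed as in the sketch below; the exact GRES strings reported depend on the local Slurm configuration.

    sinfo --Node --format="%N %G"    # node name and its GRES; CPU-only nodes show (null)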

3.3.3. Support Nodes

Support nodes are specialized nodes that provide very specific services:

Type             How many   Provided Services
Login nodes      3          Point of entry to the cluster. Same CPU, network, and memory configuration as the CPU-only nodes.
Slurm server     1          Slurm scheduler
Provisioning     1          Management node
Firewall         4          PFSense security
NFS server       3          Home area with ZFS as the underlying file system

3.4. Node Details

HPC3 is a heterogeneous cluster with several CPU types, memory footprints, and InfiniBand revisions. All nodes in HPC3 meet the following minimums:

AVX support: AVX2 (most nodes have AVX-512 support)
Cores/node: 24 (most nodes have at least 40)
Memory/core: 4GB
IB Technology: FDR (Fourteen Data Rate)
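
These minimums can be verified from inside a job with standard Linux checks, for example:

    grep -Eo 'avx2|avx512f' /proc/cpuinfo | sort -u    # prints avx2 and/or avx512f if the CPU supports them
    grep -c '^processor' /proc/cpuinfo                 # logical CPU count on this node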

Ganglia provides a real-time, high-level view of HPC3 utilization. You must be on the UCI network or VPN for this link to work.

You may download the node details as a CSV file or browse the table below. Click on a column header to sort.