HPC2 - Legacy High Performance Community Computing Cluster
HPC2 is strictly a "condo-style" cluster, built from nodes that researchers previously purchased for HPC and whose hardware was modern enough to run the CentOS 7 operating system. There is no expansion plan for this cluster. Only users who had access to the private queues on HPC have access to this cluster.
You must be either on the campus network or connected to the UCI campus VPN, and use ssh to log in.
For example, a user with UCINetID panteater can use:
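A minimal sketch of the login command; the hostname below is a placeholder assumption, not the actual HPC2 login host, so substitute the address given in the cluster documentation:

```shell
# Hypothetical hostname -- replace with the real HPC2 login node address
ssh panteater@hpc2.example.edu
```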
3. HPC2 similarities
We have attempted to make HPC2 as close to HPC3 as possible:
4. HPC2 differences
Home areas are different from HPC3.
Your Slurm account, named <PI>_lab, is credited with 100% of the hours that your hardware can deliver in the next year. +
You cannot run out of hours.
All jobs are accounted for; no job can be killed or preempted by another job.
There is no default partition. Each owner's hardware is in a named queue/partition, and you are enabled only on that queue, so granted cycles are delivered strictly by the physical nodes that a lab owns.
Users need to include both the account and the partition in their submit scripts. For example, users in Panteater's lab would add:
#SBATCH -A panteater_lab
#SBATCH -p panteater_lab
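Putting the two directives together, a minimal submit script might look like the sketch below; the job name, task count, and time limit are illustrative assumptions, not site requirements:

```shell
#!/bin/bash
#SBATCH -A panteater_lab    # account: the lab's hours are charged here
#SBATCH -p panteater_lab    # partition: the lab's own physical nodes
#SBATCH --job-name=example  # illustrative job name
#SBATCH --ntasks=1          # single task (illustrative)
#SBATCH --time=01:00:00     # one-hour limit (illustrative)

# The #SBATCH lines above are comments to bash; Slurm reads them at submit time.
echo "Running on $(hostname)"
```

Submit it with `sbatch script.sh`; both `-A` and `-p` are required here because HPC2 has no default partition.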
GPU usage: two labs have GPU-enabled nodes. Your batch jobs need to request GPUs explicitly; this keeps users from "stepping on each other" when trying to use the GPUs.
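As a sketch, a job needing one GPU could add a standard Slurm GRES request to its submit script; adjust the count, and include a GPU type if your site defines one:

```shell
#SBATCH --gres=gpu:1
```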