How to use

HPC3 has several storage systems available. Connectivity, File System architecture, and physical hardware all contribute to performance.

[Figure: HPC3 storage pictogram]

Attention

Storage is shared among all users. The nature of networked storage
makes it possible for a single user to render a file system unusable
for everyone.

The following summary explains what each storage system provides, what it should be used for, and links to in-depth usage guides.

Home
See details in the HOME storage guide.
- Provides convenient access on all nodes via an NFS mount
- Slowest performance, yet sufficient when used properly
- Use to keep small source code or compiled binaries
- Use for small data files (on the order of MBs)
- Do not use for data-intensive batch jobs
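Before placing data in a shared home area, it helps to check how much space is already in use. A minimal sketch using generic POSIX tools (no cluster-specific quota command is assumed here):

```shell
# Show free space on the file system backing $HOME, then list the
# largest entries in it -- useful before copying data into a shared
# home area.
df -h "$HOME"
du -sh "$HOME"/* 2>/dev/null | sort -rh | head -n 5
```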
Scratch
See details in the Scratch storage guide.
- Local disk space unique to each compute node
- Fastest performance; data is removed when the job completes
- Use as scratch storage for batch jobs that repeatedly access many small files or make frequent small reads/writes
- Not available on login nodes
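A common pattern for node-local scratch: create a working directory under $TMPDIR, do all small-file I/O there, and copy results back to permanent storage before the job ends. A minimal sketch; the fallback to /tmp exists only so the snippet also runs outside a batch job, and my_results is a hypothetical output directory:

```shell
# Work in node-local scratch instead of networked storage.
# Inside a batch job $TMPDIR points at local disk; /tmp is used here
# only as a fallback so the sketch also runs interactively.
SCRATCH=${TMPDIR:-/tmp}
WORKDIR=$(mktemp -d "$SCRATCH/job.XXXXXX")
cd "$WORKDIR"

# ... run the I/O-heavy part of the job here (hypothetical output) ...
mkdir my_results && echo "done" > my_results/status.txt

# Copy results to permanent storage before the job ends:
# local scratch is removed when the job completes.
cp -r my_results "$HOME"/
rm -rf "$WORKDIR"
```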
Parallel
See details in the DFS storage guide.
- Provides convenient access on all nodes via mount
- Performance is best for processing medium/large data files (on the order of hundreds of MBs to GBs)
- Use for batch jobs; the most common place for data used in batch jobs
- Use to keep source code and binaries
- Do not use for writing/reading many small files
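The many-small-files caveat can be worked around by bundling: keep a single archive on the parallel file system, and unpack it into node-local scratch when a job needs the individual files. A sketch using throwaway data:

```shell
# Create 100 small files (stand-ins for a real dataset), then bundle
# them into one archive -- a single large file suits a parallel file
# system far better than thousands of tiny reads and writes.
mkdir -p many_small
for i in $(seq 1 100); do echo "$i" > many_small/part_$i.txt; done
tar -czf dataset.tar.gz many_small

# At job time, extract into node-local scratch and read the files there.
tar -xzf dataset.tar.gz -C "${TMPDIR:-/tmp}"
```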
Campus Storage
See details in the CRSP storage guide.
- Provides convenient access on all nodes via an NFS mount
- Performance is best for processing medium/large data files (on the order of hundreds of MBs to GBs)
- Can be used for batch jobs, but DFS or local $TMPDIR storage is usually a better choice
- Use to keep source code and binaries
- Do not use for writing/reading many small files
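When a dataset lives on campus storage but a batch job will read it repeatedly, staging a copy onto DFS first usually pays off. The sketch below uses throwaway directories to stand in for the two mounts, since actual CRSP and DFS paths vary by lab:

```shell
# $SRC stands in for a directory on the CRSP mount, $DST for one on
# DFS -- both are placeholders created here just for the demonstration.
SRC=$(mktemp -d)
DST=$(mktemp -d)
echo "sample input" > "$SRC/input.dat"

# Stage the data once, then point batch jobs at the DFS copy.
cp -a "$SRC"/. "$DST"/
ls "$DST"
```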
Campus Storage Annex
See details in the CRSP ANNEX storage guide.
- Provides convenient access on all nodes via a BeeGFS mount
- Performance is best for processing medium/large data files (on the order of hundreds of MBs to GBs)
- Do not use for writing/reading many small files