HPC Status
HPC Compute Nodes Migrated to Ceph
The Department of Statistics HPC now uses CephFS for shared storage across the compute nodes (see Storage for details). The following HPC compute nodes have been migrated to Ceph:
Slurm login nodes
slurm-hn04 (can run VSCode Server; see the SSH configuration sketch after this list)
slurm-hn05
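To connect VSCode to slurm-hn04, the usual route is the Remote-SSH extension, which reads hosts from ~/.ssh/config. A minimal sketch of an entry (the User value is a placeholder for your own Stats IT account):

Host slurm-hn04
    User your-Stats-IT-Account

With this in place, VSCode can connect by selecting slurm-hn04 from its Remote-SSH host list.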
Cluster: srf_cpu_01
barbary
garganey
grey01
muscovy
saxony (incomplete migration: non-functioning /scratch areas)
swan01
swan02
swan03
swan11
swan12
swan21
swan22
Cluster: srf_gpu_01
gaussgpu01
greyostrich
swangpu24
Cluster: swan
rainmlgpu01
Access
Access is via the slurm-hn04 login node. For example, to log in and start an interactive shell on a compute node:
$ ssh your-Stats-IT-Account@slurm-hn04
$ srun -p swan02-debug --clusters=srf_cpu_01 --pty --nodes=1 --ntasks-per-node=1 -t 00:30:00 --wait=0 /bin/bash -l
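To check which partitions and nodes are currently available on a cluster before requesting a session, sinfo accepts the same --clusters flag:

$ sinfo --clusters=srf_cpu_01

For non-interactive work, the same options can be given as directives in a batch script; a minimal sketch, reusing the partition and limits from the example above (job.sh is a placeholder name):

#!/bin/bash
#SBATCH --clusters=srf_cpu_01
#SBATCH --partition=swan02-debug
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --time=00:30:00

# Replace with your actual workload; hostname just confirms where the job ran.
hostname

$ sbatch job.sh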
Warning
The following storage areas are being wiped and decommissioned. If you wish to keep any of your data, please move it to another location (see above); example commands for doing so follow the list below. If you need any help with this, please contact Stats IT.
Files on /data/localhost/not-backed-up are now only accessible from within an srun session, until the areas are decommissioned and wiped.
* /data/localhost/$USER/
* /data/localhost/not-backed-up/$USER/
* /data/localhost/not-backed-up/scratch/$USER/
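One way to copy data off these areas before they are wiped is to open an interactive session as shown under Access, then rsync the files out. A sketch, where the destination path is a placeholder for your new storage location (see Storage):

$ srun -p swan02-debug --clusters=srf_cpu_01 --pty -t 00:30:00 /bin/bash -l
$ rsync -av /data/localhost/not-backed-up/$USER/ /path/to/new/location/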