Large shared-memory node - HPE Superdome Flex
- Hostname: taurussmp8
- Access to all shared file systems
- SLURM partition: julia
- 32 x Intel(R) Xeon(R) Platinum 8276M CPU @ 2.20GHz (28 cores each)
- 48 TB RAM (usable: 47 TB; one TB is reserved for cache coherence protocols)
- 370 TB of fast NVMe storage, available at /nvme/
Hints for usage
- The granularity for job allocations should be a socket (28 cores)
- can be used for OpenMP applications with large memory demands
- Open MPI does not work properly here. For MPI applications, use Intel MPI!
- Set I_MPI_FABRICS=shm so that Intel MPI does not even consider using InfiniBand devices, but uses shared memory only
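The socket-granularity hint for OpenMP jobs could look like the following batch script sketch. The partition name is taken from above; the application name ./my_openmp_app is a placeholder, and the memory request is an example you should adjust to your needs:

```shell
#!/bin/bash
#SBATCH --partition=julia        # SLURM partition of this node (see above)
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=28       # one full socket (28 cores) as the granularity

# One OpenMP thread per allocated core
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

srun ./my_openmp_app             # placeholder for your OpenMP application
```

For larger memory demands, request multiple sockets by increasing --cpus-per-task in multiples of 28.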
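For MPI jobs, the two hints above (Intel MPI plus I_MPI_FABRICS=shm) can be combined in a batch script sketch like this. The task count is an example, and ./my_mpi_app is a placeholder for an application built against Intel MPI:

```shell
#!/bin/bash
#SBATCH --partition=julia        # SLURM partition of this node (see above)
#SBATCH --ntasks=56              # e.g. two sockets' worth of MPI ranks; adjust

# Restrict Intel MPI to shared memory so it does not try InfiniBand devices
export I_MPI_FABRICS=shm

srun ./my_mpi_app                # placeholder for your Intel MPI application
```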
Open for Testing
- At the moment we have set a quota of 100 GB per project on this NVMe storage. As soon as the first projects propose how this unique system (large shared memory + NVMe storage) can speed up their computations, we will gladly increase this limit for selected projects.
- Test users might have to clean up their /nvme storage within 4 weeks to make room for large projects.