
Large shared-memory node - HPE Superdome Flex

  • Hostname: taurussmp8
  • Access to all shared file systems
  • SLURM partition julia
  • 32 x Intel(R) Xeon(R) Platinum 8276M CPU @ 2.20GHz (28 cores each)
  • 48 TB RAM (usable: 47 TB - one TB is used for cache coherence protocols)
  • 370 TB of fast NVME storage available at /nvme/<projectname>

Hints for usage

  • granularity for job allocations should be a full socket (28 cores)
  • can be used for OpenMP applications with large memory demands
  • To use OpenMPI, export the following environment variables so that OpenMPI uses shared memory instead of InfiniBand for message transport:
    export OMPI_MCA_pml=ob1
    export OMPI_MCA_mtl=^mxm
  • Set I_MPI_FABRICS=shm so that Intel MPI does not even consider using InfiniBand devices and uses shared memory instead.
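The hints above can be combined in a batch script. A minimal sketch is shown below; the task counts, memory request, walltime, and binary name are hypothetical and should be adapted to the actual job:

```shell
#!/bin/bash
#SBATCH --partition=julia        # the SLURM partition of this node
#SBATCH --nodes=1
#SBATCH --ntasks=4               # hypothetical: 4 MPI ranks
#SBATCH --cpus-per-task=28       # one full socket per rank, matching the granularity hint
#SBATCH --mem=2T                 # hypothetical memory request
#SBATCH --time=08:00:00          # hypothetical walltime

# Make OpenMPI use shared memory instead of InfiniBand for message transport
export OMPI_MCA_pml=ob1
export OMPI_MCA_mtl=^mxm

srun ./my_large_memory_app       # hypothetical application binary
```

For Intel MPI jobs, the two OMPI_MCA exports would be replaced by `export I_MPI_FABRICS=shm`.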

Open for Testing

  • At the moment we have set a quota of 100 GB per project on this NVMe storage. As soon as the first projects come up with proposals for how this unique combination of large shared memory and fast NVMe storage can speed up their computations, we will gladly increase this limit for selected projects.
  • Test users might have to clean up their /nvme storage within four weeks to make room for large projects.
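To see how much of the quota a project is currently using, the size of its NVMe directory can be checked with `du`. In this sketch a local temporary directory stands in for the real path; on taurussmp8 the path would follow the /nvme/&lt;projectname&gt; scheme named above:

```shell
# Hypothetical local stand-in for /nvme/<projectname> on taurussmp8
PROJECT_DIR="/tmp/nvme-demo"
mkdir -p "$PROJECT_DIR"

# Report the directory's total size, to compare against the 100 GB quota
du -sh "$PROJECT_DIR"
```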