phone prefix: +49 351 463.....

HPC Support

Operation Status

Ulf Markwardt: 33640
Claudia Schmidt: 39833

Login and project application

Phone: 40000
Fax: 42328

Nanoscale Modeling Tools

Software     Version                  Taurus module
ABINIT       7.2                      abinit
CP2K         2.3, 2.4, 2.6, r14075    cp2k
CPMD         -                        -
Gamess US    2013                     gamess
Gaussian     g03e01, g09, g09b01      gaussian
GROMACS      4.6.5                    gromacs
LAMMPS       2014                     lammps
NAMD         2.10                     namd
Siesta       3.2                      siesta
VASP         5.3                      vasp


NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems.

Note that on Taurus, the NAMD installation (ibverbs) no longer uses MPI but uses InfiniBand directly. Therefore, you cannot use srun/mpirun to spawn the processes; you have to use the supplied "charmrun" command instead. Since charmrun is batch-system agnostic, it has no way of knowing which nodes are reserved for your job, so if you want it to run on more than one node, you have to create a hostlist file and pass it to charmrun via the parameter "++nodelist". Otherwise, all processes will be launched on the same node (localhost) and the other nodes remain unused.

You can use the following snippet in your batch file to create a hostlist file:
export NODELISTFILE="/tmp/slurm.nodelist.$SLURM_JOB_ID"
for LINE in `scontrol show hostnames $SLURM_JOB_NODELIST` ; do
  echo "host $LINE" >> $NODELISTFILE ;
done

# launch NAMD processes. Note that the environment variable $SLURM_NTASKS is only
# available if you have used the -n|--ntasks parameter. Otherwise, you have to
# specify the number of processes manually, e.g. +p64
charmrun +p$SLURM_NTASKS ++nodelist $NODELISTFILE $NAMD inputfile.namd

# clean up afterwards:
rm -f $NODELISTFILE
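Put together, a complete batch script could look like the following sketch. The resource values and job name are illustrative only, and it is an assumption here that the namd module (as listed in the table above) sets $NAMD to the path of the namd2 binary:

```shell
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks=32          # example: 16 tasks per node
#SBATCH --time=02:00:00
#SBATCH --job-name=namd_job

module load namd             # module name as listed in the table above

# build the hostlist file that charmrun needs
export NODELISTFILE="/tmp/slurm.nodelist.$SLURM_JOB_ID"
for LINE in `scontrol show hostnames $SLURM_JOB_NODELIST` ; do
  echo "host $LINE" >> $NODELISTFILE
done

# $NAMD is assumed to point to the namd2 binary
charmrun +p$SLURM_NTASKS ++nodelist $NODELISTFILE $NAMD inputfile.namd

rm -f $NODELISTFILE
```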

The current version 2.7b1 of NAMD runs much faster than 2.6, especially on the SGI Altix. Since the parallel performance strongly depends on the size of the given problem, no general advice can be given for the optimum number of CPUs. Please check this by running NAMD with your molecules for just a few time steps.

Large NAMD jobs can be submitted on the Altix (for multiple partitions) by a command line like
bsub -oo out.txt -eo err.txt -n 256 pamrun namd2  input.namd

Smaller jobs should be kept inside one partition:
bsub -oo out.txt -eo err.txt -n 16 -R span[hosts=1] \
     mpirun -np 16 namd2  input.namd

On Deimos:

bsub -a openmpi -n 16 mpirun.lsf namd2  input.namd

Any published work which utilizes NAMD shall include the following reference:
James C. Phillips, Rosemary Braun, Wei Wang, James Gumbart, Emad Tajkhorshid, Elizabeth Villa, Christophe Chipot, Robert D. Skeel, Laxmikant Kale, and Klaus Schulten. Scalable molecular dynamics with NAMD. Journal of Computational Chemistry, 26:1781-1802, 2005.

Electronic documents should also include a direct link to the official NAMD home page.


Gaussian

Starting from the basic laws of quantum mechanics, Gaussian predicts the energies, molecular structures, and vibrational frequencies of molecular systems, along with numerous molecular properties derived from these basic computation types. It can be used to study molecules and reactions under a wide range of conditions, including both stable species and compounds which are difficult or impossible to observe experimentally, such as short-lived intermediates and transition structures.

Gaussian jobs should mainly be run on Deimos. To run it, you have to be in the user group gauss. (You can check this with the Linux command groups.) On Deimos, there is a separate queue named gauss, which can be used for time-intensive computations that cannot be checkpointed.

With module load gaussian (or gaussian/g09) a number of environment variables are set according to the needs of Gaussian. Please set the directory for temporary data (GAUSS_SCRDIR) manually to somewhere below /fastfs/<username>/.
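For example (the directory name below is arbitrary; only the /fastfs/<username>/ prefix matters):

```shell
module load gaussian                                  # or: module load gaussian/g09
export GAUSS_SCRDIR=/fastfs/$USER/gaussian_scratch    # temporary Gaussian data
mkdir -p "$GAUSS_SCRDIR"
```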

The following options for the batch job submission command bsub might be particularly useful with Gaussian jobs, see also the chapter BatchSystems.

-R "span[hosts=1] rusage[mem=MEM_MB]"  # MEM_MB: total memory 
                                       # usage in MB
-x                 # use a node exclusively, 
                   # not sharing with other batch jobs
-m NODE_TYPE       # NODE_TYPE can be 'single_hosts', 
                   # 'dual_hosts', 'quad_hosts', or 'fat_quads'
-W hh:mm           # upper bound for wallclock time
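Combined, a submission could look like the following sketch. Memory, node type, runtime, and file names are placeholder values; invoking g09 with the input file as its argument follows the usual Gaussian convention:

```shell
bsub -R "span[hosts=1] rusage[mem=4000]" \
     -x -m dual_hosts -W 24:00 \
     -oo out.txt -eo err.txt \
     g09 input.com
```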


GAMESS

GAMESS is an ab initio quantum mechanics program which provides many standard quantum chemical methods for computing the properties of molecular systems. For a detailed description, please look at the GAMESS home page.

For runs with Slurm, please use a script like this:
#!/bin/bash
#SBATCH -t 120
#SBATCH -n 8
#SBATCH --ntasks-per-node=2
# make sure that an even number of tasks runs on each node!
#SBATCH --mem-per-cpu=1900
module load gamess
rungms.slurm cTT_M_025.inp /scratch/mark/gamess
# the last parameter is the location of the scratch directory

GAMESS should be cited as: M. W. Schmidt, K. K. Baldridge, J. A. Boatz, S. T. Elbert, M. S. Gordon, J. H. Jensen, S. Koseki, N. Matsunaga, K. A. Nguyen, S. J. Su, T. L. Windus, M. Dupuis, J. A. Montgomery, J. Comput. Chem. 14, 1347-1363 (1993).


LAMMPS

LAMMPS is a classical molecular dynamics code that models an ensemble of particles in a liquid, solid, or gaseous state. It can model atomic, polymeric, biological, metallic, granular, and coarse-grained systems using a variety of force fields and boundary conditions. For examples of LAMMPS simulations, documentation, and more, visit the LAMMPS sites.

On Deimos, it can be run like this:
bsub -n <N> -a openmpi mpirun.lsf  lmp_deimos -in INPUTFILE


ABINIT

ABINIT is a package whose main program allows one to find the total energy, charge density and electronic structure of systems made of electrons and nuclei (molecules and periodic solids) within Density Functional Theory (DFT), using pseudopotentials and a planewave basis. ABINIT also includes options to optimize the geometry according to the DFT forces and stresses, or to perform molecular dynamics simulations using these forces, or to generate dynamical matrices, Born effective charges, and dielectric tensors. Excited states can be computed within the Time-Dependent Density Functional Theory (for molecules), or within Many-Body Perturbation Theory (the GW approximation).

For sequential runs use bsub abinis ...; for runs in parallel:

bsub -n <N> -a openmpi mpirun.lsf abinip ...


CP2K

CP2K performs atomistic and molecular simulations of solid state, liquid, molecular and biological systems. It provides a general framework for different methods such as density functional theory (DFT) using a mixed Gaussian and plane waves approach (GPW), and classical pair and many-body potentials.

For sequential runs use:
bsub cp2k.sopt inputfile

for runs in parallel:
bsub -a openmpi -n <N> mpirun.lsf cp2k.popt inputfile

Please keep in mind that CP2K is currently under heavy development and no official release has been made yet.


CPMD

The CPMD code is a plane wave/pseudopotential implementation of Density Functional Theory, particularly designed for ab initio molecular dynamics. For examples and documentation, see the CPMD home page.

On Deimos, use bsub cpmd.seq inputfile for sequential runs; for runs in parallel:
bsub -a openmpi -n <N> mpirun.lsf cpmd.x inputfile

On Mars run CPMD in parallel like bsub -n 64 pamrun cpmd.x infile.inp.


GROMACS

GROMACS is a versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles. It is primarily designed for biochemical molecules like proteins, lipids and nucleic acids that have a lot of complicated bonded interactions, but since GROMACS is extremely fast at calculating the nonbonded interactions (that usually dominate simulations), many groups are also using it for research on non-biological systems, e.g. polymers.

For documentation, see the GROMACS home page.

For runs in parallel use:
 bsub -n <N> -a openmpi mpirun.lsf mdrun_s ...




Siesta

Siesta (Spanish Initiative for Electronic Simulations with Thousands of Atoms) is both a method and its computer program implementation, to perform electronic structure calculations and ab initio molecular dynamics simulations of molecules and solids.

In any paper or other academic publication containing results wholly or partially derived from the results of use of the SIESTA package, the following papers must be cited in the normal manner:
  1. "Self-consistent order-N density-functional calculations for very large systems", P. Ordejon, E. Artacho and J. M. Soler, Phys. Rev. B (Rapid Comm.) 53, R10441-10443 (1996).
  2. "The SIESTA method for ab initio order-N materials simulation" J. M. Soler, E. Artacho, J. D. Gale, A. Garcia, J. Junquera, P. Ordejon, and D. Sanchez-Portal, J. Phys.: Condens. Matt. 14, 2745-2779 (2002).


"VAMP/VASP is a package for performing ab-initio quantum-mechanical molecular dynamics (MD) using pseudopotentials and a plane wave basis set." [Official Site]. It is installed on mars. If you are interested in using VASP on ZIH machines, please contact Dr. Ulf Markwardt.