
Computational Fluid Dynamics (CFD)

          | Taurus | Venus | Module
----------|--------|-------|---------
OpenFOAM  | x      |       | openfoam
CFX       | x      | x     | ansys
Fluent    | x      | x     | ansys
ICEM CFD  | x      | x     | ansys
STAR-CCM+ | x      |       | star

OpenFOAM

The OpenFOAM® (Open Field Operation and Manipulation) CFD Toolbox can simulate anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics, electromagnetics and the pricing of financial options. OpenFOAM is produced by OpenCFD Ltd and is freely available and open source, licensed under the GNU General Public Licence.

Example job script for OpenFOAM:


#!/bin/bash
#SBATCH --time=12:00:00 # walltime
#SBATCH --ntasks=60 # number of processor cores (i.e. tasks)
#SBATCH --mem-per-cpu=500M # memory per CPU core
#SBATCH -J "Test" # job name
#SBATCH --mail-user=mustermann@tu-dresden.de # email address (only tu-dresden)
#SBATCH --mail-type=ALL
OUTFILE="Output"
module load openfoam
cd /scratch/<YOURUSERNAME> # work directory in /scratch...!
srun pimpleFoam -parallel > "$OUTFILE"
exit 0
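The script above launches pimpleFoam with 60 MPI ranks, so the case has to be decomposed into the same number of subdomains before submission (e.g. with decomposePar). A minimal system/decomposeParDict for this setup might look like the following sketch; the decomposition method is an assumption, adjust it to your case:

```
// system/decomposeParDict (sketch; numberOfSubdomains must match --ntasks)
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    object      decomposeParDict;
}

numberOfSubdomains  60;
method              scotch;   // scotch needs no geometric coefficients
```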

Ansys CFX

Ansys CFX is a powerful finite-volume-based program package for modeling general fluid flow in complex geometries. The main components of the CFX package are the flow solver cfx5solve, the geometry and mesh generator cfx5pre, and the post-processor cfx5post.

Example of starting CFX as a batch job:
#!/bin/bash
#SBATCH --time=12:00:00 # walltime
#SBATCH --ntasks=4 # number of processor cores (i.e. tasks)
#SBATCH --mem-per-cpu=1900M # memory per CPU core
#SBATCH --mail-user=.......@tu-dresden.de # email address (only tu-dresden)
#SBATCH --mail-type=ALL
module load ansys
cd /scratch/<YOURUSERNAME> # work directory in /scratch...!
cfx-parallel.sh -double -def StaticMixer.def
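Note that Slurm's --mem-per-cpu limit applies per task, so the total memory available to the CFX job above is ntasks × mem-per-cpu. A quick sanity check with the values from the script:

```shell
# Total job memory = number of tasks x memory per CPU core
# (values taken from the CFX example above).
ntasks=4
mem_per_cpu_mb=1900
echo "$(( ntasks * mem_per_cpu_mb )) MB total"   # 7600 MB total
```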

Ansys Fluent

Fluent needs the hostnames of the allocated nodes and can be run in parallel like this:
#!/bin/bash
#SBATCH --time=12:00:00                                    # walltime
#SBATCH --ntasks=4                                         # number of processor cores (i.e. tasks)
#SBATCH --mem-per-cpu=1900M                                # memory per CPU core
#SBATCH --mail-user=.......@tu-dresden.de                  # email address (only tu-dresden)
#SBATCH --mail-type=ALL
module load ansys

nodeset -e $SLURM_JOB_NODELIST | xargs -n1 > hostsfile_job_$SLURM_JOBID.txt

fluent 2ddp -t$SLURM_NTASKS -g -mpi=intel -pinfiniband -cnf=hostsfile_job_$SLURM_JOBID.txt < input.in
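The nodeset line turns the compressed Slurm nodelist into a plain hostfile with one hostname per line, which is the format Fluent's -cnf option expects. The xargs -n1 step can be illustrated with already-expanded hostnames (the node names here are made up):

```shell
# nodeset -e (from ClusterShell) expands a compressed nodelist such as
# "taurusi[1001-1003]" into space-separated hostnames; xargs -n1 then
# emits one hostname per line.  Illustrative hostnames:
echo "taurusi1001 taurusi1002 taurusi1003" | xargs -n1
```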

STAR-CCM+

Note: You have to use your own license in order to run STAR-CCM+ on Taurus, i.e. you have to specify the parameters -licpath and -podkey; see the example below.

Our installation provides the script starccm_hosts.pl, which generates a host list from the Slurm job environment; passing it to starccm+ enables the job to run across multiple nodes.

#!/bin/bash
#SBATCH --time=12:00:00                                    # walltime
#SBATCH --ntasks=32                                        # number of processor cores (i.e. tasks)
#SBATCH --mem-per-cpu=2500M                                # memory per CPU core
#SBATCH --mail-user=.......@tu-dresden.de                  # email address (only tu-dresden)
#SBATCH --mail-type=ALL

module load star

LICPATH="port@host"
PODKEY="your podkey"
INPUT_FILE="your_simulation.sim"

starccm+ -collab -rsh ssh -cpubind off -np $SLURM_NTASKS -on $(starccm_hosts.pl $SLURM_JOB_ID) -batch -power -licpath $LICPATH -podkey $PODKEY $INPUT_FILE
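starccm+'s -on option takes a comma-separated host specification; assuming starccm_hosts.pl emits the usual host:ranks form (an assumption, the exact output is site-specific), its result for two nodes with 16 tasks each can be reproduced like this (hostnames and task count are illustrative):

```shell
# Build a host specification "host:n,host:n" from a list of hostnames.
# The hostnames and per-node task count here are made up for illustration.
hosts="taurusi1001 taurusi1002"
ntasks_per_node=16
spec=$(for h in $hosts; do printf "%s:%d," "$h" "$ntasks_per_node"; done)
spec=${spec%,}           # strip the trailing comma
echo "$spec"             # taurusi1001:16,taurusi1002:16
```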