Computational Fluid Dynamics (CFD)

The following CFD applications are available on our system:

Software     Module
OpenFOAM     openfoam
CFX          ansys
Fluent       ansys
ICEM CFD     ansys
STAR-CCM+    star
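
The installed versions behind each module can be listed with module spider, e.g. for the Ansys products from the table above:

marie@login$ module spider ansys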

OpenFOAM

The OpenFOAM (Open Field Operation and Manipulation) CFD Toolbox can simulate anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics, electromagnetics and the pricing of financial options. OpenFOAM is developed primarily by OpenCFD Ltd and is freely available and open-source, licensed under the GNU General Public License.

The command module spider OpenFOAM lists the installed OpenFOAM versions. In order to use OpenFOAM, it is mandatory to set up the environment by sourcing the bashrc (for users running bash or ksh) or cshrc (for users running tcsh or csh) provided by OpenFOAM:

marie@login$ module load OpenFOAM/VERSION
marie@login$ source $FOAM_BASH
marie@login$ # source $FOAM_CSH
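
After sourcing, the usual OpenFOAM environment variables should be defined. As a quick check (e.g. WM_PROJECT_DIR, which the sourced bashrc is expected to set):

marie@login$ echo $WM_PROJECT_DIR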
Example OpenFOAM job script:
#!/bin/bash
#SBATCH --time=12:00:00     # walltime
#SBATCH --ntasks=60         # number of processor cores (i.e. tasks)
#SBATCH --mem-per-cpu=500M  # memory per CPU core
#SBATCH --job-name="Test"   # job name
#SBATCH --mail-user=marie@tu-dresden.de  # email address (only tu-dresden)
#SBATCH --mail-type=ALL

OUTFILE="Output"
module load OpenFOAM
source $FOAM_BASH
cd /horse/ws/marie-example-workspace  # work directory using workspace
srun pimpleFoam -parallel > "$OUTFILE"
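
The job script can then be submitted with sbatch from the login node (the file name below is only a placeholder):

marie@login$ sbatch openfoam_job.sh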

Ansys CFX

Ansys CFX is a powerful finite-volume-based program package for modeling general fluid flow in complex geometries. The main components of the CFX package are the flow solver cfx5solve, the geometry and mesh generator cfx5pre, and the post-processor cfx5post.

Example CFX job script:
#!/bin/bash
#SBATCH --time=12:00                                       # walltime
#SBATCH --ntasks=4                                         # number of processor cores (i.e. tasks)
#SBATCH --mem-per-cpu=1900M                                # memory per CPU core
#SBATCH --mail-user=marie@tu-dresden.de                    # email address (only tu-dresden)
#SBATCH --mail-type=ALL

module load ANSYS
cd /horse/ws/marie-example-workspace                       # work directory using workspace
cfx-parallel.sh -double -def StaticMixer.def
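
For interactive pre- and post-processing with cfx5pre or cfx5post, the graphical tools can be started in an interactive job with X11 forwarding, analogous to the Fluent example below (a sketch, assuming a working X11 setup on the login node):

marie@login$ module load ANSYS
marie@login$ srun --nodes=1 --cpus-per-task=4 --time=1:00:00 --pty --x11=first bash
marie@compute$ cfx5pre &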

Ansys Fluent

Fluent needs the host names of the allocated nodes and can be run in parallel like this:
#!/bin/bash
#SBATCH --time=12:00                        # walltime
#SBATCH --ntasks=4                          # number of processor cores (i.e. tasks)
#SBATCH --mem-per-cpu=1900M                 # memory per CPU core
#SBATCH --mail-user=marie@tu-dresden.de     # email address (only tu-dresden)
#SBATCH --mail-type=ALL

module purge
module load release/23.10
module load ANSYS/2023R1
fluent 2ddp -t$SLURM_NTASKS -g -mpi=openmpi -pinfiniband -cnf=$(/software/util/slurm/bin/create_rankfile -f CCM) -i input.jou
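
The -i input.jou option refers to a Fluent journal file that drives the batch run. A minimal sketch of such a journal, assuming hypothetical case/data file names and a plain iteration run (adjust the TUI commands to your simulation):

; read the case file
/file/read-case mycase.cas.h5
; initialize the solution and iterate
/solve/initialize/initialize-flow
/solve/iterate 200
; write the data file and exit without confirmation prompt
/file/write-data mydata.dat.h5
/exit yes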

To use Fluent interactively, please try:

marie@login$ module load ANSYS/19.2
marie@login$ srun --nodes=1 --cpus-per-task=4 --time=1:00:00 --pty --x11=first bash
marie@compute$ fluent &

STAR-CCM+

Note

You have to use your own license in order to run STAR-CCM+ on ZIH systems. Therefore, you have to specify the parameters -licpath and -podkey (see the examples below).

Our installation provides the script create_rankfile; invoked as create_rankfile -f CCM, it generates a host list from the Slurm job environment that can be passed to starccm+, enabling it to run across multiple nodes.

Example
#!/bin/bash
#SBATCH --time=12:00                        # walltime
#SBATCH --ntasks=32                         # number of processor cores (i.e. tasks)
#SBATCH --mem-per-cpu=2500M                 # memory per CPU core
#SBATCH --mail-user=marie@tu-dresden.de     # email address (only tu-dresden)
#SBATCH --mail-type=ALL

module load STAR-CCM+

LICPATH="port@host"
PODKEY="your podkey"
INPUT_FILE="your_simulation.sim"
starccm+ -collab -rsh ssh -cpubind off -np $SLURM_NTASKS -on $(/sw/taurus/tools/slurmtools/default/bin/create_rankfile -f CCM) -batch -power -licpath $LICPATH -podkey $PODKEY $INPUT_FILE

Note

The path to the create_rankfile script is different on the new HPC system Barnard.

Example
#!/bin/bash
#SBATCH --time=12:00                        # walltime
#SBATCH --ntasks=32                         # number of processor cores (i.e. tasks)
#SBATCH --mem-per-cpu=2500M                 # memory per CPU core
#SBATCH --mail-user=marie@tu-dresden.de     # email address (only tu-dresden)
#SBATCH --mail-type=ALL

module load STAR-CCM+

LICPATH="port@host"
PODKEY="your podkey"
INPUT_FILE="your_simulation.sim"
starccm+ -collab -rsh ssh -cpubind off -np $SLURM_NTASKS -on $(/software/util/slurm/bin/create_rankfile -f CCM) -batch -power -licpath $LICPATH -podkey $PODKEY $INPUT_FILE