Gromacs

Gromacs is available via a software module, which must be loaded to make the software accessible. The command module avail gromacs lists the versions that have been installed; an appropriate module can then be loaded, e.g.:


module avail gromacs
module load gromacs/2021.4

Gromacs can be run in two ways:

  • For all mdrun jobs, use mpirun mdrun_mpi (or mpirun mdrun_mpi_d for double precision) instead of gmx mdrun, even for non-parallel cases.

  • Other Gromacs functionality is available via the commands gmx (for single precision) and gmx_d (double precision).
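For example, a typical single-precision workflow (the file names below are placeholders for illustration) prepares run input with gmx grompp and then launches it with mpirun mdrun_mpi, e.g. inside a job script such as those below:

gmx grompp -f md.mdp -c conf.gro -p topol.top -o topol.tpr
mpirun mdrun_mpi -s topol.tpr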

Sample job scripts are included below; please also look at the Running Jobs page for further advice on how to configure jobs.

Example job for mdrun

#!/bin/bash
# example job script for Gromacs (mdrun)
#SBATCH -p shared # Slurm queue/partition. Default is the 'shared' partition
#SBATCH -t 00-01:00:00 # job time limit, in format dd-hh:mm:ss. Default is 1 hour.
#SBATCH --mem=1G # RAM required per node, in units k,M,G or T.

# Define how the job is parallelised, with (-n) MPI ranks and (-c) threads per rank
# distributed over (-N) nodes. Experiment if necessary to find a configuration that best
# suits your case. Note that Gromacs has a limit of 64 threads per rank.
#SBATCH -n 1
#SBATCH -c 1
#SBATCH -N 1
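# For example, to run 4 MPI ranks with 4 threads per rank on a single node,
# request '#SBATCH -n 4', '#SBATCH -c 4' and '#SBATCH -N 1' instead
# (illustrative values only; tune them for your own case).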

module purge
module load gromacs/2021.4

# Launch mdrun. The numbers of MPI ranks and threads per rank are set automatically
# using the configuration requested above, so they need not be specified below.

mpirun mdrun_mpi <gromacs options>
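To submit the job, save the script to a file (here assumed to be called mdrun-job.sh) and pass it to sbatch:

sbatch mdrun-job.sh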


Example job for Gromacs (not mdrun)

#!/bin/bash
# example script for Gromacs (not mdrun)
#SBATCH -p shared # Slurm queue/partition. Default is the 'shared' partition
#SBATCH -t 00-01:00:00 # job time limit, in format dd-hh:mm:ss. Default is 1 hour.
#SBATCH --mem=1G # RAM required per node, in units k,M,G or T.

module purge
module load gromacs/2021.4

# For all mdrun jobs, use 'mpirun mdrun_mpi' (single precision) or 'mpirun mdrun_mpi_d' (double
# precision) instead of 'gmx mdrun', even for non-parallel cases.

gmx <gromacs options>
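As an illustration, the placeholder line could be an analysis step such as gmx energy, which extracts energy terms from an .edr file (the file names here are hypothetical):

gmx energy -f ener.edr -o energy.xvg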