MPI

The OpenMPI, MVAPICH2 and IntelMPI libraries are all available through software modules. Each MPI has been built against each of the compiler modules; load a compiler module before loading an MPI module so that the matching MPI flavour is selected.
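
For example, to use OpenMPI built with GCC (the module names below are illustrative; run "module avail" to see the exact names and versions on the system):

    module load gcc        # load the compiler first
    module load openmpi    # the OpenMPI build matching that compiler is then selected
    module list            # confirm which modules are active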

All MPI modules provide the mpicc, mpicxx, mpif90 and mpif77 compiler wrappers, which invoke the compilers from the compiler module that has been loaded.
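
A minimal compile sketch using the wrappers (the source file names are placeholders):

    mpicc  -O2 -o hello_c   hello.c    # C
    mpicxx -O2 -o hello_cxx hello.cpp  # C++
    mpif90 -O2 -o hello_f90 hello.f90  # Fortran
    mpicc -show                        # print the underlying compiler and flags the wrapper invokes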

All MPIs will attempt to bind ranks to cores. IntelMPI is the most sophisticated in this respect, and we have attempted to configure OpenMPI to provide the same behaviour, i.e.:

  • MPI ranks will be evenly distributed across the available NUMA domains to maximise utilisation of the available memory bandwidth.
  • MPI ranks will be "bunched" such that neighbours in "MPI_COMM_WORLD" will be on the same node and on the same NUMA domain where possible.
  • For situations requiring more than one CPU core per rank, we recommend using the SLURM "-c" option (e.g. #SBATCH -c 16) to set the number of CPU cores that each rank is bound to; an example batch script is sketched below.
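
A minimal batch-script sketch for a hybrid MPI/OpenMP job follows. It assumes srun is used as the launcher and uses illustrative module names and resource sizes; the partition, account and preferred launcher for this system may differ, so adapt as required.

    #!/bin/bash
    #SBATCH --nodes=2              # two nodes
    #SBATCH --ntasks-per-node=8    # 8 MPI ranks per node
    #SBATCH -c 16                  # bind 16 CPU cores to each rank
    #SBATCH --time=01:00:00

    module load gcc openmpi        # illustrative module names

    export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}   # one thread per bound core

    srun ./my_mpi_app              # my_mpi_app is a placeholder executable

With OpenMPI, adding --report-bindings to an mpirun command prints the binding of each rank, which is a quick way to verify the layout described above.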

We are still determining the binding behaviour of MVAPICH2; its process-binding controls are not as flexible as those of OpenMPI or IntelMPI.