We currently have modules for:
- AOCL (AMD Optimised CPU Libraries).
- MKL (Intel Math Kernel Library).
The SLURM queues set appropriate values for OMP_NUM_THREADS, MKL_NUM_THREADS, OPENBLAS_NUM_THREADS and BLIS_NUM_THREADS to provide the expected behaviour under most circumstances.
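As a rough sketch of what this amounts to (the queues set these automatically, so job scripts do not normally need to; the one-thread-per-allocated-CPU logic and the fallback of 1 are illustrative assumptions, not a documented guarantee):

```shell
# Illustrative only: approximate the defaults the queues apply.
# Use one thread per CPU allocated to the task, falling back to 1
# when SLURM_CPUS_PER_TASK is unset (e.g. outside a job).
export OMP_NUM_THREADS="${SLURM_CPUS_PER_TASK:-1}"
# The library-specific thread counts follow the OpenMP setting.
export MKL_NUM_THREADS="$OMP_NUM_THREADS"
export OPENBLAS_NUM_THREADS="$OMP_NUM_THREADS"
export BLIS_NUM_THREADS="$OMP_NUM_THREADS"
```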
Note that the installed version of MKL is the last one able to use AVX2 CPU instructions on AMD hardware, so it will be the only version of MKL installed on Hamilton 8. Some software available for download on the web, notably Intel's own distribution of Python, embeds a more recent MKL and so may not run as quickly as intended.
The modules for all BLAS/LAPACK implementations set the following environment variables:
- ARC_LINALG_CFLAGS - Flags required to link a C program against its serial BLAS/LAPACK implementation.
- ARC_LINALG_FFLAGS - Flags required to link a Fortran program against its serial BLAS/LAPACK implementation.
- ARC_LINALG_MT_CFLAGS - Flags required to link a C program against its multi-threaded BLAS/LAPACK implementation.
- ARC_LINALG_MT_FFLAGS - Flags required to link a Fortran program against its multi-threaded BLAS/LAPACK implementation.
Where a library does not provide a serial BLAS/LAPACK implementation, the ARC_LINALG_*FLAGS variables will link to the multi-threaded version. Where a library does not provide a multi-threaded BLAS/LAPACK implementation, the ARC_LINALG_MT_*FLAGS variables will link to the serial version.
It should therefore be safe to link against the ARC_LINALG_MT_*FLAGS variables, even when intending to run single-threaded.
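Because the multi-threaded flags are always safe to link against, serial behaviour can be obtained at run time by capping the thread counts before launching the binary. This is a general pattern for threaded BLAS libraries, not something specific to these modules:

```shell
# Force a binary linked against the multi-threaded BLAS/LAPACK
# library to run on a single thread.
export OMP_NUM_THREADS=1
export MKL_NUM_THREADS=1
export OPENBLAS_NUM_THREADS=1
export BLIS_NUM_THREADS=1
# ./program   # hypothetical binary linked with $ARC_LINALG_MT_CFLAGS
```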
These environment variables should assist in making compilation scripts that are relatively agnostic to the compiler, MPI and BLAS/LAPACK implementation. For example:
- To compile a C program:
$CC -o program program.c $ARC_LINALG_MT_CFLAGS
- To compile an MPI Fortran program:
mpif90 -o mpiprogram mpiprogram.f90 $ARC_LINALG_MT_FFLAGS
Such commands would work regardless of which compiler, MPI and BLAS/LAPACK modules are loaded.
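A build script that needs to pick flags programmatically could wrap the variables in a small helper. The function name and interface below are hypothetical; only the ARC_LINALG_* variable names come from the list above:

```shell
# Hypothetical helper: select link flags by language and threading mode,
# using the ARC_LINALG_* variables set by the loaded BLAS/LAPACK module.
linalg_flags() {
    local lang="$1" mode="$2"   # lang: c|fortran, mode: serial|mt
    case "$lang/$mode" in
        c/serial)       echo "${ARC_LINALG_CFLAGS:-}" ;;
        c/mt)           echo "${ARC_LINALG_MT_CFLAGS:-}" ;;
        fortran/serial) echo "${ARC_LINALG_FFLAGS:-}" ;;
        fortran/mt)     echo "${ARC_LINALG_MT_FFLAGS:-}" ;;
    esac
}
```

Usage in a build script might then be `$CC -o program program.c $(linalg_flags c mt)`, with the flags coming from whichever module happens to be loaded.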