gromacs/2016: GROMACS is a versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of
motion for systems with hundreds to millions of particles.
Starting with the 5.x series (the versions immediately prior to the 2016 release), GROMACS changed the way its programs are
organized and called. Earlier versions used a large collection of individual programs to perform the various pre- and post-processing
steps of a simulation. GROMACS has now condensed all of these sub-programs into a single program (gmx_mpi), with the functionality
accessed through command-line subcommands.
For example, in the earlier releases the preprocessor, grompp, would have been run like this:
grompp -f file1.mdp -p file2.top -c file3.gro
In the 2016 release, the same step is performed by calling the gmx_mpi program with grompp as its first argument:
gmx_mpi grompp -f file1.mdp -p file2.top -c file3.gro
To see the complete list of available commands, use:
gmx_mpi help commands
To see help for a specific command, use:
gmx_mpi help [commandname]
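For example, help for the mdrun command (the simulation engine used in the job script below) can be viewed with:
gmx_mpi help mdrun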
GPU Acceleration
GROMACS has been compiled with support for GPU acceleration. Please refer to the
SCC GPU documentation for more details. If a
GPU is detected on the node where GROMACS is running, it will be used automatically. The GROMACS website has additional information on its use of GPUs.
To run GROMACS on an SCC node with GPUs, make sure that the number of requested GPUs is equal to the number of MPI processes assigned to the node. This is due to
the way that GPU usage is configured: each GPU can only be used by a single MPI process. Multiple threads inside a GROMACS MPI process can share a single GPU, however.
Running with 8 MPI processes on a single node equipped with 8 M2070 GPUs allows all of the GPUs to be used. The speedup is significant: using the 8 GPUs is 3x faster than
running on 8 cores of the node's Intel Xeon X5675 CPUs alone.
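As an illustration of this rule (the GPU and thread counts here are hypothetical, not a recommendation for a particular SCC node type): on a node with 4 GPUs, GROMACS could be launched with 4 MPI processes and 2 OpenMP threads per process, so that each GPU is driven by exactly one MPI process while that process's threads share it:
# Hypothetical layout: 4 MPI processes, one per GPU, each running 2 OpenMP threads
mpirun -np 4 gmx_mpi mdrun -s myfile.tpr -ntomp 2 -pin on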
To run GROMACS making use of GPU acceleration, submit it to the queue using
a script like this:
#!/bin/bash -l
#$ -N myjob # Give job a name
#$ -pe mpi_8_tasks_per_node 8 # Request 8 MPI tasks, all on a single node.
#$ -l gpus=1,gpu_type=M2070 # gpus is the # of GPUs divided by the # of MPI tasks
# Load GROMACS
module purge
module load gcc/4.9.2
module load openmpi
module load gromacs/2016
# Run the "mdrun" subprogram to run the simulation.
# Set the # of threads per process to 1 with the -ntomp flag.
# Pin the threads to processor cores with the -pin on flag.
mpirun -np 8 gmx_mpi mdrun -s myfile.tpr -ntomp 1 -pin on
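Assuming the input file myfile.tpr has already been prepared with gmx_mpi grompp as shown above, and the script is saved as, for example, gromacs_gpu.qsub (the filename is just an illustration), the job is submitted to the batch system in the usual way:
qsub gromacs_gpu.qsub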