The USER-OMP package was developed by Axel Kohlmeyer at Temple University. It provides multi-threaded versions of most pair styles, nearly all bonded styles (bond, angle, dihedral, improper), several Kspace styles, and a few fix styles. The package currently uses the OpenMP interface for multi-threading.
Here is a quick overview of how to use the USER-OMP package:

- use the -fopenmp flag for compiling and linking in your Makefile.machine
- include the USER-OMP package and build LAMMPS
- use the mpirun command to set the number of MPI tasks/node
- specify how many threads per MPI task to use
- use USER-OMP styles in your input script

The latter two steps can be done using the "-pk omp" and "-sf omp" command-line switches respectively. Or the effect of the "-pk" or "-sf" switches can be duplicated by adding the package omp or suffix omp commands respectively to your input script.
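For example, the following two forms are equivalent; the executable name lmp_machine and the script name in.script are placeholders:

lmp_machine -sf omp -pk omp 4 -in in.script   # command-line switches
# ...or omit both switches and put these two lines at the top of in.script:
package omp 4
suffix omp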
Required hardware/software:
Your compiler must support the OpenMP interface. You should have one or more multi-core CPUs so that multiple threads can be launched by an MPI task running on a CPU.
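As a quick sanity check that your compiler accepts the OpenMP flag, you can compile a trivial test program; this sketch assumes the GNU g++ compiler and a Unix-like shell:

printf 'int main() {\n#pragma omp parallel\n  {}\n}\n' > test_omp.cpp
g++ -fopenmp test_omp.cpp -o test_omp   # success means the flag is supported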
Building LAMMPS with the USER-OMP package:
To do this in one line, use the src/Make.py script, described in Section 2.4 of the manual. Type "Make.py -h" for help. If run from the src directory, this command will create src/lmp_omp using src/MAKE/Makefile.mpi as the starting Makefile.machine:
Make.py -p omp -o omp -a file mpi
Or you can follow these steps:
cd lammps/src
make yes-user-omp
make machine
The CCFLAGS setting in Makefile.machine needs "-fopenmp" to add OpenMP support. This works for both the GNU and Intel compilers. Without this flag the USER-OMP styles will still be compiled and work, but will not support multi-threading. For the Intel compilers the CCFLAGS setting also needs to include "-restrict".
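As a sketch, the relevant settings in a Makefile.machine for the GNU compilers might look as follows; exact compiler names and optimization flags vary by machine:

CC =        mpicxx
CCFLAGS =   -g -O3 -fopenmp     # with the Intel compilers, also add -restrict
LINK =      mpicxx
LINKFLAGS = -g -O3 -fopenmp     # GNU compilers also need -fopenmp at link time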
Run with the USER-OMP package from the command line:
The mpirun or mpiexec command sets the total number of MPI tasks used by LAMMPS (one or multiple per compute node) and the number of MPI tasks used per node. E.g. the mpirun command in MPICH does this via its -np and -ppn switches. Ditto for OpenMPI via -np and -npernode.
You need to choose how many threads per MPI task will be used by the USER-OMP package. Note that the product of MPI tasks * threads/task should not exceed the physical number of cores (on a node), otherwise performance will suffer.
Use the "-sf omp" command-line switch, which will automatically append "omp" to styles that support it. Use the "-pk omp Nt" command-line switch, to set Nt = # of OpenMP threads per MPI task to use.
lmp_machine -sf omp -pk omp 16 -in in.script                       # 1 MPI task on a 16-core node
mpirun -np 4 lmp_machine -sf omp -pk omp 4 -in in.script           # 4 MPI tasks each with 4 threads on a single 16-core node
mpirun -np 32 -ppn 4 lmp_machine -sf omp -pk omp 4 -in in.script   # ditto on 8 16-core nodes
Note that if the "-sf omp" switch is used, it also issues a default package omp 0 command, which sets the number of threads per MPI task via the OMP_NUM_THREADS environment variable.
Using the "-pk" switch explicitly allows for direct setting of the number of threads and additional options. Its syntax is the same as the "package omp" command. See the package command doc page for details, including the default values used for all its options if it is not specified, and how to set the number of threads via the OMP_NUM_THREADS environment variable if desired.
Or run with the USER-OMP package by editing an input script:
The discussion above for the mpirun/mpiexec command, MPI tasks/node, and threads/MPI task is the same.
Use the suffix omp command, or you can explicitly add an "omp" suffix to individual styles in your input script, e.g.
pair_style lj/cut/omp 2.5
You must also use the package omp command to enable the USER-OMP package, unless the "-sf omp" or "-pk omp" command-line switches were used. It specifies how many threads per MPI task to use, as well as other options. Its doc page explains how to set the number of threads via an environment variable if desired.
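As a minimal sketch, a hypothetical input script combining these commands could begin as follows:

package omp 4            # 4 OpenMP threads per MPI task
suffix omp               # appends "omp" to supported styles
pair_style lj/cut 2.5    # runs as lj/cut/omp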
Speed-ups to expect:
Depending on which styles are accelerated, you should look for a reduction in the "Pair time", "Bond time", "KSpace time", and "Loop time" values printed at the end of a run.
You may see a small performance advantage (5 to 20%) when running a USER-OMP style (in serial or parallel) with a single thread per MPI task, versus running standard LAMMPS with its standard (un-accelerated) styles (in serial or all-MPI parallelization with 1 task/core). This is because many of the USER-OMP styles contain similar optimizations to those used in the OPT package, as described above.
With multiple threads/task, the optimal choice of MPI tasks/node and OpenMP threads/task can vary a lot and should always be tested via benchmark runs for a specific simulation running on a specific machine, paying attention to guidelines discussed in the next sub-section.
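One simple way to run such a benchmark sweep on a single 16-core node is a shell loop; adjust the core count, executable name, and input script to your setup:

for t in 1 2 4 8 16 ; do
  mpirun -np $((16 / t)) lmp_machine -sf omp -pk omp $t -in in.script
done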
A description of the multi-threading strategy used in the USER-OMP package and some performance examples are presented here.
Guidelines for best performance:
For many problems on current generation CPUs, running the USER-OMP package with a single thread/task is faster than running with multiple threads/task. This is because the MPI parallelization in LAMMPS is often more efficient than multi-threading as implemented in the USER-OMP package. The parallel efficiency (in a threaded sense) also varies for different USER-OMP styles.
Using multiple threads/task can be more effective under the following circumstances:

- Individual compute nodes have many cores, but the CPU itself has limited memory bandwidth, so that running one MPI task per core degrades performance; hybrid MPI+OpenMP runs reduce the contention while still using the otherwise idle cores.
- The interconnect used for MPI communication does not provide sufficient bandwidth for a large number of MPI tasks per node, e.g. when running over gigabit ethernet.
- The system has a spatially inhomogeneous particle density which does not map well to the domain decomposition or load-balancing options that LAMMPS provides, since multi-threading parallelizes over the number of particles, not over their distribution in space.
- A machine is being used in "capability mode", i.e. near the point where MPI parallelism is maxed out, e.g. when the scaling of a long-range PPPM solver becomes the performance-limiting factor; using fewer MPI tasks with multiple threads can then speed up the KSpace calculation while the pairwise and bonded calculations are parallelized via OpenMP.
Additional performance tips are as follows:

- The best parallel efficiency from omp styles is typically achieved when there is at least one MPI task per physical CPU socket.
- It is usually most efficient to restrict threading to a single socket, i.e. use one or more MPI tasks per socket.
- Several MPI implementations by default bind each MPI task to a single CPU core; multi-threading in that mode forces all threads to share one core and is counterproductive. Binding MPI tasks to a whole (multi-core) socket avoids this issue (see the example below).
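As an illustration of the last tip, recent OpenMPI versions provide binding options; the flag names differ across MPI implementations, so check your mpirun documentation:

mpirun -np 2 --bind-to socket --map-by socket lmp_machine -sf omp -pk omp 8 -in in.script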
Restrictions:
None.