[GROMACS] Are there some calculations in GROMACS that cannot use GPU acceleration?

OP
This post was last edited by neocc on 2019-4-22 02:58

Dear forum owner, are there some calculations in GROMACS that cannot be GPU-accelerated?


Running the calculation on the GPU gives the following error:
Inconsistency in user input:
Nonbonded interactions on the GPU were required, but not supported for these
simulation settings. Change your settings, or do not require using GPUs.
and also:
Fatal error:
Cannot run short-ranged nonbonded interactions on a GPU because there is none


I'm not sure which calculations can run on the CPU and which can run on the GPU. Should I be using the -nb auto option?

2# Posted by the OP on 2019-4-22 02:56:06

Sorry, I didn't read the manual carefully; the relevant section of the GROMACS documentation is quoted below.

Running mdrun with GPUs
NVIDIA GPUs from the professional line (Tesla or Quadro) starting with the Kepler generation (compute capability 3.5 and later) support changing the processor and memory clock frequency with the help of the application clocks feature. With many workloads, using higher clock rates than the default provides significant performance improvements. For more information see the NVIDIA blog article on this topic. For GROMACS the highest application clock rates are optimal on all hardware available to date (up to and including Maxwell, compute capability 5.2).

Application clocks can be set using the NVIDIA system management tool nvidia-smi. If the system permissions allow, gmx mdrun has built-in support to set application clocks if built with NVML support. Note that application clocks are a global setting, hence affect the performance of all applications that use the respective GPU(s). For this reason, gmx mdrun sets application clocks at initialization to the values optimal for GROMACS and it restores them before exiting to the values found at startup, unless it detects that they were altered during its runtime.
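For reference, a minimal sketch of inspecting and setting application clocks by hand with nvidia-smi; the device index and the clock pair below are placeholders, so pick values from the list your own GPU reports (setting clocks may require root or relaxed permissions):

  # list the memory/graphics clock pairs supported by device 0
  nvidia-smi -i 0 -q -d SUPPORTED_CLOCKS
  # set application clocks to <memory,graphics> in MHz (example values only)
  sudo nvidia-smi -i 0 -ac 2505,875
  # reset application clocks to the driver default
  sudo nvidia-smi -i 0 -rac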

Types of GPU tasks
To better understand the later sections on different GPU use cases for calculation of short range and PME, we first introduce the concept of different GPU tasks. When thinking about running a simulation, several different kinds of interactions between the atoms have to be calculated (for more information please refer to the reference manual). The calculation can thus be split into several distinct parts that are largely independent of each other (hence can be calculated in any order, e.g. sequentially or concurrently), with the information from each of them combined at the end of the time step to obtain the final forces on each atom and to propagate the system to the next time point. For a better understanding also please see the section on domain decomposition.

Of all calculations required for an MD step, GROMACS aims to optimize performance bottom-up for each step from the lowest level (SIMD unit, cores, sockets, accelerators, etc.). Therefore many of the individual computation units are highly tuned for the lowest level of hardware parallelism: the SIMD units. Additionally, with GPU accelerators used as co-processors, some of the work can be offloaded, that is calculated simultaneously/concurrently with the CPU on the accelerator device, with the result being communicated to the CPU. Right now, GROMACS supports GPU accelerator offload of two tasks: the short-range nonbonded interactions in real space, and PME.

Please note that PME solving on the GPU is still an initial implementation of this capability, and comes with a set of limitations outlined further below.

Right now, we generally support short-range nonbonded offload with and without dynamic pruning on a wide range of GPU accelerators (both NVIDIA and AMD). This is compatible with the great majority of the features and parallelization modes and can be used to scale to large machines.

Simultaneously offloading both short-range nonbonded and long-range PME work to GPU accelerators is a new feature that has some restrictions in terms of feature and parallelization compatibility (please see the section below).

GPU computation of short range nonbonded interactions
Using the GPU for the short-ranged nonbonded interactions provides the majority of the available speed-up compared to a run using only the CPU. Here, the GPU acts as an accelerator that can effectively parallelize this problem and thus reduce the calculation time.
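A minimal sketch of requesting this offload mode explicitly (the run name md is a placeholder; without these flags mdrun defaults to -nb auto and will use a compatible GPU if one is detected):

  # short-range nonbonded interactions on the GPU, PME kept on the CPU
  gmx mdrun -deffnm md -nb gpu -pme cpu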

GPU accelerated calculation of PME
Recent additions to GROMACS also allow offloading of the PME calculation to the GPU, to further reduce the load on the CPU and improve the overlap of CPU and GPU usage. Here, PME is solved on the same GPU that also computes the short-range interactions.
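A sketch of a run with both tasks offloaded (GROMACS 2018 or later; md is again a placeholder run name):

  # short-range nonbonded and PME both offloaded to the GPU
  gmx mdrun -deffnm md -nb gpu -pme gpu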

Known limitations
Please note again the limitations outlined below!

Only compilation with CUDA is supported.
Only a PME order of 4 is supported on GPUs.
PME will run on a GPU only when exactly one rank has a PME task, i.e. decompositions with multiple ranks doing PME are not supported.
Only single precision is supported.
Free energy calculations where charges are perturbed are not supported, because only single PME grids can be calculated.
LJ PME is not supported on GPUs.
Assigning tasks to GPUs
Depending on which tasks should be performed on which hardware, different kinds of calculations can be combined on the same or different GPUs, according to the information provided for running mdrun.

It is possible to assign the calculation of the different computational tasks to the same GPU, meaning that they will share the computational resources on the same device, or to different processing units that each perform one task.

An overview of the possible task assignments is given below:

GROMACS version 2018:

Two different types of GPU accelerated tasks are available, NB and PME. Each PP rank has an NB task that can be offloaded to a GPU. If there is only one rank with a PME task (including if that rank is a PME-only rank), then that task can be offloaded to a GPU. Such a PME task can run wholly on the GPU, or have its latter stages run only on the CPU.

Limitations are that PME on GPU does not support PME domain decomposition, so that only one PME task can be offloaded to a single GPU assigned to a separate PME rank, while NB can be decomposed and offloaded to multiple GPUs.
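As a sketch of such an assignment on a node with, say, two GPUs (rank counts and device IDs are assumptions to adjust to your own hardware):

  # 4 thread-MPI ranks, one dedicated to PME; the -gputasks digits map the node's
  # GPU tasks (three NB tasks plus one PME task) onto devices 0 and 1
  gmx mdrun -deffnm md -ntmpi 4 -npme 1 -nb gpu -pme gpu -gputasks 0011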

Performance considerations for GPU tasks
The performance balance depends on how many (and how fast) CPU cores you have vs. how many and how fast GPUs you have.
With slow/old GPUs and/or fast/modern CPUs with many cores, it might make more sense to let the CPU do PME calculation, with the GPUs focused on the calculation of the NB.
With fast/modern GPUs and/or slow/old CPUs with few cores, it generally helps to have the GPU do PME.
It is possible to use multiple GPUs with PME offload by letting e.g. 3 MPI ranks use one GPU each for short-range interactions, while a fourth rank does the PME on its GPU.
The only way to know for sure which alternative is best for your machine is to test and check performance, e.g. with short benchmark runs like the ones sketched below.
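For example, one way to compare the two setups is a pair of short benchmark runs; the input name, step count and log names below are placeholders:

  # PME on the CPU vs. PME on the GPU, with timers reset halfway to exclude startup cost
  gmx mdrun -s md.tpr -nb gpu -pme cpu -nsteps 10000 -resethway -noconfout -g pme_on_cpu.log
  gmx mdrun -s md.tpr -nb gpu -pme gpu -nsteps 10000 -resethway -noconfout -g pme_on_gpu.log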
Reducing overheads in GPU accelerated runs
In order for CPU cores and GPU(s) to execute concurrently, tasks are launched and executed asynchronously on the GPU(s) while the CPU cores execute non-offloaded force computation (like long-range PME electrostatics). Asynchronous task launches are handled by the GPU device driver and require CPU involvement. Therefore, the work of scheduling GPU tasks will incur an overhead that can in some cases significantly delay or interfere with the CPU execution.

Delays in CPU execution are caused by the latency of launching GPU tasks, an overhead that can become significant as simulation ns/day increases (i.e. with shorter wall-time per step). The overhead is measured by gmx mdrun and reported in the performance summary section of the log file (“Launch GPU ops” row). A few percent of runtime spent in this category is normal, but in fast-iterating and multi-GPU parallel runs overheads of 10% or more can be observed. In general, a user can do little to avoid such overheads, but there are a few cases where tweaks can give performance benefits. In single-rank runs, timing of GPU tasks is enabled by default and, while in most cases its impact is small, in fast runs performance can be affected. The performance impact is most significant on NVIDIA GPUs with CUDA, less so on AMD with OpenCL. In these cases, when more than a few percent of “Launch GPU ops” time is observed, it is recommended to turn off timing by setting the GMX_DISABLE_GPU_TIMING environment variable. In parallel runs with many ranks sharing a GPU, launch overheads can also be reduced by starting fewer thread-MPI or MPI ranks per GPU; e.g. most often one rank per thread or core is not optimal.
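A minimal sketch of applying that environment variable (bash syntax assumed; md is a placeholder run name):

  # disable GPU task timing to reduce launch overhead in fast-iterating runs
  export GMX_DISABLE_GPU_TIMING=1
  gmx mdrun -deffnm md -nb gpu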

The second type of overhead, interference of the GPU driver with CPU computation, is caused by the scheduling and coordination of GPU tasks. A separate GPU driver thread can require CPU resources which may clash with the concurrently running non-offloaded tasks, potentially degrading the performance of PME or bonded force computation. This effect is most pronounced when using AMD GPUs with OpenCL with older driver releases (e.g. fglrx 12.15). To minimize the overhead it is recommended to leave a CPU hardware thread unused when launching gmx mdrun, especially on CPUs with high core count and/or HyperThreading enabled. E.g. on a machine with a 4-core CPU and eight threads (via HyperThreading) and an AMD GPU, try gmx mdrun -ntomp 7 -pin on. This leaves free CPU resources for GPU task scheduling, reducing interference with CPU computation. Note that assigning fewer resources to gmx mdrun CPU computation involves a tradeoff which may outweigh the benefits of reduced GPU driver overhead, in particular without HyperThreading and with few CPU cores.

Running the OpenCL version of mdrun
The current version works with GCN-based AMD GPUs, and NVIDIA CUDA GPUs. Make sure that you have the latest drivers installed. For AMD GPUs, the compute-oriented ROCm stack is recommended; alternatively, the AMDGPU-PRO stack is also compatible; using the outdated and unsupported fglrx proprietary driver and runtime is not recommended (but for certain older hardware that may be the only way to obtain support). In addition, Mesa version 17.0 or newer with LLVM 4.0 or newer is also supported. For NVIDIA GPUs, using the proprietary driver is required as the open source nouveau driver (available in Mesa) does not provide OpenCL support. The minimum OpenCL version required is 1.1. See also the known limitations.

Devices from the AMD GCN architectures (all series) are compatible and regularly tested; NVIDIA Fermi and later (compute capability 2.0) are known to work, but before doing production runs always make sure that the GROMACS tests pass successfully on the hardware.

The OpenCL GPU kernels are compiled at run time. Hence, building the OpenCL program can take a few seconds, introducing a slight delay in gmx mdrun startup. This is not normally a problem for long production MD, but you might prefer to do some kinds of work, e.g. runs with very few steps, on just the CPU (see the -nb option above).

The same -gpu_id option (or GMX_GPU_ID environment variable) used to select CUDA devices, or to define a mapping of GPUs to PP ranks, is used for OpenCL devices.
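For instance, to restrict mdrun to the first two compatible devices (the device IDs are placeholders; either the option or the environment variable can be used):

  # make only devices 0 and 1 available to mdrun
  gmx mdrun -deffnm md -nb gpu -gpu_id 01
  # or equivalently via the environment
  export GMX_GPU_ID=01
  gmx mdrun -deffnm md -nb gpu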

Some other OpenCL management environment variables may be of interest to developers.

Known limitations of the OpenCL support
Limitations in the current OpenCL support of interest to GROMACS users:

PME GPU offload is not supported with OpenCL.
No Intel devices (CPUs, GPUs or Xeon Phi) are supported.
Due to blocking behavior of some asynchronous task enqueuing functions in the NVIDIA OpenCL runtime, with the affected driver versions there is almost no performance gain when using NVIDIA GPUs. The issue affects NVIDIA driver versions up to the 349 series, but it is known to be fixed in the 352 and later driver releases.
On NVIDIA GPUs the OpenCL kernels achieve much lower performance than the equivalent CUDA kernels due to limitations of the NVIDIA OpenCL compiler.
Limitations of interest to GROMACS developers:

The current implementation is not compatible with OpenCL devices that are not using warps/wavefronts or for which the warp/wavefront size is not a multiple of 32.
