As a side note, your mdrun invocation does not look well suited to GPU-accelerated
runs; you'd most likely be better off running fewer ranks.
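For example, something along these lines (just an illustration, assuming 4 GPUs and
32 cores per node; the best thread count depends on the hardware) keeps one rank
per GPU and fills the cores with OpenMP threads instead:

mpirun -np 4 gmx_mpi mdrun -deffnm MD -maxh $maxh -ntomp 8 -gpu_id 0123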
--
Szilárd
On Fri, Mar 23, 2018 at 9:26 PM, Christopher Neale wrote:
> Hello,
>
> I am running gromacs 5.1.2 on single nodes where the run is set to use 32
Hi,
Looks like rogue behavior from the GPU driver's last workload, or something
like that. cudaMallocHost asks the driver to allocate memory on the CPU in
a special (page-locked) way, but the way GROMACS uses it should never run
into e.g. a lack of resources.
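For reference, here is a minimal standalone sketch (not GROMACS code) of what such
a pinned-host allocation looks like; the error string comes straight from the CUDA
runtime:

#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    float      *h_buf = NULL;
    size_t      bytes = 1 << 20;  /* 1 MiB of page-locked host memory */
    cudaError_t stat  = cudaMallocHost((void **)&h_buf, bytes);

    if (stat != cudaSuccess)
    {
        /* the same kind of error string mdrun prints when the call fails */
        fprintf(stderr, "cudaMallocHost failed: %s\n",
                cudaGetErrorString(stat));
        return 1;
    }

    /* ... use h_buf as a staging buffer for host<->device copies ... */
    cudaFreeHost(h_buf);
    return 0;
}

If even a toy allocation like this fails on the affected node, that points at the
driver or a leftover workload rather than at mdrun.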
Mark
On Fri, Mar 23, 2018, 21:27 Christopher Neale wrote:
Hello,
I am running gromacs 5.1.2 on single nodes where the run is set to use 32 cores
and 4 GPUs. The run command is:
mpirun -np 32 gmx_mpi mdrun -deffnm MD -maxh $maxh -dd 4 4 2 -npme 0 -gpu_id
-ntomp 1 -notunepme
Some of my runs die with this error:
cudaMall