Hi,
You probably have some strange character after "on" if you edited the file
on Windows or pasted the line from elsewhere.
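A quick way to check for this is to make the invisible characters visible and strip any Windows carriage returns; a sketch using standard GNU tools (the script name `job_demo.sh` is just an illustration, substitute your own job script):

```shell
# Simulate a job script edited on Windows: each line ends in \r\n
printf 'gmx_mpi mdrun -nb gpu -pin on\r\n' > job_demo.sh

# 'cat -A' shows the stray carriage return as ^M before the line-end marker $
cat -A job_demo.sh

# Strip the carriage returns in place (GNU sed)
sed -i 's/\r$//' job_demo.sh

# Now each line ends in $ only
cat -A job_demo.sh
```

Alternatively, `dos2unix` does the same conversion where it is installed.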
Mark
On Thu, 13 Jul 2017 07:22 Alex wrote:
> Can you try to open the script in vi, delete the mdrun line and then
> manually retype it?
Can you try to open the script in vi, delete the mdrun line and then
manually retype it?
On 7/12/2017 11:03 PM, leila karami wrote:
> Dear Gromacs users,
> I am doing md simulation on Gromacs 5.1.3. on GPU in Rocks cluster system using
> command:
> gmx_mpi mdrun -nb gpu -v -deffnm gpu -ntomp 16
Dear Gromacs users,
I am doing md simulation on Gromacs 5.1.3. on GPU in Rocks cluster system using
command:
gmx_mpi mdrun -nb gpu -v -deffnm gpu -ntomp 16 -gpu_id 0 -pin on
Everything is OK.
When I use this command in a script to do md simulation via the queuing system:
Hi,
Making your run stay on the cores it is assigned is always a good idea, and
using -pin on is a good way to do it. If there's more than one job on the
node, then it is more complicated than that. More information here
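For the shared-node case, mdrun's -pinoffset and -pinstride options let two runs pin to disjoint core sets; a sketch, with the thread counts and file names chosen only for illustration:

```shell
# Job 1: pin to the first 8 hardware threads
gmx_mpi mdrun -nb gpu -deffnm run1 -ntomp 8 -pin on -pinoffset 0 -pinstride 1 &

# Job 2: pin to the next 8, so the two runs do not contend for cores
gmx_mpi mdrun -nb gpu -deffnm run2 -ntomp 8 -pin on -pinoffset 8 -pinstride 1 &
wait
```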
Dear Szilárd,
Thanks for your answer.
For the following command, should I use -pin on?
gmx_mpi mdrun -nb gpu -v -deffnm gpu_md -ntomp 32 -gpu_id 0
Best wishes
--
Gromacs Users mailing list
* Please search the archive at
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List
Leila,
If you want to use only one GPU, pass that GPU's ID to mdrun, e.g. -gpu_id
0 for the first one. You'll also want to pick the right number of cores for
the run; it will surely not make sense to use all 96. Also make sure to pin
the threads (-pin on).
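Putting those three points together, a single-GPU invocation might look like the following; the core count of 24 (one socket's worth on the 96-core node discussed in this thread) is an assumption to adjust by benchmarking:

```shell
# One run on GPU 0, pinned, using roughly one socket's worth of cores
gmx_mpi mdrun -nb gpu -deffnm gpu_md -gpu_id 0 -ntomp 24 -pin on
```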
However, I strongly recommend that you
Dear Nikhil and Szilárd,
Thanks for your answers.
I want to use only one of the GPUs (for example, ID = 1).
Should I use the option -gpu_id 1?
Information of my system hardware is as follows:
Running on 1 node with total 96 cores, 192 logical cores, 3 compatible GPUs
Hardware detected on host
You've got a pretty strange beast there with 4 CPU sockets 24 cores each,
one very fast GPU and two rather slow ones (about 3x slower than the first).
If you want to do a single run on this machine, I suggest trying to
partition the ranks across the GPUs so that you get a decent balance, e.g.
you
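In GROMACS 5.1 the -gpu_id string assigns one digit per PP rank, so ranks can be spread unevenly across GPUs. A sketch of one such split, assuming a thread-MPI build (plain `gmx`, not `gmx_mpi`); the 2/1/1 rank split is only an illustration of weighting the faster GPU:

```shell
# 4 thread-MPI ranks x 24 OpenMP threads = 96 cores; ranks 0 and 1 use
# the fast GPU (id 0), ranks 2 and 3 use the slower GPUs (ids 1 and 2)
gmx mdrun -ntmpi 4 -ntomp 24 -gpu_id 0012 -pin on -deffnm gpu_md
```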
Hi,
you can try using
-ntmpi XX -ntomp XXX and experiment with the combinations; for more details see
http://manual.gromacs.org/documentation/5.1/user-guide/mdrun-performance.html.
Further, I think it's better to use 2 x Tesla K40 instead of using all
three. You may see a performance reduction due to load
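Trying those combinations can be scripted as short benchmark runs; a sketch, assuming a thread-MPI build, a prepared gpu_md.tpr, and a 96-core node (the step count and file names are illustrative):

```shell
# Short timing runs for a few rank/thread splits; -resethway restarts
# the performance counters halfway through so startup cost is excluded
for ntmpi in 2 4 8; do
  ntomp=$((96 / ntmpi))
  gmx mdrun -s gpu_md.tpr -deffnm bench_${ntmpi} \
            -ntmpi $ntmpi -ntomp $ntomp -pin on \
            -nsteps 5000 -resethway
done

# Compare the ns/day figures reported at the end of each log
grep Performance bench_*.log
```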
Dear Gromacs users,
I installed Gromacs 5.1.3. on GPU in Rocks cluster system.
After using the command:
gmx_mpi mdrun -nb gpu -v -deffnm old_gpu
I encountered:
GROMACS: gmx mdrun, VERSION 5.1.3
Executable: