Re: [gmx-users] Regarding OH, HH vector distribution

2019-07-29 Thread Omkar Singh
Hi, I did it with the "gmx gangle ..." command, but I am not getting a good result because I have a doubt about the ndx file. Can you help me make the ndx file? How should I select the atoms for the vectors? Thanks. On Mon, Jul 29, 2019, 22:50 David van der Spoel wrote: > On 2019-07-29 at 18:26, Omkar
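
A minimal sketch of how the two vectors can be passed to gmx gangle through selections, so a hand-written ndx file is not strictly needed (the atom names OW/HW1/HW2 are an assumption for SPC/E-like water; adjust them to your topology; -g2 z measures the angle against the positive Z axis):

    # O->H vector of every water vs. the +Z axis
    gmx gangle -s topol.tpr -f traj.xtc -g1 vector -group1 'name OW HW1' -g2 z -binw 1 -oh ang_OH.xvg
    # H->H vector of every water vs. the +Z axis
    gmx gangle -s topol.tpr -f traj.xtc -g1 vector -group1 'name HW1 HW2' -g2 z -binw 1 -oh ang_HH.xvg

If an index group is used instead of a selection, it only needs to list the two atoms of each vector pair by pair, in the order that defines the vector direction.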

Re: [gmx-users] Regarding OH, HH vector distribution

2019-07-29 Thread David van der Spoel
On 2019-07-29 at 18:26, Omkar Singh wrote: Hi, what I mean is that I want to calculate the angle between the OH, HH, and dipole vectors and the positive Z-axis. How can I make an index file for this? And is it possible that the angle distribution of these vectors for bulk water is approximately linear? Hope now

[gmx-users] cvff question

2019-07-29 Thread Yi Lu
Dear all, I want to use the CVFF force field, but there are no files for it in GROMACS. So I want to know whether I can add it and how to do so. Thanks sincerely, Yi Lu
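
GROMACS does not ship CVFF, so the parameters have to be entered (or converted from another package) by hand into a user force-field directory. A rough sketch of the layout grompp/pdb2gmx look for, assuming the directory sits in the working directory or under a path listed in GMXLIB; the file names other than forcefield.itp and forcefield.doc are placeholders:

    cvff.ff/
        forcefield.doc     ; one-line description shown in the pdb2gmx menu
        forcefield.itp     ; [ defaults ] section plus #include of the files below
        ffnonbonded.itp    ; atom types and nonbonded parameters
        ffbonded.itp       ; bond, angle and dihedral types
        cvff.rtp           ; residue building blocks (only needed for pdb2gmx)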

Re: [gmx-users] Regarding OH, HH vector distribution

2019-07-29 Thread Omkar Singh
Hi, what I mean is that I want to calculate the angle between the OH, HH, and dipole vectors and the positive Z-axis. How can I make an index file for this? And is it possible that the angle distribution of these vectors for bulk water is approximately linear? I hope the question is clear now. Thanks. On Mon, Jul 29,

Re: [gmx-users] performance issues running gromacs with more than 1 gpu card in slurm

2019-07-29 Thread Mark Abraham
Hi, Yes, and the -nmpi I copied from Carlos's post is ineffective - use -ntmpi. Mark On Mon., 29 Jul. 2019, 15:15 Justin Lemkul wrote: > > > On 7/29/19 8:46 AM, Carlos Navarro wrote: > > Hi Mark, > > I tried that before, but unfortunately in that case (removing --gres=gpu:1 > > and including
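
Spelled out for a single run on one GPU with a thread-MPI build, the corrected flag would look something like this (thread count, GPU id, and file prefix are placeholders to adapt):

    gmx mdrun -ntmpi 1 -ntomp 10 -gpu_id 0 -deffnm md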

Re: [gmx-users] performance issues running gromacs with more than 1 gpu card in slurm

2019-07-29 Thread Szilárd Páll
Carlos, You can accomplish the same using the multi-simulation feature of mdrun and avoid having to manually manage the placement of runs, e.g. instead of the above you just write gmx mdrun_mpi -np N -multidir $WORKDIR1 $WORKDIR2 $WORKDIR3 ... For more details see
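
Written out for, e.g., four concurrent runs with an MPI-enabled build (the binary name and directory variables are whatever your installation and job script define):

    mpirun -np 4 gmx_mpi mdrun -multidir $WORKDIR1 $WORKDIR2 $WORKDIR3 $WORKDIR4 -deffnm md

Each rank then runs one independent simulation in its own directory, and mdrun handles dividing the node's GPUs between them.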

Re: [gmx-users] remd error

2019-07-29 Thread Bratin Kumar Das
Thank you. On Mon 29 Jul, 2019, 6:45 PM Justin Lemkul wrote: > > > On 7/29/19 7:55 AM, Bratin Kumar Das wrote: > > Hi Szilard, > > Thank you for your reply. I rectified it as you said. For trial > > purpose I took 8 nodes or 16 nodes... (-np 8) to test whether it is > > running

Re: [gmx-users] remd error

2019-07-29 Thread Justin Lemkul
On 7/29/19 7:55 AM, Bratin Kumar Das wrote: Hi Szilard, Thank you for your reply. I rectified it as you said. For trial purpose I took 8 nodes or 16 nodes... (-np 8) to test whether it is running or not. I gave the following command to run REMD: *mpirun -np 8 gmx_mpi_d mdrun -v

Re: [gmx-users] performance issues running gromacs with more than 1 gpu card in slurm

2019-07-29 Thread Justin Lemkul
On 7/29/19 8:46 AM, Carlos Navarro wrote: Hi Mark, I tried that before, but unfortunately in that case (removing --gres=gpu:1 and including the -gpu_id flag in each line) for some reason the jobs are run one at a time (one after the other), so I can't properly use the whole node. You need to

Re: [gmx-users] maximum force does not converge

2019-07-29 Thread Justin Lemkul
On 7/29/19 3:24 AM, m g wrote: Dear all, I'm simulating a MOF with the UFF force field, but in the energy minimization step I got the error "Steepest Descents converged to machine precision in 2115 steps, but did not reach the requested Fmax < 1000", although the potential energy had converged. I

Re: [gmx-users] performance issues running gromacs with more than 1 gpu card in slurm

2019-07-29 Thread Carlos Navarro
Hi Mark, I tried that before, but unfortunately in that case (removing --gres=gpu:1 and including the -gpu_id flag in each line) for some reason the jobs are run one at a time (one after the other), so I can't properly use the whole node. -- Carlos Navarro Retamal Bioinformatic Engineering.
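
A sketch of the pattern that usually lets several independent runs share one node from a single job script, rather than launching one srun per run (GPU ids, thread counts, pin offsets, and directory variables are assumptions to adapt to the actual hardware):

    #!/bin/bash
    #SBATCH --nodes=1
    #SBATCH --exclusive
    (cd $WORKDIR1 && gmx mdrun -ntmpi 1 -ntomp 10 -gpu_id 0 -pin on -pinoffset 0  -deffnm md) &
    (cd $WORKDIR2 && gmx mdrun -ntmpi 1 -ntomp 10 -gpu_id 1 -pin on -pinoffset 10 -deffnm md) &
    (cd $WORKDIR3 && gmx mdrun -ntmpi 1 -ntomp 10 -gpu_id 2 -pin on -pinoffset 20 -deffnm md) &
    (cd $WORKDIR4 && gmx mdrun -ntmpi 1 -ntomp 10 -gpu_id 3 -pin on -pinoffset 30 -deffnm md) &
    wait

The explicit -pinoffset values keep the runs on disjoint cores; without them the runs can end up competing for the same cores.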

Re: [gmx-users] remd error

2019-07-29 Thread Bratin Kumar Das
Hi Szilard, Thank you for your reply. I rectified it as you said. For trial purpose I took 8 nodes or 16 nodes... (-np 8) to test whether it is running or not. I gave the following command to run REMD: *mpirun -np 8 gmx_mpi_d mdrun -v -multi 8 -replex 1000 -deffnm remd* After giving the
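
For reference, with -multi 8 -deffnm remd, mdrun of that generation expects one run input per replica in the launch directory, named remd0.tpr ... remd7.tpr (index appended to the -deffnm prefix; the naming is assumed from the -multi convention, and newer releases replace -multi with -multidir). A sketch of how those could be produced, with the per-replica .mdp and coordinate names as placeholders:

    for i in $(seq 0 7); do
        gmx grompp -f remd_$i.mdp -c conf.gro -p topol.top -o remd$i.tpr
    done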

Re: [gmx-users] Regarding OH, HH vector distribution

2019-07-29 Thread David van der Spoel
On 2019-07-29 at 12:24, Omkar Singh wrote: Hello everyone, Is it possible that the probability distribution of the HH and OH vectors for bulk water is approximately linear? What do you mean? -- David van der Spoel, Ph.D., Professor of Biology Head of Department, Cell & Molecular Biology, Uppsala

[gmx-users] Regarding OH, HH vector distribution

2019-07-29 Thread Omkar Singh
Hello everyone, Is it possible that the probability distribution of the HH and OH vectors for bulk water is approximately linear?

Re: [gmx-users] performance issues running gromacs with more than 1 gpu card in slurm

2019-07-29 Thread Mark Abraham
Hi, When you use DO_PARALLEL=" srun --exclusive -n 1 --gres=gpu:1 " then the environment seems to make sure only one GPU is visible. (The log files report only finding one GPU.) But it's probably the same GPU in each case, with three remaining idle. I would suggest not using --gres unless you

Re: [gmx-users] performance issues running gromacs with more than 1 gpu card in slurm

2019-07-29 Thread Carlos Navarro
Hi Szilárd, To answer your questions: ** are you trying to run multiple simulations concurrently on the same node or are you trying to strong-scale? I'm trying to run multiple simulations on the same node at the same time. ** what are you simulating? Regular and CompEl simulations. ** can you

[gmx-users] maximum force does not converge

2019-07-29 Thread m g
Dear all, I'm simulating a MOF with the UFF force field, but in the energy minimization step I got the error "Steepest Descents converged to machine precision in 2115 steps, but did not reach the requested Fmax < 1000", although the potential energy had converged. I used SPC/E water for this system.
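
If a lower maximum force is really needed (often it is not, when the potential energy has converged and the structure is not distorted), a common follow-up, not necessarily what was recommended later in this thread, is a second minimization stage with a gentler minimizer and, if necessary, a double-precision build. An .mdp sketch with placeholder values:

    ; second-stage minimization, run after steepest descents
    integrator = cg       ; or l-bfgs (which does not support constraints)
    emtol      = 100.0    ; target maximum force, kJ mol^-1 nm^-1
    emstep     = 0.001    ; initial step size, nm
    nsteps     = 50000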

Re: [gmx-users] space dependent electric field

2019-07-29 Thread David van der Spoel
On 2019-07-29 at 04:24, Maryam wrote: Dear all, I want to apply a space-dependent but not time-dependent electric field to my system. I reviewed the source code for the electric field, but it only supports constant and time-dependent fields (pulsed EF). Can anyone help me find out how I can change the source code