Re: [gmx-users] autocorrelation function and residence time

2017-09-21 Thread Tasneem Kausar
Thank you for reply

I read the references, and I know that one of the columns of the g_hbond output
is computed without subtraction and its values range between 0 and 1. The only
help I need is this: can I fit the curve in the first column (which has negative
values) and neglect the negative part of the curve while fitting, in order to get
the exponential parameters of the curve? This seems reasonable to me, since a
negative ACF value means no correlation.
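
One concrete way to do that, for what it's worth, is to truncate the ACF at the
first non-positive value and fit only the leading decay. A rough sketch (column
layout as in the g_hbond output above; the initial guesses t=10 and n=1 are
placeholders, and gnuplot is just one of many possible fitting tools):

# keep time (col 1) and the first ACF column (col 2) up to the first non-positive value
awk '!/^[@#]/ { if ($2 <= 0) exit; print $1, $2 }' protein_ac.xvg > acf_trunc.dat
# stretched-exponential fit y = exp(-(x/t)^n); the fitted t and n end up in fit.log
gnuplot -e "t=10; n=1; f(x)=exp(-(x/t)**n); fit f(x) 'acf_trunc.dat' via t,n"

Whether discarding the negative tail biases the fitted t is of course exactly the
question above.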

Any help will be appreciated.

Thanks in Advance.

On Thu, Sep 21, 2017 at 2:54 PM, Erik Marklund 
wrote:

> Dear Tasneem,
>
> Quite often ACF calculations involve subtraction of the average signal,
> and this normally renders some negative values in the ACF. It’s been a bit
> too long since I dealt with the gmx hbond code, but I suspect that is what
> is going on here. I suggest reading the references that gmx hbond mentions,
> where the four quantities in the output are defined.
>
> Kind regards,
> Erik
> __
> Erik Marklund, PhD, Marie Skłodowska Curie INCA Fellow
> Department of Chemistry – BMC, Uppsala University
> +46 (0)18 471 4539
> erik.markl...@kemi.uu.se
>
> On 21 Sep 2017, at 11:12, Tasneem Kausar wrote:
>
> Still waiting for suggestions.
>
> On Wed, Sep 20, 2017 at 9:42 AM, Tasneem Kausar wrote:
>
> Dear all
>
> I want to calculate the residence time of interface water molecules at the protein
> interface. I am using Gromacs-4.6.4, with the following command:
> g_hbond -s protein.tpr -f protein.xtc -b 2000 -n proein.ndx -ac
> protein_ac.xvg -contact
> In the index file there are the protein interface residues and the 8 water
> molecules that are present at the protein interface. I have selected the protein
> interface and the 8 waters for the calculation. In the autocorrelation output
> there are four y-axis columns. I came across a reply from Erik in which he
> mentioned the effect of periodic boundary conditions on the output. In my case
> the first y-axis column has several negative values. I want to do an exponential
> fit with the function y=exp(-(x/t)^n) to obtain the values of t and n. Can we
> skip the negative values in the output file? If so, what is the reason for doing
> that? If I am wrong, please suggest the right way to obtain the residence time.
>
>
> Thanks in Advance
>
> Tasneem Kausar
>
>
>

Re: [gmx-users] performance

2017-09-21 Thread gromacs query
Hi Szilárd,

Thanks a lot for your time; see my replies below. Overall they are very
useful, and I hope this long-running discussion will serve future users.
(Also, could you please see my other email pointing out possible errors/repeats
in the web documentation about performance?)

'-multi/-multidir' is not very helpful in my case, as my simulations sometimes
crash, and restarting them would be a pain because there are many (many!)
simulations. Also, one can never be sure whether other users will use the
-multi/-multidir option on shared-node clusters. I have read your other email
suggestions [tagged: the importance of process/thread affinity, especially in
node-sharing setups], where node sharing among different users could be an issue
that ultimately depends on the job scheduler.

My replies are inserted here:


On Thu, Sep 21, 2017 at 4:54 PM, Szilárd Páll 
wrote:

> Hi,
>
> A few remarks in no particular order:
>
> 1. Avoid domain-decomposition unless necessary (especially in
> CPU-bound runs, and especially with PME), it has a non-negligible
> overhead (greatest when going from no DD to using DD). Running
> multi-threading only typically has better performance. There are
> exceptions (e.g. your case of reaction-field runs could be such a
> case, but I'm doubtful as the DD cost is significant). Hence, I
> suggest trying 1, 2, 4... ranks per simulation, i.e.
> mpirun -np 1 gmx mdrun -ntomp N (single-run)
> mpirun -np 2 gmx mdrun -ntomp N/2 (single-run)
> mpirun -np 4 gmx mdrun -ntomp N/4 (single-run)
> [...]
> The multi-run equivalents of the above would simply use M ranks where
> M=Nmulti * Nranks_per_run.


You mean -dlb no? I don't think I modified it, so it should be in auto mode;
I can try it, though. And yes, indeed I have tried many other cases where I
vary -np gradually. I just shared one of the glitchy performance cases [I
have a wealth of such cases :)], which I now suspect is a Slurm scheduler
issue. I need to ask the admin whether jobs are given core affinities.
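
For the record, a quick way I could check this myself from inside a Slurm job
(standard Linux taskset; the grep is just a loose match on whatever mdrun may
have logged about pinning):

# show the CPU affinity mask the scheduler applied to a task on the compute node
srun --ntasks=1 bash -c 'taskset -cp $$'
# and see what mdrun itself reported about thread affinity / pinning, if anything
grep -i -e affinit -e pinning *.log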


> 2. If you're aiming for best throughput place two or more
> _independent_ runs on the same GPU, e.g. assuming 4 GPUs + 40 cores
> (and that no DD turns out to be best) to run 2 sim/GPU you can do:
> mpirun -np 8 -multi 8 gmx mdrun [-ntomp 5] [-gpu_id 00112233]
> The last two args can be omitted, but you should make sure that's what
> you get, i.e. that sim #0/#1 use GPU #0, sim #2/#3 use GPU#1, etc.
>

I am avoiding the -multi option, as explained above, but this is useful.


> 3. 2a,b are clearly off, my hypothesis is still that they get pinned
> to the wrong cores. I suspect 6a,b are just lucky and happen to not be
> placed too badly. Plus 6 use 4 GPUs vs 7 only 2 GPUs, so that's not a
> fair comparison (and probably explains the 350 vs 300 ns/day).
>

Ah, sorry! Yes, my fault. I just checked: the 7th case uses 2 GPUs. I forgot to
change the GPU numbers.


>
> 4. -pin on is faster than letting the scheduler place jobs (e.g. 3ab
> vs 4b) which is in line with what I would expect.
>


> 5. The strange asymmetry in 8a vs 8b is due to 8b having failed to pin
> and running where it should not be (empty socket -> core turbo-ing?).
> The 4a / 4b mismatch is strange; are those using the very same system
> (tpr?) -- one of them reports higher load imbalance!
>
>
>
Yes, all these jobs (cases 1 to 8) use the same tpr.



> Overall, I suggest starting over and determining performance first by
> deciding: What DD setup is best and how to lay out jobs in a node to
> get best throughput. Start with run configs testing settings with
> -multi to avoid pinning headaches and fill at least half a node (or a
> full node) with #concurrent simulations >= #GPUs.
>

I will see if I can get a free node. I need to wait.

Thanks for all responses.

-J


> Cheers,
> --
> Szilárd
>
>
> On Mon, Sep 18, 2017 at 9:25 PM, gromacs query 
> wrote:
> > Hi Szilárd,
> >
> > {I had to trim the message because it was put on hold: only 50 KB is allowed
> > and this message had reached 58 KB! This is not due to attached files, as
> > they are shared via Dropbox.} Sorry, seamless reading might be compromised
> > for future readers.
> >
> > Thanks for your replies. I have shared log files here:
> >
> > https://www.dropbox.com/s/m9mqqans0jci873/test_logs.zip?dl=0
> >
> > Two self-describing name folders have all the test logs. The test_*.log
> > file serial numbers correspond to my simulations briefly described here
> > [with folder names].
> >
> > For quick look one can: grep Performance *.log
> >
> > Folder 2gpu_4np:
> > Sr. no.  Remarks  performance (ns/day)
> > 1.  only one job  345 ns/day
> > 2a,b.  two same jobs together (without pin on)  16.1 and 15.9
> > 3a,b.  two same jobs together (without pin on, with -multidir)  270 and
> 276
> > 4a,b.  two same jobs together (pin on, pinoffset at 0 and 5)  160 and 301
> >
> >
> >
> > Folder:4gpu_16np
> >
> >
> >
> >
> > Remarks  performance (ns/day)
> > 5.  only one job  694 ns/day
> > 6a,b.  two same jobs together (without pin on)  340 and 350
> > 7a,b.  two same jobs together (without pin on, with -multidir)  302 and 304
> > 8a,b.  two same jobs together (pin on, pinoffset at 0 and 17)  204 and 546

Re: [gmx-users] charmm 36 force field for DNA

2017-09-21 Thread Qinghua Liao

Hello J,

Thanks a lot for your tips; in that case I can do it manually in minutes. :-)


All the best,
Qinghua

On 09/21/2017 09:09 PM, gromacs query wrote:

Hi Qinghua,

I am not sure about any tool either, but if you have a PDB that works fine with
AMBER (as you said), then it's not much work to do it 'manually'; it takes less
than two minutes. You can do this:

1) First remove the hydrogens and let GROMACS add them itself at the final step
(matching the CHARMM36 names). You can use 'sed' to remove the PDB lines matching ' H'.

2) Then, in this PDB, replace these 3 atom names (using vi, any text editor, or sed).
Note: you may also need to strip the terminal 5/3 suffixes from the DA/DT/DC/DG residue names.

C7 to C5M (found in DT)

OP1 to O1P (in all bases)

OP2 to O2P (in all bases)

3) Now load this new PDB (GROMACS will add the hydrogens). I hope it works.

-J

On Thu, Sep 21, 2017 at 5:23 PM, Qinghua Liao 
wrote:


Hello,

I want to simulate a DNA molecule with the CHARMM 36 force field, but I found
that the atom names in the pdb downloaded from the PDB data bank do not match
those in the CHARMM 36 force field. Is there a better tool to edit it properly
than modifying it manually? Thanks a lot!

PS: They match the Amber force field well; I don't need to make any changes.


All the best,
Qinghua


Re: [gmx-users] conversion of a trajectory from .trr to .gro

2017-09-21 Thread Justin Lemkul



On 9/21/17 3:38 PM, gangotri dey wrote:

Dear Justin and Dash,

Thank you for the kind response.
I have another related problem. My xtc file is huge (~15 GB), as I ran my
calculation for 5 ns in a big box. I can convert the .xtc to .gro, but I
would like to do it in parallel on the cluster, as it ran for a long time
when I used the head node.
However, when I use the syntax below and submit the job to the queue, it
asks for an index group that I do not know how to provide when using the
cluster to run the job.

gmx_mpi trjconv -s *.tpr -f *.xtc  -o conf.gro -sep

What should I do in this case, please?




http://www.gromacs.org/Documentation/How-tos/Using_Commands_in_Scripts
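
For example, the group can be fed on stdin so the queued job needs no interactive
input; a minimal sketch (the group name System and the file names are placeholders):

echo System | gmx_mpi trjconv -s topol.tpr -f traj.xtc -o conf.gro -sep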

-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Virginia Tech Department of Biochemistry

303 Engel Hall
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.biochem.vt.edu/people/faculty/JustinLemkul.html

==


Re: [gmx-users] conversion of a trajectory from .trr to .gro

2017-09-21 Thread gangotri dey
Dear Justin and Dash,

Thank you for the kind response.
I have another related problem. My xtc file is huge (~15 GB), as I ran my
calculation for 5 ns in a big box. I can convert the .xtc to .gro, but I
would like to do it in parallel on the cluster, as it ran for a long time
when I used the head node.
However, when I use the syntax below and submit the job to the queue, it
asks for an index group that I do not know how to provide when using the
cluster to run the job.

gmx_mpi trjconv -s *.tpr -f *.xtc  -o conf.gro -sep

What should I do in this case, please?


*Thank you*

*Gangotri *




On Mon, Sep 18, 2017 at 7:53 PM, Justin Lemkul  wrote:

>
>
> On 9/18/17 4:18 PM, gangotri dey wrote:
>
>> Hello!
>>
>> I would save it in VMD all at once, but I would ideally like to save it in
>> separate files. Also, the trjconv statement did not work.
>>
>>
> Your original error will be solved by passing a .tpr file to trjconv -s,
> as suggested below.  If you want to save intervals of time into separate
> files, use -b and -e.
>
> -Justin
>
>
>> *Thank you*
>>
>> *Gangotri *
>>
>>
>>
>>
>>
>> On Mon, Sep 18, 2017 at 3:08 PM, R C Dash  wrote:
>>
>>> Open your .gro file in VMD, load the .trr or .xtc file into it ('Load Data
>>> Into Molecule'), right-click on it, and save the coordinates with file type .gro.
>>> Or:
>>> trjconv -f xxx.trr (or .xtc) -s xxx.tpr -o xxx.gro
>>>
>>> RC Dash,
>>>
>>>
>>> On Mon, Sep 18, 2017 at 2:42 PM, gangotri dey 
>>> wrote:
>>>
>>> Dear all,

I would like to transform my trajectory file n.trr or n.xtc to n.gro after
my production run. I have used trjcat and trjconv to transform it using the
index file, but in both cases it says "Can not write a gro file without atom
names". How can I transform it, please?



 *Thank you*

 *Gangotri *


>>>
> --
> ==
>
> Justin A. Lemkul, Ph.D.
> Assistant Professor
> Virginia Tech Department of Biochemistry
>
> 303 Engel Hall
> 340 West Campus Dr.
> Blacksburg, VA 24061
>
> jalem...@vt.edu | (540) 231-3129
> http://www.biochem.vt.edu/people/faculty/JustinLemkul.html
>
> ==
>
>


Re: [gmx-users] charmm 36 force field for DNA

2017-09-21 Thread gromacs query
Hi Qinghua,

I am not sure about any tool either, but if you have a PDB that works fine with
AMBER (as you said), then it's not much work to do it 'manually'; it takes less
than two minutes. You can do this:

1) First remove the hydrogens and let GROMACS add them itself at the final step
(matching the CHARMM36 names). You can use 'sed' to remove the PDB lines matching ' H'.

2) Then, in this PDB, replace these 3 atom names (using vi, any text editor, or sed).
Note: you may also need to strip the terminal 5/3 suffixes from the DA/DT/DC/DG residue names.

C7 to C5M (found in DT)

OP1 to O1P (in all bases)

OP2 to O2P (in all bases)

3) Now load this new PDB (GROMACS will add the hydrogens). I hope it works.
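
A rough sketch of steps 1) and 2) with awk/GNU sed (the file name dna.pdb is a
placeholder, and the patterns assume the standard PDB column layout, so inspect
the result before feeding it to pdb2gmx):

# step 1: drop hydrogens; assumes the element symbol sits in PDB columns 77-78
awk '!($1=="ATOM" && substr($0,77,2)==" H")' dna.pdb > dna_charmm.pdb
# step 2: rename the three atoms, keeping the column widths intact
sed -i -e 's/ C7 / C5M/g' -e 's/OP1/O1P/g' -e 's/OP2/O2P/g' dna_charmm.pdb
# optionally strip the terminal 5/3 suffixes from DA/DT/DC/DG residue names
sed -i -E 's/ (D[ATGC])[53] / \1  /g' dna_charmm.pdb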

-J

On Thu, Sep 21, 2017 at 5:23 PM, Qinghua Liao 
wrote:

> Hello,
>
> I want to simulate a DNA molecule with the CHARMM 36 force field, but I found
> that the atom names in the pdb downloaded from the PDB data bank do not match
> those in the CHARMM 36 force field. Is there a better tool to edit it properly
> than modifying it manually? Thanks a lot!
>
> PS: They match the Amber force field well; I don't need to make any changes.
>
>
> All the best,
> Qinghua


[gmx-users] charmm 36 force field for DNA

2017-09-21 Thread Qinghua Liao

Hello,

I want to simulate a DNA molecule with the CHARMM 36 force field, but I found
that the atom names in the pdb downloaded from the PDB data bank do not match
those in the CHARMM 36 force field. Is there a better tool to edit it properly
than modifying it manually? Thanks a lot!


PS: They match the Amber force field well; I don't need to make any changes.


All the best,
Qinghua


[gmx-users] Necessary files to restart/continue REMD

2017-09-21 Thread ABEL Stephane
Hello,

Two quick questions about REMD:

1) What are the necessary files to restart a REMD simulation with the -append and
-noappend arguments? I ask this because if I do not provide a trr file, GROMACS
crashes with an error related to the checksum of the trr.

2) Can I use the -noappend argument with REMD simulations?
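
For reference, this is the kind of continuation command I mean; a sketch only,
with the replica directory layout, the -deffnm naming and the numbers being
placeholders:

# restart each replica from its checkpoint; with -noappend new output gets .partXXXX suffixes
# with -append, mdrun also expects the previous log/edr/xtc/trr listed in the checkpoint,
# so that it can verify their checksums
mpirun -np 64 gmx_mpi mdrun -multidir replica_*/ -deffnm remd -replex 1000 -cpi remd.cpt -noappend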

Thank you 

Stéphane




[gmx-users] Integrator sd and pull code possible.

2017-09-21 Thread Marvin Philipp Bernhardt

Hi all

I am calculating reversible forces in vacuum by constraining two 
molecules together using a pull constraint.


For small distances, with the md-vv integrator I get repulsive forces acting on
the constraint; with the sd integrator, however, the forces only fluctuate around
zero. Is there some restriction such that I cannot use sd and the pull code
together?


Greetings,
Marvin


Re: [gmx-users] performance

2017-09-21 Thread Szilárd Páll
--
Szilárd


On Thu, Sep 21, 2017 at 5:54 PM, Szilárd Páll  wrote:
> Hi,
>
> A few remarks in no particular order:
>
> 1. Avoid domain-decomposition unless necessary (especially in
> CPU-bound runs, and especially with PME), it has a non-negligible
> overhead (greatest when going from no DD to using DD). Running
> multi-threading only typically has better performance. There are
> exceptions (e.g. your case of reaction-field runs could be such a
> case, but I'm doubtful as the DD cost is significant). Hence, I
> suggest trying 1, 2, 4... ranks per simulation, i.e.
> mpirun -np 1 gmx mdrun -ntomp N (single-run)
> mpirun -np 2 gmx mdrun -ntomp N/2 (single-run)
> mpirun -np 4 gmx mdrun -ntomp N/4 (single-run)
> [...]
> The multi-run equivalents of the above would simply use M ranks where
> M=Nmulti * Nranks_per_run.
>
> 2. If you're aiming for best throughput place two or more
> _independent_ runs on the same GPU, e.g. assuming 4 GPUs + 40 cores
> (and that no DD turns out to be best) to run 2 sim/GPU you can do:
> mpirun -np 8 -multi 8 gmx mdrun [-ntomp 5] [-gpu_id 00112233]
> The last two args can be omitted, but you should make sure that's what
> you get, i.e. that sim #0/#1 use GPU #0, sim #2/#3 use GPU#1, etc.

See Fig 5 of http://arxiv.org/abs/1507.00898 if you're not convinced.

> 3. 2a,b are clearly off, my hypothesis is still that they get pinned
> to the wrong cores. I suspect 6a,b are just lucky and happen to not be
> placed too badly. Plus 6 use 4 GPUs vs 7 only 2 GPUs, so that's not a
> fair comparison (and probably explains the 350 vs 300 ns/day).
>
> 4. -pin on is faster than letting the scheduler place jobs (e.g. 3ab
> vs 4b) which is in line with what I would expect.
>
> 5. The strange asymmetry in 8a vs 8b is due to 8b having failed to pin
> and running where it should not be (empty socket -> core turbo-ing?).
> The 4a / 4b mismatch is strange; are those using the very same system
> (tpr?) -- one of them reports higher load imbalance!
>
>
> Overall, I suggest starting over and determining performance first by
> deciding: What DD setup is best and how to lay out jobs in a node to
> get best throughput. Start with run configs testing settings with
> -multi to avoid pinning headaches and fill at least half a node (or a
> full node) with #concurrent simulations >= #GPUs.
>
> Cheers,
> --
> Szilárd
>
>
> On Mon, Sep 18, 2017 at 9:25 PM, gromacs query  wrote:
>> Hi Szilárd,
>>
>> {I had to trim the message because it was put on hold: only 50 KB is allowed
>> and this message had reached 58 KB! This is not due to attached files, as
>> they are shared via Dropbox.} Sorry, seamless reading might be compromised
>> for future readers.
>>
>> Thanks for your replies. I have shared log files here:
>>
>> https://www.dropbox.com/s/m9mqqans0jci873/test_logs.zip?dl=0
>>
>> Two self-describing name folders have all the test logs. The test_*.log
>> file serial numbers correspond to my simulations briefly described here
>> [with folder names].
>>
>> For quick look one can: grep Performance *.log
>>
>> Folder 2gpu_4np:
>> Sr. no.  Remarks  performance (ns/day)
>> 1.  only one job  345 ns/day
>> 2a,b.  two same jobs together (without pin on)  16.1 and 15.9
>> 3a,b.  two same jobs together (without pin on, with -multidir)  270 and 276
>> 4a,b.  two same jobs together (pin on, pinoffset at 0 and 5)  160 and 301
>>
>>
>>
>> Folder:4gpu_16np
>>
>>
>>
>>
>> Remarks  performance (ns/day)
>> 5.  only one job  694 ns/day
>> 6a,b.  two same jobs together (without pin on)  340 and 350
>> 7a,b.  two same jobs together (without pin on, with -multidir)  302 and 304
>> 8a,b.  two same jobs together (pin on, pinoffset at 0 and 17)  204 and 546

Re: [gmx-users] performance

2017-09-21 Thread Szilárd Páll
Hi,

A few remarks in no particular order:

1. Avoid domain-decomposition unless necessary (especially in
CPU-bound runs, and especially with PME), it has a non-negligible
overhead (greatest when going from no DD to using DD). Running
multi-threading only typically has better performance. There are
exceptions (e.g. your case of reaction-field runs could be such a
case, but I'm doubtful as the DD cost is significant). Hence, I
suggest trying 1, 2, 4... ranks per simulation, i.e.
mpirun -np 1 gmx mdrun -ntomp N (single-run)
mpirun -np 2 gmx mdrun -ntomp N/2 (single-run)
mpirun -np 4 gmx mdrun -ntomp N/4 (single-run)
[...]
The multi-run equivalents of the above would simply use M ranks where
M=Nmulti * Nranks_per_run.

2. If you're aiming for best throughput place two or more
_independent_ runs on the same GPU, e.g. assuming 4 GPUs + 40 cores
(and that no DD turns out to be best) to run 2 sim/GPU you can do:
mpirun -np 8 -multi 8 gmx mdrun [-ntomp 5] [-gpu_id 00112233]
The last two args can be omitted, but you should make sure that's what
you get, i.e. that sim #0/#1 use GPU #0, sim #2/#3 use GPU#1, etc.

3. 2a,b are clearly off, my hypothesis is still that they get pinned
to the wrong cores. I suspect 6a,b are just lucky and happen to not be
placed too badly. Plus 6 use 4 GPUs vs 7 only 2 GPUs, so that's not a
fair comparison (and probably explains the 350 vs 300 ns/day).

4. -pin on is faster than letting the scheduler place jobs (e.g. 3ab
vs 4b) which is in line with what I would expect.

5. The strange asymmetry in 8a vs 8b is due to 8b having failed to pin
and running where it should not be (empty socket -> core turbo-ing?).
The 4a / 4b mismatch is strange; are those using the very same system
(tpr?) -- one of them reports higher load imbalance!


Overall, I suggest starting over and determining performance first by
deciding: What DD setup is best and how to lay out jobs in a node to
get best throughput. Start with run configs testing settings with
-multi to avoid pinning headaches and fill at least half a node (or a
full node) with #concurrent simulations >= #GPUs.
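
For completeness, a sketch of how two independent (non -multi) runs could be laid
out with explicit pinning; the directory names, thread counts, GPU ids and offsets
are placeholders to be adapted to your node:

# two independent single-rank runs sharing GPU 0, pinned to disjoint sets of 10 cores
( cd run_A && gmx mdrun -deffnm md -ntomp 10 -gpu_id 0 -pin on -pinoffset 0 ) &
( cd run_B && gmx mdrun -deffnm md -ntomp 10 -gpu_id 0 -pin on -pinoffset 10 ) &
wait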

Cheers,
--
Szilárd


On Mon, Sep 18, 2017 at 9:25 PM, gromacs query  wrote:
> Hi Szilárd,
>
> {I had to trim the message because it was put on hold: only 50 KB is allowed
> and this message had reached 58 KB! This is not due to attached files, as
> they are shared via Dropbox.} Sorry, seamless reading might be compromised
> for future readers.
>
> Thanks for your replies. I have shared log files here:
>
> https://www.dropbox.com/s/m9mqqans0jci873/test_logs.zip?dl=0
>
> Two self-describing name folders have all the test logs. The test_*.log
> file serial numbers correspond to my simulations briefly described here
> [with folder names].
>
> For quick look one can: grep Performance *.log
>
> Folder 2gpu_4np:
> Sr. no.  Remarks  performance (ns/day)
> 1.  only one job  345 ns/day
> 2a,b.  two same jobs together (without pin on)  16.1 and 15.9
> 3a,b.  two same jobs together (without pin on, with -multidir)  270 and 276
> 4a,b.  two same jobs together (pin on, pinoffset at 0 and 5)  160 and 301
>
>
>
> Folder:4gpu_16np
>
>
>
>
> Remarks  performance (ns/day)
> 5.  only one job  694 ns/day
> 6a,b.  two same jobs together (without pin on)  340 and 350
> 7a,b.  two same jobs together (without pin on, with -multidir)  302 and 304
> 8a,b.  two same jobs together (pin on, pinoffset at 0 and 17)  204 and 546

Re: [gmx-users] Simulation of inorganic compounds PbI2 (Lead Iodine)

2017-09-21 Thread Justin Lemkul



On 9/20/17 11:55 AM, Yanke Peng wrote:

Hi to all,

I am trying to simulate the crystallization process of PbI2 in DMF.

I was wondering if you could tell me whether GROMACS is appropriate for
simulating this process.



It's certainly possible.  The greater challenge is finding suitable force field 
parameters.  GROMACS has a powerful MD engine and a flexible force field format, 
so if you have parameters for something, you can simulate it.



I have tried some crystallization processes with GROMACS, such as aqueous NaCl
and urea, but I encountered a lot of problems that I am still dealing with.



If you have questions, that's why this mailing list exists :)

-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Virginia Tech Department of Biochemistry

303 Engel Hall
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.biochem.vt.edu/people/faculty/JustinLemkul.html

==


Re: [gmx-users] extended calculation pdb extracttion query

2017-09-21 Thread Justin Lemkul



On 9/20/17 8:39 AM, Deep kumar wrote:

Hi All,

I have run an extended simulation for 90ns like this: (my previous run was
for 10ns)

grompp -f new.mdp -c old.tpr -o new.tpr
mdrun -s new.tpr -cpi old.cpt


because I had to make changes in the .mdp file. I have got new files
from the new extended run:


new.tpr, new.xtc  etc


My 10ns mdrun results are like old.tpr, old.xtc   ...etc

Now, I want to get the pdb files from both runs (10 ns + 90 ns). Can you
please let me know how I can do it? I know that to get the pdb files from a run
I should do this:

gmx trjconv -s new.tpr -f new.xtc -dt 100 -o trj.pdb

but I want to combine both the 10 ns and the 90 ns trajectories and get the pdb files.



This is what trjcat does.
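
For example, a minimal sketch with your file names (the combined name all.xtc is
arbitrary):

gmx trjcat -f old.xtc new.xtc -o all.xtc
gmx trjconv -s new.tpr -f all.xtc -dt 100 -o trj.pdb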

-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Virginia Tech Department of Biochemistry

303 Engel Hall
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.biochem.vt.edu/people/faculty/JustinLemkul.html

==


Re: [gmx-users] Molecule leave from the simulation box

2017-09-21 Thread Wes Barnett
On Wed, Sep 20, 2017 at 11:53 PM, Sameer Edirisinghe 
wrote:

> Dear Users,
>
> I have done a simulation of a polymer system which has 20 molecules, but
> after NVT equilibration 2 molecules leave the simulation box. Does this
> affect my production run or analysis?
>
> Is it possible to re-center all molecules?
>
>
http://www.gromacs.org/Documentation/Terminology/Periodic_Boundary_Conditions
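
For example, making the molecules whole and putting them back in the box is
usually enough; a sketch (file names are placeholders, System is the output group):

echo System | gmx trjconv -s topol.tpr -f traj.xtc -pbc mol -ur compact -o traj_inbox.xtc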


-- 
James "Wes" Barnett
Postdoctoral Research Scientist
Department of Chemical Engineering
Kumar Research Group 
Columbia University
w.barn...@columbia.edu
http://wbarnett.us


[gmx-users] Fwd: extended calculation pdb extracttion query

2017-09-21 Thread Deep kumar
Dear ALL,

I have run an extended simulation for 90ns like this: (my previous run was
for 10ns)

grompp -f new.mdp -c old.tpr -o new.tpr
mdrun -s new.tpr -cpi old.cpt


because I had to make changes in the .mdp file. I have got new files
from the new extended run:


new.tpr, new.xtc  etc


My 10ns mdrun results are like old.tpr, old.xtc   ...etc

Now, I want to get the pdb files from both runs (10 ns + 90 ns). Can you
please let me know how I can do it? I know that to get the pdb files from a run
I should do this:

gmx trjconv -s new.tpr -f new.xtc -dt 100 -o trj.pdb

but I want to combine both the 10 ns and the 90 ns trajectories and get the pdb files.

Thanks,
DK


Re: [gmx-users] autocorrelation function and residence time

2017-09-21 Thread Erik Marklund
Dear Tasneem,

Quite often ACF calculations involve subtraction of the average signal, and 
this normally renders some negative values in the ACF. It’s been a bit too long 
since I dealt with the gmx hbond code, but I suspect that is what is going on 
here. I suggest reading the references that gmx hbond mentions, where the four 
quantities in the output are defined.

Kind regards,
Erik
__
Erik Marklund, PhD, Marie Skłodowska Curie INCA Fellow
Department of Chemistry – BMC, Uppsala University
+46 (0)18 471 4539
erik.markl...@kemi.uu.se

On 21 Sep 2017, at 11:12, Tasneem Kausar wrote:

Still waiting for suggestions.

On Wed, Sep 20, 2017 at 9:42 AM, Tasneem Kausar wrote:

Dear all

I want to calculate the residence time of interface water molecules at the protein
interface. I am using Gromacs-4.6.4, with the following command:
g_hbond -s protein.tpr -f protein.xtc -b 2000 -n proein.ndx -ac
protein_ac.xvg -contact
In the index file there are the protein interface residues and the 8 water
molecules that are present at the protein interface. I have selected the protein
interface and the 8 waters for the calculation. In the autocorrelation output
there are four y-axis columns. I came across a reply from Erik in which he
mentioned the effect of periodic boundary conditions on the output. In my case
the first y-axis column has several negative values. I want to do an exponential
fit with the function y=exp(-(x/t)^n) to obtain the values of t and n. Can we
skip the negative values in the output file? If so, what is the reason for doing
that? If I am wrong, please suggest the right way to obtain the residence time.


Thanks in Advance

Tasneem Kausar




Re: [gmx-users] autocorrelation function and residence time

2017-09-21 Thread Tasneem Kausar
Still waiting for suggestions.

On Wed, Sep 20, 2017 at 9:42 AM, Tasneem Kausar 
wrote:

> Dear all
>
> I want to calculate the residence time of interface water molecules at the protein
> interface. I am using Gromacs-4.6.4, with the following command:
> g_hbond -s protein.tpr -f protein.xtc -b 2000 -n proein.ndx -ac
> protein_ac.xvg -contact
> In the index file there are the protein interface residues and the 8 water
> molecules that are present at the protein interface. I have selected the protein
> interface and the 8 waters for the calculation. In the autocorrelation output
> there are four y-axis columns. I came across a reply from Erik in which he
> mentioned the effect of periodic boundary conditions on the output. In my case
> the first y-axis column has several negative values. I want to do an exponential
> fit with the function y=exp(-(x/t)^n) to obtain the values of t and n. Can we
> skip the negative values in the output file? If so, what is the reason for doing
> that? If I am wrong, please suggest the right way to obtain the residence time.
>
>
> Thanks in Advance
>
> Tasneem Kausar
>
>
>


[gmx-users] NPT or NPgT ensemble simulation for shear deformations

2017-09-21 Thread Own 12121325
Dear Gromacs users!

I would like to ask about mdp setups for simulating membrane bilayers with the
deform option (non-equilibrium MD), with the aim of simulating shearing of the
water.

1 - Assuming that I am running a simulation with semi-isotropic pressure coupling,
does switching the xy compressibility to zero automatically mean switching to the
NPgT (constant surface tension) ensemble, without any further modifications to
the mdp file?

2 - Which of the ensembles should produce more realistic behaviour of the system
when deform is applied along the c(x) or c(y) directions?
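
For reference, the kind of mdp fragment I mean (not a recommendation; the deform
rate 0.01 nm/ps is a placeholder, and the six deform values are assumed to follow
the a(x) b(y) c(z) b(x) c(x) c(y) ordering, so please check the manual for your
version):

; semi-isotropic pressure coupling with the x/y compressibility set to zero
pcoupl           = Parrinello-Rahman
pcoupltype       = semiisotropic
ref_p            = 1.0  1.0
compressibility  = 0    4.5e-5
; continuous shear: deform the c(x) box element
deform           = 0 0 0 0 0.01 0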

Thank you!