Re: [gmx-users] conversion of a trajectory from .trr to .gro

2017-09-18 Thread Justin Lemkul



On 9/18/17 4:18 PM, gangotri dey wrote:

Hello!

I could save it in VMD all at once, but ideally I would like to save it in
separate files. Also, the trjconv command did not work.



Your original error will be solved by passing a .tpr file to trjconv -s, 
as suggested below.  If you want to save intervals of time into separate 
files, use -b and -e.
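For example, something like the following would work (a minimal sketch; the 
file names are placeholders, and "gmx trjconv" is the post-5.0 name of the 
trjconv tool):

  gmx trjconv -s topol.tpr -f traj.trr -o frames.gro -b 100 -e 200

Adding -sep instead writes each selected frame to its own numbered .gro file.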


-Justin



*Thank you*

*Gangotri *





On Mon, Sep 18, 2017 at 3:08 PM, R C Dash  wrote:


Open your .gro file in VMD, then load the .trr or .xtc file into it as
additional data. Right-click on it and save the coordinates with file type .gro.
Or:
trjconv -f xxx.trr (or xxx.xtc) -s xxx.tpr -o xxx.gro

RC Dash,


On Mon, Sep 18, 2017 at 2:42 PM, gangotri dey 
wrote:


Dear all,

I would like to transform my trajectory file n.trr or n.xtc to n.gro after
my production run. I have used trjcat and trjconv to transform it using
the
index file. But in both the cases, it says "Can not write a gro file
without atom names". How can I transform it please?



*Thank you*

*Gangotri *





--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Virginia Tech Department of Biochemistry

303 Engel Hall
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.biochem.vt.edu/people/faculty/JustinLemkul.html

==


Re: [gmx-users] GPU-Note

2017-09-18 Thread Szilárd Páll
First off, don't change the cutoff without being sure of what you're doing!

There is one thing you can try: lowering nstlist, the neighbor-search
frequency; this is a free parameter and it is picked heuristically. A
larger value reduces the cost of the search, but it also increases the
Verlet buffer and with it the non-bonded force compute cost.
Conversely, a smaller nstlist value (e.g. 20 instead of 40) will
about double the search cost, but you should see a reduction in force
compute time. However, expect modest performance benefits.
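For example, nstlist can be overridden at run time without regenerating the
.tpr (a minimal sketch; the deffnm and the value 20 are just placeholders to
illustrate the mdrun flag):

  gmx mdrun -deffnm npt -nstlist 20

Alternatively, set nstlist in the .mdp before grompp; with a Verlet buffer
tolerance mdrun may still adjust it, so check the value reported in the log.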


--
Szilárd


On Mon, Sep 18, 2017 at 9:11 PM, RAHUL SURESH  wrote:
> Is there something that I can do without reducing the cut off..?
>
> On Mon, 18 Sep 2017 at 10:27 PM, Szilárd Páll 
> wrote:
>
>> It means that there is a certain amount of time the CPU and GPU can
>> work concurrently to compute forces after which the CPU waits for the
>> results from the GPU to do the integration. If the CPU finishes a lot
>> sooner than the GPU, the run will be GPU performance-bound (and
>> vice-versa) -- which is what happens here: the force compute on the
>> GPU takes longer than on the CPU, and as this imbalance exceeds 20%, mdrun
>> notes it so you can consider doing something about it if you can/want to.
>>
>> --
>> Szilárd
>>
>>
>> On Mon, Sep 18, 2017 at 6:39 PM, RAHUL SURESH 
>> wrote:
>> > I receive the following note while doing NVT & NPT.
>> >
>> >
>> > What does it exactly mean?
>> >
>> > I am using charmm36 ff and as per the documentation it is necessary to have
>> > a cut-off of 1.2. How can I overcome this note?
>> >
>> > NOTE: The GPU has >20% more load than the CPU. This imbalance causes
>> >   performance loss, consider using a shorter cut-off and a finer PME
>> > grid.
>> >
>> > --
>> > *Regards,*
>> > *Rahul Suresh*
>> > *Research Scholar*
>> > *Bharathiar University*
>> > *Coimbatore*
>
> --
> *Regards,*
> *Rahul Suresh*
> *Research Scholar*
> *Bharathiar University*
> *Coimbatore*

Re: [gmx-users] conversion of a trajectory from .trr to .gro

2017-09-18 Thread gangotri dey
Hello!

I could save it in VMD all at once, but ideally I would like to save it in
separate files. Also, the trjconv command did not work.



*Thank you*

*Gangotri *





On Mon, Sep 18, 2017 at 3:08 PM, R C Dash  wrote:

> Open your .gro file in VMD, then load the .trr or .xtc file into it as
> additional data. Right-click on it and save the coordinates with file type .gro.
> Or:
> trjconv -f xxx.trr (or xxx.xtc) -s xxx.tpr -o xxx.gro
>
> RC Dash,
>
>
> On Mon, Sep 18, 2017 at 2:42 PM, gangotri dey 
> wrote:
>
>> Dear all,
>>
>> I would like to transform my trajectory file n.trr or n.xtc to n.gro after
>> my production run. I have used trjcat and trjconv to transform it using
>> the
>> index file. But in both the cases, it says "Can not write a gro file
>> without atom names". How can I transform it please?
>>
>>
>>
>> *Thank you*
>>
>> *Gangotri *


Re: [gmx-users] performance

2017-09-18 Thread gromacs query
Hi Szilárd,

{I had to trim the message because it was put on hold: only 50 KB is allowed
and this message had reached 58 KB. This is not due to attached files, as they
are shared via Dropbox.} Sorry, seamless reading might be compromised for
future readers.

Thanks for your replies. I have shared log files here:

https://www.dropbox.com/s/m9mqqans0jci873/test_logs.zip?dl=0

Two folders with self-describing names contain all the test logs. The test_*.log
file serial numbers correspond to my simulations, briefly described here
[with folder names].

For a quick look one can run: grep Performance *.log

Folder 2gpu_4np:

Sr. no.  Remarks                                                   Performance (ns/day)
1.       only one job                                              345
2a,b.    two same jobs together (without pin on)                   16.1 and 15.9
3a,b.    two same jobs together (without pin on, with -multidir)   270 and 276
4a,b.    two same jobs together (pin on, pinoffset at 0 and 5)     160 and 301



Folder 4gpu_16np:

Sr. no.  Remarks                                                   Performance (ns/day)
5.       only one job                                              694
6a,b.    two same jobs together (without pin on)                   340 and 350
7a,b.    two same jobs together (without pin on, with -multidir)   302 and 304
8a,b.    two same jobs together (pin on, pinoffset at 0 and 17)    204 and 546

Re: [gmx-users] GPU-Note

2017-09-18 Thread RAHUL SURESH
Is there something that I can do without reducing the cut off..?

On Mon, 18 Sep 2017 at 10:27 PM, Szilárd Páll 
wrote:

> It means that there is a certain amount of time the CPU and GPU can
> work concurrently to compute forces after which the CPU waits for the
> results from the GPU to do the integration. If the CPU finishes a lot
> sooner than the GPU, the run will be GPU performance-bound (and
> vice-versa) -- which is what happens here: the force compute on the
> GPU takes longer than on the CPU, and as this imbalance exceeds 20%, mdrun
> notes it so you can consider doing something about it if you can/want to.
>
> --
> Szilárd
>
>
> On Mon, Sep 18, 2017 at 6:39 PM, RAHUL SURESH 
> wrote:
> > I receive the following note while doing NVT & NPT.
> >
> >
> > What does it exactly mean?
> >
> > I am using charmm36 ff and as per the documentation it is necessary to have
> > a cut-off of 1.2. How can I overcome this note?
> >
> > NOTE: The GPU has >20% more load than the CPU. This imbalance causes
> >   performance loss, consider using a shorter cut-off and a finer PME
> > grid.
> >
> > --
> > *Regards,*
> > *Rahul Suresh*
> > *Research Scholar*
> > *Bharathiar University*
> > *Coimbatore*

-- 
*Regards,*
*Rahul Suresh*
*Research Scholar*
*Bharathiar University*
*Coimbatore*

[gmx-users] Doubt about g_dos

2017-09-18 Thread Varvdekar Bhagyesh Rajendra
Dear all,

I am attempting to find the vibrational density of states of a protein bound
to a ligand in a water box using the command g_dos. To do so, should I extract
the protein part from the entire trajectory file containing coordinates and
velocities of all atoms (the .trr file) and then run g_dos on the resulting
partial trajectory containing only the protein coordinates and velocities? Or
on the entire trajectory, which of course doesn't make sense?
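In other words, something along these lines (a rough sketch only; the file and
group names are placeholders, and I am assuming the gmx-prefixed tool names of
newer versions in place of g_dos/tpbconv):

  gmx trjconv -f traj.trr -s topol.tpr -n index.ndx -o protein.trr   (select the Protein group)
  gmx convert-tpr -s topol.tpr -n index.ndx -o protein.tpr           (matching protein-only .tpr)
  gmx dos -f protein.trr -s protein.tpr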


Thank you,

Bhagyesh


[gmx-users] conversion of a trajectory from .trr to .gro

2017-09-18 Thread gangotri dey
Dear all,

I would like to transform my trajectory file n.trr or n.xtc to n.gro after
my production run. I have used trjcat and trjconv to transform it using the
index file. But in both the cases, it says "Can not write a gro file
without atom names". How can I transform it please?



*Thank you*

*Gangotri *


Re: [gmx-users] GPU-Note

2017-09-18 Thread Szilárd Páll
It means that there is a certain amount of time the CPU and GPU can
work concurrently to compute forces after which the CPU waits for the
results from the GPU to do the integration. If the CPU finishes a lot
sooner than the GPU, the run will be GPU performance-bound (and
vice-versa) -- which is what happens here: the force compute on the
GPU takes longer than on the CPU, and as this imbalance exceeds 20%, mdrun
notes it so you can consider doing something about it if you can/want to.

--
Szilárd


On Mon, Sep 18, 2017 at 6:39 PM, RAHUL SURESH  wrote:
> I receive the following note while doing NVT & NPT.
>
>
> What does it exactly mean?
>
> I am using charmm36 ff and as per the documentation it is necessary to have a
> cut-off of 1.2. How can I overcome this note?
>
> NOTE: The GPU has >20% more load than the CPU. This imbalance causes
>   performance loss, consider using a shorter cut-off and a finer PME
> grid.
>
> --
> *Regards,*
> *Rahul Suresh*
> *Research Scholar*
> *Bharathiar University*
> *Coimbatore*

[gmx-users] GPU-Note

2017-09-18 Thread RAHUL SURESH
I receive the following note while doing NVT & NPT.


What does it exactly mean?

I am using charmm36 ff and as per the documentation it is necessary to have a
cut-off of 1.2. How can I overcome this note?

NOTE: The GPU has >20% more load than the CPU. This imbalance causes
  performance loss, consider using a shorter cut-off and a finer PME
grid.

-- 
*Regards,*
*Rahul Suresh*
*Research Scholar*
*Bharathiar University*
*Coimbatore*


Re: [gmx-users] performance

2017-09-18 Thread Szilárd Páll
On Fri, Sep 15, 2017 at 1:06 AM, gromacs query  wrote:
> Hi Szilárd,
>
> Sorry this discussion is going long.
> Finally I got one node empty and did some serious tests specially
> considering your first point (discrepancies in benchmarking comparing jobs
> running on empty node vs occupied node). I tested in both ways.
>
> I ran following cases (single job vs two jobs for 2GPU+4 procs and also for
> 4GPU+16 procs). Happy to send log files.

Please do share them, it's hard to assess what's going on without those.

> Pinoffset results are surprising (4th and 8th test case below) though I get
> in log file a WARNING: Requested offset too large for available cores for
> the case 8; [should not be an issue as the first job binds the cores]

That means the offsets are not set correctly.

> As suggested defining affinity should help with pinoffset set 'manually'
> (in practice with script) but these results are quite variable. Am bit lost
> now, what should be the best practice in case nodes are shared among
> different users and multidir can be tricky in such case (if other gromacs
> users are not using multidir option!).

I suggest fixing the above issue first. I don't fully understand what
the below descriptions mean, please be more specific about the details
or share logs.

>
> Sr. no.  Remarks (each job 2 GPUs; 4 procs)                       Performance (ns/day)
> 1        only one job                                              345
> 2        two same jobs together (without pin on)                   16.1 and 15.9
> 3        two same jobs together (without pin on, with -multidir)   178 and 191
> 4        two same jobs together (pin on, pinoffset at 0 and 5)     160 and 301
>
> Sr. no.  Remarks (each job 4 GPUs; 16 procs)                       Performance (ns/day)
> 5        only one job                                              694
> 6        two same jobs together (without pin on)                   340 and 350
> 7        two same jobs together (without pin on, with -multidir)   346 and 344
> 8        two same jobs together (pin on, pinoffset at 0 and 17)    204 and 546
>
>
> On Thu, Sep 14, 2017 at 12:02 PM, gromacs query 
> wrote:
>
>> Hi Szilárd,
>>
>> Here are my replies:
>>
>> >> Did you run the "fast" single job on an otherwise empty node? That
>> might explain it as, when most of the CPU cores are left empty, modern CPUs
>> increase clocks (turbo boost) on the used cores higher than they could with
>> all cores busy.
>>
>> Yes the "fast" single job was on empty node. Sorry I don't get it when you
>> say 'modern CPUs increase clocks', you mean the ns/day I get is pseudo in
>> that case?
>>
>> >> and if you post an actual log I can certainly give more informed
>> comments
>>
>> Sure, if its ok can I post it off-mailing list to you?
>>
>> >> However, note that if you are sharing a node with others, if their jobs
>> are not correctly affinitized, those processes will affect the performance
>> of your job.
>>
>> Yes exactly. In this case I would need to manually set pinoffset but this
>> can be a bit frustrating if other Gromacs users are not binding :)
>> Would it be possible to fix this in the default algorithm, though I am
>> unaware of other issues it might cause? Also -multidir is not convenient
>> sometimes when job crashes in the middle and automatic restart from cpt
>> file would be difficult.
>>
>> -J
>>
>>
>> On Thu, Sep 14, 2017 at 11:26 AM, Szilárd Páll 
>> wrote:
>>
>>> On Wed, Sep 13, 2017 at 11:14 PM, gromacs query 
>>> wrote:
>>> > Hi Szilárd,
>>> >
>>> > Thanks again. I tried now with -multidir like this:
>>> >
>>> > mpirun -np 16 gmx_mpi mdrun -s test -ntomp 2 -maxh 0.1 -multidir t1 t2
>>> t3 t4
>>> >
>>> > So this runs 4 jobs on the same node, so for each job np = 16/4, and each
>>> > job is using 2 GPUs. I get now quite improved performance and equal
>>> > performance for each job (~ 220 ns), though still slightly less than a
>>> > single independent job (where I get 300 ns). I can live with that but -
>>>
>>> That is not normal and it is more likely to be a benchmarking
>>> discrepancy: you are likely not comparing apples to apples. Did you
>>> run the "fast" single job on an otherwise empty node? That might
>>> explain it as, when most of the CPU cores are left empty, modern CPUs
>>> increase clocks (tubo boost) on the used cores higher than they could
>>> with all cores busy.
>>>
>>> > Surprised: There are maximum 40 cores and 8 GPUs per node and thus my 4
>>> > jobs should consume 8 GPUS.
>>>
>>> Note that even if those are 40 real cores (rather than 20 cores with
>>> HyperThreading), the current GROMACS release will be unlikely to run
>>> efficiently with fewer than 6-8 cores per GPU. This will likely change
>>> with the next release.
>>>
>>> > So I am a bit surprised by the fact that the same node on which my four
>>> > jobs were running was already occupied with jobs by some other user, which
>>> > I think should not happen (maybe a slurm.config admin issue?). Either some
>>> > of my jobs should have gone into the queue or run on another node if free.
>>>
>>> Sounds like a job scheduler issue (you can always check in the log the
>>> detected hardware) -- 

[gmx-users] the importance of process/thread affinity, especially in node sharing setups [fork of Re: performance]

2017-09-18 Thread Szilárd Páll
>>> However, note that if you are sharing a node with others, if their jobs
> are not correctly affinitized, those processes will affect the performance
> of your job.
>
> Yes exactly. In this case I would need to manually set pinoffset but this
> can be a bit frustrating if other Gromacs users are not binding :)
> Would it be possible to fix this in the default algorithm, though I am
> unaware of other issues it might cause? Also -multidir is not convenient
> sometimes when job crashes in the middle and automatic restart from cpt
> file would be difficult.

Let me be very explicit and clear about this to avoid misunderstandings:

This is *not a problem* in GROMACS, but rather a property of any
modern multicore system: you either set the right affinities for the
use-case (considering workload, node utilization, hardware locality,
scaling concerns), or otherwise the effective job/process/thread locality
(in terms of where it runs and where its data is located in a node) will be
a matter of luck, up to the operating system, and will rarely be
optimal.

While mdrun tries to help users obtain good and consistent performance
by either setting (when it can assume that it runs on the full node)
or helping to set affinities, ultimately it is the responsibility of
the users/job schedulers to get job placement right -- especially in
node sharing setups. At job allocation the job scheduler should know
which resources (cores, memory, GPUs) the user is allocated, and it
should place and affinitize jobs accordingly -- which mdrun does
respect.


A bit more technical detail for the curious: data resides in different
levels of memories (from global memory to L3, L2, and L1 caches) and
if a job e.g. starts running on cores 0-3, working sets of data will
be "pulled in" into the private and shared caches closest to these
cores. If the job is not affinitized, e.g. two threads running on
cores 0 and 1 could end up moved to, say, cores 9-10. As a result,
these two unlucky threads will "lose" their private caches or, even
worse, if cores 9-10 are on the second socket, they will also lose the
shared cache and the ability to do fast data sharing with the two
other threads of the same mdrun run. For this reason, if your job is
meant to run on four cores, say cores 0-3, its process affinity mask
should be set accordingly to prevent its threads from migrating to other
cores.
Note that this is a simplified example specific to a use-case that can
hurt the performance of GROMACS runs. Different affinity patterns will
be optimal for other types of compute workloads.
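To make this concrete, here is a minimal sketch of two runs sharing one node
and pinned to disjoint cores (the thread counts and offsets are placeholders
and must match the actual allocation):

  gmx mdrun -deffnm job1 -ntmpi 2 -ntomp 4 -pin on -pinstride 1 -pinoffset 0
  gmx mdrun -deffnm job2 -ntmpi 2 -ntomp 4 -pin on -pinstride 1 -pinoffset 8

The first run then stays on hardware threads 0-7 and the second on 8-15, so
neither migrates onto the cores of the other.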

Cheers,
--
Szilárd



On Thu, Sep 14, 2017 at 1:02 PM, gromacs query  wrote:
> Hi Szilárd,
>
> Here are my replies:
>
>>> Did you run the "fast" single job on an otherwise empty node? That might
> explain it as, when most of the CPU cores are left empty, modern CPUs
> increase clocks (turbo boost) on the used cores higher than they could with
> all cores busy.
>
> Yes the "fast" single job was on empty node. Sorry I don't get it when you
> say 'modern CPUs increase clocks', you mean the ns/day I get is pseudo in
> that case?
>
>>> and if you post an actual log I can certainly give more informed comments
>
> Sure, if its ok can I post it off-mailing list to you?
>
>>> However, note that if you are sharing a node with others, if their jobs
> are not correctly affinitized, those processes will affect the performance
> of your job.
>
> Yes exactly. In this case I would need to manually set pinoffset but this
> can be a bit frustrating if other Gromacs users are not binding :)
> Would it be possible to fix this in the default algorithm, though I am
> unaware of other issues it might cause? Also -multidir is not convenient
> sometimes when job crashes in the middle and automatic restart from cpt
> file would be difficult.
>
> -J
>
>
> On Thu, Sep 14, 2017 at 11:26 AM, Szilárd Páll 
> wrote:
>
>> On Wed, Sep 13, 2017 at 11:14 PM, gromacs query 
>> wrote:
>> > Hi Szilárd,
>> >
>> > Thanks again. I tried now with -multidir like this:
>> >
>> > mpirun -np 16 gmx_mpi mdrun -s test -ntomp 2 -maxh 0.1 -multidir t1 t2
>> t3 t4
>> >
>> > So this runs 4 jobs on the same node, so for each job np = 16/4, and each
>> > job is using 2 GPUs. I get now quite improved performance and equal
>> > performance for each job (~ 220 ns), though still slightly less than a
>> > single independent job (where I get 300 ns). I can live with that but -
>>
>> That is not normal and it is more likely to be a benchmarking
>> discrepancy: you are likely not comparing apples to apples. Did you
>> run the "fast" single job on an otherwise empty node? That might
>> explain it as, when most of the CPU cores are left empty, modern CPUs
>> increase clocks (turbo boost) on the used cores higher than they could
>> with all cores busy.
>>
>> > Surprised: There are maximum 40 cores and 8 GPUs per node and thus my 4
>> > jobs should consume 8 GPUS.
>>
>> Note that even if those are 40 real 

Re: [gmx-users] performance

2017-09-18 Thread Szilárd Páll
On Thu, Sep 14, 2017 at 1:02 PM, gromacs query  wrote:
> Hi Szilárd,
>
> Here are my replies:
>
>>> Did you run the "fast" single job on an otherwise empty node? That might
> explain it as, when most of the CPU cores are left empty, modern CPUs
> increase clocks (turbo boost) on the used cores higher than they could with
> all cores busy.
>
> Yes the "fast" single job was on empty node. Sorry I don't get it when you
> say 'modern CPUs increase clocks', you mean the ns/day I get is pseudo in
> that case?

It's called DVFS or Turbo Boost on Intel. Here are some pointers:
https://en.wikipedia.org/wiki/Dynamic_frequency_scaling
https://en.wikipedia.org/wiki/Intel_Turbo_Boost

>>> and if you post an actual log I can certainly give more informed comments
>
> Sure, if its ok can I post it off-mailing list to you?

Please use an online file sharing service of your liking so everyone
has access to the information referred to here.

>>> However, note that if you are sharing a node with others, if their jobs
> are not correctly affinitized, those processes will affect the performance
> of your job.
>
> Yes exactly. In this case I would need to manually set pinoffset but this
> can be a bit frustrating if other Gromacs users are not binding :)
> Would it be possible to fix this in the default algorithm, though I am
> unaware of other issues it might cause?

No, there is no issue on the GROMACS-side to fix. This is an issue
that the jobs scheduler/you as user needs to deal with to avoid the
pitfalls and performance-cliff inherent to node-sharing.

> Also -multidir is not convenient
> sometimes when job crashes in the middle and automatic restart from cpt
> file would be difficult.

Let me answer that separately to emphasize a few technical issues.

Cheers,
--
Szilárd

> -J
>
>
> On Thu, Sep 14, 2017 at 11:26 AM, Szilárd Páll 
> wrote:
>
>> On Wed, Sep 13, 2017 at 11:14 PM, gromacs query 
>> wrote:
>> > Hi Szilárd,
>> >
>> > Thanks again. I tried now with -multidir like this:
>> >
>> > mpirun -np 16 gmx_mpi mdrun -s test -ntomp 2 -maxh 0.1 -multidir t1 t2
>> t3 t4
>> >
>> > So this runs 4 jobs on the same node, so for each job np = 16/4, and each
>> > job is using 2 GPUs. I get now quite improved performance and equal
>> > performance for each job (~ 220 ns), though still slightly less than a
>> > single independent job (where I get 300 ns). I can live with that but -
>>
>> That is not normal and it is more likely to be a benchmarking
>> discrepancy: you are likely not comparing apples to apples. Did you
>> run the "fast" single job on an otherwise empty node? That might
>> explain it as, when most of the CPU cores are left empty, modern CPUs
>> increase clocks (turbo boost) on the used cores higher than they could
>> with all cores busy.
>>
>> > Surprised: There are maximum 40 cores and 8 GPUs per node and thus my 4
>> > jobs should consume 8 GPUS.
>>
>> Note that even if those are 40 real cores (rather than 20 cores with
>> HyperThreading), the current GROMACS release will be unlikely to run
>> efficiently with fewer than 6-8 cores per GPU. This will likely change
>> with the next release.
>>
>> > So I am a bit surprised by the fact that the same node on which my four
>> > jobs were running was already occupied with jobs by some other user, which
>> > I think should not happen (maybe a slurm.config admin issue?). Either some
>> > of my jobs should have gone into the queue or run on another node if free.
>>
>> Sounds like a job scheduler issue (you can always check in the log the
>> detected hardware) -- and if you post an actual log I can certainly
>> give more informed comments.
>>
>> > What to do: Importantly though, as an individual user I can submit a
>> > -multidir job, but let's say, as is normally the case, there are many other
>> > unknown users who submit one or two jobs; in that case performance will be
>> > an issue (which is equivalent to my case when I submit many jobs without
>> > -multi/-multidir).
>>
>> Not sure I follow: if you always have a number of similar runs to do,
>> submit them together and benefit from not having to do manual hardware
>> assignment. Otherwise, if your cluster relies on node sharing, you
>> will have to make sure that you specify correctly the affinity/binding
>> arguments to your job scheduler (or work around it with manual offset
>> calculation). However, note that if you are sharing a node with
>> others, if their jobs are not correctly affinitized, those processes
>> will affect the performance of your job.
>>
>> > I think still they will need -pinoffset. Could you
>> > please suggest what best can be done in such case?
>>
>> See above.
>>
>> Cheers,
>> --
>> Szilárd
>>
>> >
>> > -Jiom
>> >
>> >
>> >
>> >
>> > On Wed, Sep 13, 2017 at 9:15 PM, Szilárd Páll 
>> > wrote:
>> >
>> >> Hi,
>> >>
>> >> First off, have you considered options 2) using multi-sim? That would
>> >> allow you to not have to 

Re: [gmx-users] mpirun noticed that process rank 7 with PID 19160 on node compute-0-28.local exited on signal 11 (Segmentation fault).

2017-09-18 Thread Mark Abraham
Hi,

You're not compiling properly for your cluster, which might have different
hardware in different places. Read its docs and talk to your admins.
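One common remedy (a sketch only, not necessarily what is wrong here: the SIMD
level is just an example and must match the compute nodes, and whether you want
an MPI build depends on how you launch jobs) is to rebuild with the target
hardware and parallelization stated explicitly:

  cmake .. -DGMX_SIMD=AVX2_256 -DGMX_MPI=ON -DCMAKE_INSTALL_PREFIX=$HOME/gromacs
  make -j 8 && make install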

Mark

On Mon, Sep 18, 2017 at 4:07 AM Vidya R  wrote:

> Thank you for your reply.
>
> When I try to run my job in a single processor through qsub command, (by
> feeding the gromacs mdrun command in script file), it says SEGMENTATION
> FAULT, CORE DUMPED...
>
>
> But, when I run my job in login node (which we are not supposed to do), it
> works very well...
>
>
> Can you comment on this?
>
>
> Thanks,
> Vidya.R
>
> On Mon, Sep 18, 2017 at 2:50 AM, Mark Abraham 
> wrote:
>
> > Hi,
> >
> > You're running a thread-MPI version of GROMACS, which is probably not
> what
> > you want to do if you're running mpirun. It should work even so, but
> > whatever quirks exist with SGE are unfortunately between you, its docs
> and
> > your cluster's docs and admins :-(
> >
> > Mark
> >
> > On Sun, Sep 17, 2017 at 7:23 AM Vidya R  wrote:
> >
> > > My log file is provided in the link below
> > >
> > > Can you please look into it and let me know why the error arises?
> > >
> > > I am feeding my commands in SGE cluster.   When I run it in my login
> > node,
> > > gmx mdrun -v -deffnm eql runs well
> > >
> > >
> > > But, through qsub command, (with 8 processors) It says,
> > >
> > > mpirun noticed that process rank 7 with PID 19160 on node
> > > compute-0-28.local exited on signal 11 (Segmentation fault).
> > >
> > > Please help me.
> > >
> > > I am unable to figure out, as to whether the problem is with the
> version
> > of
> > > gromacs or the method of compiling.
> > >
> > >
> > >
> > > https://drive.google.com/file/d/0BxGqxeGwTDLbQW9OZDFuM1doUlU/view?usp=sharing


[gmx-users] PMF from constant force..

2017-09-18 Thread Nikhil Maroli
Dear all,

Is there any way to obtain a PMF from pull = constant force? I have seen that
wham only supports 'umbrella'.

When I try to use umbrella, my molecule is not travelling through the
channel.

Thanks in advance.
-- 
Regards,
Nikhil Maroli


[gmx-users] Blank Coordinate File

2017-09-18 Thread Souparno Adhikary
We were trying to solvate a small peptide (octamer) in water to run a 100ns
simulation of it.

In the pdb2gmx step, we have provided -ignh and -ter options (with NH3+ and
COO-) and used the gromos53a6 forcefield. It did not generate any error.
But it is generating a blank coordinate file (.gro). Here is the output of
the pdb2gmx command:-

End terminus ARG-9: COO-
Checking for duplicate atoms
Generating any missing hydrogen atoms and/or adding termini.
Now there are 8 residues with 139 atoms
Making bonds...
Number of bonds was 143, now 138
Generating angles, dihedrals and pairs...

WARNING: WARNING: Residue 1 named ARG of a molecule in the input file was
mapped
to an entry in the topology database, but the atom H used in
an interaction of type angle in that entry is not found in the
input file. Perhaps your atom and/or residue naming needs to be
fixed.



WARNING: WARNING: Residue 8 named ARG of a molecule in the input file was
mapped
to an entry in the topology database, but the atom O used in
an interaction of type angle in that entry is not found in the
input file. Perhaps your atom and/or residue naming needs to be
fixed.


Before cleaning: 238 pairs
Before cleaning: 238 dihedrals
Making cmap torsions...
There are   79 dihedrals,   55 impropers,  195 angles
   238 pairs,  138 bonds and 0 virtual sites
Total mass 1275.585 a.m.u.
Total charge 8.000 e
Writing topology

Back Off! I just backed up posre.itp to ./#posre.itp.2#

Writing coordinate file...

Back Off! I just backed up polyr.gro to ./#polyr.gro.2#
- PLEASE NOTE 
You have successfully generated a topology from: polyr.pdb.
The Gromos53a6 force field and the spc water model are used.
- ETON ESAELP 

gcq#415: "In a talk you have a choice: You can make one point or no
points." (Paul Sigler)


Any help??? Please???

Souparno Adhikary,
CHPC Lab,
Department of Microbiology,
University of Calcutta.


Re: [gmx-users] Regarding calculation of Lennard-Jones potential

2017-09-18 Thread Erik Marklund
Dear Dilip,

The LJ parameters are present in the topology file and the force-field files 
included within. So no need to calculate anything, just to locate them in said 
files.
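For instance, the nonbonded parameters usually sit in the [ atomtypes ] section
of the force-field .itp files; an illustrative (made-up) excerpt could look like
this, where the last two columns are c6/c12 or sigma/epsilon depending on the
combination rule declared in [ defaults ]:

  [ atomtypes ]
  ; name  at.num   mass     charge  ptype   c6           c12
    CH3   6        15.0350  0.000   A       9.3081e-03   2.6646e-05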

Kind regards,
Erik
__
Erik Marklund, PhD, Marie Skłodowska Curie INCA Fellow
Department of Chemistry – BMC, Uppsala University
+46 (0)18 471 4539
erik.markl...@kemi.uu.se

On 18 Sep 2017, at 05:42, Dilip H N wrote:

Hello,
I have a simulation mixture of an amino acid (e.g., glycine) with water and a
cosolvent. I want to calculate the Lennard-Jones parameters of all the atom
types. How can I calculate them?

Can it be done with commands, or by any other method?
Any suggestions are appreciated...

Thank you..

--
With Best Regards,

DILIP.H.N
Ph.D Student





Re: [gmx-users] Changes in the simulation box after the production run

2017-09-18 Thread Mahsa E
Thank you very much for the links!

Best regards,
Mahsa

On Mon, Sep 18, 2017 at 1:30 AM, Dallas Warren 
wrote:

> These two images will help you see what is going on:
>
> https://twitter.com/dr_dbw/status/909559339366572032 - shows a
> molecule that appears to be outside the box.
>
> https://twitter.com/dr_dbw/status/909559783291723776 - however, that
> molecule actually enters through the opposite face of the box.
> Catch ya,
>
> Dr. Dallas Warren
> Drug Delivery, Disposition and Dynamics
> Monash Institute of Pharmaceutical Sciences, Monash University
> 381 Royal Parade, Parkville VIC 3052
> dallas.war...@monash.edu
> -
> When the only tool you own is a hammer, every problem begins to resemble a
> nail.
>
>
> On 18 September 2017 at 09:22, Mahsa E  wrote:
> > Thank you for you quick reply, Justin and Dallas! Very good point!
> >
> > Best regards,
> > Mahsa
> >
> >
> >
> >
> >
> > On Mon, Sep 18, 2017 at 1:00 AM, Justin Lemkul  wrote:
> >
> >>
> >>
> >> On 9/17/17 6:57 PM, Mahsa E wrote:
> >>
> >>> Could you please see the link below for the input and output simulation
> >>> box:
> >>>
> >>> https://www.dropbox.com/sh/kb36ake7mj5iovh/AABPF4_FUfvSPZxdO5WN3JnEa?dl=0
> >>>
> >>>
> >>> Actually, I thought since some of the chains went out of the simulation
> >>> box, the density would have changed. In my previous experience with
> >>> another polymer, I didn't see this difference in the system after the
> >>> production run, so I'm wondering if this is related to the stability of
> >>> the system?
> >>>
> >>>
> >> As Dallas said, this is just a periodicity/visualization effect - there's
> >> no such thing as "outside" a periodic cell.
> >>
> >> Your "before MD" has "broken" molecules, i.e. all the atoms are visualized
> >> as being in the central image.  Your "after MD" is just those molecules
> >> made whole.  If you make the initial frame whole (trjconv -pbc whole), you
> >> will see a similar configuration.
> >>
> >> -Justin
> >>
> >>
> >> Best regards,
> >>> Mahsa
> >>>
> >>>
> >>>
> >>>
> >>>
> >>> On Mon, Sep 18, 2017 at 12:10 AM, Dallas Warren <
> dallas.war...@monash.edu
> >>> >
> >>> wrote:
> >>>
> >>> Because that is how the system changed within the simulation time?
> 
>  What exactly is the problem as you see it, and why do you think it is
> a
>  problem?
> 
>  And remember, you have a periodic boundary condition that means the
>  one edge of the box wraps around to the opposite one.  So "out of the
>  box" is a visualisation artefact, not a "problem".
>  http://www.gromacs.org/Documentation/Terminology/
>  Periodic_Boundary_Conditions
>  Catch ya,
> 
>  Dr. Dallas Warren
>  Drug Delivery, Disposition and Dynamics
>  Monash Institute of Pharmaceutical Sciences, Monash University
>  381 Royal Parade, Parkville VIC 3052
>  dallas.war...@monash.edu
>  -
>  When the only tool you own is a hammer, every problem begins to
> resemble
>  a
>  nail.
> 
> 
>  On 18 September 2017 at 06:31, Mahsa E  wrote:
> 
> > Dear gmx-users,
> >
> > I did a 200 ns production md run in NVT ensemble for a simulation
> box of
> > polymer chains. Before this step, I did the energy minimisation, NVT
> and
> > NPT equilibration on the system. The problem is after the production
> >
>  run, I
> 
> > don't get the initial equilibrated packed box of polymer and it seems
> >
>  more
> 
> > like a circular shape with some parts of the chains out of the box.
> What
> >
>  is
> 
> > the reason for getting this result?
> > For the MD run I used the mdp file below:
> >
> > ; 7.3.2 Preprocessing
> >
> > ;define  =   ; defines to pass to the
> preprocessor
> >
> >
> > ; 7.3.3 Run Control
> >
> > integrator  = md; md integrator
> >
> > tinit   = 0 ; [ps] starting time
> for
> >
>  run
> 
> >
> > dt  = 0.002 ; [ps] time step for
> > integration
> >
> > nsteps  = 1; maximum number
> of
> > steps to integrate, 0.002 * 1 = 20 ps
> >
> > comm_mode   = Linear; remove center of
> mass
> > translation
> >
> > nstcomm = 100 ; [steps]
> frequency of
> > mass motion removal
> >
> > ;comm_grps   = Protein Non-Protein   ; group(s) for
> center
> > of
> > mass motion removal
> >
> >
> > ; 7.3.8 Output Control
> >
> > nstxout = 0 ; [steps] freq to write
> coordinates
> >
>  to
> 
> > trajectory
> 

[gmx-users] Set up the deformation rates for DEFORM

2017-09-18 Thread Own 12121325
Hello,

with the aim to simulate shear stress using the DEFORM non-equilibrium
option, I need to calibrate the deformation rates of the box (nm ps-1)
against the experimental value characterized for the shear stress (10 dyn/cm2 ~
1 pascal).

Therefore, I would like to know what deformation rates set in the mdp file
would correspond to the value of 10 dyn/cm2, assuming that I am
simulating my system with standard parameters for the barostat. I would be
grateful for a formula suitable for such a conversion.
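For reference, the option in question takes six box-vector velocities, a(x)
b(y) c(z) b(x) c(x) c(y), in nm ps-1; a shear-like deformation of the b vector
along x would look like the line below (the rate shown is an arbitrary
placeholder, not the calibrated value I am asking about):

  deform = 0 0 0 0.001 0 0    ; nm ps-1 for a(x) b(y) c(z) b(x) c(x) c(y)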

Thanks!

Gleb



Re: [gmx-users] Regarding calculation of Lennard-Jones potential

2017-09-18 Thread Tushar Ranjan Moharana
You can do that with g_energy (or gmx energy), but before that you have to
create separate energy groups and specify them in the .mdp file prior to the md
run, or you can rerun the trajectory with those changes.
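A rough sketch of the rerun route (group and file names are placeholders, and
Protein/SOL are assumed to be valid groups for the system):

  energygrps = Protein SOL      ; added to the .mdp used for the rerun

  gmx grompp -f rerun.mdp -c conf.gro -p topol.top -o rerun.tpr
  gmx mdrun -s rerun.tpr -rerun traj.xtc -deffnm rerun
  gmx energy -f rerun.edr

gmx energy will then offer LJ-SR (and Coul-SR) terms for each energy-group pair.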

-- 
Tushar Ranjan Moharana
B. Tech, NIT Warangal
Ph D Student, CCMB