Re: [gmx-users] multiple processes of a gromacs tool requiring user action at runtime on one Cray XC30 node using aprun

2015-10-29 Thread Rashmi
Hi,

As written on the website, g_mmpbsa does not directly support MPI: it
contains no OpenMP- or MPI-specific code itself. However, we have tried to
expose the MPI and OpenMP functionality of APBS through the mechanism below.

To use g_mmpbsa with MPI: (1) allocate the processors through the
queue-management system, (2) define the APBS environment variable (export
APBS="mpirun -np 8 apbs") including all required flags, and (3) start
g_mmpbsa directly, without mpirun (or any similar launcher). If the
queue-management system strictly requires aprun/mpirun to execute a
program, g_mmpbsa may not work.

To use g_mmpbsa with OpenMP: (1) allocate the threads through the
queue-management system, (2) set the OMP_NUM_THREADS variable to the
allocated number of threads, and (3) execute g_mmpbsa.

We have not tested simultaneous use of MPI and OpenMP, so we do not know
whether it will work.
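The two recipes above can be sketched as a batch script; the file names, core counts, and g_mmpbsa flags here are illustrative assumptions, not a tested site configuration:

```shell
#!/bin/bash
# MPI mode: g_mmpbsa itself runs serially and launches APBS in parallel
# through the APBS environment variable -- do NOT wrap g_mmpbsa in mpirun.
export APBS="mpirun -np 8 apbs"
g_mmpbsa -f traj.xtc -s topol.tpr -n index.ndx -pbsa

# OpenMP mode (alternative): APBS threads instead of MPI ranks.
# export OMP_NUM_THREADS=8
# g_mmpbsa -f traj.xtc -s topol.tpr -n index.ndx -pbsa
```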

Concerning standard input for g_mmpbsa, if echo or < wrote:

>
> hi again,
>
> 3 answers are hidden somewhere below ..
>
>
> On 28.10.2015 at 15:45, Mark Abraham wrote:
>
>> Hi,
>>
>> On Wed, Oct 28, 2015 at 3:19 PM Vedat Durmaz  wrote:
>>
>>
>>> On 27.10.2015 at 23:57, Mark Abraham wrote:
>>>
 Hi,


 On Tue, Oct 27, 2015 at 11:39 PM Vedat Durmaz  wrote:

 hi mark,
>
> many thanks. but can you be a little more precise? the author's only
> hint regarding mpi is on this site
> "http://rashmikumari.github.io/g_mmpbsa/How-to-Run.html" and related
> to
> APBS. g_mmpbsa itself doesn't understand openmp/mpi afaik.
>
> the error i'm observing is occurring pretty much before apbs is
> started.
> to be honest, i can't see any link to my initial question ...
>
 It has the sentence "Although g_mmpbsa does not support mpirun...". aprun is
 a form of mpirun, so I assumed you knew that what you were trying was
 actually something that could work, which would therefore have to be with
 the APBS back end. The point of what it says there is that you don't run
 g_mmpbsa with aprun, you tell it how to run APBS with aprun. This just
 avoids the problem entirely because your redirected/interactive input goes
 to a single g_mmpbsa as normal, which then launches APBS with MPI support.

 Tool authors need to actively write code to be useful with MPI, so unless
 you know what you are doing is supposed to work with MPI because they say
 it works, don't try.

 Mark

>>> you are right. it's apbs which ought to run in parallel mode. of course,
>>> i can set the variable 'export APBS="mpirun -np 8 apbs"' [or set 'export
>>> OMP_NUM_THREADS=8'] if i want to split a 24-core node into, let's say, 3
>>> independent g_mmpbsa processes. the problem is that i must start
>>> g_mmpbsa itself with aprun (in the script run_mmpbsa.sh).
>>>
>>
>> No. Your job runs a shell script on your compute node. It can do anything
>> it likes, but it would make sense to run something in parallel at some
>> point. You need to build a g_mmpbsa that you can just run in a shell
>> script
>> that echoes in the input (try that on its own first). Then you use the
>> above approach so that the single process that is g_mmpbsa does the call
>> to
>> aprun (which is the cray mpirun) to run APBS in MPI mode.
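That recipe can be sketched concretely; the group numbers, file names, and rank count below are assumptions for illustration, and the key point is that only APBS goes through aprun:

```shell
# One ordinary g_mmpbsa process per job script; it launches APBS
# in parallel via the APBS environment variable.
export APBS="aprun -n 8 apbs"
# Feed the two interactive selections (e.g. protein, then ligand)
# on stdin instead of typing them at a terminal.
printf '1\n13\n' | g_mmpbsa -f traj.xtc -s topol.tpr -n index.ndx -pbsa
```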
>>
>> It is likely that even if you run g_mmpbsa with aprun and solve the input
>> issue somehow, the MPI runtime will refuse to start the child APBS with
>> aprun, because nesting is typically unsupported (and your current command
>> lines haven't given it enough information to do a good job even if it is
>> supported).
>>
>
> yes, i've encountered issues with nested aprun calls. so this will hardly
> work i guess.
>
>
>>> i absolutely
>>> cannot see any other way of running apbs when using it out of g_mmpbsa.
>>> hence, i need to run
>>>
>>> aprun -n 3 -N 3 -cc 0-7:8-15:16-23 ../run_mmpbsa.sh
>>>
>> This likely starts three copies of g_mmpbsa, each of which expects terminal
>> input, which maybe you can teach aprun to manage, but then each g_mmpbsa
>> will then do its own APBS and this is completely not what you want.
>>
>
> hmm, to be honest, i would say this is exactly what i'm trying to achieve.
> isn't it? i want 3 independent g_mmpbsa runs each of which executed in
> another directory with its own APBS. by the way, all together i have 1800
> such directories each containing another trajectory.
>
> if someone is ever (within the next 20 hours!) able to figure out a
> solution for this purpose, i would be absolutely pleased.
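Under the assumption that plain background processes are allowed on the node (which the rest of this thread suggests depends on the queue), one hedged sketch of three independent per-directory runs is:

```shell
# Start one run_mmpbsa.sh per trajectory directory as background
# processes; directory names are placeholders for the 1800 real ones.
for d in traj_0001 traj_0002 traj_0003; do
    ( cd "$d" && ../run_mmpbsa.sh > run.log 2>&1 ) &
done
wait   # block until all three runs finish
```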
>
>
> and of course i'm aware about having given 8 cores to g_mmpbsa, hoping
> that it is able to read my input and to run apbs which hopefully uses
> all of the 8 cores. the user input (choosing protein, then ligand),
> however, "Cannot [be] read". this issue occurs quite early during the
> g_mmpbsa 

[gmx-users] implicit solvent with periodic molecules

2015-10-29 Thread Анна Павловна Толстова
Hello, gmx_users.

Is it possible to simulate protein adsorption on a graphite sheet with
implicit solvent? It works well in vacuo and with explicit solvent with
periodic-molecules = yes. But implicit solvent requires no PBC, so
periodic molecules would have to be switched off too. Is there any idea how
to perform this simulation?

Best wishes,
Tolstova Ann
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] 5ns simulation in 2 hours

2015-10-29 Thread Smith, Micholas D.
A few clarifying questions:

1) What is the size of the system?
2) What is your frame saving rate?
3) Are you using a GPU? What about MPI or OpenMP?

60 ns/day (which is what 5 ns in 2 hours amounts to) is not unheard of, given a 
reasonable system size and GPU acceleration. If you are curious about the 
performance, check your log files and see what is going on.

-Micholas

===
Micholas Dean Smith, PhD.
Post-doctoral Research Associate
University of Tennessee/Oak Ridge National Laboratory
Center for Molecular Biophysics


From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
 on behalf of Sana Saeed 

Sent: Wednesday, October 28, 2015 11:24 PM
To: gromacs.org_gmx-users
Subject: [gmx-users] 5ns simulation in 2 hours

hi gmx users! I ran a 5 ns simulation and it took only 2 hours... is that 
possible? It's showing no error.
Regards Autumn


Re: [gmx-users] implicit solvent with periodic molecules

2015-10-29 Thread Vitaly V. Chaban
Yes. It is formally possible.

I do not think that implicit solvation precludes PBC usage.

On Thu, Oct 29, 2015 at 9:40 AM, Анна Павловна Толстова <
tolst...@physics.msu.ru> wrote:


Re: [gmx-users] 5ns simulation in 2 hours

2015-10-29 Thread Jorge Fernandez-de-Cossio-Diaz
Also check your time step size. A large time step can lead to numerical
errors, but you will run more ns/day
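The trade-off is easy to quantify; a quick sketch with illustrative numbers (a 2 fs time step and the 5 ns / 2 h figure from this thread):

```shell
dt_fs=2            # time step in femtoseconds (assumed)
steps=2500000      # 5 ns at 2 fs per step
hours=2            # observed wall time
# simulated nanoseconds = dt * steps / 1e6 (fs -> ns)
ns=$(awk -v d="$dt_fs" -v s="$steps" 'BEGIN{printf "%.1f", d*s/1e6}')
# extrapolate to a daily rate
nsday=$(awk -v n="$ns" -v h="$hours" 'BEGIN{printf "%.0f", n*24/h}')
echo "$ns ns in $hours h -> $nsday ns/day"
```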

On Thu, Oct 29, 2015 at 8:56 AM, Smith, Micholas D. 
wrote:



Re: [gmx-users] 5ns simulation in 2 hours

2015-10-29 Thread gozde ergin
I ran a 20 ns simulation in 3.5 hours and assumed it was correct.
My system is 2.5 x 2.5 x 12.5 nm with 512 waters and 50 organic molecules.
I do not use GPU.

On Thu, Oct 29, 2015 at 2:04 PM, Jorge Fernandez-de-Cossio-Diaz <
j.cossio.d...@gmail.com> wrote:



Re: [gmx-users] 5ns simulation in 2 hours

2015-10-29 Thread Téletchéa Stéphane

On 29/10/2015 at 04:24, Sana Saeed wrote:

is it possible?

yes.

--
Assistant Professor, UFIP, UMR 6286 CNRS, Team Protein Design In Silico
UFR Sciences et Techniques, 2, rue de la Houssinière, Bât. 25, 44322 Nantes 
cedex 03, France
Tél : +33 251 125 636 / Fax : +33 251 125 632
http://www.ufip.univ-nantes.fr/ - http://www.steletch.org


Re: [gmx-users] Incomplete compiling Gromacs 5.1

2015-10-29 Thread Justin Lemkul



On 10/29/15 12:16 AM, Chunlei ZHANG wrote:

Dear All,
I was trying to compile GROMACS 5.1 on a cluster with two 10-core
Intel Xeon E5-2600 v3 (Haswell) CPUs on each node.
The command with cmake:
cmake .. -DCMAKE_C_COMPILER=mpiicc \
-DCMAKE_CXX_COMPILER=mpiicpc \
-DGMX_MPI=on -DGMX_OPENMP=on \
-DGMX_GPU=off \
-DGMX_SIMD=AVX2_256 \
-DGMX_DOUBLE=off \
-DCMAKE_INSTALL_PREFIX=/my/path/GMX5.1 \
-DBUILD_SHARED_LIBS=off \
-DGMX_FFT_LIBRARY=MKL

Then I executed make, but received the following message:
"make[2]: warning:  Clock skew detected.  Your build may be incomplete."
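The clock-skew warning typically means some file timestamps (often on an NFS-mounted tree) lie in the future relative to the build host, so make may skip targets. A common remedy, sketched under that assumption, is to normalize the timestamps and rebuild:

```shell
# From the build directory: reset every timestamp in the tree to "now",
# then rebuild so no target is silently skipped.
find .. -exec touch {} +
make clean && make -j 20
make install
```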

Then, make install:
In the bin folder, I only found these executables:
demux.pl  gmx-completion.bash  gmx-completion-gmx_mpi.bash  gmx_mpi  GMXRC
  GMXRC.bash  GMXRC.csh  GMXRC.zsh  xplor2gmx.pl

Could anyone suggest a possible solution to this?


Solution to what?  There is only one binary in version 5.1, called "gmx" (or 
"gmx_mpi" in the case of an MPI build).
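In other words, the former per-tool binaries are now subcommands of that single binary; for example (assuming an MPI build installed as gmx_mpi):

```shell
gmx_mpi grompp -f md.mdp -c conf.gro -p topol.top -o md.tpr  # formerly grompp
gmx_mpi mdrun -deffnm md                                     # formerly mdrun
gmx_mpi help commands    # lists all available subcommands
```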


-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==


Re: [gmx-users] multiple processes of a gromacs tool requiring user action at runtime on one Cray XC30 node using aprun

2015-10-29 Thread Vedat Durmaz


after several days of trial and error, i was told only today that our 
HPC indeed has one cluster/queue (40-core SMP nodes) that does not 
require the use of aprun/mpirun. so, after having compiled all the tools 
again on that cluster, i am finally able to execute many processes per node.


(however, we were not able to remedy the other issue regarding "aprun" 
in between. nevertheless, i'm fine now.)


thanks for your help guys and good evening

vedat



On 29.10.2015 at 12:53, Rashmi wrote:


Re: [gmx-users] Virtual Sites in protein-ligand systems

2015-10-29 Thread Justin Lemkul



On 10/28/15 9:57 AM, Joan Clark Nicolas wrote:

Dear gmx users,
I am trying to run MD calculations on a protein-ligand system using Virtual
Sites, but as I generate my protein and ligand topologies separately (with
pdb2gmx and acpype, respectively), the VS for the ligand are not generated.

Does anyone know a way to generate the VS for the ligand without adding it
to the force field?



One can define any [ virtual_sites* ] directive manually in the topology. 
pdb2gmx can build some types of virtual sites, but you'd have to tell us 
specifically what it is you're trying to do if you want anything really useful. 
 Soon I will upload a patch that will make virtual site construction easier, 
and intrinsic to pdb2gmx.  I just need to find the time to work out the kinks...
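For reference, a hand-written directive in the ligand's topology might look like the sketch below; the site/atom indices and the a, b construction parameters are placeholders, not values for any real ligand:

```
[ virtual_sites3 ]
; site   ai   aj   ak   funct     a        b
   20     1    2    3     1     0.1340   0.1340
```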


-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==


Re: [gmx-users] frequency of contact between all residue pairs

2015-10-29 Thread Dina Mirijanian
Hi Erik,

Thank you for the suggestion. But will this not give me H-bond contacts
only? I am looking for any contact. The only way I can see to make this
work is to make all atoms both donors and acceptors, and I do not know how
to do that. Am I missing something, or is there a simpler way to do this?
Thanks so much.
-Dina
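For what it's worth, the -contact switch in Erik's suggestion should sidestep the donor/acceptor bookkeeping entirely: g_hbond then merely records atoms within the -r cutoff rather than hydrogen bonds. A sketch (file names and the 0.6 nm cutoff are assumptions):

```shell
# Existence matrix of contacts within 0.6 nm; -hbm writes the matrix
# (.xpm) and -hbn the index groups the matrix rows refer to.
g_hbond -f traj.xtc -s topol.tpr -contact -r 0.6 -hbm contacts.xpm -hbn groups.ndx
```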

On Wed, Oct 21, 2015 at 11:09 AM, Erik Marklund  wrote:

> Hi Dina,
>
> g_hbond -contact -hbn -hbm, then post-process the resulting contact matrix
> to represent residue-residue interactions.
>
> Kind regards,
> Erik
>
> Erik Marklund, PhD
> Postdoctoral Research Fellow
> Fulford JRF, Somerville College
>
> Department of Chemistry
> Physical & Theoretical Chemistry Laboratory
> University of Oxford
> South Parks Road
> Oxford
> OX1 3QZ
>
> > On 21 Oct 2015, at 15:56, Dina Mirijanian  wrote:
> >
> > Hello,
> >
> > I need to calculate the contact frequence between all pairs of residues
> in
> > my protein within a certain cutoff. I was trying to figure out a way to
> do
> > this with g_mindist.  But I have not been able to get it do what I want.
> > Can someone tell me what is the best way to calculate the contact
> frequence
> > between all residues from a trajectory?
> > Thank you so much.
> > -Dina


Re: [gmx-users] There is no error message but the dynamic don´t show the correct number of frames

2015-10-29 Thread Justin Lemkul



On 10/29/15 4:50 PM, Mishelle Oña wrote:


Hi! I am simulating a polymer of polylactic acid with 30 monomers in a water
system. To equilibrate the system I have run NVT, NPT, and production dynamics.
The final run should have 40 000 frames, but when I load it in VMD it has
only 12 186 frames. Also, the confout.gro file that results from the run


VMD probably ran out of memory.  What it thinks is there doesn't necessarily 
reflect reality.  Use gmxcheck on the trajectory to verify its contents.  Then 
try stripping out waters with trjconv and loading that in VMD.


-Justin
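Alongside gmxcheck, the expected frame count can be computed from the .mdp settings, since it is just nsteps over the coordinate-output interval. The values below are assumptions for a 20 ns run at 2 fs with output every 0.5 ps, not Mishelle's actual settings:

```shell
nsteps=10000000   # assumed total MD steps (20 ns at a 2 fs time step)
nstxout=250       # assumed coordinate-output interval (every 0.5 ps)
frames=$(( nsteps / nstxout + 1 ))   # +1 for the frame written at step 0
echo "expected frames: $frames"
```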


--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==


Re: [gmx-users] Welcome to the "gromacs.org_gmx-users" mailing list

2015-10-29 Thread Mishelle Oña

Hi! I am simulating a polymer of polylactic acid with 30 monomers in a water 
system. To equilibrate the system I have run NVT, NPT, and production dynamics. 
The final run should have 20 000 frames, but when I load it in VMD it has only 
12 186 frames. Also, the confout.gro file that results from the run shows the 
polymer outside the box. I tried to center the polymer. At the end of the 
simulation this message appeared:

Reading file topol.tpr, VERSION 4.5.5 (single precision)

Starting 32 threads

Making 3D domain decomposition 8 x 2 x 2

WARNING: This run will generate roughly 8233 Mb of data

starting mdrun 'UNITED ATOM STRUCTURE FOR MOLECULE 3M9 in water'

1000 steps, 2.0 ps.

NOTE: Turning on dynamic load balancing

Writing final coordinates.

Average load imbalance: 7.6 %
Part of the total run time spent waiting due to load imbalance: 1.7 %
Steps where the load balancing was limited by -rdd, -rcon and/or -dds: X 5 % Y 9 % Z 14 %

Parallel run - timing based on wallclock.

               NODE (s)   Real (s)      (%)
Time:          9229.145   9229.145    100.0
               2h33:49
               (Mnbf/s)   (GFlops)   (ns/day)  (hour/ns)
Performance:   1786.838     66.381    187.233      0.128

gcq#247: "Let's Unzip And Let's Unfold" (Red Hot Chili Peppers)

I don't know if there is any error in the run.

Then I opened the md.log file, and at some steps there was this line:

DD  load balancing is limited by minimum cell size in dimension X Y
DD  step 9998749  vol min/aver 0.625! load imb.: force  6.3%

Please, could anyone give me an idea of what is happening? The previous 
simulation (NPT) doesn't have these messages.
Thanks a lot
Mishelle



[gmx-users] There is no error message but the dynamic don´t show the correct number of frames

2015-10-29 Thread Mishelle Oña

Hi!I am simulating a polimer of Polylactic acid with 30 monomers in a water 
system. For equilibrate the system I have made NVT, NPT and Process dynamics. 
The fiinal dynamic should have 40 000 frames but when I load it in VMD it has 
only 12 186 frames. Also the confout.gro file that result from the dynamic 
shows the polimer out of the box. I tried to center the polimer. At the end of 
the simultation this message appeared:
Reading file topol.tpr, VERSION 4.5.5 (single precision)

Starting 32 threads

Making 3D domain decomposition 8 x 2 x 2

WARNING: This run will generate roughly 8233 Mb of data

starting mdrun 'UNITED ATOM STRUCTURE FOR MOLECULE 3M9 in water'

1000 steps, 2.0 ps.

NOTE: Turning on dynamic load balancing

Writing final coordinates.

 Average load imbalance: 7.6 %
 Part of the total run time spent waiting due to load imbalance: 1.7 %
 Steps where the load balancing was limited by -rdd, -rcon and/or -dds:
 X 5 % Y 9 % Z 14 %

 Parallel run - timing based on wallclock.

               NODE (s)   Real (s)      (%)
       Time:   9229.145   9229.145    100.0
                       2h33:49
               (Mnbf/s)   (GFlops)   (ns/day)  (hour/ns)
Performance:   1786.838     66.381    187.233      0.128

gcq#247: "Let's Unzip And Let's Unfold" (Red Hot Chili Peppers)

I don't know if there is any error in the dynamics.

Then I opened the md.log file and at some steps there were lines like:

DD  load balancing is limited by minimum cell size in dimension X Y
DD  step 9998749  vol min/aver 0.625! load imb.: force  6.3%

Please could anyone help me with an idea of what is happening? The previous
simulation (NPT) doesn't show these messages.
Thanks a lot
Mishelle




  


Re: [gmx-users] There is no error message but the dynamic don´t show the correct number of frames

2015-10-29 Thread Mishelle Oña
Well, I am trying to calculate the solvation free energy of my molecule; I am
following the hands-on tutorial "Solvation free energy of ethanol" by Sander
Pronk. Using trjconv I cut one frame from this trajectory and went through all
the steps of the tutorial. When I ran g_bar to calculate the free energy there
was an error:

WARNING: Some of these results violate the Second Law of Thermodynamics:
         This can be the result of severe undersampling, or (more likely)
         there is something wrong with the simulations.

I am not sure where this error comes from. Could you tell me if I should redo
the dynamics, or what the most suitable response to this error is?
Thanks a lot
Mishelle

> To: gmx-us...@gromacs.org
> From: jalem...@vt.edu
> Date: Thu, 29 Oct 2015 17:48:59 -0400
> Subject: Re: [gmx-users] There is no error message but the dynamic don´t show 
> the correct number of frames
> 
> 
> 
> On 10/29/15 5:46 PM, Mishelle Oña wrote:
> > Hi Justin, Thanks for your reply. I used gmxcheck to verify the trajectory 
> > and it didn´t have errors. I got this:
> > Item        #frames  Timestep (ps)
> > Step          40001     0.5
> > Time          40001     0.5
> > Lambda        40001     0.5
> > Coords        40001     0.5
> > Velocities    40001     0.5
> > Forces            0
> > Box           40001     0.5
> > I am not sure if the "Forces" item is correct or not.
> > Could you tell me why it is 0 ?
> 
> You have nstfout = 0 in your .mdp file.  Only the data you request are saved.
> 
> -Justin
> 
> > Thanks Mishelle
> >> To: gmx-us...@gromacs.org
> >> From: jalem...@vt.edu
> >> Date: Thu, 29 Oct 2015 16:52:57 -0400
> >> Subject: Re: [gmx-users] There is no error message but the dynamic don´t 
> >> show the correct number of frames
> >>
> >>
> >>
> >> On 10/29/15 4:50 PM, Mishelle Oña wrote:
> >>>
> >>> Hi! I am simulating a polymer of polylactic acid with 30 monomers in
> >>> a water system. To equilibrate the system I have run NVT, NPT and
> >>> production dynamics. The final dynamics should have 40 000 frames but
> >>> when I load it in VMD it has only 12 186 frames. Also the confout.gro
> >>> file that results from the dynamics
> >>
> >> VMD probably ran out of memory.  What it thinks is there doesn't 
> >> necessarily
> >> reflect reality.  Use gmxcheck on the trajectory to verify its contents.  
> >> Then
> >> try stripping out waters with trjconv and loading that in VMD.
> >>
> >> -Justin
> >>
> >>> shows the polymer out of the box. I tried to center the polymer. At
> >>> the end of the simulation this message appeared: Reading file
> >>> topol.tpr, VERSION 4.5.5 (single precision)
> >>>
> >>> Starting 32 threads
> >>>
> >>> Making 3D domain decomposition 8 x 2 x 2
> >>>
> >>>
> >>>
> >>> WARNING: This run will generate roughly 8233 Mb of data
> >>>
> >>>
> >>>
> >>> starting mdrun 'UNITED ATOM STRUCTURE FOR MOLECULE 3M9 in water'
> >>>
> >>> 1000 steps, 2.0 ps.
> >>>
> >>>
> >>>
> >>> NOTE: Turning on dynamic load balancing
> >>>
> >>>
> >>>
> >>>
> >>>
> >>> Writing final coordinates.
> >>>
> >>>
> >>>
> >>> Average load imbalance: 7.6 %
> >>>
> >>> Part of the total run time spent waiting due to load imbalance: 1.7 %
> >>>
> >>> Steps where the load balancing was limited by -rdd, -rcon and/or -dds: X 
> >>> 5 %
> >>> Y 9 % Z 14 %
> >>>
> >>>
> >>>
> >>>
> >>>
> >>> Parallel run - timing based on wallclock.
> >>>
> >>>
> >>>
> >>>
> >>> NODE (s)   Real (s)  (%)
> >>>
> >>>
> >>> Time:   9229.145   9229.145 100.0
> >>>
> >>>
> >>> 2h33:49
> >>>
> >>>
> >>> (Mnbf/s)   (GFlops)   (ns/day) (hour/ns)
> >>>
> >>> Performance:  1786.838    66.381   187.233     0.128
> >>>
> >>>
> >>>
> >>> gcq#247: "Let's Unzip And Let's Unfold" (Red Hot Chili Peppers)
> >>>
> >>> I don't know if there is any error in the dynamics.
> >>>
> >>> Then I opened the md.log file and at some steps there were lines like:
> >>>
> >>> DD  load balancing is limited by minimum cell size in dimension X Y
> >>> DD  step 9998749  vol min/aver 0.625! load imb.: force  6.3%
> >>>
> >>> Please could anyone help me with an idea of what is happening? The
> >>> previous simulation (NPT) doesn't show these messages. Thanks a lot
> >>> Mishelle
> >>>
> >>>
> >>>
> >>>
> >>>
> >>
> >> --
> >> ==
> >>
> >> Justin A. Lemkul, Ph.D.
> >> Ruth L. Kirschstein NRSA Postdoctoral Fellow
> >>
> >> Department of Pharmaceutical Sciences
> >> School of Pharmacy
> >> Health Sciences Facility II, Room 629
> >> University of Maryland, Baltimore
> >> 20 Penn St.
> >> Baltimore, MD 21201
> >>
> >> jalem...@outerbanks.umaryland.edu | (410) 706-7441
> >> http://mackerell.umaryland.edu/~jalemkul
> >>
> >> ==
> >> --
> >> Gromacs Users mailing list
> >>
> >> * Please search the archive at 
> >> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!
> >>
> >> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> >>
> >> * For (un)subscribe requests visit
> 

Re: [gmx-users] There is no error message but the dynamic don´t show the correct number of frames

2015-10-29 Thread Justin Lemkul



On 10/29/15 6:08 PM, Mishelle Oña wrote:

Well, I am trying to calculate the solvation free energy of my molecule; I am
following the hands-on tutorial "Solvation free energy of ethanol" by Sander
Pronk. Using trjconv I cut one frame from this trajectory and went through all
the steps of the tutorial. When I ran g_bar to calculate the free energy there
was an error:

WARNING: Some of these results violate the Second Law of Thermodynamics:
         This can be the result of severe undersampling, or (more likely)
         there is something wrong with the simulations.

I am not sure where this error comes from. Could you tell me if I should redo
the dynamics, or what the most suitable response to this error is?


I doubt you'll be able to get a properly converged answer for a polymer of that 
size with these free energy methods.  Without seeing a full .mdp file, there's 
not much to go on.  You should also do normal convergence checks of the dynamics.


-Justin
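The "normal convergence checks" Justin mentions are commonly done with the
GROMACS 4.x analysis tools. A minimal sketch follows; the file names
(topol.tpr, traj.xtc, ener.edr) and the choice of observables are assumptions
for illustration, not commands from this thread. The commands are printed as a
dry run so nothing is executed:

```shell
# Sketch of routine convergence checks (GROMACS 4.x tool names).
# All file names are assumptions; output is a dry-run command listing.
TPR=topol.tpr
XTC=traj.xtc
EDR=ener.edr

RMS_CMD="g_rms -s $TPR -f $XTC -o rmsd.xvg"        # RMSD should plateau
GYR_CMD="g_gyrate -s $TPR -f $XTC -o gyrate.xvg"   # Rg should stop drifting
ENE_CMD="g_energy -f $EDR -o energy.xvg"           # stable means, no drift

# Print the commands; remove the printf indirection to run them for real.
printf '%s\n' "$RMS_CMD" "$GYR_CMD" "$ENE_CMD"
```

If any of these quantities still drifts over the production run, the ensemble
is not converged and free-energy estimates built on it will be unreliable.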


Thanks a lot, Mishelle


To: gmx-us...@gromacs.org
From: jalem...@vt.edu
Date: Thu, 29 Oct 2015 17:48:59 -0400
Subject: Re: [gmx-users] There is no error message but the dynamic don´t show 
the correct number of frames



On 10/29/15 5:46 PM, Mishelle Oña wrote:

Hi Justin, Thanks for your reply. I used gmxcheck to verify the trajectory and
it didn't have errors. I got this:

Item        #frames  Timestep (ps)
Step          40001     0.5
Time          40001     0.5
Lambda        40001     0.5
Coords        40001     0.5
Velocities    40001     0.5
Forces            0
Box           40001     0.5

I am not sure if the "Forces" item is correct or not.
Could you tell me why it is 0?


You have nstfout = 0 in your .mdp file.  Only the data you request are saved.

-Justin


Thanks Mishelle

To: gmx-us...@gromacs.org
From: jalem...@vt.edu
Date: Thu, 29 Oct 2015 16:52:57 -0400
Subject: Re: [gmx-users] There is no error message but the dynamic don´t show 
the correct number of frames



On 10/29/15 4:50 PM, Mishelle Oña wrote:


Hi! I am simulating a polymer of polylactic acid with 30 monomers in a water
system. To equilibrate the system I have run NVT, NPT and production dynamics.
The final dynamics should have 40 000 frames but when I load it in VMD it has
only 12 186 frames. Also the confout.gro file that results from the dynamics


VMD probably ran out of memory.  What it thinks is there doesn't necessarily
reflect reality.  Use gmxcheck on the trajectory to verify its contents.  Then
try stripping out waters with trjconv and loading that in VMD.

-Justin


shows the polymer out of the box. I tried to center the polymer. At the end
of the simulation this message appeared: Reading file topol.tpr, VERSION
4.5.5 (single precision)

Starting 32 threads

Making 3D domain decomposition 8 x 2 x 2



WARNING: This run will generate roughly 8233 Mb of data



starting mdrun 'UNITED ATOM STRUCTURE FOR MOLECULE 3M9 in water'

1000 steps, 2.0 ps.



NOTE: Turning on dynamic load balancing





Writing final coordinates.



Average load imbalance: 7.6 %

Part of the total run time spent waiting due to load imbalance: 1.7 %

Steps where the load balancing was limited by -rdd, -rcon and/or -dds: X 5 %
Y 9 % Z 14 %





Parallel run - timing based on wallclock.




NODE (s)   Real (s)  (%)


Time:   9229.145   9229.145 100.0


2h33:49


(Mnbf/s)   (GFlops)   (ns/day) (hour/ns)

Performance:  1786.838    66.381   187.233     0.128



gcq#247: "Let's Unzip And Let's Unfold" (Red Hot Chili Peppers)

I don't know if there is any error in the dynamics.

Then I opened the md.log file and at some steps there were lines like:

DD  load balancing is limited by minimum cell size in dimension X Y
DD  step 9998749  vol min/aver 0.625! load imb.: force  6.3%

Please could anyone help me with an idea of what is happening? The previous
simulation (NPT) doesn't show these messages. Thanks a lot, Mishelle







--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==





--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201


Re: [gmx-users] There is no error message but the dynamic don´t show the correct number of frames

2015-10-29 Thread Mishelle Oña
I attached the .mdp file I used. Could you tell me if there is a better way to
calculate the free energy of my polymer? And how can I do normal convergence
checks of the dynamics?
Thanks, Mishelle

> To: gmx-us...@gromacs.org
> From: jalem...@vt.edu
> Date: Thu, 29 Oct 2015 18:11:20 -0400
> Subject: Re: [gmx-users] There is no error message but the dynamic don´t show 
> the correct number of frames
> 
> 
> 
> On 10/29/15 6:08 PM, Mishelle Oña wrote:
> > Well, I am trying to calculate the Solvation free energy of my molecule, I 
> > am following the Hands-on tutorial
> > Solvation free energy of ethanol  of Sander Pronk. Using trjconv I cut one 
> > frame from this trajectory and went through all the steps of the tutorial.
> > When I ran g_bar to calculate the free energy there was an error:
> >
> > WARNING: Some of these results violate the Second Law of Thermodynamics:
> >          This can be the result of severe undersampling, or (more likely)
> >          there is something wrong with the simulations.
> >
> > I am not sure where this error comes from. Could you tell me if I should
> > redo the dynamics, or what the most suitable response to this error is?
> 
> I doubt you'll be able to get a properly converged answer for a polymer of 
> that 
> size with these free energy methods.  Without seeing a full .mdp file, 
> there's 
> not much to go on.  You should also do normal convergence checks of the 
> dynamics.
> 
> -Justin
> 
> > Thanks a lot, Mishelle
> >
> >> To: gmx-us...@gromacs.org
> >> From: jalem...@vt.edu
> >> Date: Thu, 29 Oct 2015 17:48:59 -0400
> >> Subject: Re: [gmx-users] There is no error message but the dynamic don´t 
> >> show the correct number of frames
> >>
> >>
> >>
> >> On 10/29/15 5:46 PM, Mishelle Oña wrote:
> >>> Hi Justin, Thanks for your reply. I used gmxcheck to verify the 
> >>> trajectory and it didn´t have errors. I got this:
> >>> Item        #frames  Timestep (ps)
> >>> Step          40001     0.5
> >>> Time          40001     0.5
> >>> Lambda        40001     0.5
> >>> Coords        40001     0.5
> >>> Velocities    40001     0.5
> >>> Forces            0
> >>> Box           40001     0.5
> >>> I am not sure if the "Forces" item is correct or not.
> >>> Could you tell me why it is 0 ?
> >>
> >> You have nstfout = 0 in your .mdp file.  Only the data you request are 
> >> saved.
> >>
> >> -Justin
> >>
> >>> Thanks Mishelle
>  To: gmx-us...@gromacs.org
>  From: jalem...@vt.edu
>  Date: Thu, 29 Oct 2015 16:52:57 -0400
>  Subject: Re: [gmx-users] There is no error message but the dynamic don´t 
>  show the correct number of frames
> 
> 
> 
>  On 10/29/15 4:50 PM, Mishelle Oña wrote:
> >
> > Hi! I am simulating a polymer of polylactic acid with 30 monomers in a
> > water system. To equilibrate the system I have run NVT, NPT and
> > production dynamics. The final dynamics should have 40 000 frames but
> > when I load it in VMD it has only 12 186 frames. Also the confout.gro
> > file that results from the dynamics
> 
>  VMD probably ran out of memory.  What it thinks is there doesn't 
>  necessarily
>  reflect reality.  Use gmxcheck on the trajectory to verify its contents. 
>   Then
>  try stripping out waters with trjconv and loading that in VMD.
> 
>  -Justin
> 
> > shows the polymer out of the box. I tried to center the polymer. At the
> > end of the simulation this message appeared: Reading file topol.tpr,
> > VERSION 4.5.5 (single precision)
> >
> > Starting 32 threads
> >
> > Making 3D domain decomposition 8 x 2 x 2
> >
> >
> >
> > WARNING: This run will generate roughly 8233 Mb of data
> >
> >
> >
> > starting mdrun 'UNITED ATOM STRUCTURE FOR MOLECULE 3M9 in water'
> >
> > 1000 steps, 2.0 ps.
> >
> >
> >
> > NOTE: Turning on dynamic load balancing
> >
> >
> >
> >
> >
> > Writing final coordinates.
> >
> >
> >
> > Average load imbalance: 7.6 %
> >
> > Part of the total run time spent waiting due to load imbalance: 1.7 %
> >
> > Steps where the load balancing was limited by -rdd, -rcon and/or -dds: 
> > X 5 %
> > Y 9 % Z 14 %
> >
> >
> >
> >
> >
> > Parallel run - timing based on wallclock.
> >
> >
> >
> >
> > NODE (s)   Real (s)  (%)
> >
> >
> > Time:   9229.145   9229.145 100.0
> >
> >
> > 2h33:49
> >
> >
> > (Mnbf/s)   (GFlops)   (ns/day) (hour/ns)
> >
> > Performance:  1786.838    66.381   187.233     0.128
> >
> >
> >
> > gcq#247: "Let's Unzip And Let's Unfold" (Red Hot Chili Peppers)
> >
> > I don't know if there is any error in the dynamics.
> >
> > Then I opened the md.log file and at some steps there were lines like:
> >
> > DD  load balancing is limited by minimum cell size in dimension X Y
> > DD  

Re: [gmx-users] There is no error message but the dynamic don´t show the correct number of frames

2015-10-29 Thread Justin Lemkul



On 10/29/15 6:21 PM, Mishelle Oña wrote:

I attached the .mdp file I used. Could you tell me if there is a better way to


The mailing list does not accept attachments.


calculate the free energy of my polymer? And how can I do normal convergence


I would run a much longer simulation (20 ns certainly isn't enough to sample the 
conformational ensemble of such a polymer) and look into MM/PBSA type calculations.
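The "much longer simulation" can be set up by extending the existing run from
its checkpoint. A hedged sketch with GROMACS 4.5-era tools; the 80 ns figure
and all file names are illustrative assumptions, not values from this thread:

```shell
# Extending an existing GROMACS 4.5 run from its checkpoint (dry run).
# tpbconv -extend takes the additional simulation time in ps; 80000 ps
# (80 ns) is only an illustrative value, as are the file names.
EXTEND_PS=80000
EXT_CMD="tpbconv -s topol.tpr -extend $EXTEND_PS -o topol_ext.tpr"
RUN_CMD="mdrun -s topol_ext.tpr -cpi state.cpt -deffnm md_ext"

# Printed rather than executed; drop the printf indirection to run.
printf '%s\n' "$EXT_CMD" "$RUN_CMD"
```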



checks of the dynamics?


What you analyze depends on what you're after.

-Justin




To: gmx-us...@gromacs.org From: jalem...@vt.edu Date: Thu, 29 Oct 2015
18:11:20 -0400 Subject: Re: [gmx-users] There is no error message but the
dynamic don´t show the correct number of frames



On 10/29/15 6:08 PM, Mishelle Oña wrote:

Well, I am trying to calculate the solvation free energy of my molecule;
I am following the hands-on tutorial "Solvation free energy of ethanol" by
Sander Pronk. Using trjconv I cut one frame from this trajectory and went
through all the steps of the tutorial. When I ran g_bar to calculate the
free energy there was an error:

WARNING: Some of these results violate the Second Law of Thermodynamics:
         This can be the result of severe undersampling, or (more likely)
         there is something wrong with the simulations.

I am not sure where this error comes from. Could you tell me if I should
redo the dynamics, or what the most suitable response to this error is?


I doubt you'll be able to get a properly converged answer for a polymer of
that size with these free energy methods.  Without seeing a full .mdp file,
there's not much to go on.  You should also do normal convergence checks of
the dynamics.

-Justin


Thanks a lot, Mishelle


To: gmx-us...@gromacs.org From: jalem...@vt.edu Date: Thu, 29 Oct 2015
17:48:59 -0400 Subject: Re: [gmx-users] There is no error message but
the dynamic don´t show the correct number of frames



On 10/29/15 5:46 PM, Mishelle Oña wrote:

Hi Justin, Thanks for your reply. I used gmxcheck to verify the
trajectory and it didn't have errors. I got this:

Item        #frames  Timestep (ps)
Step          40001     0.5
Time          40001     0.5
Lambda        40001     0.5
Coords        40001     0.5
Velocities    40001     0.5
Forces            0
Box           40001     0.5

I am not sure if the "Forces" item is correct or not.
Could you tell me why it is 0?


You have nstfout = 0 in your .mdp file.  Only the data you request are
saved.

-Justin


Thanks Mishelle

To: gmx-us...@gromacs.org From: jalem...@vt.edu Date: Thu, 29 Oct
2015 16:52:57 -0400 Subject: Re: [gmx-users] There is no error
message but the dynamic don´t show the correct number of frames



On 10/29/15 4:50 PM, Mishelle Oña wrote:


Hi! I am simulating a polymer of polylactic acid with 30 monomers
in a water system. To equilibrate the system I have run NVT, NPT
and production dynamics. The final dynamics should have 40 000
frames but when I load it in VMD it has only 12 186 frames. Also
the confout.gro file that results from the dynamics


VMD probably ran out of memory.  What it thinks is there doesn't
necessarily reflect reality.  Use gmxcheck on the trajectory to
verify its contents.  Then try stripping out waters with trjconv
and loading that in VMD.

-Justin


shows the polymer out of the box. I tried to center the polymer.
At the end of the simulation this message appeared: Reading file
topol.tpr, VERSION 4.5.5 (single precision)

Starting 32 threads

Making 3D domain decomposition 8 x 2 x 2



WARNING: This run will generate roughly 8233 Mb of data



starting mdrun 'UNITED ATOM STRUCTURE FOR MOLECULE 3M9 in water'

1000 steps, 2.0 ps.



NOTE: Turning on dynamic load balancing





Writing final coordinates.



Average load imbalance: 7.6 %

Part of the total run time spent waiting due to load imbalance:
1.7 %

Steps where the load balancing was limited by -rdd, -rcon and/or
-dds: X 5 % Y 9 % Z 14 %





Parallel run - timing based on wallclock.




NODE (s)   Real (s)  (%)


Time:   9229.145   9229.145 100.0


2h33:49


(Mnbf/s)   (GFlops)   (ns/day) (hour/ns)

Performance:  1786.838    66.381   187.233     0.128



gcq#247: "Let's Unzip And Let's Unfold" (Red Hot Chili Peppers)

I don't know if there is any error in the dynamics.

Then I opened the md.log file and at some steps there were lines like:

DD  load balancing is limited by minimum cell size in dimension X Y
DD  step 9998749  vol min/aver 0.625! load imb.: force  6.3%

Please could anyone help me with an idea of what is happening? The
previous simulation (NPT) doesn't show these messages. Thanks a lot,
Mishelle








Re: [gmx-users] There is no error message but the dynamic don´t show the correct number of frames

2015-10-29 Thread Mishelle Oña
I am looking for the solvation free energy. Could the g_mmpbsa tool help me
with it?

> To: gmx-us...@gromacs.org
> From: jalem...@vt.edu
> Date: Thu, 29 Oct 2015 18:38:58 -0400
> Subject: Re: [gmx-users] There is no error message but the dynamic don´t show 
> the correct number of frames
> 
> 
> 
> On 10/29/15 6:21 PM, Mishelle Oña wrote:
> > I attached the .mdp file I used. Could you tell me if there is a better
> > way to
> 
> The mailing list does not accept attachments.
> 
> > calculate the free energy of my polymer? And how can I do normal convergence
> 
> I would run a much longer simulation (20 ns certainly isn't enough to sample 
> the 
> conformational ensemble of such a polymer) and look into MM/PBSA type 
> calculations.
> 
> > checks of the dynamics?
> 
> What you analyze depends on what you're after.
> 
> -Justin
> 
> >
> >> To: gmx-us...@gromacs.org From: jalem...@vt.edu Date: Thu, 29 Oct 2015
> >> 18:11:20 -0400 Subject: Re: [gmx-users] There is no error message but the
> >> dynamic don´t show the correct number of frames
> >>
> >>
> >>
> >> On 10/29/15 6:08 PM, Mishelle Oña wrote:
> >>> Well, I am trying to calculate the solvation free energy of my
> >>> molecule; I am following the hands-on tutorial "Solvation free energy
> >>> of ethanol" by Sander Pronk. Using trjconv I cut one frame from this
> >>> trajectory and went through all the steps of the tutorial. When I ran
> >>> g_bar to calculate the free energy there was an error:
> >>>
> >>> WARNING: Some of these results violate the Second Law of Thermodynamics:
> >>>          This can be the result of severe undersampling, or (more likely)
> >>>          there is something wrong with the simulations.
> >>>
> >>> I am not sure where this error comes from. Could you tell me if I should
> >>> redo the dynamics, or what the most suitable response to this error is?
> >>
> >> I doubt you'll be able to get a properly converged answer for a polymer of
> >> that size with these free energy methods.  Without seeing a full .mdp file,
> >> there's not much to go on.  You should also do normal convergence checks of
> >> the dynamics.
> >>
> >> -Justin
> >>
> >>> Thanks a lot, Mishelle
> >>>
>  To: gmx-us...@gromacs.org From: jalem...@vt.edu Date: Thu, 29 Oct 2015
>  17:48:59 -0400 Subject: Re: [gmx-users] There is no error message but
>  the dynamic don´t show the correct number of frames
> 
> 
> 
>  On 10/29/15 5:46 PM, Mishelle Oña wrote:
> > Hi Justin, Thanks for your reply. I used gmxcheck to verify the
> > trajectory and it didn't have errors. I got this:
> >
> > Item        #frames  Timestep (ps)
> > Step          40001     0.5
> > Time          40001     0.5
> > Lambda        40001     0.5
> > Coords        40001     0.5
> > Velocities    40001     0.5
> > Forces            0
> > Box           40001     0.5
> >
> > I am not sure if the "Forces" item is correct or not.
> > Could you tell me why it is 0?
> 
>  You have nstfout = 0 in your .mdp file.  Only the data you request are
>  saved.
> 
>  -Justin
> 
> > Thanks Mishelle
> >> To: gmx-us...@gromacs.org From: jalem...@vt.edu Date: Thu, 29 Oct
> >> 2015 16:52:57 -0400 Subject: Re: [gmx-users] There is no error
> >> message but the dynamic don´t show the correct number of frames
> >>
> >>
> >>
> >> On 10/29/15 4:50 PM, Mishelle Oña wrote:
> >>>
> >>> Hi! I am simulating a polymer of polylactic acid with 30 monomers
> >>> in a water system. To equilibrate the system I have run NVT,
> >>> NPT and production dynamics. The final dynamics should have 40 000
> >>> frames but when I load it in VMD it has only 12 186 frames. Also
> >>> the confout.gro file that results from the dynamics
> >>
> >> VMD probably ran out of memory.  What it thinks is there doesn't
> >> necessarily reflect reality.  Use gmxcheck on the trajectory to
> >> verify its contents.  Then try stripping out waters with trjconv
> >> and loading that in VMD.
> >>
> >> -Justin
> >>
> >>> shows the polymer out of the box. I tried to center the polymer.
> >>> At the end of the simulation this message appeared: Reading file
> >>> topol.tpr, VERSION 4.5.5 (single precision)
> >>>
> >>> Starting 32 threads
> >>>
> >>> Making 3D domain decomposition 8 x 2 x 2
> >>>
> >>>
> >>>
> >>> WARNING: This run will generate roughly 8233 Mb of data
> >>>
> >>>
> >>>
> >>> starting mdrun 'UNITED ATOM STRUCTURE FOR MOLECULE 3M9 in water'
> >>>
> >>> 1000 steps, 2.0 ps.
> >>>
> >>>
> >>>
> >>> NOTE: Turning on dynamic load balancing
> >>>
> >>>
> >>>
> >>>
> >>>
> >>> Writing final coordinates.
> >>>
> >>>
> >>>
> >>> Average load imbalance: 7.6 %
> >>>
> >>> Part of the total run time spent waiting due to load imbalance:
> >>> 1.7 %
> >>>
> >>> Steps where the load balancing was limited by -rdd, -rcon and/or
> >>> -dds: X 5 % Y 9 % Z 14 %
> 

Re: [gmx-users] There is no error message but the dynamic don´t show the correct number of frames

2015-10-29 Thread Mishelle Oña
Hi Justin, Thanks for your reply. I used gmxcheck to verify the trajectory and
it didn't have errors. I got this:

Item        #frames  Timestep (ps)
Step          40001     0.5
Time          40001     0.5
Lambda        40001     0.5
Coords        40001     0.5
Velocities    40001     0.5
Forces            0
Box           40001     0.5

I am not sure if the "Forces" item is correct or not.
Could you tell me why it is 0?
Thanks, Mishelle
> To: gmx-us...@gromacs.org
> From: jalem...@vt.edu
> Date: Thu, 29 Oct 2015 16:52:57 -0400
> Subject: Re: [gmx-users] There is no error message but the dynamic don´t show 
> the correct number of frames
> 
> 
> 
> On 10/29/15 4:50 PM, Mishelle Oña wrote:
> >
> > Hi! I am simulating a polymer of polylactic acid with 30 monomers in a
> > water system. To equilibrate the system I have run NVT, NPT and
> > production dynamics. The final dynamics should have 40 000 frames but
> > when I load it in VMD it has only 12 186 frames. Also the confout.gro
> > file that results from the dynamics
> 
> VMD probably ran out of memory.  What it thinks is there doesn't necessarily 
> reflect reality.  Use gmxcheck on the trajectory to verify its contents.  
> Then 
> try stripping out waters with trjconv and loading that in VMD.
> 
> -Justin
> 
> > shows the polymer out of the box. I tried to center the polymer. At the
> > end of the simulation this message appeared: Reading file topol.tpr,
> > VERSION 4.5.5 (single precision)
> >
> > Starting 32 threads
> >
> > Making 3D domain decomposition 8 x 2 x 2
> >
> >
> >
> > WARNING: This run will generate roughly 8233 Mb of data
> >
> >
> >
> > starting mdrun 'UNITED ATOM STRUCTURE FOR MOLECULE 3M9 in water'
> >
> > 1000 steps, 2.0 ps.
> >
> >
> >
> > NOTE: Turning on dynamic load balancing
> >
> >
> >
> >
> >
> > Writing final coordinates.
> >
> >
> >
> > Average load imbalance: 7.6 %
> >
> > Part of the total run time spent waiting due to load imbalance: 1.7 %
> >
> > Steps where the load balancing was limited by -rdd, -rcon and/or -dds: X 5 %
> > Y 9 % Z 14 %
> >
> >
> >
> >
> >
> > Parallel run - timing based on wallclock.
> >
> >
> >
> >
> > NODE (s)   Real (s)  (%)
> >
> >
> > Time:   9229.145   9229.145 100.0
> >
> >
> > 2h33:49
> >
> >
> > (Mnbf/s)   (GFlops)   (ns/day) (hour/ns)
> >
> > Performance:  1786.838    66.381   187.233     0.128
> >
> >
> >
> > gcq#247: "Let's Unzip And Let's Unfold" (Red Hot Chili Peppers)
> >
> > I don't know if there is any error in the dynamics.
> >
> > Then I opened the md.log file and at some steps there were lines like:
> >
> > DD  load balancing is limited by minimum cell size in dimension X Y
> > DD  step 9998749  vol min/aver 0.625! load imb.: force  6.3%
> >
> > Please could anyone help me with an idea of what is happening? The
> > previous simulation (NPT) doesn't show these messages. Thanks a lot,
> > Mishelle
> >
> >
> >
> >
> >
> 
> -- 
> ==
> 
> Justin A. Lemkul, Ph.D.
> Ruth L. Kirschstein NRSA Postdoctoral Fellow
> 
> Department of Pharmaceutical Sciences
> School of Pharmacy
> Health Sciences Facility II, Room 629
> University of Maryland, Baltimore
> 20 Penn St.
> Baltimore, MD 21201
> 
> jalem...@outerbanks.umaryland.edu | (410) 706-7441
> http://mackerell.umaryland.edu/~jalemkul
> 
> ==
  


Re: [gmx-users] There is no error message but the dynamic don´t show the correct number of frames

2015-10-29 Thread Justin Lemkul



On 10/29/15 5:46 PM, Mishelle Oña wrote:

Hi Justin, Thanks for your reply. I used gmxcheck to verify the trajectory and
it didn't have errors. I got this:

Item        #frames  Timestep (ps)
Step          40001     0.5
Time          40001     0.5
Lambda        40001     0.5
Coords        40001     0.5
Velocities    40001     0.5
Forces            0
Box           40001     0.5

I am not sure if the "Forces" item is correct or not.
Could you tell me why it is 0?


You have nstfout = 0 in your .mdp file.  Only the data you request are saved.

-Justin
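For reference, these are the .mdp output-control settings involved; the
non-zero intervals below are illustrative values, not taken from Mishelle's
file:

```
nstxout = 1000   ; steps between writing coordinates
nstvout = 1000   ; steps between writing velocities
nstfout = 0      ; steps between writing forces; 0 means never written,
                 ; which is why gmxcheck reports 0 frames of forces
```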


Thanks Mishelle

To: gmx-us...@gromacs.org
From: jalem...@vt.edu
Date: Thu, 29 Oct 2015 16:52:57 -0400
Subject: Re: [gmx-users] There is no error message but the dynamic don´t show 
the correct number of frames



On 10/29/15 4:50 PM, Mishelle Oña wrote:


Hi! I am simulating a polymer of polylactic acid with 30 monomers in a water
system. To equilibrate the system I have run NVT, NPT and production dynamics.
The final dynamics should have 40 000 frames but when I load it in VMD it has
only 12 186 frames. Also the confout.gro file that results from the dynamics


VMD probably ran out of memory.  What it thinks is there doesn't necessarily
reflect reality.  Use gmxcheck on the trajectory to verify its contents.  Then
try stripping out waters with trjconv and loading that in VMD.

-Justin
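Justin's two suggestions can be sketched as shell commands. This is a hedged
dry-run sketch using GROMACS 4.x tool names; the index-group name "Polymer"
and the file names are assumptions for illustration:

```shell
# Dry-run sketch: verify the trajectory, then strip/re-center for VMD.
# "Polymer" is an assumed index-group name; trjconv with -center prompts
# for a centering group and then an output group, supplied via printf.
TPR=topol.tpr
XTC=traj.xtc

CHECK_CMD="gmxcheck -f $XTC"
CENTER_CMD="printf 'Polymer\nPolymer\n' | trjconv -s $TPR -f $XTC -pbc mol -center -o traj_poly.xtc"

# Printed rather than executed; drop the printf indirection to run.
printf '%s\n' "$CHECK_CMD" "$CENTER_CMD"
```

Comparing gmxcheck's frame count against VMD's loaded count shows whether the
trajectory or the visualization is the problem.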


shows the polymer out of the box. I tried to center the polymer. At the end
of the simulation this message appeared: Reading file topol.tpr, VERSION
4.5.5 (single precision)

Starting 32 threads

Making 3D domain decomposition 8 x 2 x 2



WARNING: This run will generate roughly 8233 Mb of data



starting mdrun 'UNITED ATOM STRUCTURE FOR MOLECULE 3M9 in water'

1000 steps, 2.0 ps.



NOTE: Turning on dynamic load balancing





Writing final coordinates.



Average load imbalance: 7.6 %

Part of the total run time spent waiting due to load imbalance: 1.7 %

Steps where the load balancing was limited by -rdd, -rcon and/or -dds: X 5 %
Y 9 % Z 14 %





Parallel run - timing based on wallclock.




NODE (s)   Real (s)  (%)


Time:   9229.145   9229.145 100.0


2h33:49


(Mnbf/s)   (GFlops)   (ns/day) (hour/ns)

Performance:  1786.838    66.381   187.233     0.128



gcq#247: "Let's Unzip And Let's Unfold" (Red Hot Chili Peppers)

I don't know if there is any error in the dynamics.

Then I opened the md.log file and at some steps there were lines like:

DD  load balancing is limited by minimum cell size in dimension X Y
DD  step 9998749  vol min/aver 0.625! load imb.: force  6.3%

Please could anyone help me with an idea of what is happening? The previous
simulation (NPT) doesn't show these messages. Thanks a lot, Mishelle







--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==
--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.







Re: [gmx-users] multiple processes of a gromacs tool requiring user action at runtime on one Cray XC30 node using aprun

2015-10-29 Thread Mark Abraham
Ok. I misunderstood your intention to be to run a single calculation with
three MPI domains sharing its work. To simply get three independent non-MPI
calculations you could indeed use your approach. It sounds like you ran
into some behaviour where only one of the three calculations got the stdin,
because that's a more normal thing when launching actual parallel
calculations. A tool like GNU parallel might be closer to what you need.

Mark

On Thu, 29 Oct 2015 19:11 Vedat Durmaz  wrote:

>
> after several days of trial and error, i was told only today that our
> HPC indeed has one cluster/queue (40 core nodes SMP) that does not
> require the use of aprun/mprun. so, after having compiled all the tools
> again on that cluster, i am finally able to execute many processes per
> node.
>
> (however, we were not able to remedy the other issue regarding "aprun"
> in between. nevertheless, i'm fine now.)
>
> thanks for your help guys and good evening
>
> vedat
>
>
>
> Am 29.10.2015 um 12:53 schrieb Rashmi:
> > Hi,
> >
> > As written on the website, g_mmpbsa does not directly support MPI; it does
> > not include any code concerning OpenMP and MPI. However, we have tried to
> > interface with the MPI and OpenMP functionality of APBS.
> >
> > One may use g_mmpbsa with MPI as follows: (1) allocate the processors
> > through the queue management system, (2) define the APBS environment
> > variable (export APBS="mpirun -np 8 apbs") including all required flags,
> > then (3) start g_mmpbsa directly, without using mpirun (or any similar
> > program). If the queue management system specifically requires aprun/mpirun
> > to execute a program, g_mmpbsa might not work.
> >
> > One may use g_mmpbsa with OpenMP as follows: (1) allocate the threads
> > through the queue management system, (2) define the OMP_NUM_THREADS
> > variable with the allocated number of threads, and (3) execute g_mmpbsa.
> >
> > We have not tested simultaneous use of both MPI and OpenMP, so we do not
> > know whether it will work.
> >
> > Concerning standard input for g_mmpbsa: if echo or "<" redirection is not
> > working, one may try using a file as follows:
> >
> > export OMP_NUM_THREADS=8
> >
> > aprun -n 1 -N 1 -d 8 g_mmpbsa -f traj.xtc -s topol.tpr -n index.ndx -i
> > mmpbsa.mdp < input_index
> > Here, input_index contains one group number per line, and the last line
> > should be empty.
> > $ cat input_index
> > 1
> > 13
> >
> >
> > Concerning the 1800 directories, you may write a shell script to automate
> > job submission: enter each directory, start a g_mmpbsa process (or submit a
> > job script), and then move on to the next directory.
> >
> > Hope this information is helpful.
> >
> >
> > Thanks.
> >
> >
> >
> > On Thu, Oct 29, 2015 at 12:01 PM, Vedat Durmaz  wrote:
> >
> >> hi again,
> >>
> >> 3 answers are hidden somewhere below ..
> >>
> >>
> >> Am 28.10.2015 um 15:45 schrieb Mark Abraham:
> >>
> >>> Hi,
> >>>
> >>> On Wed, Oct 28, 2015 at 3:19 PM Vedat Durmaz  wrote:
> >>>
> >>>
>  Am 27.10.2015 um 23:57 schrieb Mark Abraham:
> 
> > Hi,
> >
> >
> > On Tue, Oct 27, 2015 at 11:39 PM Vedat Durmaz  wrote:
> >
> > hi mark,
> >> many thanks. but can you be a little more precise? the author's only
> >> hint regarding mpi is on this site
> >> "http://rashmikumari.github.io/g_mmpbsa/How-to-Run.html" and
> related
> >> to
> >> APBS. g_mmpbsa itself doesn't understand openmp/mpi afaik.
> >>
> >> the error i'm observing is occurring pretty much before apbs is
> >> started.
> >> to be honest, i can't see any link to my initial question ...
> >>
> >> It has the sentence "Although g_mmpbsa does not support mpirun..."
> > aprun
> >
>  is
> 
> > a form of mpirun, so I assumed you knew that what you were trying was
> > actually something that could work, which would therefore have to be
> > with
> > the APBS back end. The point of what it says there is that you don't
> run
> > g_mmpbsa with aprun, you tell it how to run APBS with aprun. This
> just
> > avoids the problem entirely because your redirected/interactive input
> >
>  goes
> 
> > to a single g_mmpbsa as normal, which then launches APBS with MPI
> >
>  support.
> 
> > Tool authors need to actively write code to be useful with MPI, so
> > unless
> > you know what you are doing is supposed to work with MPI because they
> > say
> > it works, don't try.
> >
> > Mark
> >
>  you are right. it's apbs which ought to run in parallel mode. of
> course,
>  i can set the variable 'export APBS="mpirun -np 8 apbs"' [or set
> 'export
>  OMP_NUM_THREADS=8'] if i want to split a 24 cores-node to let's say 3
>  independent g_mmpbsa processes. the problem is that i must start
>  g_mmpbsa itself with aprun (in the script run_mmpbsa.sh).
> 

[gmx-users] Domain decomposition error

2015-10-29 Thread badamkhatan togoldor
Dear GMX Users,
I am simulating the free energy of a protein chain_A in water in parallel.
Then I got a domain decomposition error in mdrun:

Will use 15 particle-particle and 9 PME only ranks
This is a guess, check the performance at the end of the log file

---
Program mdrun_mpi, VERSION 5.1.1-dev-20150819-f10f108
Source code file: /tmp/asillanp/gromacs/src/gromacs/domdec/domdec.cpp, line: 6969

Fatal error:
There is no domain decomposition for 15 ranks that is compatible with the
given box and a minimum cell size of 5.68559 nm
Change the number of ranks or mdrun option -rdd
Look in the log file for details on the domain decomposition

Then I looked through the .log file; there were 24 ranks. So how can I change
the ranks? What's wrong here? Or is something wrong in my .mdp file, or in the
construction of my parallel run script? I am using just 2 nodes with 24 CPUs,
and I don't think my system is too small (one protein chain, around 8000
solvent molecules and a few ions).

Initializing Domain Decomposition on 24 ranks
Dynamic load balancing: off
Will sort the charge groups at every domain (re)decomposition
Initial maximum inter charge-group distances:
    two-body bonded interactions: 5.169 nm, LJC Pairs NB, atoms 81 558
  multi-body bonded interactions: 0.404 nm, Ryckaert-Bell., atoms 521 529
Minimum cell size due to bonded interactions: 5.686 nm
Maximum distance for 13 constraints, at 120 deg. angles, all-trans: 0.218 nm
Estimated maximum distance required for P-LINCS: 0.218 nm
Guess for relative PME load: 0.38
Will use 15 particle-particle and 9 PME only ranks
This is a guess, check the performance at the end of the log file
Using 9 separate PME ranks, as guessed by mdrun
Optimizing the DD grid for 15 cells with a minimum initial size of 5.686 nm
The maximum allowed number of cells is: X 1 Y 1 Z 0
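The numbers in that log make the failure mechanical: mdrun can only place whole domain-decomposition cells of at least the minimum size along each box edge, and with a 5.686 nm minimum almost nothing fits. A small sketch of that constraint (the box edge lengths below are assumptions, not values from the log):

```python
import math

# Assumed box edge lengths in nm (the actual box is not shown in the log).
box = (6.0, 6.0, 6.0)
cell_min = 5.686  # nm, minimum cell size due to bonded interactions (from the log)

# Maximum number of whole DD cells that fit along each dimension:
max_cells = [math.floor(edge / cell_min) for edge in box]
print(max_cells)  # with a 6 nm box: [1, 1, 1]
```

With at most one cell per dimension, no layout of 15 particle-particle ranks is possible, which is exactly the fatal error; the usual remedies are fewer ranks, a larger box, or dealing with the 5.169 nm two-body interaction that drives the minimum cell size.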
Can anybody help with this issue?

Thanks,
Khatnaa

[gmx-users] Group Protein_LIG referenced in the .mdp file was not found in the index file.:Protein-Ligand MD

2015-10-29 Thread Nikhil Maroli
Dear All,
I am following the protein-ligand tutorial with my own protein and ligand.
When I run grompp I get this error:


Group Protein_LIG referenced in the .mdp file was not found in the index
file.
Group names must match either [moleculetype] names or custom index group
names, in which case you must supply an index file to the '-n' option
of grompp.

My index file contains a LIG entry, and so does nvt.mdp.
Should I change the index entry to something else, or my .mdp file?
I tried changing it to Non-protein and others, but it didn't work.


my topol.top contains this information:

[ molecules ]
; Compound        #mols
Protein_chain_A 1
LIG 1
SOL 18612
NA  6


this is the .mdp entry

; Temperature coupling
tcoupl  = V-rescale ; modified Berendsen thermostat
tc-grps = Protein_LIG Water_and_ions    ; two coupling groups - more
accurate
tau_t   = 0.1   0.1 ; time constant, in ps
ref_t   = 300   300 ; reference temperature, one
for each group, in K
; Pressure coupling


thanks
-- 
Regards,
Nikhil Maroli


Re: [gmx-users] multiple processes of a gromacs tool requiring user action at runtime on one Cray XC30 node using aprun

2015-10-29 Thread Vedat Durmaz


hi again,

3 answers are hidden somewhere below ..

Am 28.10.2015 um 15:45 schrieb Mark Abraham:

Hi,

On Wed, Oct 28, 2015 at 3:19 PM Vedat Durmaz  wrote:



Am 27.10.2015 um 23:57 schrieb Mark Abraham:

Hi,


On Tue, Oct 27, 2015 at 11:39 PM Vedat Durmaz  wrote:


hi mark,

many thanks. but can you be a little more precise? the author's only
hint regarding mpi is on this site
"http://rashmikumari.github.io/g_mmpbsa/How-to-Run.html" and related to
APBS. g_mmpbsa itself doesn't understand openmp/mpi afaik.

the error i'm observing is occurring pretty much before apbs is started.
to be honest, i can't see any link to my initial question ...


It has the sentence "Although g_mmpbsa does not support mpirun..." aprun

is

a form of mpirun, so I assumed you knew that what you were trying was
actually something that could work, which would therefore have to be with
the APBS back end. The point of what it says there is that you don't run
g_mmpbsa with aprun, you tell it how to run APBS with aprun. This just
avoids the problem entirely because your redirected/interactive input

goes

to a single g_mmpbsa as normal, which then launches APBS with MPI

support.

Tool authors need to actively write code to be useful with MPI, so unless
you know what you are doing is supposed to work with MPI because they say
it works, don't try.

Mark

you are right. it's apbs which ought to run in parallel mode. of course,
i can set the variable 'export APBS="mpirun -np 8 apbs"' [or set 'export
OMP_NUM_THREADS=8'] if i want to split a 24 cores-node to let's say 3
independent g_mmpbsa processes. the problem is that i must start
g_mmpbsa itself with aprun (in the script run_mmpbsa.sh).


No. Your job runs a shell script on your compute node. It can do anything
it likes, but it would make sense to run something in parallel at some
point. You need to build a g_mmpbsa that you can just run in a shell script
that echoes in the input (try that on its own first). Then you use the
above approach so that the single process that is g_mmpbsa does the call to
aprun (which is the cray mpirun) to run APBS in MPI mode.

It is likely that even if you run g_mmpbsa with aprun and solve the input
issue somewhow, the MPI runtime will refuse to start the child APBS with
aprun, because nesting is typically unsupported (and your current command
lines haven't given it enough information to do a good job even if it is
supported).
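Mark's scheme, as a sketch (the file names, group numbers, and APBS rank count are assumptions): the single g_mmpbsa process is started as a plain serial program, so it reads stdin normally, and only the APBS it spawns goes through aprun.

```shell
# The APBS environment variable tells g_mmpbsa how to launch APBS;
# g_mmpbsa itself is NOT started with aprun, so stdin redirection works.
export APBS="aprun -n 8 apbs"

if command -v g_mmpbsa >/dev/null 2>&1; then
    # Feed the two interactive group choices (assumed: 1 = protein, 13 = ligand):
    printf '1\n13\n' | g_mmpbsa -f traj.xtc -s topol.tpr -n index.ndx -i mmpbsa.mdp
else
    echo "g_mmpbsa not on PATH; APBS launch command set for reference: $APBS"
fi
```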


yes, i've encountered issues with nested aprun calls. so this will 
hardly work i guess.





i absolutely
cannot see any other way of running apbs when using it out of g_mmpbs.
hence, i need to run

aprun -n 3 -N 3 -cc 0-7:8-15:16-23 ../run_mmpbsa.sh


This likely starts three copies of g_mmpbsa each of which expect terminal
input, which maybe you can teach aprun to manage, but then each g_mmpbsa
will then do its own APBS and this is completely not what you want.


hmm, to be honest, i would say this is exactly what i'm trying to 
achieve. isn't it? i want 3 independent g_mmpbsa runs each of which 
executed in another directory with its own APBS. by the way, all 
together i have 1800 such directories each containing another trajectory.


if someone is ever (within the next 20 hours!) able to figure out a 
solution for this purpose, i would be absolutely pleased.
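For the many-directories case, a plain shell loop is one way to drive the independent runs sequentially, without aprun fanning out stdin. A sketch with a placeholder command and made-up directory names; a real version would start g_mmpbsa (or submit one job script per directory) instead of the echo:

```shell
# Create stand-in directories so the loop logic can be demonstrated;
# in reality these are the existing per-trajectory directories.
mkdir -p demo/dir_1 demo/dir_2 demo/dir_3

for d in demo/dir_*/; do
    (
        cd "$d" || exit 1
        # real version, e.g.:
        #   g_mmpbsa -f traj.xtc -s topol.tpr -n index.ndx -i mmpbsa.mdp < input_index
        echo "would process $PWD"
    )
done
```

The subshell parentheses keep each `cd` local, so the loop always continues from the starting directory.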




and of course i'm aware about having given 8 cores to g_mmpbsa, hoping
that it is able to read my input and to run apbs which hopefully uses
all of the 8 cores. the user input (choosing protein, then ligand),
however, "Cannot [be] read". this issue occurs quite early during the
g_mmpbsa process and therefore has nothing to do with the apbs (either
with openmp or mpi) functionality which is launched later.

if i simulate the whole story (spreading 24 cores of a node over 3
processes) using a bash script (instead of g_mmpbsa) which just expects
(and prints) the two inputs during runtime and which i start three times
on one node, everything works fine. i'm just asking myself whether
someone knows why gromacs fails under the same conditions and whether it
is possible to remedy that problem.


By the way, GROMACS isn't failing. You're using a separately provided
program, so you should really be talking to its authors for help. ;-)

mpirun -np 3 gmx_mpi make_ndx

would work fine (though not usefully), if you use the mechanisms provided
by mpirun to control how the redirection to the stdin of the child
processes should work. But handling that redirection is an issue between
you and the docs of your mpirun :-)

Mark


unfortunately, there is only very little information about stdin 
redirection associated with aprun. what i've done now is modifying 
g_mmpbsa such that no user input is required. starting


aprun -n 3 -N 3 -cc 0-7:8-15:16-23  ../run_mmpbsa.sh

where, using the $ALPS_APP_PE variable, i successfully enter three 
directories (dir_1, dir_2, dir_3, all containing identical file names) 
and start g_mmpbsa in each of them. now what happens 

[gmx-users] Multi-node Replica Exchange Segfault

2015-10-29 Thread Barnett, James W
Good evening here,

I get a segmentation fault with my GROMACS 5.1 install only for replica exchange
simulations right at the first successful exchange on a multi-node run. Normal
simulations across multiple nodes work fine, and replica exchange simulations on
one node work fine.

I've reproduced the problem with just 2 replicas on 2 nodes with GPUs disabled
(-nb cpu). Each node has 20 CPUs, so I'm using 20 MPI ranks on each (OpenMPI).

I get a segfault right when the first exchange is successful. 

The only other error I get sometimes is that the Infiniband connection timed out
retrying the communication between nodes at the exact same moment as the
segfault, but I don't get that every time, and it's usually with all replicas
going (my goal is to do 30 replicas on 120 cpus). No other error logs, and
mdrun's log does not indicate an error.
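For reference, the failing setup can be reproduced with a launch along these lines (the directory names and exchange interval are assumptions; -multidir and -replex are the standard mdrun flags for replica exchange in the 5.x series):

```shell
# 2 replicas across 2 nodes, 20 MPI ranks per replica, GPUs disabled;
# with 40 ranks and 2 -multidir entries, mdrun assigns 20 ranks per replica.
launch='mpirun -np 40 gmx_mpi mdrun -multidir rep_0 rep_1 -replex 1000 -nb cpu'

if command -v gmx_mpi >/dev/null 2>&1; then
    $launch
else
    echo "gmx_mpi not on PATH; command for reference: $launch"
fi
```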

PBS log: http://bit.ly/1P8Vs49
mdrun log: http://bit.ly/1RD0ViQ

I'm currently troubleshooting this some with the sysadmin, but I wanted to check
to see if anyone has had a similar issue or any further steps to troubleshoot.
I've also searched the mailing list and used my Google-fu, but it has failed me
so far.

Thanks for your help.

-- 
James "Wes" Barnett, Ph.D. Candidate
Louisiana Board of Regents Fellow

Chemical and Biomolecular Engineering
Tulane University
341-B Lindy Boggs Center for Energy and Biotechnology
6823 St. Charles Ave
New Orleans, Louisiana 70118-5674
jbarn...@tulane.edu