Re: [gmx-users] Gromacs and multiple GPU

2016-12-01 Thread Alexis Michon
Hi,

thank you for your answer

the command line is:
gmx mdrun -deffnm nvt

I played a bit with the -gpu_id option, used $CUDA_VISIBLE_DEVICES, and read
the page http://www.gromacs.org/GPU_acceleration without any success. One
output is in nvt.log.11.

The nvidia-smi output is in nvidia-smi.log.

nvidia-smi detects 2 GPUs, but mdrun does not.

nvidia-smi.log : https://icloud.ibcp.fr/index.php/s/08jqFsZpWNlSUGh
nvt.log : https://icloud.ibcp.fr/index.php/s/vC5UxCCAhyw9FuC

Alexis



On 01/12/16 11:34, Mark Abraham wrote:
> Hi,
>
> On Wed, Nov 30, 2016 at 5:38 PM Alexis Michon  wrote:
>
>> Hello,
>>
>> We have built GROMACS 2016.1 from source with "-DGMX_GPU=on" on a dual-
>> processor, dual-GPU machine; mdrun detects and runs fine on only 1 GPU. How
>> could we force mdrun to detect the second GPU?
>>
> If it's compatible, powered, and supported by your driver, then mdrun will
> find it. Presumably the nvidia-smi tool will help you work out what's going on.
>
> We would like to run 2 mdrun instances on a machine equipped with 2 GPUs,
>> each mdrun will use 1 GPU. How could we tell mdrun to use a specific GPU?
>>
> Guidance is here
> http://manual.gromacs.org/documentation/2016.1/user-guide/mdrun-performance.html#examples-for-mdrun-on-one-node
>
> Mark
>
>
>> Cheers,
>> Alexis
>>
>> --
>> Quote: "It’s not enough to be busy; so are the ants. The question is:
>> what are we busy about?" - Henry David Thoreau
>> Alexis MICHON, IT manager
>> CNRS IBCP, 7 passage du vercors, 69007 LYON, France
>> Mail : alexis.mic...@ibcp.fr  Tel : 04.72.72.26.03 - 06.27.56.34.80
>> CNRS IBCP - UMS 5760 - http://www.ibcp.fr/
>> Fingerprint: C9:45:2D:7C:79:7F:0B:79:CA:C8:0B:68:41:A2:8C:EE:EA:72:82:34

-- 
Quote: "It’s not enough to be busy; so are the ants. The question is: what
are we busy about?" - Henry David Thoreau
Alexis MICHON, IT manager
CNRS IBCP, 7 passage du vercors, 69007 LYON, France
Mail : alexis.mic...@ibcp.fr  Tel : 04.72.72.26.03 - 06.27.56.34.80
CNRS IBCP - UMS 5760 - http://www.ibcp.fr/
Fingerprint: C9:45:2D:7C:79:7F:0B:79:CA:C8:0B:68:41:A2:8C:EE:EA:72:82:34

Re: [gmx-users] Gromacs and multiple GPU

2016-12-01 Thread Szilárd Páll
Alexis,

What is the problem? Have you read the relevant docs of what
CUDA_VISIBLE_DEVICES does?
https://docs.nvidia.com/cuda/cuda-c-programming-guide/#env-vars


BTW, you can use, but you *do not need*, CUDA_VISIBLE_DEVICES to
control the mapping of GPU(s) to an mdrun process; the -gpu_id option
(or the equivalent env var) alone is enough.

To run multiple processes per node, do consider:
- using mdrun -multi
- using 2 or more mdrun runs per GPU

You can find the reason, motivating performance examples, and sample
commands here (in particular Fig 5 and related section):
dx.doi.org/10.1002/jcc.24030
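
As an illustration only, a minimal sketch of that per-GPU mapping with
-gpu_id (hypothetical run names; thread counts are assumptions to adjust
to your node):

gmx mdrun -deffnm run0 -gpu_id 0 -ntmpi 1 -ntomp 8 -pin on -pinoffset 0 &
gmx mdrun -deffnm run1 -gpu_id 1 -ntmpi 1 -ntomp 8 -pin on -pinoffset 8 &
wait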

Cheers,
--
Szilárd


On Thu, Dec 1, 2016 at 3:38 PM, Alexis Michon  wrote:
> Hi,
>
> thank you for your answer
>
> the command line is:
> gmx mdrun -deffnm nvt
>
> I played a bit with the -gpu_id option, used $CUDA_VISIBLE_DEVICES, and read
> the page http://www.gromacs.org/GPU_acceleration without any success. One
> output is in nvt.log.11.
>
> The nvidia-smi output is in nvidia-smi.log.
>
> nvidia-smi detects 2 GPUs, but mdrun does not.
>
> nvidia-smi.log : https://icloud.ibcp.fr/index.php/s/08jqFsZpWNlSUGh
> nvt.log : https://icloud.ibcp.fr/index.php/s/vC5UxCCAhyw9FuC
>
> Alexis
>
>
>
> On 01/12/16 11:34, Mark Abraham wrote:
>> Hi,
>>
>> On Wed, Nov 30, 2016 at 5:38 PM Alexis Michon  wrote:
>>
>>> Hello,
>>>
>>> We have built GROMACS 2016.1 from source with "-DGMX_GPU=on" on a dual-
>>> processor, dual-GPU machine; mdrun detects and runs fine on only 1 GPU. How
>>> could we force mdrun to detect the second GPU?
>>>
>> If it's compatible, powered, and supported by your driver, then mdrun will
>> find it. Presumably the nvidia-smi tool will help you work out what's going on.
>>
>> We would like to run 2 mdrun instances on a machine equipped with 2 GPUs,
>>> each mdrun will use 1 GPU. How could we tell mdrun to use a specific GPU?
>>>
>> Guidance is here
>> http://manual.gromacs.org/documentation/2016.1/user-guide/mdrun-performance.html#examples-for-mdrun-on-one-node
>>
>> Mark
>>
>>
>>> Cheers,
>>> Alexis
>>>
>>> --
>>> Quote: "It’s not enough to be busy; so are the ants. The question is:
>>> what are we busy about?" - Henry David Thoreau
>>> Alexis MICHON, IT manager
>>> CNRS IBCP, 7 passage du vercors, 69007 LYON, France
>>> Mail : alexis.mic...@ibcp.fr  Tel : 04.72.72.26.03 - 06.27.56.34.80
>>> CNRS IBCP - UMS 5760 - http://www.ibcp.fr/
>>> Fingerprint: C9:45:2D:7C:79:7F:0B:79:CA:C8:0B:68:41:A2:8C:EE:EA:72:82:34
>
> --
> Quote: "It’s not enough to be busy; so are the ants. The question is:
> what are we busy about?" - Henry David Thoreau
> Alexis MICHON, IT manager
> CNRS IBCP, 7 passage du vercors, 69007 LYON, France
> Mail : alexis.mic...@ibcp.fr  Tel : 04.72.72.26.03 - 06.27.56.34.80
> CNRS IBCP - UMS 5760 - http://www.ibcp.fr/
> Fingerprint: C9:45:2D:7C:79:7F:0B:79:CA:C8:0B:68:41:A2:8C:EE:EA:72:82:34

Re: [gmx-users] Neighbor searching artifacts in permittivity when using shift

2016-12-01 Thread Szilárd Páll
Hi,

Unless you are actually not using the Verlet scheme (or you have set
the tolerance to -1), rlist has no effect: it is not used in any way,
as it is calculated automatically for the given system and settings.

Hence, your observation that
>  The permittivity was even more strongly influenced by variations of rlist.
is suspicious.

Secondly, do look at the parameters recommended by Peter K. above, and
if you want to tweak the Verlet buffer, I'd rather recommend using the
tolerance for that.
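
A minimal mdp sketch of that, for illustration (the value shown is just the
GROMACS default, as a starting point for tightening):

cutoff-scheme           = Verlet
verlet-buffer-tolerance = 0.005   ; kJ/mol/ps per particle; lower it for a larger buffer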

Cheers,
--
Szilárd


On Wed, Nov 23, 2016 at 1:19 PM, Julian Michalowsky
 wrote:
> Hi,
>
> I am using gromacs 5.1.0 with the Martini force field. When trying to
> reproduce some data regarding the 2010 Martini polarizable water model, I
> noticed that it is affected by the values of rlist and nstlist. In the
> reference work, the following parameters are used:
>
> cutoff-scheme=group
> nstlist=5
> rlist=1.2
>
> vdwtype=shift
> rvdw=1.2
> rvdw-switch=0.9
>
> coulombtype=shift
> rcoulomb=1.2
> rcoulomb-switch=0.0
>
> Along with this, Berendsen weak coupling schemes were used. I myself on the
> other hand use the v-rescale thermostat and Parrinello-Rahman barostat, but
> I can reproduce the reference data if I use the respective values for rlist
> and nstlist.
>
> What I did: In one set of simulations, I varied rlist=1.20, 1.22,...,1.40
> and kept nstlist=1. In another set, I set rlist=1.20 and varied
> nstlist=1,2,...,10. For each simulation, I calculated the permittivity
> using gmx dipoles. Simulations are each 100ns NPT production runs and
> sufficiently equilibrated; boxes contain 2797 polarizable water beads (and
> nothing else).
>
> What I expected: No variation in the first set of simulations, as the
> smallest value for rlist is still >= the largest interaction cutoff, and I
> update the neighbor lists every time step (nstlist=1). I expect some
> variations when changing the value of nstlist, though, and keeping
> rlist=rvdw=rcoulomb.
>
> What I found: Variations of the measured permittivity in both simulation
> sets (monotonic trends). The permittivity was even more strongly influenced by
> variations of rlist.
>
> What I am wondering about is why rlist influences the data, despite rlist
>>= rvdw and nstlist=1, and if this should be the case. If so, there is some
> conceptual thing I don't quite understand about how the group neighbor list
> implementation in Gromacs works, and I kindly ask for an explanation. If
> not, then I guess there is something going on that shouldn't be, or I made
> a different mistake. In any case, I require some assistance on this one.
>
> On a side note: I know that the 'shift' statement is deprecated, but it
> worked for me using different Gromacs versions, including 5.0.5 and 5.1.0.
> Trying to use the potential modifier force-switch instead results in
> immediate system crashes after minimization, so they are probably not 100%
> interchangeable (which is what I thought they were supposed to be).
>
> Thanks a lot in advance for your help.
>
> Kind Regards,
> Julian Michalowsky

Re: [gmx-users] Gromacs and multiple GPU

2016-12-01 Thread Szilárd Páll
Hi,

Do any other programs, e.g. CUDA SDK samples (like deviceQuery) detect
both GPUs?
I find it *very-very* unlikely that only mdrun would not be able to
detect both GPUs.

Again, are you sure it's not the use of CUDA_VISIBLE_DEVICES that
creates the confusion? Please share your full command line (with env
vars and all) and the resulting log output.

Cheers,
--
Szilárd


On Thu, Dec 1, 2016 at 5:10 PM, Alexis Michon  wrote:
> Hi,
>
> Thanks for your reply. Yes, I have read them and know them. I use them
> with success with other software.
>
> My first and main problem is that mdrun doesn't see the second GPU on the
> system, and I am trying to find the root cause and the remedy.
>
> Hardware is a dual-Xeon machine with 2 Titan Xp GPUs, so my goal is to run 2
> different simulations with two mdrun processes, each with one dedicated GPU.
>
> Do you have any ideas?
>
> Cheers
> Alexis
>
> On 01/12/16 16:38, Szilárd Páll wrote:
>> Alexis,
>>
>> What is the problem? Have you read the relevant docs of what
>> CUDA_VISIBLE_DEVICES does?
>> https://docs.nvidia.com/cuda/cuda-c-programming-guide/#env-vars
>>
>>
>> BTW, you can use, but you *do not need*, CUDA_VISIBLE_DEVICES to
>> control the mapping of GPU(s) to an mdrun process; the -gpu_id option
>> (or the equivalent env var) alone is enough.
>>
>> To run multiple processes per node, do consider:
>> - using mdrun -multi
>> - using 2 or more mdrun runs per GPU
>>
>> You can find the reason, motivating performance examples, and sample
>> commands here (in particular Fig 5 and related section):
>> dx.doi.org/10.1002/jcc.24030
>>
>> Cheers,
>> --
>> Szilárd
>>
>>
>> On Thu, Dec 1, 2016 at 3:38 PM, Alexis Michon  wrote:
>>> Hi,
>>>
>>> thank you for your answer
>>>
>>> the command line is:
>>> gmx mdrun -deffnm nvt
>>>
>>> I played a bit with the -gpu_id option, used $CUDA_VISIBLE_DEVICES, and read
>>> the page http://www.gromacs.org/GPU_acceleration without any success. One
>>> output is in nvt.log.11.
>>>
>>> The nvidia-smi output is in nvidia-smi.log.
>>>
>>> nvidia-smi detects 2 GPUs, but mdrun does not.
>>>
>>> nvidia-smi.log : https://icloud.ibcp.fr/index.php/s/08jqFsZpWNlSUGh
>>> nvt.log : https://icloud.ibcp.fr/index.php/s/vC5UxCCAhyw9FuC
>>>
>>> Alexis
>>>
>>>
>>>
>>> On 01/12/16 11:34, Mark Abraham wrote:
 Hi,

 On Wed, Nov 30, 2016 at 5:38 PM Alexis Michon  
 wrote:

> Hello,
>
> We have built GROMACS 2016.1 from source with "-DGMX_GPU=on" on a dual-
> processor, dual-GPU machine; mdrun detects and runs fine on only 1 GPU. How
> could we force mdrun to detect the second GPU?
>
 If it's compatible, powered, and supported by your driver, then mdrun will
 find it. Presumably the nvidia-smi tool will help you work out what's going on.

 We would like to run 2 mdrun instances on a machine equipped with 2 GPUs,
> each mdrun will use 1 GPU. How could we tell mdrun to use a specific GPU?
>
 Guidance is here
 http://manual.gromacs.org/documentation/2016.1/user-guide/mdrun-performance.html#examples-for-mdrun-on-one-node

 Mark


> Cheers,
> Alexis
>
> --
> Quote: "It’s not enough to be busy; so are the ants. The question is:
> what are we busy about?" - Henry David Thoreau
> Alexis MICHON, IT manager
> CNRS IBCP, 7 passage du vercors, 69007 LYON, France
> Mail : alexis.mic...@ibcp.fr  Tel : 04.72.72.26.03 - 06.27.56.34.80
> CNRS IBCP - UMS 5760 - http://www.ibcp.fr/
> Fingerprint: C9:45:2D:7C:79:7F:0B:79:CA:C8:0B:68:41:A2:8C:EE:EA:72:82:34
>>> --
>>> Quote: "It’s not enough to be busy; so are the ants. The question is:
>>> what are we busy about?" - Henry David Thoreau
>>> Alexis MICHON, IT manager
>>> CNRS IBCP, 7 passage du vercors, 69007 LYON, France
>>> Mail : alexis.mic...@ibcp.fr  Tel : 04.72.72.26.03 - 06.27.56.34.80
>>> CNRS IBCP - UMS 5760 - http://www.ibcp.fr/
>>> Fingerprint: C9:45:2D:7C:79:7F:0B:79:CA:C8:0B:68:41:A2:8C:EE:EA:72:82:34

Re: [gmx-users] Lattice distance for HII phase

2016-12-01 Thread Mohsen Ramezanpour
Dear gromacs users,

I would appreciate your opinions on the email below.

Cheers
Mohsen

On Thu, Nov 17, 2016 at 11:07 AM, Mohsen Ramezanpour <
ramezanpour.moh...@gmail.com> wrote:

> Dear Gromacs users,
>
> I have an HII phase made of molecules. The HII phase is formed of many
> cylinders parallel to the z axis (in my case) in a hexagonal geometry.
>
> I am interested in calculating the distance between these cylinders, and I
> want a more accurate, statistical way to measure it. So, I think the
> radial distribution function is a good choice for doing so.
>
> I used this command:
>
> gmx rdf -s md.tpr -f md-whole.xtc -n index.ndx -o RDF.xvg -xy
>
> I chose SOL for both groups that g_rdf asked me for: the reference group and
> the group for calculating the RDF.
>
> md-whole means I treated PBC with the "whole" option and then did this
> analysis.
>
> The profile I got is not what I see by visualization.
>
> I think the RDF will give alpha, which is the unit cell size; the lattice
> distance (d_hex) can then be obtained by a simple calculation:
> alpha = (2/sqrt(3)) d_hex ≈ (2/1.73) d_hex
>
> I am not sure if this is the right way, or whether I should use other tools
> to get the right values for this.
>
>
> Thanks in advance for your comments
>
> Cheers
> Mohsen
>
>
>
>
>
> --
> *Rewards work better than punishment ...*
>



-- 
*Rewards work better than punishment ...*


Re: [gmx-users] Gromacs and multiple GPU

2016-12-01 Thread Szilárd Páll
Hi,

Welcome!

> Thanks for your reply. Yes, I have read them and know them. I use them
> with success with other software.

BTW, what did you refer to here? It seems like you were focused on
figuring out the detection issue and might have ignored my other
recommendations on using -multi (which, incidentally, would have solved
the original issue too, as well as done the correct job placement for
you) and 2+ ranks per GPU.
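
For reference, a minimal sketch of the -multidir flavor of that advice
(assumes an MPI-enabled build and two prepared run directories; names are
hypothetical):

mpirun -np 2 gmx_mpi mdrun -multidir sim1 sim2 -gpu_id 01
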
Cheers,
--
Szilárd


On Thu, Dec 1, 2016 at 6:12 PM, Alexis Michon  wrote:
> Hi,
>
> You solved my problem, thank you.
>
> The variable CUDA_VISIBLE_DEVICES is set up by the scheduler at the
> run time of my job. So after adding the command "unset
> CUDA_VISIBLE_DEVICES" to my script, mdrun detects and uses both GPUs.
>
> Thank you for your patience.
>
> Cheers,
> Alexis
>
> On 01/12/16 17:53, Szilárd Páll wrote:
>> Hi,
>>
>> Do any other programs, e.g. CUDA SDK samples (like deviceQuery) detect
>> both GPUs?
>> I find it *very-very* unlikely that only mdrun would not be able to
>> detect both GPUs.
>>
>> Again, are you sure it's not the use of CUDA_VISIBLE_DEVICES that
>> creates the confusion? Please share your full command line (with env
>> vars and all) and the resulting log output.
>>
>> Cheers,
>> --
>> Szilárd
>>
>>
>> On Thu, Dec 1, 2016 at 5:10 PM, Alexis Michon  wrote:
>>> Hi,
>>>
>>> Thanks for your reply. Yes, I have read them and know them. I use them
>>> with success with other software.
>>>
>>> My first and main problem is that mdrun doesn't see the second GPU on the
>>> system, and I am trying to find the root cause and the remedy.
>>>
>>> Hardware is a dual-Xeon machine with 2 Titan Xp GPUs, so my goal is to run 2
>>> different simulations with two mdrun processes, each with one dedicated GPU.
>>>
>>> Do you have any ideas?
>>>
>>> Cheers
>>> Alexis
>>>
>>> On 01/12/16 16:38, Szilárd Páll wrote:
 Alexis,

 What is the problem? Have you read the relevant docs of what
 CUDA_VISIBLE_DEVICES does?
 https://docs.nvidia.com/cuda/cuda-c-programming-guide/#env-vars


 BTW, you can use, but you *do not need*, CUDA_VISIBLE_DEVICES to
 control the mapping of GPU(s) to an mdrun process; the -gpu_id option
 (or the equivalent env var) alone is enough.

 To run multiple processes per node, do consider:
 - using mdrun -multi
 - using 2 or more mdrun runs per GPU

 You can find the reason, motivating performance examples, and sample
 commands here (in particular Fig 5 and related section):
 dx.doi.org/10.1002/jcc.24030

 Cheers,
 --
 Szilárd


 On Thu, Dec 1, 2016 at 3:38 PM, Alexis Michon  
 wrote:
> Hi,
>
> thank you for your answer
>
> the command line is:
> gmx mdrun -deffnm nvt
>
> I played a bit with the -gpu_id option, used $CUDA_VISIBLE_DEVICES, and read
> the page http://www.gromacs.org/GPU_acceleration without any success. One
> output is in nvt.log.11.
>
> The nvidia-smi output is in nvidia-smi.log.
>
> nvidia-smi detects 2 GPUs, but mdrun does not.
>
> nvidia-smi.log : https://icloud.ibcp.fr/index.php/s/08jqFsZpWNlSUGh
> nvt.log : https://icloud.ibcp.fr/index.php/s/vC5UxCCAhyw9FuC
>
> Alexis
>
>
>
> On 01/12/16 11:34, Mark Abraham wrote:
>> Hi,
>>
>> On Wed, Nov 30, 2016 at 5:38 PM Alexis Michon  
>> wrote:
>>
>>> Hello,
>>>
>>> We have built GROMACS 2016.1 from source with "-DGMX_GPU=on" on a dual-
>>> processor, dual-GPU machine; mdrun detects and runs fine on only 1 GPU. How
>>> could we force mdrun to detect the second GPU?
>>>
>> If it's compatible, powered, and supported by your driver, then mdrun will
>> find it. Presumably the nvidia-smi tool will help you work out what's going
>> on.
>>
>> We would like to run 2 mdrun instances on a machine equipped with 2 GPUs,
>>> each mdrun will use 1 GPU. How could we tell mdrun to use a specific
>>> GPU?
>>>
>> Guidance is here
>> http://manual.gromacs.org/documentation/2016.1/user-guide/mdrun-performance.html#examples-for-mdrun-on-one-node
>>
>> Mark
>>
>>
>>> Cheers,
>>> Alexis
>>>
>>> --
>>> Quote: "It’s not enough to be busy; so are the ants. The question
>>> is: what are we busy about?" - Henry David Thoreau
>>> Alexis MICHON, IT manager
>>> CNRS IBCP, 7 passage du vercors, 69007 LYON, France
>>> Mail : alexis.mic...@ibcp.fr  Tel : 04.72.72.26.03 - 06.27.56.34.80
>>> CNRS IBCP - UMS 5760 - http://www.ibcp.fr/
>>> Fingerprint: C9:45:2D:7C:79:7F:0B:79:CA:C8:0B:68:41:A2:8C:EE:EA:72:82:34

[gmx-users] Computer processor and graphics card choice

2016-12-01 Thread Guillem Prats Ejarque
Dear colleagues,


I want to buy a computer to perform molecular dynamics by GROMACS, and I have 
several questions.


First of all, I am torn between the Intel Xeon E5-2620v4 (8 cores,
2.1-3.0 GHz) and the i7-5820K (6 cores, 3.3-3.6 GHz). The reason for my doubts
is the significantly lower clock rate of the Xeon processor. At first I decided
on the i7-5820K, but, although the E5-2620 is at the limit of my budget, I would
buy it only if the performance were significantly better.


Moreover, after the release of the new 10xx NVIDIA series, I also have doubts
about the graphics card. First I wanted to buy the GTX 970 (1664 CUDA cores;
1050-1178 MHz clock; 224 GB/s memory bandwidth; 3494 SP Gflop/s), but looking
at the new GTX 1060 6 GB (1280 CUDA cores; 1506-1708 MHz clock; 192 GB/s
memory bandwidth; 3855 SP Gflop/s), which is in the same price range, it seemed
to me that the latter could have higher performance. Would the GTX 1060
be better than the GTX 970? Does anyone know if the 10xx series from NVIDIA works
well with GROMACS?


Thanks in advance,


Guillem


[gmx-users] order parameters for Inverted hexagonal phase

2016-12-01 Thread Mohsen Ramezanpour
Dear Gromacs Users,

I have a question on how to use g_order to get the order parameters
correctly for an inverted hexagonal (HII) phase and compare it with
experimental values from NMR.
I read some articles, including the following one:

Vermeer, Louic S., et al. "Acyl chain order parameter profiles in
phospholipid bilayers: computation from molecular dynamics simulations and
comparison with 2H NMR experiments." *European Biophysics Journal* 36.8
(2007): 919-931.

But, still, I am not quite sure if I understood it correctly.

*In NMR experiments*, there is a direction of the magnetic field which is
applied to the sample (say Z_exp). This Z_exp is unlikely to be exactly
parallel to, and will make an angle with, the normal vector (n_exp) to the
bilayer, or the cylindrical axis (C_exp) of the HII phase in experiment. (I
have simplified the situation; I hope it does not affect the following argument.)



*In our simulations:*

*1) For the bilayer case:*
Given a bilayer in the XY plane, we calculate the order parameter values
with respect to the normal vector (n_sim) to the bilayer, i.e. the Z axis
(Z_sim).

How can we be sure this Z_sim is the same as Z_exp? If these two are not
close/the same, the comparison does not make much sense. Does it?


*2) For the HII phase case:*
Given an HII phase with cylinders parallel to the Z_sim axis, what is the
situation for calculating the order parameters? Shall we take Z_sim or the
radial vector? By radial vector, I mean the vector pointing to the cylinder
axis; this radial vector will be locally normal to the lipid cylinder at
each point. It seems like we are treating it like a bilayer.

Here again, the problem of whether or not Z_sim = Z_exp exists.
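
For reference, the quantity in both cases is presumably the deuterium order
parameter S_CD = (3<cos^2(theta)> - 1)/2, where theta is the angle between the
C-H (C-D) bond and the chosen director; the choice of director (bilayer
normal, radial vector, or Z) is exactly what the question above is about.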

I hope I have explained the problem clearly.

Thanks in advance for your comments.
Mohsen


-- 
*Rewards work better than punishment ...*


Re: [gmx-users] Computer processor and graphics card choice

2016-12-01 Thread Andrew Guy
Hi Guillem,

I can answer your last question - the 10xx series does indeed work well
with GROMACS (currently running GROMACS 2016 with a GTX1070), although I
would be inclined to get the GTX1070 if your budget can stretch to it.

Andrew

On Fri, Dec 2, 2016 at 6:15 AM, Guillem Prats Ejarque <
guillem.prats.ejar...@uab.cat> wrote:

> Dear colleagues,
>
>
> I want to buy a computer to perform molecular dynamics by GROMACS, and I
> have several questions.
>
>
> First of all, I am torn between the Intel Xeon E5-2620v4 (8
> cores, 2.1-3.0 GHz) and the i7-5820K (6 cores, 3.3-3.6 GHz). The reason for
> my doubts is the significantly lower clock rate of the Xeon processor.
> At first I decided on the i7-5820K, but, although the E5-2620 is at the limit
> of my budget, I would buy it only if the performance were significantly better.
>
>
> Moreover, after the release of the new 10xx NVIDIA series, I also have
> doubts about the graphics card. First I wanted to buy the GTX 970 (1664
> CUDA cores; 1050-1178 MHz clock; 224 GB/s memory bandwidth; 3494 SP
> Gflop/s), but looking at the new GTX 1060 6 GB (1280 CUDA cores;
> 1506-1708 MHz clock; 192 GB/s memory bandwidth; 3855 SP Gflop/s), which is
> in the same price range, it seemed to me that the latter could have higher
> performance. Would the GTX 1060 be better than the GTX 970? Does
> anyone know if the 10xx series from NVIDIA works well with GROMACS?
>
>
> Thanks in advance,
>
>
> Guillem


Re: [gmx-users] hbond

2016-12-01 Thread Gregory Poon



On 12/1/2016 7:45 AM, Justin Lemkul wrote:
> On 12/1/16 7:24 AM, Gregory Man Kai Poon wrote:
>> Hi all:
>>
>> I am trying to use hbond in Gromacs 5.1.4 to analyze water-mediated
>> hydrogen bonding between two objects simulated in water.  The GROMACS
>> manual discusses this in a Figure (9.8) - "water insertion". There is
>> nothing in the online documentation as to how this should be done except
>> a single mention of the -hbm option, which I tried.  It generated .xpm
>> files such as the one attached.  They open, as far as I can tell, as a
>> very vertically compressed plot which I can make nothing out of.
>> Attempts to convert them to eps using xpm2eps produce similar results.
>
> You can use an .m2p file to adjust the sizes of the x- and y-axes to
> make it legible.  The real value is in the data within, though.  You
> have to map the actual participating groups (the output of -hbn)
> with the individual time series in the .xpm from -hbm.
>
>> So my questions are two-fold: 1) What is happening with the .xpm files?
>> 2) Am I using the correct hbond option to enumerate water-mediated
>> hydrogen bonds?
>
> To actually analyze water-mediated H-bonds requires additional work
> that GROMACS tools don't do.  You need to analyze water H-bonds with
> the two groups of interest separately, then determine if the same
> water is H-bonded to a moiety in both of those groups in the same
> frame.  This is where tracing the H-bonds in the .xpm file is useful.
>
> -Justin



Thanks for your comments, Justin.  When I invoke hbond by:

gmx hbond -f *.xtc -s *.tpr -num -hbn -g (± other options)

I get hbnum, which shows changes with respect to time, but the
hbond.ndx and hbond.log files have no time information.  Are they
averages of some sort?  Or a particular frame that it defaults to?  If
so, how do I specify the frames/times?


Thanks again,
Gregory
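
A minimal sketch of the two-pass analysis Justin describes above, with
hypothetical file and group names (group selection is interactive or via -n):

gmx hbond -f md.xtc -s md.tpr -n index.ndx -hbn hb_A_water.ndx -hbm hb_A_water.xpm
gmx hbond -f md.xtc -s md.tpr -n index.ndx -hbn hb_B_water.ndx -hbm hb_B_water.xpm

Cross-referencing which water is H-bonded to both groups in the same frame then
has to be done outside GROMACS, by matching the rows of the two .xpm maps to the
donor/acceptor entries in the corresponding -hbn index files.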


Re: [gmx-users] Gromacs and multiple GPU

2016-12-01 Thread Alexis Michon
Hi,

Thanks for your reply. Yes, I have read them and know them. I use them
with success with other software.

My first and main problem is that mdrun doesn't see the second GPU on the
system, and I am trying to find the root cause and the remedy.

Hardware is a dual-Xeon machine with 2 Titan Xp GPUs, so my goal is to run 2
different simulations with two mdrun processes, each with one dedicated GPU.

Do you have any ideas?

Cheers
Alexis

On 01/12/16 16:38, Szilárd Páll wrote:
> Alexis,
>
> What is the problem? Have you read the relevant docs of what
> CUDA_VISIBLE_DEVICES does?
> https://docs.nvidia.com/cuda/cuda-c-programming-guide/#env-vars
>
>
> BTW, you can use, but you *do not need*, CUDA_VISIBLE_DEVICES to
> control the mapping of GPU(s) to an mdrun process; the -gpu_id option
> (or the equivalent env var) alone is enough.
>
> To run multiple processes per node, do consider:
> - using mdrun -multi
> - using 2 or more mdrun runs per GPU
>
> You can find the reason, motivating performance examples, and sample
> commands here (in particular Fig 5 and related section):
> dx.doi.org/10.1002/jcc.24030
>
> Cheers,
> --
> Szilárd
>
>
> On Thu, Dec 1, 2016 at 3:38 PM, Alexis Michon  wrote:
>> Hi,
>>
>> thank you for your answer
>>
>> the command line is:
>> gmx mdrun -deffnm nvt
>>
>> I played a bit with the -gpu_id option, used $CUDA_VISIBLE_DEVICES, and read
>> the page http://www.gromacs.org/GPU_acceleration without any success. One
>> output is in nvt.log.11.
>>
>> The nvidia-smi output is in nvidia-smi.log.
>>
>> nvidia-smi detects 2 GPUs, but mdrun does not.
>>
>> nvidia-smi.log : https://icloud.ibcp.fr/index.php/s/08jqFsZpWNlSUGh
>> nvt.log : https://icloud.ibcp.fr/index.php/s/vC5UxCCAhyw9FuC
>>
>> Alexis
>>
>>
>>
>> On 01/12/16 11:34, Mark Abraham wrote:
>>> Hi,
>>>
>>> On Wed, Nov 30, 2016 at 5:38 PM Alexis Michon  wrote:
>>>
 Hello,

 We have built GROMACS 2016.1 from source with "-DGMX_GPU=on" on a dual-
 processor, dual-GPU machine; mdrun detects and runs fine on only 1 GPU. How
 could we force mdrun to detect the second GPU?

>>> If it's compatible, powered, and supported by your driver, then mdrun will
>>> find it. Presumably the nvidia-smi tool will help you work out what's going on.
>>>
>>> We would like to run 2 mdrun instances on a machine equipped with 2 GPUs,
 each mdrun will use 1 GPU. How could we tell mdrun to use a specific GPU?

>>> Guidance is here
>>> http://manual.gromacs.org/documentation/2016.1/user-guide/mdrun-performance.html#examples-for-mdrun-on-one-node
>>>
>>> Mark
>>>
>>>
 Cheers,
 Alexis

 --
 Quote: "It’s not enough to be busy; so are the ants. The question is:
 what are we busy about?" - Henry David Thoreau
 Alexis MICHON, IT manager
 CNRS IBCP, 7 passage du vercors, 69007 LYON, France
 Mail : alexis.mic...@ibcp.fr  Tel : 04.72.72.26.03 - 06.27.56.34.80
 CNRS IBCP - UMS 5760 - http://www.ibcp.fr/
 Fingerprint: C9:45:2D:7C:79:7F:0B:79:CA:C8:0B:68:41:A2:8C:EE:EA:72:82:34
>> --
>> Quote: "It’s not enough to be busy; so are the ants. The question is:
>> what are we busy about?" - Henry David Thoreau
>> Alexis MICHON, IT manager
>> CNRS IBCP, 7 passage du vercors, 69007 LYON, France
>> Mail : alexis.mic...@ibcp.fr  Tel : 04.72.72.26.03 - 06.27.56.34.80
>> CNRS IBCP - UMS 5760 - http://www.ibcp.fr/
>> Fingerprint: C9:45:2D:7C:79:7F:0B:79:CA:C8:0B:68:41:A2:8C:EE:EA:72:82:34

-- 
Quote: "It’s not enough to be busy; so are the ants. The question is: what
are we busy about?" - Henry David Thoreau
Alexis MICHON, IT manager
CNRS IBCP, 7 passage du vercors, 69007 LYON, France
Mail : alexis.mic...@ibcp.fr  Tel : 04.72.72.26.03 - 06.27.56.34.80
CNRS IBCP - UMS 5760 - http://www.ibcp.fr/
Fingerprint: C9:45:2D:7C:79:7F:0B:79:CA:C8:0B:68:41:A2:8C:EE:EA:72:82:34

Re: [gmx-users] Gromacs and multiple GPU

2016-12-01 Thread Alexis Michon
Hi,

You solved my problem, thank you.

The variable CUDA_VISIBLE_DEVICES is set up by the scheduler at the
run time of my job. So after adding the command "unset
CUDA_VISIBLE_DEVICES" to my script, mdrun detects and uses both GPUs.

Thank you for your patience.

Cheers,
Alexis

On 01/12/16 17:53, Szilárd Páll wrote:
> Hi,
>
> Do any other programs, e.g. CUDA SDK samples (like deviceQuery) detect
> both GPUs?
> I find it *very-very* unlikely that only mdrun would not be able to
> detect both GPUs.
>
> Again, are you sure it's not the use of CUDA_VISIBLE_DEVICES that
> creates the confusion? Please share your full command line (with env
> vars and all) and the resulting log output.
>
> Cheers,
> --
> Szilárd
>
>
> On Thu, Dec 1, 2016 at 5:10 PM, Alexis Michon  wrote:
>> Hi,
>>
>> Thanks for your reply. Yes, I have read them and know them. I use them
>> with success with other software.
>>
>> My first and main problem is that mdrun doesn't see the second GPU on the
>> system, and I am trying to find the root cause and the remedy.
>>
>> Hardware is a dual-Xeon machine with 2 Titan Xp GPUs, so my goal is to run 2
>> different simulations with two mdrun processes, each with one dedicated GPU.
>>
>> Do you have any ideas?
>>
>> Cheers
>> Alexis
>>
>> On 01/12/16 16:38, Szilárd Páll wrote:
>>> Alexis,
>>>
>>> What is the problem? Have you read the relevant docs of what
>>> CUDA_VISIBLE_DEVICES does?
>>> https://docs.nvidia.com/cuda/cuda-c-programming-guide/#env-vars
>>>
>>>
>>> BTW, you can use, but you *do not need*, CUDA_VISIBLE_DEVICES to
>>> control the mapping of GPU(s) to an mdrun process; the -gpu_id option
>>> (or the equivalent env var) alone is enough.
>>>
>>> To run multiple processes per node, do consider:
>>> - using mdrun -multi
>>> - using 2 or more mdrun runs per GPU
>>>
>>> You can find the reason, motivating performance examples, and sample
>>> commands here (in particular Fig 5 and related section):
>>> dx.doi.org/10.1002/jcc.24030
>>>
>>> Cheers,
>>> --
>>> Szilárd
>>>
>>>
>>> On Thu, Dec 1, 2016 at 3:38 PM, Alexis Michon  wrote:
 Hi,

 thank you for your answer

 the command line is:
 gmx mdrun -deffnm nvt

 I played a bit with the -gpu_id option, used $CUDA_VISIBLE_DEVICES, and read
 the page http://www.gromacs.org/GPU_acceleration without any success. One
 output is in nvt.log.11.

 The nvidia-smi output is in nvidia-smi.log.

 nvidia-smi detects 2 GPUs, but mdrun does not.

 nvidia-smi.log : https://icloud.ibcp.fr/index.php/s/08jqFsZpWNlSUGh
 nvt.log : https://icloud.ibcp.fr/index.php/s/vC5UxCCAhyw9FuC

 Alexis



 On 01/12/16 11:34, Mark Abraham wrote:
> Hi,
>
> On Wed, Nov 30, 2016 at 5:38 PM Alexis Michon  
> wrote:
>
>> Hello,
>>
>> We have built GROMACS 2016.1 from source with "-DGMX_GPU=on" on a dual-
>> processor, dual-GPU machine; mdrun detects and runs fine on only 1 GPU. How
>> could we force mdrun to detect the second GPU?
>>
> If it's compatible, powered, and supported by your driver, then mdrun will
> find it. Presumably the nvidia-smi tool will help you work out what's going
> on.
>
> We would like to run 2 mdrun instances on a machine equipped with 2 GPUs,
>> each mdrun will use 1 GPU. How could we tell mdrun to use a specific
>> GPU?
>>
> Guidance is here
> http://manual.gromacs.org/documentation/2016.1/user-guide/mdrun-performance.html#examples-for-mdrun-on-one-node
>
> Mark
>
>
>> Cheers,
>> Alexis
>>
>> --
>> Quote: "It’s not enough to be busy; so are the ants. The question is:
>> what are we busy about?" - Henry David Thoreau
>> Alexis MICHON, IT manager
>> CNRS IBCP, 7 passage du vercors, 69007 LYON, France
>> Mail : alexis.mic...@ibcp.fr  Tel : 04.72.72.26.03 - 06.27.56.34.80
>> CNRS IBCP - UMS 5760 - http://www.ibcp.fr/
>> Fingerprint: C9:45:2D:7C:79:7F:0B:79:CA:C8:0B:68:41:A2:8C:EE:EA:72:82:34
>>
>>
 --
 Quote: "It’s not enough to be busy; so are the ants. The question is:
 what are we busy about?" - Henry David Thoreau
 Alexis MICHON, IT manager
 CNRS IBCP, 7 passage du vercors, 69007 LYON, France
 Mail : alexis.mic...@ibcp.fr  Tel : 04.72.72.26.03 - 06.27.56.34.80
 CNRS IBCP - UMS 5760 - http://www.ibcp.fr/
 Fingerprint: C9:45:2D:7C:79:7F:0B:79:CA:C8:0B:68:41:A2:8C:EE:EA:72:82:34




[gmx-users] system blowing up

2016-12-01 Thread abhisek Mondal
Hi,

I keep getting a strange system blow-up during the equilibration
procedure. I'm relatively new to GROMACS; please help me get
things straight.

I have a ligand bound protein structure in solvated condition.
So, I was using:
gmx mdrun -v -deffnm em #for minimization

Upon successful completion, I try to equilibrate using:
gmx mdrun -deffnm npt

But this command fails with a lot of LINCS errors.
The surprising thing is that I'm running the same input on 2 clusters
simultaneously. On one cluster it runs fine, but on the other it
gives me LINCS errors.

Can you please suggest what is actually going wrong?





-- 
Abhisek Mondal

*Research Fellow*

*Structural Biology and Bioinformatics Division*
*CSIR-Indian Institute of Chemical Biology*

*Kolkata 700032*

*INDIA*


Re: [gmx-users] Problem in PMF simulation

2016-12-01 Thread Billy Williams-Noonan
Thanks :)

On 1 December 2016 at 23:40, Justin Lemkul  wrote:

>
>
> On 12/1/16 4:53 AM, Billy Williams-Noonan wrote:
>
>> Also, you may want to check that your rcoulomb and rvdw values are
>> correct...  Didn't look at your mdp files until just now.  Why are they
>> set
>> to zero?
>>
>>
> Vacuum simulations are done with infinite cutoffs (rlist=rcoulomb=rvdw=0)
> and no PBC.
>
> -Justin
>
> On 29 November 2016 at 16:56, Billy Williams-Noonan <
>> billy.williams-noo...@monash.edu> wrote:
>>
>> My PMF curves look similar to that.  What you're seeing towards the end of
>>> that curve is either an artifact of the algorithms used to model the
>>> system, a very minor sampling problem, or a mix of both.  Your Gaussians
>>> may be split because they are bi-modal and the most energetically
>>> favourable position around your constraint is in two places.  Try having
>>> a
>>> look at your individual umbrella windows to figure out what's going on.
>>> :)
>>>
>>> Cheers,
>>>
>>> Billy
>>>
>>> On 29 November 2016 at 15:20, Shi Li  wrote:
>>>
>>> Dear Gromacs users,

 I posted this question a few days ago, but still couldn’t solve it. This
 problem has bothered me for a long time, so I am reposting this for
 help.

 I am currently doing some PMF simulations for two small molecular types
 in vacuum. I applied an energy minimization, an NVT equilibration and a
 production run on my system. Following are the mdp files for each. I used a
 command in my run script to change the WINDOW placeholder in pull_coord1_init
 in the pull code for each sample window.

 Energy minimization
 ---
 integrator   = steep
 nsteps   = 2

 nstenergy= 500
 nstlog   = 500
 nstxout-compressed   = 1000

 cutoff-scheme= group

 coulombtype  = Cut-off
 rcoulomb = 0

 vdwtype  = Cut-off
 rvdw = 0
 rlist= 0
 pbc  = no

 ; Pull code
 pull = umbrella
 pull_geometry= distance
 pull_start   = no
 pull-ncoords = 1
 pull_group1_name = ADT1
 pull_group2_name = ADT2
 pull-coord1-groups = 1 2
 pull_coord1_init = WINDOW
 pull_coord1_rate = 0.0
 pull_coord1_k = 5000

 —
 NVT
 
 integrator   = md
 dt   = 0.002 ; 2 fs
 nsteps   = 50; 1.0 ns

 nstenergy= 200
 nstlog   = 2000
 nstxout-compressed   = 1

 continuation = yes
 constraint-algorithm = lincs
 constraints  = h-bonds

 cutoff-scheme= group
 rlist= 0

 rcoulomb = 0

 rvdw = 0
 pbc  = no
 tcoupl   = V-rescale
 tc-grps  = System
 tau-t= 2.0
 ref-t= 298.15
 nhchainlength= 1

 ; Pull code
 pull = umbrella
 pull_geometry= distance
 ;pull_dim = N N Y
 pull_start   = no
 pull-ncoords = 1
 pull_ngroups = 2
 pull_group1_name = ADT1
 pull_group2_name = ADT2
 pull-coord1-groups = 1 2
 pull_coord1_init = WINDOW
 pull_coord1_rate = 0.0
 pull_coord1_k = 5000

 ——
 Production run:
 
 integrator   = md
 dt   = 0.002   ; 2 fs
 nsteps   = 1000   ; 20.0 ns

 nstenergy= 5000
 nstlog   = 5000
 nstxout-compressed   = 5000

 continuation = yes
 constraint-algorithm = lincs
 constraints  = h-bonds

 cutoff-scheme= Group

 rcoulomb = 0
 rlist= 0

 rvdw = 0
 pbc  = no

 tcoupl   = V-rescale; Nose-Hoover
 tc-grps  = System
 tau-t= 2.0
 ref-t= 298.15
 nhchainlength= 1

 ; Pull code
 pull = umbrella
 pull_geometry= distance
 pull_start   = no
 pull-ncoords = 1
 pull_ngroups = 2
 pull_group1_name = ADT1
 pull_group2_name = ADT2
 pull-coord1-groups = 1 2
 pull_coord1_init = WINDOW
 pull_coord1_rate = 0.0
 pull_coord1_k = 5000

 ———
 I have a very strange profile and histogram, which I uploaded to these links.

 PMF: https://www.dropbox.com/s/i10hn1p30w2j71r/F-anti-adt.eps?dl=0
 HISTO: https://www.dropbox.com/s/1xi9i4cj60g97n2/F-anti-histo.eps?dl=0


[gmx-users] COM group of Membrane and Protein simulation

2016-12-01 Thread Mijiddorj Batsaikhan
Dear gmx_users,

I started a simulation of a peptide on a membrane. My peptide sits on the
membrane surface. I have two questions relating to the simulation.
(1)
When I start the simulation, I chose the COM groups separately. Is this choice
okay, or do I need to choose a single combined COM group?

(2)
During the simulation the peptide moves to the edge of the membrane. How can I
shift the peptide to the central section of the membrane? Can I use the
-nojump and -center options of the trjconv tool?
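
A minimal sketch of the usual recentring, with hypothetical file names
(choose the peptide as the centering group and System for output):

gmx trjconv -s md.tpr -f md.xtc -o centered.xtc -center -pbc mol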


Best regards,

Mijiddorj


[gmx-users] parallel processing

2016-12-01 Thread abhisek Mondal
Hi,

I'm running GROMACS on a cluster configured as follows:
1 node = 16 cores

I'm able to use a single node with the command "gmx mdrun -ntmpi 4 -ntomp 16
-npme 0 -v -deffnm em".

How can I run on multiple nodes (I have 20 nodes available)?
"-nt" is not doing the job here.



-- 
Abhisek Mondal

*Research Fellow*

*Structural Biology and Bioinformatics Division*
*CSIR-Indian Institute of Chemical Biology*

*Kolkata 700032*

*INDIA*


[gmx-users] Desired spacing in umbrella sampling

2016-12-01 Thread faride badalkhani
Dear Gromacs users,

Is it reasonable to choose non-uniform windows in umbrella sampling? I want
to use 0.1 nm spacing for COM distances of 0.6-2.6 nm and then 0.2 nm spacing
for 2.6-5.3 nm.
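
For what it's worth, WHAM itself does not require uniform window spacing, only
overlapping histograms, so the analysis sketch is unchanged (hypothetical file
lists):

gmx wham -it tpr-files.dat -if pullf-files.dat -o profile.xvg -hist histo.xvg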

Regards,
Farideh


[gmx-users] (no subject)

2016-12-01 Thread Gregory Man Kai Poon
Hi all:


I am trying to use hbond in Gromacs 5.1.4 to analyze water-mediated hydrogen
bonding between two objects simulated in water.  The GROMACS manual discusses
this in a Figure (9.8) - "water insertion". There is nothing in the online
documentation as to how this should be done except a single mention of the
-hbm option, which I tried.  It generated .xpm files such as the one attached.
They open, as far as I can tell, as a very vertically compressed plot which I can
make nothing out of.  Attempts to convert them to eps using xpm2eps produce
similar results.


So my questions are two-fold: 1) What is happening with the .xpm files?  2) Am 
I using the correct hbond option to enumerate water-mediated hydrogen bonds?


Many thanks in advance,

Gregory


https://www.dropbox.com/s/2vj2mxmb0f0jnyq/hbmap.xpm?dl=0


https://www.dropbox.com/s/w0gb4x0frwwm668/plot.eps?dl=0





Re: [gmx-users] Problem in PMF simulation

2016-12-01 Thread Justin Lemkul



On 12/1/16 4:53 AM, Billy Williams-Noonan wrote:
> Also, you may want to check that your rcoulomb and rvdw values are
> correct...  Didn't look at your mdp files until just now.  Why are they set
> to zero?

Vacuum simulations are done with infinite cutoffs (rlist=rcoulomb=rvdw=0) and no
PBC.

-Justin


On 29 November 2016 at 16:56, Billy Williams-Noonan <
billy.williams-noo...@monash.edu> wrote:


My PMF curves look similar to that.  What you're seeing towards the end of
that curve is either an artifact of the algorithms used to model the
system, a very minor sampling problem, or a mix of both.  Your Gaussians
may be split because they are bi-modal and the most energetically
favourable position around your constraint is in two places.  Try having a
look at your individual umbrella windows to figure out what's going on. :)

Cheers,

Billy

On 29 November 2016 at 15:20, Shi Li  wrote:


Dear Gromacs users,

I posted this question a few days ago, but still couldn’t solve it. This
problem has bothered me for a long time, so I am reposting this for help.

I am currently doing some PMF simulations for two small molecular types
in vacuum. I applied an energy minimization, an NVT equilibration and a
production run on my system. Following are the mdp files for each. I used a
command in my run script to change the WINDOW placeholder in pull_coord1_init
in the pull code for each sample window.

Energy minimization
---
integrator   = steep
nsteps   = 2

nstenergy= 500
nstlog   = 500
nstxout-compressed   = 1000

cutoff-scheme= group

coulombtype  = Cut-off
rcoulomb = 0

vdwtype  = Cut-off
rvdw = 0
rlist= 0
pbc  = no

; Pull code
pull = umbrella
pull_geometry= distance
pull_start   = no
pull-ncoords = 1
pull_group1_name = ADT1
pull_group2_name = ADT2
pull-coord1-groups = 1 2
pull_coord1_init = WINDOW
pull_coord1_rate = 0.0
pull_coord1_k = 5000

—
NVT

integrator   = md
dt   = 0.002 ; 2 fs
nsteps   = 50; 1.0 ns

nstenergy= 200
nstlog   = 2000
nstxout-compressed   = 1

continuation = yes
constraint-algorithm = lincs
constraints  = h-bonds

cutoff-scheme= group
rlist= 0

rcoulomb = 0

rvdw = 0
pbc  = no
tcoupl   = V-rescale
tc-grps  = System
tau-t= 2.0
ref-t= 298.15
nhchainlength= 1

; Pull code
pull = umbrella
pull_geometry= distance
;pull_dim = N N Y
pull_start   = no
pull-ncoords = 1
pull_ngroups = 2
pull_group1_name = ADT1
pull_group2_name = ADT2
pull-coord1-groups = 1 2
pull_coord1_init = WINDOW
pull_coord1_rate = 0.0
pull_coord1_k = 5000

——
Production run:

integrator   = md
dt   = 0.002   ; 2 fs
nsteps   = 1000   ; 20.0 ns

nstenergy= 5000
nstlog   = 5000
nstxout-compressed   = 5000

continuation = yes
constraint-algorithm = lincs
constraints  = h-bonds

cutoff-scheme= Group

rcoulomb = 0
rlist= 0

rvdw = 0
pbc  = no

tcoupl   = V-rescale; Nose-Hoover
tc-grps  = System
tau-t= 2.0
ref-t= 298.15
nhchainlength= 1

; Pull code
pull = umbrella
pull_geometry= distance
pull_start   = no
pull-ncoords = 1
pull_ngroups = 2
pull_group1_name = ADT1
pull_group2_name = ADT2
pull-coord1-groups = 1 2
pull_coord1_init = WINDOW
pull_coord1_rate = 0.0
pull_coord1_k = 5000

———
I have a very strange profile and histogram, which I uploaded to these links.

PMF: https://www.dropbox.com/s/i10hn1p30w2j71r/F-anti-adt.eps?dl=0
HISTO: https://www.dropbox.com/s/1xi9i4cj60g97n2/F-anti-histo.eps?dl=0

The molecule is about 1.5 nm long, so the smooth profile before 1.5 nm
looks fine, but I don’t understand why the profile had this fluctuation
after 1.5 nm and the histogram seems to have some split peaks after this
length. Can anyone tell me why this happened? Is there something wrong in
my mdp file or something wrong in my topology? I have used similar mdp
files for my other types of molecular systems and the results were fine. So
it really confuses me now.

I appreciate any help! Thanks!

Shi



Re: [gmx-users] Problem in PMF simulation

2016-12-01 Thread Billy Williams-Noonan
Also, you may want to check that your rcoulomb and rvdw values are
correct...  Didn't look at your mdp files until just now.  Why are they set
to zero?

On 29 November 2016 at 16:56, Billy Williams-Noonan <
billy.williams-noo...@monash.edu> wrote:

> My PMF curves look similar to that.  What you're seeing towards the end of
> that curve is either an artifact of the algorithms used to model the
> system, a very minor sampling problem, or a mix of both.  Your Gaussians
> may be split because they are bi-modal and the most energetically
> favourable position around your constraint is in two places.  Try having a
> look at your individual umbrella windows to figure out what's going on. :)
>
> Cheers,
>
> Billy
>
> On 29 November 2016 at 15:20, Shi Li  wrote:
>
>> Dear Gromacs users,
>>
>> I posted this question a few days ago, but still couldn’t solve it. This
>> problem has bothered me for a long time, so I am reposting this for help.
>>
>> I am currently doing some PMF simulations for two small molecular types
>> in vacuum. I applied an energy minimization, an NVT equilibration and a
>> production run on my system. Following are the mdp files for each. I used a
>> command in my run script to change the WINDOW placeholder in pull_coord1_init in the
>> pull code for each sample window.
>>
>> Energy minimization
>> ---
>> integrator   = steep
>> nsteps   = 2
>>
>> nstenergy= 500
>> nstlog   = 500
>> nstxout-compressed   = 1000
>>
>> cutoff-scheme= group
>>
>> coulombtype  = Cut-off
>> rcoulomb = 0
>>
>> vdwtype  = Cut-off
>> rvdw = 0
>> rlist= 0
>> pbc  = no
>>
>> ; Pull code
>> pull = umbrella
>> pull_geometry= distance
>> pull_start   = no
>> pull-ncoords = 1
>> pull_group1_name = ADT1
>> pull_group2_name = ADT2
>> pull-coord1-groups = 1 2
>> pull_coord1_init = WINDOW
>> pull_coord1_rate = 0.0
>> pull_coord1_k = 5000
>>
>> —
>> NVT
>> 
>> integrator   = md
>> dt   = 0.002 ; 2 fs
>> nsteps   = 50; 1.0 ns
>>
>> nstenergy= 200
>> nstlog   = 2000
>> nstxout-compressed   = 1
>>
>> continuation = yes
>> constraint-algorithm = lincs
>> constraints  = h-bonds
>>
>> cutoff-scheme= group
>> rlist= 0
>>
>> rcoulomb = 0
>>
>> rvdw = 0
>> pbc  = no
>> tcoupl   = V-rescale
>> tc-grps  = System
>> tau-t= 2.0
>> ref-t= 298.15
>> nhchainlength= 1
>>
>> ; Pull code
>> pull = umbrella
>> pull_geometry= distance
>> ;pull_dim = N N Y
>> pull_start   = no
>> pull-ncoords = 1
>> pull_ngroups = 2
>> pull_group1_name = ADT1
>> pull_group2_name = ADT2
>> pull-coord1-groups = 1 2
>> pull_coord1_init = WINDOW
>> pull_coord1_rate = 0.0
>> pull_coord1_k = 5000
>>
>> ——
>> Production run:
>> 
>> integrator   = md
>> dt   = 0.002   ; 2 fs
>> nsteps   = 1000   ; 20.0 ns
>>
>> nstenergy= 5000
>> nstlog   = 5000
>> nstxout-compressed   = 5000
>>
>> continuation = yes
>> constraint-algorithm = lincs
>> constraints  = h-bonds
>>
>> cutoff-scheme= Group
>>
>> rcoulomb = 0
>> rlist= 0
>>
>> rvdw = 0
>> pbc  = no
>>
>> tcoupl   = V-rescale; Nose-Hoover
>> tc-grps  = System
>> tau-t= 2.0
>> ref-t= 298.15
>> nhchainlength= 1
>>
>> ; Pull code
>> pull = umbrella
>> pull_geometry= distance
>> pull_start   = no
>> pull-ncoords = 1
>> pull_ngroups = 2
>> pull_group1_name = ADT1
>> pull_group2_name = ADT2
>> pull-coord1-groups = 1 2
>> pull_coord1_init = WINDOW
>> pull_coord1_rate = 0.0
>> pull_coord1_k = 5000
>>
>> ———
>> I have a very strange profile and histogram, which I uploaded to these links.
>>
>> PMF: https://www.dropbox.com/s/i10hn1p30w2j71r/F-anti-adt.eps?dl=0
>> HISTO: https://www.dropbox.com/s/1xi9i4cj60g97n2/F-anti-histo.eps?dl=0
>>
>> The molecule is about 1.5 nm long, so the smooth profile before 1.5 nm
>> looks fine, but I don’t understand why the profile fluctuates beyond
>> 1.5 nm and why the histogram has split peaks past that distance. Can
>> anyone tell me why this happens? Is something wrong in my mdp files or in
>> my topology? I have used similar mdp files for other molecular systems
>> and the results were fine, so this really confuses me now.
>>
>> I appreciate any help! Thanks!
>>
>> Shi
>>
>>
>>

[gmx-users] bash: /home/linux/installation/gromacs/bin/gmx_mpi: cannot execute binary file

2016-12-01 Thread Andrew Bostick
Dear gromacs users,

export SOFT=$HOME/installation
export CPPFLAGS="-I$SOFT/include"
export LDFLAGS="-L$SOFT/lib"
export PATH="$PATH":$SOFT/bin

tar xvf cmake-3.6.1.tar.gz
cd ../cmake-3.6.1.
./configure --prefix=$SOFT

make
make install



tar xvzf gromacs-5.1.3.tar.gz

cd ../gromacs-5.1.3

mkdir build

cd build

cmake .. -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON \
      -DGMX_MPI=ON -DGMX_GPU=ON -DCMAKE_INSTALL_PRIFFIX=$SOFT

make

make check

make install

source /home/linux/installation/gromacs/bin/GMXRC

*** The GMXRC file is as follows:

# This is a convenience script to determine which
# type of shell you have, and then run GMXRC.[csh|bash|zsh]
# from the Gromacs binary directory.
# If you only use one shell you can copy that GMXRC.* instead.

# only csh/tcsh set the variable $shell (note: lower case!)
# but check for the contents to be sure, since some environments may
# set it also for other shells
echo $shell | grep -q csh && goto CSH

# if we got here, shell is bsh/bash/zsh/ksh
. /home/linux/installation/gromacs/bin/GMXRC.bash
return

# csh/tcsh jump here
CSH:
source /home/linux/installation/gromacs/bin/GMXRC.csh


I use the following commands:

source /home/linux/installation/gromacs/bin/GMXRC

gmx_mpi

But I encountered:

bash: /home/linux/installation/gromacs/bin/gmx_mpi: cannot execute binary file


In /home/linux/installation/gromacs/bin, I checked the executability of the gmx_mpi file:


-rwxr-xr-x 1 linux linux   3378 2016-07-13 05:56 demux.pl*
-rwxr-xr-x 1 linux linux 149700 2016-07-14 16:12 gmx-completion.bash*
-rwxr-xr-x 1 linux linux     41 2016-09-04 04:44 gmx-completion-gmx_mpi.bash*
-rwxr-xr-x 1 linux linux 275213 2016-09-04 04:44 gmx_mpi*
-rwxr-xr-x 1 linux linux    594 2016-11-29 03:42 GMXRC*
-rwxr-xr-x 1 linux linux   2758 2016-11-29 03:43 GMXRC.bash*
-rwxr-xr-x 1 linux linux   2995 2016-09-04 02:53 GMXRC.csh*
-rwxr-xr-x 1 linux linux    118 2016-09-04 02:53 GMXRC.zsh*
-rwxr-xr-x 1 linux linux   8217 2016-07-13 05:56 xplor2gmx.pl*


I used chmod 777 *.*. After that:

-rwxrwxrwx 1 linux linux   3378 2016-07-13 05:56 demux.pl*
-rwxrwxrwx 1 linux linux 149700 2016-07-14 16:12 gmx-completion.bash*
-rwxrwxrwx 1 linux linux     41 2016-09-04 04:44 gmx-completion-gmx_mpi.bash*
-rwxrwxrwx 1 linux linux 275213 2016-09-04 04:44 gmx_mpi*
-rwxr-xr-x 1 linux linux    594 2016-11-29 03:42 GMXRC*
-rwxrwxrwx 1 linux linux   2758 2016-11-29 03:43 GMXRC.bash*
-rwxrwxrwx 1 linux linux   2995 2016-09-04 02:53 GMXRC.csh*
-rwxrwxrwx 1 linux linux    118 2016-09-04 02:53 GMXRC.zsh*
-rwxrwxrwx 1 linux linux   8217 2016-07-13 05:56 xplor2gmx.pl*


After using the following commands:

source /home/linux/installation/gromacs/bin/GMXRC

gmx_mpi

Again, I encountered:

bash: /home/linux/installation/gromacs/bin/gmx_mpi: cannot execute binary
file

Any help will be highly appreciated.


Re: [gmx-users] Gomacs and multiple GPU

2016-12-01 Thread Mark Abraham
Hi,

On Wed, Nov 30, 2016 at 5:38 PM Alexis Michon  wrote:

> Hello,
>
> We have built gromacs 2016.1 from source with "-DGMX_GPU=on" on a
> dual-processor, dual-GPU machine; mdrun detects and runs fine on only one
> GPU. How could we force mdrun to detect the second GPU?
>

If it's compatible, powered and supported by your driver, then mdrun will
find it. Presumably the nvidia-smi tool will help you work out what's going on.

> We would like to run 2 mdrun instances on a machine equipped with 2 GPUs,
> each mdrun using 1 GPU. How could we tell mdrun to use a specific GPU?
>

Guidance is here
http://manual.gromacs.org/documentation/2016.1/user-guide/mdrun-performance.html#examples-for-mdrun-on-one-node
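(A condensed sketch of that page's advice for this two-GPU case, not part of
Mark's reply; the core count is illustrative:)

# two independent runs on one node with 2 GPUs and 16 cores: each run is
# pinned to its own half of the cores and mapped to its own GPU
gmx mdrun -deffnm run1 -ntomp 8 -pin on -pinoffset 0 -gpu_id 0 &
gmx mdrun -deffnm run2 -ntomp 8 -pin on -pinoffset 8 -gpu_id 1 &
wait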

Mark


> Cheers,
> Alexis
>
> --
> Citation : "It’s not enough to be busy; so are the ants. The question is:
> what are we busy about?" - Henry David Thoreau
> Alexis MICHON, responsable informatique
> CNRS IBCP, 7 passage du vercors, 69007 LYON, France
> Mail : alexis.mic...@ibcp.fr  Tel : 04.72.72.26.03 - 06.27.56.34.80
> CNRS IBCP - UMS 5760 - http://www.ibcp.fr/
> Empreinte : C9:45:2D:7C:79:7F:0B:79:CA:C8:0B:68:41:A2:8C:EE:EA:72:82:34
>
>

Re: [gmx-users] bash: /home/linux/installation/gromacs/bin/gmx_mpi: cannot execute binary file

2016-12-01 Thread Mark Abraham
Hi,

That shouldn't happen. Are you able to install and run other software
similarly?

I note that your gmx_mpi binary is quite old, and your cmake command
misspelled -DCMAKE_INSTALL_PRIFFIX (it should be -DCMAKE_INSTALL_PREFIX), so
perhaps you are simply not doing what you think/say you are doing.

Mark
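(Two quick checks that usually pin down a "cannot execute binary file"
error; these are standard shell tools, not from Mark's reply:)

file /home/linux/installation/gromacs/bin/gmx_mpi   # reports the binary's architecture, or 'data' if it is truncated/corrupt
uname -m                                            # this machine's architecture; the two must be compatible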

On Thu, Dec 1, 2016 at 10:56 AM Andrew Bostick 
wrote:

> [quoted text of Andrew's original message trimmed; see the full post above]


Re: [gmx-users] gmx insert-molecules not adding the required number of molecules

2016-12-01 Thread Mark Abraham
Hi,

Adding a big thing, then lots of small things randomly, and then lots of
medium-sized things randomly isn't going to work very well to give you
something approximately close-packed at the end. You're just not likely to
end up with exactly enough holes suitable for things of medium size. So work
from biggest to smallest. You may find you want to play with the radii of
urea atoms to get the right number to insert (into holes nominally too small
for them), and then use very gentle equilibration to start off with.

Mark
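(A sketch of that biggest-to-smallest order, not part of Mark's reply; file
names, -try and the -scale value are illustrative, and -scale assumes a gmx
insert-molecules recent enough to have it; otherwise reduce the radii via a
local vdwradii.dat:)

# biggest first: protein already in the box, then the osmolyte, then urea
gmx insert-molecules -f protein_box.gro -ci osmolyte.gro -nmol 340 \
    -try 1000 -o prot_osm.gro
# shrink the assumed van der Waals radii so urea fits the remaining holes,
# then equilibrate very gently afterwards
gmx insert-molecules -f prot_osm.gro -ci urea.gro -nmol 680 \
    -try 1000 -scale 0.45 -o prot_osm_urea.gro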

On Wed, Nov 30, 2016 at 1:56 PM soumadwip ghosh 
wrote:

> Hi all,
>  I have recently been looking at the dynamics of a protein in the
> presence of urea + osmolyte. The protein is the chicken villin headpiece
> subdomain (HP-36), inside a cubic box of 141 nm3 volume. 8 M urea in this
> box volume corresponds to 680 urea molecules, and I want the osmolyte at
> half that molarity, so the required number of osmolyte molecules is 340.
> Previously I worked with a 1:5 molar ratio of urea:osmolyte and it
> proceeded without any difficulty. This time what I encounter is:
>
> 1. 680 number of urea molecules goes inside the protein box using gmx
> insert-molecules command without any difficulty.
>
> 2. When I call the same command for inserting my osmolyte (340 in number)
> it can only insert 220 molecules even with the -nmol x, -try x option.
>
> 3. If I try to insert an arbitrary number of osmolytes (say 500), gmx
> insert-molecules takes a long time before printing that the execution has
> been 'killed', and it does not generate any output file.
>
> 4. If I try a bigger box, then the required number of species increases and
> I face the same problem with my osmolyte even though urea addition is
> successful.
>
> What is happening here? Is there a way to play around with the gmx
> insert-molecules/genbox command to obtain the desired number of molecules
> inside a box of known dimensions in which something already resides (in my
> case protein + 8 M urea)? I shall be obliged if someone helps me out with this.
>
> P.S: I am using GROMACS 5.0.4 in combination with OPLS-AA force field.
>
> Thanks and regards,
> Soumadwip Ghosh
> Research Associate
> Indian Institute of Technology Bombay
> India


Re: [gmx-users] PME

2016-12-01 Thread Mark Abraham
Hi,

Not directly, and anyway the PME grid only holds the long-ranged component
of anything. You could use mdrun -rerun to compute the potential by adding a
new test particle to the system in its own energy group and observing its
energy as you move its position.

Mark
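(A sketch of that rerun idea, not part of Mark's reply; all names are
illustrative. The rerun trajectory must actually contain the test particle,
placed at the positions you want to probe, and with PME the per-group terms
from gmx energy cover only the short-ranged part, per the caveat above:)

# 1. add the probe to the configuration and topology, and give it its own
#    energy group in the .mdp used for the rerun:  energygrps = PROBE System
# 2. rebuild, rerun the trajectory that includes the probe, then read off
#    the probe's interaction energies:
gmx grompp -f rerun.mdp -c conf_probe.gro -p topol_probe.top -o rerun.tpr
gmx mdrun -s rerun.tpr -rerun traj_probe.trr -deffnm rerun
gmx energy -f rerun.edr    # pick the Coul-SR:PROBE-... terms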

On Wed, Nov 30, 2016 at 7:11 PM Moradzadeh, Alireza 
wrote:

> Dear GROMACS Users,
>
> I am looking to calculate the electric potential for confined fluids. I
> guess the PME solver in GROMACS should have electrostatic potential values
> on its grid, but I am not sure how to access this information. Can you let
> me know if there is a way to access it?
>
> Thanks,
> Alireza


Re: [gmx-users] Order Parameter with an all-atom force field

2016-12-01 Thread Justin Lemkul



On 11/29/16 7:50 PM, Mohsen Ramezanpour wrote:

Dear Gromacs users,

I have a question on S_cd order parameters:

Using g_order for a lipid like DSPC, I can get values for C2 to C17,
although the DS tail has 18 carbons.

I understand why this happens with a united-atom force field like
Gromos54A7, given the algorithm used for united atoms. However, when I use
Charmm36FF, the algorithm for calculating S_cd changes (based on the
following article):

*Vermeer, L. S., De Groot, B. L., Réat, V., Milon, A., & Czaplicki, J.
(2007). Acyl chain order parameter profiles in phospholipid bilayers:
computation from molecular dynamics simulations and comparison with 2H NMR
experiments. European Biophysics Journal, 36(8), 919-931.*

I was wondering: does g_order use the same algorithm for all-atom force
fields as it uses for united-atom ones?


Yes.  Order parameters are calculated from the positions of the carbon atoms, 
which is why you can't get terminal values.
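(For reference, a sketch of the usual invocation, not part of Justin's
reply; file names are illustrative, and the index file is assumed to hold
one group per consecutive tail carbon, e.g. C2..C18 of one chain:)

# with N carbon groups, gmx order reports N-2 S_CD values: the two chain
# ends are skipped because each value needs both neighbouring carbons
gmx order -f traj.xtc -s topol.tpr -n sn1_carbons.ndx -d z -od deuter_sn1.xvg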



If NO, how can one tell g_order to use another algorithm that matches the
all-atom force field?

Besides, in some articles using NAMD, I have seen authors report values for
C18 too when using Charmm36FF.



They probably used different analysis software and different methods.

-Justin


Thanks in advance for your comments.

Cheers
Mohsen







--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==

Re: [gmx-users] (no subject)

2016-12-01 Thread Justin Lemkul



On 12/1/16 7:24 AM, Gregory Man Kai Poon wrote:

Hi all:


I am trying to use hbond in Gromacs 5.1.4 to analyze water-mediated hydrogen
bonding between two objects simulated in water.  The GROMACS manual discusses
this in a Figure (9.8) - "water insertion". There is nothing in the online
documentation as to how this should be done except a single mention with the
-hbm option, which I tried.  It generated .xpm files such as the one
attached.  They open, as far as I can tell, as a very vertically compressed
plot that I can make nothing out of.  Attempts to convert them to eps using
gmx xpm2ps give similar results.



You can use an .m2p file to adjust the sizes of the x- and y-axes to make it
legible.  The real value is in the data within, though.  You have to map the
actual participating groups (the output of -hbn) against the individual time
series in the .xpm from -hbm.
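(A sketch of that resizing, not part of Justin's reply; xbox/ybox are keys
in the ps.m2p shipped in GROMACS' share/top directory, and the values here
are illustrative:)

# copy the default ps.m2p to my.m2p, then enlarge e.g.
#   ybox = 4.0     (height of each H-bond row)
#   xbox = 0.2     (width of each frame column)
# and render the map with it:
gmx xpm2ps -f hbmap.xpm -di my.m2p -o hbmap.eps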




So my questions are two-fold: 1) What is happening with the .xpm files?  2)
Am I using the correct hbond option to enumerate water-mediated hydrogen
bonds?



To actually analyze water-mediated H-bonds requires additional work that GROMACS 
tools don't do.  You need to analyze water H-bonds with the two groups of 
interest separately, then determine if the same water is H-bonded to a moiety in 
both of those groups in the same frame.  This is where tracing the H-bonds in 
the .xpm file is useful.
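(A sketch of that two-pass bookkeeping, not part of Justin's reply; group,
index and file names are illustrative:)

# pass 1: H-bonds of the first group with water; pass 2: the second group
gmx hbond -f traj.xtc -s topol.tpr -n index.ndx -num A_w.xvg \
    -hbn A_w.ndx -hbm A_w.xpm    # select: GroupA, SOL
gmx hbond -f traj.xtc -s topol.tpr -n index.ndx -num B_w.xvg \
    -hbn B_w.ndx -hbm B_w.xpm    # select: GroupB, SOL
# a water bridge exists in a frame when the same water (atom numbers taken
# from the -hbn index files) has a bond present in that frame's column of
# both .xpm maps; that cross-referencing is the part GROMACS does not do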


-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==