Re: [gmx-users] creating topology for ligand

2018-03-02 Thread neelam wafa
ok thanks.

On 2 Mar 2018 22:25, "Justin Lemkul"  wrote:

>
>
> On 3/2/18 12:18 PM, neelam wafa wrote:
>
>> Thanks Justin
>>
>> Can you please suggest me any article or reading that can help me to
>> understand the factors to be considered while choosing a force field or to
>> parameterize the ligand?
>>
>
> I could suggest hundreds. Start with even a basic Google search for MD
> force field review papers, references in the GROMACS manual for each of the
> force fields, etc.
>
> -Justin
>
> On 1 Mar 2018 18:14, "Justin Lemkul"  wrote:
>>
>>
>>> On 3/1/18 8:03 AM, neelam wafa wrote:
>>>
>>> Dear gmx users

 I am trying to run a protein ligand simulation. How can I create a
 topology for the ligand? The PRODRG topology is not reliable, so which
 server or software can be used? Can a topology be created with tLEaP
 from the AmberTools package for GROMACS?

 The method you use depends on the parent force field you've chosen. The
>>> PRODRG and ATB servers are for GROMOS force fields, GAFF methods (RED
>>> server, antechamber, etc) are for AMBER. ParamChem/CGenFF are for CHARMM.
>>> You can't mix and match. You need to do your homework here to make sure
>>> that the force field you've chosen for your protein is an appropriate
>>> model, as well as whether or not you can feasibly parametrize your ligand
>>> (not an easy task and generally not advisable for a beginner, because you
>>> should *never* trust a black box and always validate the topology in a
>>> manner consistent with the parent force field).
>>>
>>> secondly how to select the box type? As I am new to simulation I can't fix
>>> the problem. Please also guide me on what factors should be considered to
>>> select the water model?

 The water model is part of the force field; each has been parametrized
>>> with a particular model (though some do show some insensitivity if
>>> changed,
>>> but again this is your homework to do before ever thinking about doing
>>> anything in GROMACS or any other MD engine).
>>>
>>> The box shape has no effect as long as you set up a suitable box-solute
>>> distance that satisfies the minimum image convention. You can use
>>> non-cubic
>>> boxes to speed up the simulation if it's just a simple protein (complex)
>>> in
>>> water.
>>>
>>> -Justin
>>>
>>> --
>>> ==
>>>
>>> Justin A. Lemkul, Ph.D.
>>> Assistant Professor
>>> Virginia Tech Department of Biochemistry
>>>
>>> 303 Engel Hall
>>> 340 West Campus Dr.
>>> Blacksburg, VA 24061
>>>
>>> jalem...@vt.edu | (540) 231-3129
>>> http://www.biochem.vt.edu/people/faculty/JustinLemkul.html
>>>
>>> ==
>>>
> --
> ==
>
> Justin A. Lemkul, Ph.D.
> Assistant Professor
> Virginia Tech Department of Biochemistry
>
> 303 Engel Hall
> 340 West Campus Dr.
> Blacksburg, VA 24061
>
> jalem...@vt.edu | (540) 231-3129
> http://www.biochem.vt.edu/people/faculty/JustinLemkul.html
>
> ==
>


Re: [gmx-users] atom naming needs to be considered.

2018-03-02 Thread neelam wafa
I am trying an MD simulation of a protein kinase and ligand to check the
stability of the protein-ligand complex formed.

On 2 Mar 2018 22:22, "Mark Abraham"  wrote:

> Hi,
>
> We can't tell there's a problem because we know nothing about what you are
> trying to do except it involves pdb2gmx
>
> Mark
>
> On Fri, Mar 2, 2018, 18:15 neelam wafa  wrote:
>
> > Is there any way to fix this problem with the start and end terminals?
> >
> > On 2 Mar 2018 22:12, "neelam wafa"  wrote:
> >
> > > Thanks  dear
> > >
> > > So it means I can continue with the rest of the process. Will it not
> > > affect the results?
> > >
> > > On 1 Mar 2018 19:53, "Justin Lemkul"  wrote:
> > >
> > >
> > >
> > > On 3/1/18 9:35 AM, neelam wafa wrote:
> > >
> > >> Hi!
> > >> Dear all I am running pdb2gmx command to create the protein topology
> but
> > >> getting this error. please guide me how to fix this problem.
> > >>
> > >> WARNING: WARNING: Residue 1 named TRP of a molecule in the input file
> > was
> > >> mapped
> > >> to an entry in the topology database, but the atom H used in
> > >> an interaction of type angle in that entry is not found in the
> > >> input file. Perhaps your atom and/or residue naming needs to be
> > >> fixed.
> > >>
> > >>
> > >>
> > >> WARNING: WARNING: Residue 264 named ASN of a molecule in the input
> file
> > >> was
> > >> mapped
> > >> to an entry in the topology database, but the atom O used in
> > >> an interaction of type angle in that entry is not found in the
> > >> input file. Perhaps your atom and/or residue naming needs to be
> > >> fixed.
> > >>
> > >
> > > Neither of those is an error (warnings, notes, and errors are all
> > > different in GROMACS), and correspond to normal output when patching N-
> > and
> > > C-termini due to the deletion of atoms.
> > >
> > > -Justin
> > >
> > > --
> > > ==
> > >
> > > Justin A. Lemkul, Ph.D.
> > > Assistant Professor
> > > Virginia Tech Department of Biochemistry
> > >
> > > 303 Engel Hall
> > > 340 West Campus Dr.
> > > Blacksburg, VA 24061
> > >
> > > jalem...@vt.edu | (540) 231-3129
> > > http://www.biochem.vt.edu/people/faculty/JustinLemkul.html
> > >
> > > ==
> > >


Re: [gmx-users] 2018: large performance variations

2018-03-02 Thread Szilárd Páll
BTW, we have considered adding a warmup delay to the tuner; would you be
willing to help test it (or even contribute such a feature)?

--
Szilárd

On Fri, Mar 2, 2018 at 7:28 PM, Szilárd Páll  wrote:

> Hi Michael,
>
> Can you post full logs, please? This is likely related to a known issue
> where CPU cores (and in some cases GPUs too) may take longer to clock up
> and get a stable performance than the time the auto-tuner takes to do a few
> cycles of measurements.
>
> Unfortunately we do not have a good solution for this, but what you can do
> to make runs more consistent is:
> - try "warming up" the CPU/GPU before production runs (e.g. stress -c or
> just a dummy 30 sec mdrun run)
> - repeat the benchmark a few times, see which cutoff / grid setting is
> best, set that in the mdp options and run with -notunepme
>
> Of course the latter may be too tedious if you have a variety of
> systems/inputs to run.
>
> Regarding tune_pme: that issue is related to resetting timings too early
> (for -resetstep see mdrun -h -hidden); not sure if we have a fix, but
> either way tune_pme is more suited for parallel runs' separate PME rank
> count tuning.
>
> Cheers,
>
> --
> Szilárd
>
> On Thu, Mar 1, 2018 at 7:11 PM, Michael Brunsteiner 
> wrote:
>
>> Hi, I ran a few MD runs with identical input files (the SAME tpr file. mdp
>> included below) on the same computer with gmx 2018 and observed rather
>> large performance variations (~50%) as in:
>>
>> grep Performance */mcz1.log
>> 7/mcz1.log:Performance:   98.510  0.244
>> 7d/mcz1.log:Performance:  140.733  0.171
>> 7e/mcz1.log:Performance:  115.586  0.208
>> 7f/mcz1.log:Performance:  139.197  0.172
>>
>> turns out the load balancing effort that is done at the beginning gives
>> quite different results:
>> grep "optimal pme grid" */mcz1.log
>> 7/mcz1.log:  optimal pme grid 32 32 28, coulomb cutoff 1.394
>> 7d/mcz1.log:  optimal pme grid 36 36 32, coulomb cutoff 1.239
>> 7e/mcz1.log:  optimal pme grid 25 24 24, coulomb cutoff 1.784
>> 7f/mcz1.log:  optimal pme grid 40 36 32, coulomb cutoff 1.200
>>
>> next i tried tune_pme as in:
>>
>> gmx tune_pme -mdrun 'gmx mdrun' -nt 6 -ntmpi 1 -ntomp 6 -pin on -pinoffset 0 -s mcz1.tpr -pmefft cpu -pinstride 1 -r 10
>>
>> which didn't work ... in some log file it says:
>>
>> Fatal error:
>> PME tuning was still active when attempting to reset mdrun counters at step
>> 1500. Try resetting counters later in the run, e.g. with gmx mdrun -resetstep.
>>
>> i found no documentation regarding "-resetstep"  ...
>>
>> i could of course optimize the PME grid manually but since i plan to run
>> a large number of jobs with different systems and sizes this would be a lot
>> of work and if possible i'd like to avoid that.
>> is there any way to ask gmx to perform more tests at the beginning of the
>> run when optimizing the PME grid? or is using "-notunepme -dlb yes" an
>> option, and does the latter require a concurrent optimization of the domain
>> decomposition, if so how is this done?
>> thanks for any help!
>> michael
>>
>>
>> mdp:
>> integrator= md
>> dt= 0.001
>> nsteps= 50
>> comm-grps = System
>> ;
>> nstxout   = 0
>> nstvout   = 0
>> nstfout   = 0
>> nstlog= 1000
>> nstenergy = 1000
>> ;
>> nstlist  = 40
>> ns_type  = grid
>> pbc  = xyz
>> rlist= 1.2
>> cutoff-scheme= Verlet
>> ;
>> coulombtype  = PME
>> rcoulomb = 1.2
>> vdw_type = cut-off
>> rvdw = 1.2
>> ;
>> constraints  = none
>> ;
>> tcoupl = v-rescale
>> tau-t  = 0.1
>> ref-t  = 300
>> tc-grps= System
>> ;
>> pcoupl = berendsen
>> pcoupltype = anisotropic
>> tau-p  = 2.0
>> compressibility= 4.5e-5 4.5e-5 4.5e-5 0 0 0
>> ref-p  = 1 1 1 0 0 0
>> ;
>> annealing  = single
>> annealing-npoints  = 2
>> annealing-time = 0 500
>> annealing-temp = 500 480
>>
>>

Re: [gmx-users] 2018: large performance variations

2018-03-02 Thread Szilárd Páll
Hi Michael,

Can you post full logs, please? This is likely related to a known issue
where CPU cores (and in some cases GPUs too) may take longer to clock up
and get a stable performance than the time the auto-tuner takes to do a few
cycles of measurements.

Unfortunately we do not have a good solution for this, but what you can do
to make runs more consistent is:
- try "warming up" the CPU/GPU before production runs (e.g. stress -c or
just a dummy 30 sec mdrun run)
- repeat the benchmark a few times, see which cutoff / grid setting is
best, set that in the mdp options and run with -notunepme

Of course the latter may be too tedious if you have a variety of
systems/inputs to run.
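
As a rough sketch, that manual workflow could look like this (the warm-up
length, the "bench" file names and production.tpr are just placeholders;
mcz1.tpr and the grep pattern are taken from your mail):

# warm up the CPU/GPU, then run a few short benchmark repeats
stress -c 6 -t 30        # or any dummy ~30 s mdrun run
for i in 1 2 3; do
    gmx mdrun -s mcz1.tpr -deffnm bench$i -nsteps 50000 -resethway
done
grep "optimal pme grid" bench*.log

# put the best cutoff/grid into the mdp (rcoulomb, fourier-spacing or
# fourier-nx/ny/nz), regenerate the tpr with grompp, then run with
# tuning disabled:
gmx mdrun -s production.tpr -notunepme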

Regarding tune_pme: that issue is related to resetting timings too early
(for -resetstep see mdrun -h -hidden); not sure if we have a fix, but
either way tune_pme is more suited for parallel runs' separate PME rank
count tuning.

Cheers,

--
Szilárd

On Thu, Mar 1, 2018 at 7:11 PM, Michael Brunsteiner 
wrote:

> Hi, I ran a few MD runs with identical input files (the SAME tpr file. mdp
> included below) on the same computer with gmx 2018 and observed rather
> large performance variations (~50%) as in:
>
> grep Performance */mcz1.log
> 7/mcz1.log:Performance:   98.510  0.244
> 7d/mcz1.log:Performance:  140.733  0.171
> 7e/mcz1.log:Performance:  115.586  0.208
> 7f/mcz1.log:Performance:  139.197  0.172
>
> turns out the load balancing effort that is done at the beginning gives
> quite different results:
> grep "optimal pme grid" */mcz1.log
> 7/mcz1.log:  optimal pme grid 32 32 28, coulomb cutoff 1.394
> 7d/mcz1.log:  optimal pme grid 36 36 32, coulomb cutoff 1.239
> 7e/mcz1.log:  optimal pme grid 25 24 24, coulomb cutoff 1.784
> 7f/mcz1.log:  optimal pme grid 40 36 32, coulomb cutoff 1.200
>
> next i tried tune_pme as in:
>
> gmx tune_pme -mdrun 'gmx mdrun' -nt 6 -ntmpi 1 -ntomp 6 -pin on -pinoffset 0 -s mcz1.tpr -pmefft cpu -pinstride 1 -r 10
>
> which didn't work ... in some log file it says:
>
> Fatal error:
> PME tuning was still active when attempting to reset mdrun counters at step
> 1500. Try resetting counters later in the run, e.g. with gmx mdrun -resetstep.
>
> i found no documentation regarding "-resetstep"  ...
>
> i could of course optimize the PME grid manually but since i plan to run
> a large number of jobs with different systems and sizes this would be a lot
> of work and if possible i'd like to avoid that.
> is there any way to ask gmx to perform more tests at the beginning of the
> run when optimizing the PME grid? or is using "-notunepme -dlb yes" an
> option, and does the latter require a concurrent optimization of the domain
> decomposition, if so how is this done?
> thanks for any help!
> michael
>
>
> mdp:
> integrator= md
> dt= 0.001
> nsteps= 50
> comm-grps = System
> ;
> nstxout   = 0
> nstvout   = 0
> nstfout   = 0
> nstlog= 1000
> nstenergy = 1000
> ;
> nstlist  = 40
> ns_type  = grid
> pbc  = xyz
> rlist= 1.2
> cutoff-scheme= Verlet
> ;
> coulombtype  = PME
> rcoulomb = 1.2
> vdw_type = cut-off
> rvdw = 1.2
> ;
> constraints  = none
> ;
> tcoupl = v-rescale
> tau-t  = 0.1
> ref-t  = 300
> tc-grps= System
> ;
> pcoupl = berendsen
> pcoupltype = anisotropic
> tau-p  = 2.0
> compressibility= 4.5e-5 4.5e-5 4.5e-5 0 0 0
> ref-p  = 1 1 1 0 0 0
> ;
> annealing  = single
> annealing-npoints  = 2
> annealing-time = 0 500
> annealing-temp = 500 480
>
>

Re: [gmx-users] CMAP entries for D-residues with GROMACS (Justin Lemkul)

2018-03-02 Thread ABEL Stephane
OK I see thank you, Justin

Bye

On 3/2/18 10:50 AM, ABEL Stephane wrote:
> Dear all,
>
> I am interested in simulating a system with gramicidin A that contains D-AAs
> (D-LEU and D-VAL), and I am wondering if the CMAP entries in the cmap.itp
> file (charmm36-jul2017.ff) are used for these types of AA? I am asking this
> because I see that the CHARMM force field library contains a file
> (toppar_all36_prot_mod_d_aminoacids.str) where the CMAP parameters seem to be
> redefined.

The parameters are not "redefined," they are given for the D-amino
acids, which have a different C-alpha type (CTD1 instead of CT1). The
latest CHARMM36 port supports D-amino acids, with the exception of the
.hdb file, which does not have entries for them.

-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Virginia Tech Department of Biochemistry

303 Engel Hall
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.biochem.vt.edu/people/faculty/JustinLemkul.html

==



Re: [gmx-users] stride

2018-03-02 Thread Szilárd Páll
Indeed, if the two jobs do not know of each other, both will pin to the same
set of threads -- the default _should_ be 0,1,2,3,4,5 because it assumes
that you want to maximize performance with 6 threads only, and to do so it
pins one thread/core (i.e. uses stride 2).

When sharing a node among two runs, you will get best performance if you
use the first half of the cores for one of the runs, the rest for the
other, i.e.
-pinoffset 0 -pinstride 1 => will use [   0   6] [   1   7] [   2   8]
-pinoffset 6 -pinstride 1 => will use [   3   9] [   4  10] [   5  11]
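
Applied to the two commands from your original mail, that would look like
(just a sketch, with the same output redirection as before):

gmx mdrun -deffnm name1 -nt 6 -ntmpi 1 -ntomp 6 -pin on -pinoffset 0 -pinstride 1 > er1 2>&1 &
gmx mdrun -deffnm name2 -nt 6 -ntmpi 1 -ntomp 6 -pin on -pinoffset 6 -pinstride 1 > er2 2>&1 &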


--
Szilárd

On Fri, Mar 2, 2018 at 3:31 PM, Michael Brunsteiner 
wrote:

>  hi
> about my hardware gmx has to say:
>   Hardware topology: Basic
> Sockets, cores, and logical processors:
>   Socket  0: [   0   6] [   1   7] [   2   8] [   3   9] [   4  10]
> [   5  11]
>
> if i want to run two gmx jobs simultaneously on this one node, i usually do
> something like:
> prompt> gmx mdrun -deffnm name1 -nt 6 -pin on -ntmpi 1 -ntomp 6 > er1 2>&1 &
> prompt> gmx mdrun -deffnm name2 -nt 6 -pin on -ntmpi 1 -ntomp 6 > er2 2>&1 &
>
> given the above numbers, what are the best choices for pinoffset and
> pinstride? i might be too dumb, but what the doc and the gmx webpage say
> about these options is not clear to me ..
> if i let gmx decide, it does not necessarily seem to make the best choice
> in terms of the resulting performance, which is not surprising as neither of
> the two jobs knows about the presence of the other one...
>
> thanks!
> michael
>
>
>
>
>
>
>
>
> === Why be happy when you could be normal?

Re: [gmx-users] cpu/gpu utilization

2018-03-02 Thread Szilárd Páll
On Fri, Mar 2, 2018 at 1:57 PM, Mahmood Naderan 
wrote:

> Sorry for the confusion. My fault...
> I saw my previous post and found that I missed something. In fact, I
> couldn't run "-pme gpu".
>
> So, once again, I ran all the commands and uploaded the log files
>
>
> gmx mdrun -nobackup -nb cpu -pme cpu -deffnm md_0_1
> https://pastebin.com/RNT4XJy8
>
>
> gmx mdrun -nobackup -nb cpu -pme gpu -deffnm md_0_1
> https://pastebin.com/7BQn8R7g
> This run shows an error on the screen which is not shown in the log file.
> So please also see https://pastebin.com/KHg6FkBz


That's expected; offloading only PME (without the nonbondeds) is not
supported. The different offload modes supported are:
- nonbonded offload
- nonbonded + full PME offload
- nonbonded + PME mixed mode offload (FFTs run on the CPU)
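
In terms of the commands you ran, those modes map to (sketch):

gmx mdrun -nobackup -nb gpu -pme cpu -deffnm md_0_1              # nonbonded offload only
gmx mdrun -nobackup -nb gpu -pme gpu -deffnm md_0_1              # nonbonded + full PME offload
gmx mdrun -nobackup -nb gpu -pme gpu -pmefft cpu -deffnm md_0_1  # mixed mode, PME FFTs on the CPU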



>
>
>
> gmx mdrun -nobackup -nb gpu -pme cpu -deffnm md_0_1
> https://pastebin.com/YXYj23tB
>
>
>
> gmx mdrun -nobackup -nb gpu -pme gpu -deffnm md_0_1
> https://pastebin.com/P3X4mE5y
>
>
> offloadable
>
>
> From the results, it seems that running the pme on the cpu is better than
> gpu. The fastest command here is -nb gpu -pme cpu
>

Right, same as before, except that it looks like this time it is ~5% slower
(likely the auto-tuner did not manage to switch to the ideal setting).


>
>
> Still I have the question that while GPU is utilized, the CPU is also
> busy. So, I was thinking that the source code uses cudaDeviceSynchronize()
> where the CPU enters a busy loop.
>

Yes, CPU and GPU run concurrently and work on independent tasks, when the
CPU is done it has to wait for the GPU before it can proceed with
constraints/integration.

To get a better overview, please read some of the GROMACS papers (
http://www.gromacs.org/Gromacs_papers) or tldr see https://goo.gl/AGv6hy
(around slides 12-15).

Cheers,
--
Szilárd


>
>
>
> Regards,
> Mahmood
>
>
>
>
>
>
> On Friday, March 2, 2018, 3:24:41 PM GMT+3:30, Szilárd Páll <
> pall.szil...@gmail.com> wrote:
>
>
>
>
>
> Once again, full log files, please, not partial cut-and-paste, please.
>
> Also, you misread something because your previous logs show:
> -nb cpu -pme gpu: 56.4 ns/day
> -nb cpu -pme gpu -pmefft cpu 64.6 ns/day
> -nb cpu -pme cpu 67.5 ns/day
>
> So both mixed mode PME and PME on CPU are faster, the latter slightly
> faster than the former.
>
> This is about as much as you can do, I think. Your GPU is just too slow to
> get more performance out of it and the runs are GPU-bound. You might be
> able to get a bit more performance with some tweaks (compile mdrun with
> AVX2_256, use a newer fftw, use a newer gcc), but expect marginal gains.
>
> Cheers,
>
> --
> Szilárd
>
>
>

Re: [gmx-users] creating topology for ligand

2018-03-02 Thread Justin Lemkul



On 3/2/18 12:18 PM, neelam wafa wrote:

Thanks Justin

Can you please suggest me any article or reading that can help me to
understand the factors to be considered while choosing a force field or to
parameterize the ligand?


I could suggest hundreds. Start with even a basic Google search for MD 
force field review papers, references in the GROMACS manual for each of 
the force fields, etc.


-Justin


On 1 Mar 2018 18:14, "Justin Lemkul"  wrote:



On 3/1/18 8:03 AM, neelam wafa wrote:


Dear gmx users

I am trying to run a protein ligand simulation. How can I create a topology
for the ligand? The PRODRG topology is not reliable, so which server or
software can be used? Can a topology be created with tLEaP from the
AmberTools package for GROMACS?


The method you use depends on the parent force field you've chosen. The
PRODRG and ATB servers are for GROMOS force fields, GAFF methods (RED
server, antechamber, etc) are for AMBER. ParamChem/CGenFF are for CHARMM.
You can't mix and match. You need to do your homework here to make sure
that the force field you've chosen for your protein is an appropriate
model, as well as whether or not you can feasibly parametrize your ligand
(not an easy task and generally not advisable for a beginner, because you
should *never* trust a black box and always validate the topology in a
manner consistent with the parent force field).
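
For the AMBER/GAFF route the original question asks about, a typical
antechamber/tLEaP sequence is sketched below (ligand.mol2 and the net charge
of 0 are placeholders; the resulting prmtop/inpcrd still have to be converted
to GROMACS format, e.g. with acpype or ParmEd, and then validated as
described above):

antechamber -i ligand.mol2 -fi mol2 -o ligand_gaff.mol2 -fo mol2 -c bcc -nc 0
parmchk2 -i ligand_gaff.mol2 -f mol2 -o ligand.frcmod
cat > tleap.in <<'EOF'
source leaprc.gaff
LIG = loadmol2 ligand_gaff.mol2
loadamberparams ligand.frcmod
saveamberparm LIG ligand.prmtop ligand.inpcrd
quit
EOF
tleap -f tleap.in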

secondly how to select the box type? As I am new to simulation I can't fix
the problem. Please also guide me on what factors should be considered to
select the water model?


The water model is part of the force field; each has been parametrized
with a particular model (though some do show some insensitivity if changed,
but again this is your homework to do before ever thinking about doing
anything in GROMACS or any other MD engine).

The box shape has no effect as long as you set up a suitable box-solute
distance that satisfies the minimum image convention. You can use non-cubic
boxes to speed up the simulation if it's just a simple protein (complex) in
water.

-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Virginia Tech Department of Biochemistry

303 Engel Hall
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.biochem.vt.edu/people/faculty/JustinLemkul.html

==




--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Virginia Tech Department of Biochemistry

303 Engel Hall
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.biochem.vt.edu/people/faculty/JustinLemkul.html

==



Re: [gmx-users] atom naming needs to be considered.

2018-03-02 Thread Mark Abraham
Hi,

We can't tell there's a problem because we know nothing about what you are
trying to do except it involves pdb2gmx

Mark

On Fri, Mar 2, 2018, 18:15 neelam wafa  wrote:

> Is there any way to fix this problem with the start and end terminals?
>
> On 2 Mar 2018 22:12, "neelam wafa"  wrote:
>
> > Thanks  dear
> >
> > So it means I can continue with the rest of the process. Will it not
> > affect the results?
> >
> > On 1 Mar 2018 19:53, "Justin Lemkul"  wrote:
> >
> >
> >
> > On 3/1/18 9:35 AM, neelam wafa wrote:
> >
> >> Hi!
> >> Dear all I am running pdb2gmx command to create the protein topology but
> >> getting this error. please guide me how to fix this problem.
> >>
> >> WARNING: WARNING: Residue 1 named TRP of a molecule in the input file
> was
> >> mapped
> >> to an entry in the topology database, but the atom H used in
> >> an interaction of type angle in that entry is not found in the
> >> input file. Perhaps your atom and/or residue naming needs to be
> >> fixed.
> >>
> >>
> >>
> >> WARNING: WARNING: Residue 264 named ASN of a molecule in the input file
> >> was
> >> mapped
> >> to an entry in the topology database, but the atom O used in
> >> an interaction of type angle in that entry is not found in the
> >> input file. Perhaps your atom and/or residue naming needs to be
> >> fixed.
> >>
> >
> > Neither of those is an error (warnings, notes, and errors are all
> > different in GROMACS), and correspond to normal output when patching N-
> and
> > C-termini due to the deletion of atoms.
> >
> > -Justin
> >
> > --
> > ==
> >
> > Justin A. Lemkul, Ph.D.
> > Assistant Professor
> > Virginia Tech Department of Biochemistry
> >
> > 303 Engel Hall
> > 340 West Campus Dr.
> > Blacksburg, VA 24061
> >
> > jalem...@vt.edu | (540) 231-3129
> > http://www.biochem.vt.edu/people/faculty/JustinLemkul.html
> >
> > ==
> >


Re: [gmx-users] creating topology for ligand

2018-03-02 Thread neelam wafa
Thanks Justin

Can you please suggest me any article or reading that can help me to
understand the factors to be considered while choosing a force field or to
parameterize the ligand?

On 1 Mar 2018 18:14, "Justin Lemkul"  wrote:

>
>
> On 3/1/18 8:03 AM, neelam wafa wrote:
>
>> Dear gmx users
>>
>> I am trying to run a protein ligand simulation. How can I create a topology
>> for the ligand? The PRODRG topology is not reliable, so which server or
>> software can be used? Can a topology be created with tLEaP from the
>> AmberTools package for GROMACS?
>>
>
> The method you use depends on the parent force field you've chosen. The
> PRODRG and ATB servers are for GROMOS force fields, GAFF methods (RED
> server, antechamber, etc) are for AMBER. ParamChem/CGenFF are for CHARMM.
> You can't mix and match. You need to do your homework here to make sure
> that the force field you've chosen for your protein is an appropriate
> model, as well as whether or not you can feasibly parametrize your ligand
> (not an easy task and generally not advisable for a beginner, because you
> should *never* trust a black box and always validate the topology in a
> manner consistent with the parent force field).
>
>> secondly how to select the box type? As I am new to simulation I can't fix
>> the problem. Please also guide me on what factors should be considered to
>> select the water model?
>>
>
> The water model is part of the force field; each has been parametrized
> with a particular model (though some do show some insensitivity if changed,
> but again this is your homework to do before ever thinking about doing
> anything in GROMACS or any other MD engine).
>
> The box shape has no effect as long as you set up a suitable box-solute
> distance that satisfies the minimum image convention. You can use non-cubic
> boxes to speed up the simulation if it's just a simple protein (complex) in
> water.
>
> -Justin
>
> --
> ==
>
> Justin A. Lemkul, Ph.D.
> Assistant Professor
> Virginia Tech Department of Biochemistry
>
> 303 Engel Hall
> 340 West Campus Dr.
> Blacksburg, VA 24061
>
> jalem...@vt.edu | (540) 231-3129
> http://www.biochem.vt.edu/people/faculty/JustinLemkul.html
>
> ==
>


Re: [gmx-users] atom naming needs to be considered.

2018-03-02 Thread neelam wafa
Is there any way to fix this problem with the start and end terminals?

On 2 Mar 2018 22:12, "neelam wafa"  wrote:

> Thanks  dear
>
> So it means I can continue with the rest of the process. Will it not
> affect the results?
>
> On 1 Mar 2018 19:53, "Justin Lemkul"  wrote:
>
>
>
> On 3/1/18 9:35 AM, neelam wafa wrote:
>
>> Hi!
>> Dear all I am running pdb2gmx command to create the protein topology but
>> getting this error. please guide me how to fix this problem.
>>
>> WARNING: WARNING: Residue 1 named TRP of a molecule in the input file was
>> mapped
>> to an entry in the topology database, but the atom H used in
>> an interaction of type angle in that entry is not found in the
>> input file. Perhaps your atom and/or residue naming needs to be
>> fixed.
>>
>>
>>
>> WARNING: WARNING: Residue 264 named ASN of a molecule in the input file
>> was
>> mapped
>> to an entry in the topology database, but the atom O used in
>> an interaction of type angle in that entry is not found in the
>> input file. Perhaps your atom and/or residue naming needs to be
>> fixed.
>>
>
> Neither of those is an error (warnings, notes, and errors are all
> different in GROMACS), and correspond to normal output when patching N- and
> C-termini due to the deletion of atoms.
>
> -Justin
>
> --
> ==
>
> Justin A. Lemkul, Ph.D.
> Assistant Professor
> Virginia Tech Department of Biochemistry
>
> 303 Engel Hall
> 340 West Campus Dr.
> Blacksburg, VA 24061
>
> jalem...@vt.edu | (540) 231-3129
> http://www.biochem.vt.edu/people/faculty/JustinLemkul.html
>
> ==
>


Re: [gmx-users] atom naming needs to be considered.

2018-03-02 Thread neelam wafa
Thanks  dear

So it means I can continue with the rest of the process. Will it not affect
the results?

On 1 Mar 2018 19:53, "Justin Lemkul"  wrote:



On 3/1/18 9:35 AM, neelam wafa wrote:

> Hi!
> Dear all I am running pdb2gmx command to create the protein topology but
> getting this error. please guide me how to fix this problem.
>
> WARNING: WARNING: Residue 1 named TRP of a molecule in the input file was
> mapped
> to an entry in the topology database, but the atom H used in
> an interaction of type angle in that entry is not found in the
> input file. Perhaps your atom and/or residue naming needs to be
> fixed.
>
>
>
> WARNING: WARNING: Residue 264 named ASN of a molecule in the input file was
> mapped
> to an entry in the topology database, but the atom O used in
> an interaction of type angle in that entry is not found in the
> input file. Perhaps your atom and/or residue naming needs to be
> fixed.
>

Neither of those is an error (warnings, notes, and errors are all different
in GROMACS), and correspond to normal output when patching N- and C-termini
due to the deletion of atoms.

-Justin

-- 
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Virginia Tech Department of Biochemistry

303 Engel Hall
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.biochem.vt.edu/people/faculty/JustinLemkul.html

==



Re: [gmx-users] Parmbsc1 force-field

2018-03-02 Thread Dan Gil
Yes! Thank you so much.

On Fri, Mar 2, 2018 at 11:37 AM, Mark Abraham 
wrote:

> Hi,
>
> This is not an official GROMACS offering, so as always, buyer beware. But
> the forcefield.doc file notes the source of the Na+ parameters as
> https://pubs.acs.org/doi/abs/10.1021/ja00131a018. Does that cover the
> question?
>
> Mark
>
> On Fri, Mar 2, 2018 at 5:20 PM Dan Gil  wrote:
>
> > Hello, update here.
> >
> > I think there is a possibility that the parmbsc1 force-field updated on
> the
> > gromacs website has some incorrect values.
> >
> > In the parmbsc1 paper (https://www.nature.com/articles/nmeth.3658.pdf)
> > they
> > say they use Na+ parameters from this paper (
> > http://aip.scitation.org/doi/pdf/10.1063/1.466363).
> >
> >         sigma (Å)     epsilon (kcal/mol)
> > Na+     2.350         0.1300
> >
> > Here is what I find in the GROMACS force-field:
> >
> >         sigma (nm)    epsilon (kJ/mol)
> > Na+     0.2584        0.4184
> >
> > I would like to directly contact the author, but I have no means at the
> > moment.
> >
> > Best Regards,
> >
> > Dan
> >
> > On Thu, Mar 1, 2018 at 7:00 PM, Dan Gil  wrote:
> >
> > > Hi,
> > >
> > > I am using the parmbsc1 force-field (http://www.gromacs.org/@api/d
> > > eki/files/260/=amber99bsc1.ff.tgz) in GROMACS. I am looking for the
> > > original paper where the Na+ and Cl- ion 12-6 Lennard-Jones are coming
> > > from, but I am having trouble finding them.
> > >
> > > The Amber17 manual suggests that this paper (
> > https://pubs.acs.org/doi/pdf/
> > > 10.1021/ct500918t) is the source for monovalent ions. But, the values
> > > from the GROMACS parmbsc1 force-field (ffnonbonded.itp) does not match
> > the
> > > values from the paper, I think.
> > >
> > > Could you point me to the right direction? Citing the original paper is
> > > something important to me, but I have apparently hit a dead end.
> > >
> > > Best Regards,
> > >
> > > Dan Gil
> > > PhD Student
> > > Department of Chemical and Biomolecular Engineering
> > > Case Western Reserve University
> > >

Re: [gmx-users] Parmbsc1 force-field

2018-03-02 Thread Mark Abraham
Hi,

This is not an official GROMACS offering, so as always, buyer beware. But
the forcefield.doc file notes the source of the Na+ parameters as
https://pubs.acs.org/doi/abs/10.1021/ja00131a018. Does that cover the
question?

Mark

On Fri, Mar 2, 2018 at 5:20 PM Dan Gil  wrote:

> Hello, update here.
>
> I think there is a possibility that the parmbsc1 force-field updated on the
> gromacs website has some incorrect values.
>
> In the parmbsc1 paper (https://www.nature.com/articles/nmeth.3658.pdf)
> they
> say they use Na+ parameters from this paper (
> http://aip.scitation.org/doi/pdf/10.1063/1.466363).
>
>         sigma (Å)     epsilon (kcal/mol)
> Na+     2.350         0.1300
>
> Here is what I find in the GROMACS force-field:
>
>         sigma (nm)    epsilon (kJ/mol)
> Na+     0.2584        0.4184
>
> I would like to directly contact the author, but I have no means at the
> moment.
>
> Best Regards,
>
> Dan
>
> On Thu, Mar 1, 2018 at 7:00 PM, Dan Gil  wrote:
>
> > Hi,
> >
> > I am using the parmbsc1 force-field (http://www.gromacs.org/@api/d
> > eki/files/260/=amber99bsc1.ff.tgz) in GROMACS. I am looking for the
> > original paper where the Na+ and Cl- ion 12-6 Lennard-Jones are coming
> > from, but I am having trouble finding them.
> >
> > The Amber17 manual suggests that this paper (
> https://pubs.acs.org/doi/pdf/
> > 10.1021/ct500918t) is the source for monovalent ions. But, the values
> > from the GROMACS parmbsc1 force-field (ffnonbonded.itp) does not match
> the
> > values from the paper, I think.
> >
> > Could you point me to the right direction? Citing the original paper is
> > something important to me, but I have apparently hit a dead end.
> >
> > Best Regards,
> >
> > Dan Gil
> > PhD Student
> > Department of Chemical and Biomolecular Engineering
> > Case Western Reserve University
> >

Re: [gmx-users] Parmbsc1 force-field

2018-03-02 Thread Dan Gil
Hello, update here.

I think there is a possibility that the parmbsc1 force-field updated on the
gromacs website has some incorrect values.

In the parmbsc1 paper (https://www.nature.com/articles/nmeth.3658.pdf) they
say they use Na+ parameters from this paper (
http://aip.scitation.org/doi/pdf/10.1063/1.466363).

        sigma (Å)     epsilon (kcal/mol)
Na+     2.350         0.1300

Here is what I find in the GROMACS force-field:

        sigma (nm)    epsilon (kJ/mol)
Na+     0.2584        0.4184
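
(A quick unit conversion -- assuming the paper's numbers are sigma/epsilon
and not Rmin/2, which Amber tables often list instead -- is sketched below
and supports the suspicion that the two parameter sets differ:)

awk 'BEGIN { printf "sigma = %.4f nm, epsilon = %.4f kJ/mol\n", 2.350/10.0, 0.1300*4.184 }'
# prints: sigma = 0.2350 nm, epsilon = 0.5439 kJ/mol
# neither matches the 0.2584 nm / 0.4184 kJ/mol found in ffnonbonded.itp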

I would like to directly contact the author, but I have no means at the
moment.

Best Regards,

Dan

On Thu, Mar 1, 2018 at 7:00 PM, Dan Gil  wrote:

> Hi,
>
> I am using the parmbsc1 force-field (http://www.gromacs.org/@api/d
> eki/files/260/=amber99bsc1.ff.tgz) in GROMACS. I am looking for the
> original paper where the Na+ and Cl- ion 12-6 Lennard-Jones are coming
> from, but I am having trouble finding them.
>
> The Amber17 manual suggests that this paper (https://pubs.acs.org/doi/pdf/
> 10.1021/ct500918t) is the source for monovalent ions. But, the values
> from the GROMACS parmbsc1 force-field (ffnonbonded.itp) does not match the
> values from the paper, I think.
>
> Could you point me to the right direction? Citing the original paper is
> something important to me, but I have apparently hit a dead end.
>
> Best Regards,
>
> Dan Gil
> PhD Student
> Department of Chemical and Biomolecular Engineering
> Case Western Reserve University
>

Re: [gmx-users] CMAP entries for D-residues with GROMACS

2018-03-02 Thread Justin Lemkul



On 3/2/18 10:50 AM, ABEL Stephane wrote:

Dear all,

I am interested in simulating a system with gramicidin A that contains D-AAs
(D-LEU and D-VAL), and I am wondering if the CMAP entries in the cmap.itp file
(charmm36-jul2017.ff) are used for these types of AA? I am asking this because
I see that the CHARMM force field library contains a file
(toppar_all36_prot_mod_d_aminoacids.str) where the CMAP parameters seem to be
redefined.


The parameters are not "redefined," they are given for the D-amino 
acids, which have a different C-alpha type (CTD1 instead of CT1). The 
latest CHARMM36 port supports D-amino acids, with the exception of the 
.hdb file, which does not have entries for them.


-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Virginia Tech Department of Biochemistry

303 Engel Hall
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.biochem.vt.edu/people/faculty/JustinLemkul.html

==



[gmx-users] CMAP entries for D-residues with GROMACS

2018-03-02 Thread ABEL Stephane
Dear all, 

I am interested in simulating a system with gramicidin A that contains D-AAs
(D-LEU and D-VAL), and I am wondering if the CMAP entries in the cmap.itp file
(charmm36-jul2017.ff) are used for these types of AA? I am asking this because
I see that the CHARMM force field library contains a file
(toppar_all36_prot_mod_d_aminoacids.str) where the CMAP parameters seem to be
redefined.

Thank you for you response. 

Stéphane


[gmx-users] stride

2018-03-02 Thread Michael Brunsteiner
 hi
about my hardware gmx has to say:
  Hardware topology: Basic
    Sockets, cores, and logical processors:
      Socket  0: [   0   6] [   1   7] [   2   8] [   3   9] [   4  10] [   5  11]

if i want to run two gmx jobs simultaneously on this one node, i usually do
something like:
prompt> gmx mdrun -deffnm name1 -nt 6 -pin on -ntmpi 1 -ntomp 6 > er1 2>&1 &
prompt> gmx mdrun -deffnm name2 -nt 6 -pin on -ntmpi 1 -ntomp 6 > er2 2>&1 &

given the above numbers, what are the best choices for pinoffset and
pinstride? i might be too dumb, but what the doc and the gmx webpage say about
these options is not clear to me ..
if i let gmx decide, it does not necessarily seem to make the best choice in
terms of the resulting performance, which is not surprising as neither of the
two jobs knows about the presence of the other one...

thanks!
michael








=== Why be happy when you could be normal?

Re: [gmx-users] parallelization

2018-03-02 Thread Amin Rouy
I see, thank you.

On Fri, Mar 2, 2018 at 3:09 PM, Mark Abraham 
wrote:

> Hi,
>
> No, GROMACS long pre-dates useful implementations of MPI I/O (which anyway
> don't suit GROMACS needs), and handles its own MPI reduction and does I/O
> from a single rank per simulation.
>
> Mark
>
> On Fri, Mar 2, 2018 at 3:00 PM Amin Rouy  wrote:
>
> > sorry Justin, I am familiar with the information in the link. But I do not
> > know about *I/O* and *MPI-IO*, which are not in the link?
> > and I was asked by our HPC whether I use them in Gromacs.
> >
> >
> >
> > On Fri, Mar 2, 2018 at 2:43 PM, Justin Lemkul  wrote:
> >
> > >
> > >
> > > On 3/2/18 8:40 AM, Amin Rouy wrote:
> > >
> > >> Hi
> > >>
> > >> I am not so familiar with parallelization. Can some one please tell me
> > if
> > >> Gromacs
> > >> use MPI-parallel I/O (MPI-IO), or one should do it by himself for his
> > MPI
> > >> jobs?
> > >>
> > >
> > > Everything you need to know is in the manual:
> > >
> > > http://manual.gromacs.org/documentation/current/user-guide/m
> > > drun-performance.html
> > >
> > > -Justin
> > >
> > > --
> > > ==
> > >
> > > Justin A. Lemkul, Ph.D.
> > > Assistant Professor
> > > Virginia Tech Department of Biochemistry
> > >
> > > 303 Engel Hall
> > > 340 West Campus Dr.
> > > Blacksburg, VA 24061
> > >
> > > jalem...@vt.edu | (540) 231-3129
> > > http://www.biochem.vt.edu/people/faculty/JustinLemkul.html
> > >
> > > ==
> > >


Re: [gmx-users] parallelization

2018-03-02 Thread Mark Abraham
Hi,

No, GROMACS long pre-dates useful implementations of MPI I/O (which anyway
don't suit GROMACS needs), and handles its own MPI reduction and does I/O
from a single rank per simulation.

Mark

On Fri, Mar 2, 2018 at 3:00 PM Amin Rouy  wrote:

> sorry Justin, I am familiar with the information in the link. But I do not
> know about *I/O* and *MPI-IO*, which are not in the link?
> and I was asked by our HPC whether I use them in Gromacs.
>
>
>
> On Fri, Mar 2, 2018 at 2:43 PM, Justin Lemkul  wrote:
>
> >
> >
> > On 3/2/18 8:40 AM, Amin Rouy wrote:
> >
> >> Hi
> >>
> >> I am not so familiar with parallelization. Can some one please tell me
> if
> >> Gromacs
> >> use MPI-parallel I/O (MPI-IO), or one should do it by himself for his
> MPI
> >> jobs?
> >>
> >
> > Everything you need to know is in the manual:
> >
> > http://manual.gromacs.org/documentation/current/user-guide/m
> > drun-performance.html
> >
> > -Justin
> >
> > --
> > ==
> >
> > Justin A. Lemkul, Ph.D.
> > Assistant Professor
> > Virginia Tech Department of Biochemistry
> >
> > 303 Engel Hall
> > 340 West Campus Dr.
> > Blacksburg, VA 24061
> >
> > jalem...@vt.edu | (540) 231-3129
> > http://www.biochem.vt.edu/people/faculty/JustinLemkul.html
> >
> > ==
> >


Re: [gmx-users] parallelization

2018-03-02 Thread Justin Lemkul



On 3/2/18 8:40 AM, Amin Rouy wrote:

Hi

I am not so familiar with parallelization. Can someone please tell me if
GROMACS uses MPI-parallel I/O (MPI-IO), or whether one should handle it
himself for his MPI jobs?


Everything you need to know is in the manual:

http://manual.gromacs.org/documentation/current/user-guide/mdrun-performance.html

-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Virginia Tech Department of Biochemistry

303 Engel Hall
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.biochem.vt.edu/people/faculty/JustinLemkul.html

==



Re: [gmx-users] cpu/gpu utilization

2018-03-02 Thread Mahmood Naderan
Sorry for the confusion. My fault...
I saw my previous post and found that I missed something. In fact, I couldn't 
run "-pme gpu".

So, once again, I ran all the commands and uploaded the log files


gmx mdrun -nobackup -nb cpu -pme cpu -deffnm md_0_1
https://pastebin.com/RNT4XJy8


gmx mdrun -nobackup -nb cpu -pme gpu -deffnm md_0_1
https://pastebin.com/7BQn8R7g
This run shows an error on the screen which is not shown in the log file. So 
please also see https://pastebin.com/KHg6FkBz



gmx mdrun -nobackup -nb gpu -pme cpu -deffnm md_0_1
https://pastebin.com/YXYj23tB



gmx mdrun -nobackup -nb gpu -pme gpu -deffnm md_0_1
https://pastebin.com/P3X4mE5y





From the results, it seems that running the pme on the cpu is better than gpu. 
The fastest command here is -nb gpu -pme cpu


Still I have the question that while GPU is utilized, the CPU is also busy. So, 
I was thinking that the source code uses cudaDeviceSynchronize() where the CPU 
enters a busy loop.



Regards,
Mahmood






On Friday, March 2, 2018, 3:24:41 PM GMT+3:30, Szilárd Páll 
 wrote: 





Once again, full log files, please, not partial cut-and-paste, please.

Also, you misread something because your previous logs show:
-nb cpu -pme gpu: 56.4 ns/day
-nb cpu -pme gpu -pmefft cpu 64.6 ns/day
-nb cpu -pme cpu 67.5 ns/day

So both mixed mode PME and PME on CPU are faster, the latter slightly faster 
than the former.

This is about as much as you can do, I think. Your GPU is just too slow to get 
more performance out of it and the runs are GPU-bound. You might be able to get 
a bit more performance with some tweaks (compile mdrun with AVX2_256, use a 
newer fftw, use a newer gcc), but expect marginal gains.

Cheers,

--
Szilárd



Re: [gmx-users] cpu/gpu utilization

2018-03-02 Thread Szilárd Páll
Once again: full log files, please, not partial cut-and-paste.

Also, you misread something because your previous logs show:
-nb cpu -pme gpu: 56.4 ns/day
-nb cpu -pme gpu -pmefft cpu 64.6 ns/day
-nb cpu -pme cpu 67.5 ns/day

So both mixed mode PME and PME on CPU are faster, the latter slightly
faster than the former.
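
For reference, the mixed mode spelled out as a full command line (a sketch
reusing the same -deffnm naming as your runs):

  gmx mdrun -nobackup -nb cpu -pme gpu -pmefft cpu -deffnm md_0_1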

This is about as much as you can do, I think. Your GPU is just too slow to
get more performance out of it and the runs are GPU-bound. You might be
able to get a bit more performance with some tweaks (compile mdrun with
AVX2_256, use a newer fftw, use a newer gcc), but expect marginal gains.
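
As a sketch only (the compiler version and build directory are placeholders),
such a rebuild would look something like:

  cmake .. -DGMX_SIMD=AVX2_256 \
           -DGMX_BUILD_OWN_FFTW=ON \
           -DCMAKE_C_COMPILER=gcc-7 -DCMAKE_CXX_COMPILER=g++-7
  make -j 8 && make install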

Cheers,

--
Szilárd

On Fri, Mar 2, 2018 at 11:00 AM, Mahmood Naderan 
wrote:

> Command is "gmx mdrun -nobackup -pme cpu -nb gpu -deffnm md_0_1" and the
> log says
>
>  R E A L   C Y C L E   A N D   T I M E   A C C O U N T I N G
>
> On 1 MPI rank, each using 16 OpenMP threads
>
>  Computing:          Num   Num      Call    Wall time      Giga-Cycles
>                      Ranks Threads  Count      (s)         total sum    %
> -----------------------------------------------------------------------------
>  Neighbor search        1    16       501       0.972         55.965   0.8
>  Launch GPU ops.        1    16     50001       2.141        123.301   1.7
>  Force                  1    16     50001       4.019        231.486   3.1
>  PME mesh               1    16     50001      40.695       2344.171  31.8
>  Wait GPU NB local      1    16     50001      60.155       3465.079  47.0
>  NB X/F buffer ops.     1    16     99501       7.342        422.902   5.7
>  Write traj.            1    16        11       0.246         14.184   0.2
>  Update                 1    16     50001       3.480        200.461   2.7
>  Constraints            1    16     50001       5.831        335.878   4.6
>  Rest                                            3.159        181.963   2.5
> -----------------------------------------------------------------------------
>  Total                                         128.039       7375.390 100.0
> -----------------------------------------------------------------------------
>  Breakdown of PME mesh computation
> -----------------------------------------------------------------------------
>  PME spread             1    16     50001      17.086        984.209  13.3
>  PME gather             1    16     50001      12.534        722.007   9.8
>  PME 3D-FFT             1    16        12       9.956        573.512   7.8
>  PME solve Elec         1    16     50001       0.779         44.859   0.6
> -----------------------------------------------------------------------------
>
>                Core t (s)   Wall t (s)        (%)
>        Time:     2048.617      128.039     1600.0
>                  (ns/day)    (hour/ns)
> Performance:        67.481        0.356
>
>
>
>
>
>
> While the command is "", I see that the GPU is utilized at about 10% and the
> log file says:
>
>  R E A L   C Y C L E   A N D   T I M E   A C C O U N T I N G
>
> On 1 MPI rank, each using 16 OpenMP threads
>
>  Computing:          Num   Num      Call    Wall time      Giga-Cycles
>                      Ranks Threads  Count      (s)         total sum    %
> -----------------------------------------------------------------------------
>  Neighbor search        1    16      1251       6.912        398.128   2.3
>  Force                  1    16     50001     210.689      12135.653  70.4
>  PME mesh               1    16     50001      46.869       2699.656  15.7
>  NB X/F buffer ops.     1    16     98751      22.315       1285.360   7.5
>  Write traj.            1    16        11       0.216         12.447   0.1
>  Update                 1    16     50001       4.382        252.386   1.5
>  Constraints            1    16     50001       6.035        347.601   2.0
>  Rest                                            1.666         95.933   0.6
> -----------------------------------------------------------------------------
>  Total                                         299.083      17227.165 100.0
> -----------------------------------------------------------------------------
>  Breakdown of PME mesh computation
> -----------------------------------------------------------------------------
>  PME spread             1    16     50001      21.505       1238.693   7.2
>  PME gather             1    16     50001      12.089        696.333   4.0
>  PME 3D-FFT             1    16        12      11.627        669.705   3.9
>  PME solve Elec         1    16     50001       0.965         55.598   0.3
> -----------------------------------------------------------------------------
>
>                Core t (s)   Wall t (s)        (%)
>        Time:     4785.326      299.083     1600.0
>                  (ns/day)    (hour/ns)
> Performance:        28.889        0.831
>
>
>
>
> Using GPU is still better than using CPU alone. However, I see that while
> the GPU is utilized, the CPU is also busy.

Re: [gmx-users] QM/MM optimization in gromacs/gaussian

2018-03-02 Thread Groenhof, Gerrit
Hi,

Could the rupture suggest that perhaps the underlying QM/MM model is somewhat 
flawed?

What happens when you run the minimisation without the constraint between the 
QM and MM atoms? Do you have convergence problems?
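
For reference, a minimal .mdp sketch of such an in-GROMACS QM/MM minimisation
(option names as in GROMACS 5.1; the group name, QM level and charges below are
placeholders, not a recommendation):

  integrator  = cg          ; or steep / l-bfgs
  emtol       = 10.0
  nsteps      = 5000
  QMMM        = yes
  QMMM-grps   = QMatoms     ; index group containing the QM region
  QMmethod    = B3LYP
  QMbasis     = 6-31G*
  QMcharge    = 0
  QMmult      = 1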

Gerrit




Message: 2
Date: Fri, 2 Mar 2018 13:01:09 +0300
From: nikol...@spbau.ru
To: gmx-us...@gromacs.org
Subject: Re: [gmx-users] QM/MM optimization in gromacs/gaussian
(nikol...@spbau.ru)
Message-ID: <40fa81b002d9ded18e6f471f19a1ada7.squir...@mail.spbau.ru>
Content-Type: text/plain;charset=UTF-8

And which one (and with what parameters) is better for the QM/MM
optimization?

Steepest descent (steep) barely changed anything, and I couldn't relax the
system as I wanted.
CG seemed to work fine, but in the end it tore an H atom off my QM subsystem.
BFGS could not work with the constraints that are necessary for the QM/MM
calculations in GROMACS...




>
> Hi,
>
> It is possible, but only using GROMACS' internal optimizers: SD (steep), CG,
> or BFGS.
> And you can only optimise minima, not transition states.
>
> Best,
> Gerrit
>
>
>
>
>
>
>
> Message: 3
> Date: Thu, 1 Mar 2018 12:44:53 +0300
> From: nikol...@spbau.ru
> To: gmx-us...@gromacs.org
> Subject: [gmx-users] QM/MM optimization in gromacs/gaussian
> Message-ID: 
> Content-Type: text/plain;charset=UTF-8
>
> Dear all!
>
> I need to perform a QM/MM optimization through the GROMACS/Gaussian
> interface. However, I know that in 2015 this was not possible.
>
> The question: is this possible nowadays (I use GROMACS 5.1.2), and which
> parameters do I need to put in the .mdp file to obtain such an
> optimization?
>
> Thank you in advance,
> Dmitrii
>




Re: [gmx-users] QM/MM optimization in gromacs/gaussian (nikol...@spbau.ru)

2018-03-02 Thread nikolaev
And which one (and with what parameters) is better for the QM/MM
optimization?

Steepest descent (steep) barely changed anything, and I couldn't relax the
system as I wanted.
CG seemed to work fine, but in the end it tore an H atom off my QM subsystem.
BFGS could not work with the constraints that are necessary for the QM/MM
calculations in GROMACS...




>
> Hi,
>
> It is possible, but only using GROMACS' internal optimizers: SD (steep), CG,
> or BFGS.
> And you can only optimise minima, not transition states.
>
> Best,
> Gerrit
>
>
>
>
>
>
>
> Message: 3
> Date: Thu, 1 Mar 2018 12:44:53 +0300
> From: nikol...@spbau.ru
> To: gmx-us...@gromacs.org
> Subject: [gmx-users] QM/MM optimization in gromacs/gaussian
> Message-ID: 
> Content-Type: text/plain;charset=UTF-8
>
> Dear all!
>
> I need to perform a QM/MM optimization through the GROMACS/Gaussian
> interface. However, I know that in 2015 this was not possible.
>
> The question: is this possible nowadays (I use GROMACS 5.1.2), and which
> parameters do I need to put in the .mdp file to obtain such an
> optimization?
>
> Thank you in advance,
> Dmitrii
>




Re: [gmx-users] cpu/gpu utilization

2018-03-02 Thread Mahmood Naderan
The command is "gmx mdrun -nobackup -pme cpu -nb gpu -deffnm md_0_1" and the
log says:

 R E A L   C Y C L E   A N D   T I M E   A C C O U N T I N G

On 1 MPI rank, each using 16 OpenMP threads

 Computing:          Num   Num      Call    Wall time      Giga-Cycles
                     Ranks Threads  Count      (s)         total sum    %
-----------------------------------------------------------------------------
 Neighbor search        1    16       501       0.972         55.965   0.8
 Launch GPU ops.        1    16     50001       2.141        123.301   1.7
 Force                  1    16     50001       4.019        231.486   3.1
 PME mesh               1    16     50001      40.695       2344.171  31.8
 Wait GPU NB local      1    16     50001      60.155       3465.079  47.0
 NB X/F buffer ops.     1    16     99501       7.342        422.902   5.7
 Write traj.            1    16        11       0.246         14.184   0.2
 Update                 1    16     50001       3.480        200.461   2.7
 Constraints            1    16     50001       5.831        335.878   4.6
 Rest                                            3.159        181.963   2.5
-----------------------------------------------------------------------------
 Total                                         128.039       7375.390 100.0
-----------------------------------------------------------------------------
 Breakdown of PME mesh computation
-----------------------------------------------------------------------------
 PME spread             1    16     50001      17.086        984.209  13.3
 PME gather             1    16     50001      12.534        722.007   9.8
 PME 3D-FFT             1    16        12       9.956        573.512   7.8
 PME solve Elec         1    16     50001       0.779         44.859   0.6
-----------------------------------------------------------------------------

               Core t (s)   Wall t (s)        (%)
       Time:     2048.617      128.039     1600.0
                 (ns/day)    (hour/ns)
Performance:        67.481        0.356
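
(As a sanity check on that number, assuming the usual 2 fs time step: 50,000
steps correspond to 100 ps of simulation done in 128.039 s of wall time, i.e.
0.1 ns / 128.039 s * 86,400 s/day ~ 67.5 ns/day, which matches the reported
performance.)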






While the command is "", I see that the GPU is utilized at about 10% and the
log file says:

 R E A L   C Y C L E   A N D   T I M E   A C C O U N T I N G

On 1 MPI rank, each using 16 OpenMP threads

 Computing:          Num   Num      Call    Wall time      Giga-Cycles
                     Ranks Threads  Count      (s)         total sum    %
-----------------------------------------------------------------------------
 Neighbor search        1    16      1251       6.912        398.128   2.3
 Force                  1    16     50001     210.689      12135.653  70.4
 PME mesh               1    16     50001      46.869       2699.656  15.7
 NB X/F buffer ops.     1    16     98751      22.315       1285.360   7.5
 Write traj.            1    16        11       0.216         12.447   0.1
 Update                 1    16     50001       4.382        252.386   1.5
 Constraints            1    16     50001       6.035        347.601   2.0
 Rest                                            1.666         95.933   0.6
-----------------------------------------------------------------------------
 Total                                         299.083      17227.165 100.0
-----------------------------------------------------------------------------
 Breakdown of PME mesh computation
-----------------------------------------------------------------------------
 PME spread             1    16     50001      21.505       1238.693   7.2
 PME gather             1    16     50001      12.089        696.333   4.0
 PME 3D-FFT             1    16        12      11.627        669.705   3.9
 PME solve Elec         1    16     50001       0.965         55.598   0.3
-----------------------------------------------------------------------------

               Core t (s)   Wall t (s)        (%)
       Time:     4785.326      299.083     1600.0
                 (ns/day)    (hour/ns)
Performance:        28.889        0.831




Using the GPU is still better than using the CPU alone. However, I see that
while the GPU is utilized, the CPU is also busy. So, I was wondering whether
the source code uses cudaDeviceSynchronize(), in which case the CPU would
enter a busy loop.

Regards,
Mahmood 

On Friday, March 2, 2018, 11:37:11 AM GMT+3:30, Magnus Lundborg 
 wrote:  
 
 Have you tried the mdrun options:

-pme cpu -nb gpu
-pme cpu -nb cpu

Cheers,

Magnus

  

Re: [gmx-users] cpu/gpu utilization

2018-03-02 Thread Magnus Lundborg

Have you tried the mdrun options:

-pme cpu -nb gpu
-pme cpu -nb cpu

Cheers,

Magnus

On 2018-03-02 07:55, Mahmood Naderan wrote:

If you mean [1], then yes, I have read that, and it recommends using the Verlet
scheme for the new offloading algorithm depicted in the figures. At least that
is my understanding of offloading. If I read the wrong document, or you mean
there are other options as well, please let me know.

[1] http://www.gromacs.org/GPU_acceleration
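
In .mdp terms, that recommendation boils down to the cut-off scheme setting,
e.g. this minimal fragment (a sketch, not my full input):

  cutoff-scheme           = Verlet   ; required for the GPU offload path
  verlet-buffer-tolerance = 0.005    ; default; grompp derives rlist from it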





Regards,
Mahmood

 On Thursday, March 1, 2018, 6:35:46 PM GMT+3:30, Szilárd Páll 
 wrote:
  
  Have you read the "Types of GPU tasks" section of the user guide?


--
Szilárd
   


