Hi Rajat,
If you remove COM motion on the bilayer as a whole, there may be relative COM
motion between the leaflets. If that relative motion is significant and you
switch to removing COM motion per leaflet, the program suddenly finds itself
resetting the COM over a large distance. About equilibration, you equilibrated with
Hi Tsjerk,
That was very sage advice! Thank you. I will try regenerating velocities
and see if the motion goes away...
On Wed, Nov 13, 2013 at 2:00 PM, Tsjerk Wassenaar tsje...@gmail.com wrote:
Hi Rajat,
If you remove COM motion on the bilayer, there may be relative COM motion between
leaflets. If that
An update to anyone interested:
Regenerating velocities by itself did not solve the problem. I had to
regenerate velocities and couple the upper and lower leaflets separately to
the thermostat to equilibrate the system. To smooth the equilibration
process further, I used a 0.5 fs timestep
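For anyone trying to reproduce this, the steps above might translate into an .mdp fragment like the following sketch. The leaflet group names (Upper_leaflet, Lower_leaflet) and the temperatures are assumptions and must match groups defined in your index file:

```
; sketch of the equilibration settings described above (group names assumed)
integrator  = md
dt          = 0.0005          ; 0.5 fs timestep for gentle equilibration
tc-grps     = Upper_leaflet Lower_leaflet Water   ; couple leaflets separately
tau-t       = 1.0 1.0 1.0
ref-t       = 310 310 310
gen-vel     = yes             ; regenerate velocities
gen-temp    = 310
```

An index file containing these groups has to be passed to grompp with -n for the group names to resolve.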
Sorry, I attached the wrong file. Here's the average file generated from
one of the files I sent in my last mail. I used the command g_analyze -f
hbond_115-water.xvg -av hbond_115-water-avg.xvg. Here's the file obtained
from this command:
Hi,
I tried g_select to dump the structure with the interacting water
molecules, but I don't know how to do that. I searched for some
threads in the discussion but wasn't able to find anything related to my
need. Can you explain how I can do that?
On Tue, Nov 12, 2013 at 7:39 AM, bharat
Thank you so much Justin. On the one hand, I feel dumb because I could have
sworn that I was using a clean build directory. On the other hand, I
obviously lost track of what I was doing because your suggestion worked like
a charm!
Koln
On 11/11/13 12:08 PM, Williams Ernesto Miranda Delgado wrote:
Hello
If I did the MD simulation using PME and neutralized with ions, and I want
to rerun this time with reaction field zero, is there any problem if I
keep the ions? This is for LIE calculation. I am using AMBER99SB.
Why do you
There is no problem having ions while using the reaction-field treatment.
Dr. Vitaly V. Chaban
On Mon, Nov 11, 2013 at 7:06 PM, Justin Lemkul jalem...@vt.edu wrote:
On 11/11/13 12:08 PM, Williams Ernesto Miranda Delgado wrote:
Hello
If I did the MD simulation using PME and neutralized
On 11/11/13 5:39 PM, bharat gupta wrote:
Sorry, I attached the wrong file. Here's the average file generated from
one of the files I sent in my last mail. I used the command g_analyze -f
hbond_115-water.xvg -av hbond_115-water-avg.xvg. Here's the file obtained
from this command:
On 11/11/13 6:56 PM, bharat gupta wrote:
Hi,
I tried g_select to dump the structure with the interacting water
molecules, but I don't know how to do that. I searched for some
threads in the discussion but wasn't able to find anything related to my
need. Can you explain how I can do that
Thanks Justin for your replies. I understood the g_analyze related data. I
tried g_analyze to dump the structures as you said. But I didn't find any
switch that can be used to dump the structure in PDB format.
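For reference, structure dumping is usually a two-step job: g_select writes an index group, and trjconv dumps a frame of that group. A hedged sketch, with all file names and the dump time being assumptions:

```shell
# 1) Write an index group of waters near residues 115-118
g_select -f traj.xtc -s topol.tpr \
         -select '"NearWater" resname SOL and within 0.5 of resnr 115 to 118' \
         -on near_water.ndx

# 2) Dump one frame (here t = 1000 ps) of that group in PDB format
trjconv -f traj.xtc -s topol.tpr -n near_water.ndx -dump 1000 -o near_water.pdb
```

Because the selection is dynamic, group membership changes per frame; for a figure, dumping a single representative frame as above is the simplest route.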
On Tue, Nov 12, 2013 at 10:15 PM, Justin Lemkul jalem...@vt.edu wrote:
On
On 11/12/13 8:33 AM, bharat gupta wrote:
Thanks Justin for your replies. I understood the g_analyze related data. I
tried g_analyze to dump the structures as you said. But I didn't find any
switch that can be used to dump the structure in PDB format.
Because that's not the function of
--
Message: 2
Date: Mon, 11 Nov 2013 13:06:32 -0500
From: Justin Lemkul jalem...@vt.edu
Subject: Re: [gmx-users] Re: Reaction field zero and ions
To: Discussion list for GROMACS users gmx-users@gromacs.org
Message-ID: 52811ca8.5030...@vt.edu
Content-Type: text
Time: 6h13:18
Performance: 38.573 ns/day (0.622 hour/ns)
--
View this message in context:
http://gromacs.5086.x6.nabble.com/mdrun-on-8-core-AMD-GTX-TITAN-was-Re-gmx-users-Re-Gromacs-4-6-on-two-Titans-GPUs-tp5012330p5012391.html
Sent
I run
grompp -f nvt.mdp -c em.gro -p topol.top -n index.ndx -o nvt.tpr
and everything looks fine. I check the nvt.tpr, and temperature is ok.
the real problem is with the mdrun function.
could be a problem of the software?
Thanks
Javier
Justin Lemkul wrote
On 11/11/13 11:24 AM, Carlos
Hello Sir,
Thanks for the reply.
Now, is it fine if I use 100 threads in my restart?
Is there any impact on the overall simulation?
On 11/12/13 10:58 AM, cjalmeciga wrote:
I run
grompp -f nvt.mdp -c em.gro -p topol.top -n index.ndx -o nvt.tpr
and everything looks fine. I check the nvt.tpr, and temperature is ok.
The fact that grompp completes indicates there is nothing syntactically wrong
with the input files.
On 11/12/13 11:10 AM, arunjones wrote:
Hello Sir,
Thanks for the reply.
Now, is it fine if I use 100 threads in my restart?
Is there any impact on the overall simulation?
Only if that is the number of threads originally used in the run. If not, there
will be a mismatch between the DD grid
Thank you Sir,
Initially I was running on 60 threads; now I changed it to 100. The simulation
is running without any error, but I found a note in the log file as follows:
#nodes mismatch,
current program: 100
checkpoint file: 60
#PME-nodes mismatch,
current program: -1
checkpoint
On 11/12/13 12:07 PM, arunjones wrote:
Thank you Sir,
Initially I was running on 60 threads; now I changed it to 100. The simulation
is running without any error, but I found a note in the log file as follows:
#nodes mismatch,
current program: 100
checkpoint file: 60
#PME-nodes
The output of energy minimization was
Potential Energy = -1.42173622068236e+06
Maximum force = 9.00312066109319e+02 on atom 148
Norm of force = 2.06087515037187e+01
Thanks
Javier
Justin Lemkul wrote
On 11/12/13 10:58 AM, cjalmeciga wrote:
I run
grompp -f nvt.mdp -c em.gro -p
On 11/12/13 12:14 PM, cjalmeciga wrote:
The output of energy minimization was
Potential Energy = -1.42173622068236e+06
Maximum force = 9.00312066109319e+02 on atom 148
Norm of force = 2.06087515037187e+01
OK, reasonable enough. How about a description of what the system is,
Hi Justin,
Below I pasted my .mdp file and topology. In the .log file I could see the
energy term for position restraints.
.mdp file---
title = NPT Equilibration
define = -DPOSRES ; position restraints for protein
; Run parameters
integrator = md
suggestion.
Thanks,
Dewey
--
gmx-users mailing list
On 11/12/13 1:47 PM, Rama wrote:
Hi Justin,
Below I pasted .mdp file and topology. In .log file I could see energy term
for position restraints.
.mdp file---
title = NPT Equilibration
define = -DPOSRES ; position restraints for protein
; Run
Hi All,
Any suggestions?
Thanks,
On Mon, Nov 11, 2013 at 12:38 AM, rajat desikan rajatdesi...@gmail.com wrote:
Hi All,
I am experiencing a few problems in membrane simulations wrt COM removal.
I downloaded a 400 ns pre-equilibrated Slipid-DMPC membrane with all the
accompanying files. I
In addition to my previous question, I have another question about
g_analyze. When I used the hbond.xvg file to get the average and plotted
the average.xvg file, I found that the average value is around 4 to 5
according to the graph. But g_analyze in its final calculation gives 7.150
as the average
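The number g_analyze reports is the plain mean over all frames, which can differ from what the eye picks off a noisy plot. That mean is easy to recompute by hand; a minimal sketch with awk (the file name follows the one used earlier in this thread):

```shell
# Mean of column 2 of an .xvg file, skipping @/# header lines --
# this is the same set average that g_analyze prints.
awk '!/^[@#]/ { sum += $2; n++ } END { printf "%.3f\n", sum / n }' hbond_115-water.xvg
```

If this hand-computed mean reproduces g_analyze's 7.150, the discrepancy lies in reading the graph, not in the tool.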
On 11/11/13 1:30 AM, bharat gupta wrote:
Thank you for informing me about g_rdf.
Is it possible to dump the structure with those average water molecules
interacting with the residues? I generated the hbond.log file, which gives
the details, but I need to generate a figure for this.
g_select
On 11/11/13 4:06 AM, bharat gupta wrote:
In addition to my previous question, I have another question about
g_analyze. When I used the hbond.xvg file to get the average and plotted
the average.xvg file, I found that the average value is around 4 to 5
according to the graph. But g_analyze in its
Hello
If I did the MD simulation using PME and neutralized with ions, and I want
to rerun this time with reaction field zero, is there any problem if I
keep the ions? This is for LIE calculation. I am using AMBER99SB.
Thanks
Williams
Thank you Justin for your kind help. The simple reason for considering only
gromos parameter sets is that the parameters for the metal ions (in my
protein) are not defined in other force fields.
On Sat, Nov 9, 2013 at 7:18 PM, Justin Lemkul [via GROMACS]
ml-node+s5086n5012376...@n6.nabble.com
On 11/10/13 12:20 AM, bharat gupta wrote:
Hi,
I used the command g_hbond to find h-bond between residues 115-118 and
water. Then I used g_analyze to find out the average and it gives the value
for the hbonds like this :-
std. dev.   relative deviation of
I checked the hbnum.xvg file and it contains three columns: time,
hbonds, and hbonds that do not follow the angle criterion. In that case SS1 is
the average of the actual hbonds (2nd column) and SS2 is the average of the 3rd
column. Am I right here or not?
I tried to calculate the h-bond for residues
Justin, thank you very much for your kind help about LIE and PME
Williams
On 11/10/13 7:18 PM, bharat gupta wrote:
I checked the hbnum.xvg file and it contains three columns: time,
hbonds, and hbonds that do not follow the angle criterion. In that case SS1 is
The third column is not actually H-bonds, then ;)
the average of actual hbonds (2nd column ) and SS2 is
Thanks for your reply. I was missing the scientific notation part. Now
everything is fine.
Regarding trjorder, it doesn't measure h-bonds but gives the waters nearest
to the protein.
On Mon, Nov 11, 2013 at 10:12 AM, Justin Lemkul jalem...@vt.edu wrote:
On 11/10/13 7:18 PM, bharat gupta wrote:
On 11/10/13 8:30 PM, bharat gupta wrote:
Thanks for your reply. I was missing the scientific notation part. Now
everything is fine.
Regarding trjorder, it doesn't measure h-bonds but gives the water nearest
to protein.
I wouldn't try to draw any sort of comparison between the output of
But trjorder can be used to calculate the hydration layer or shell around
residues ... Right ??
On Mon, Nov 11, 2013 at 10:35 AM, Justin Lemkul jalem...@vt.edu wrote:
On 11/10/13 8:30 PM, bharat gupta wrote:
Thanks for your reply. I was missing the scientific notation part. Now
everything
On 11/10/13 8:38 PM, bharat gupta wrote:
But trjorder can be used to calculate the hydration layer or shell around
residues ... Right ??
Yes, but I tend to think that integrating an RDF is a more
straightforward way of doing that. With trjorder, you set some arbitrary cutoff
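To make the RDF route concrete: the first-shell coordination number is N = 4*pi*rho * integral of g(r)*r^2 dr up to the first minimum of g(r). A hedged trapezoidal-rule sketch over a g_rdf output file; the file name, the density rho (~33.4 waters/nm^3 for bulk water) and the shell radius rstar are assumptions:

```shell
# Coordination number N = 4*pi*rho * integral_0^rstar g(r) r^2 dr,
# integrated with the trapezoidal rule over an .xvg RDF (r in nm, g(r) in col 2).
awk -v rho=33.4 -v rstar=0.35 '!/^[@#]/ {
    r = $1; g = $2
    if (have && r <= rstar)
        N += 0.5 * (g * r * r + gp * rp * rp) * (r - rp)
    rp = r; gp = g; have = 1
} END { printf "%.2f\n", 4 * 3.14159265 * rho * N }' rdf.xvg
```

Unlike a trjorder cutoff, rstar here can be read off the RDF itself (the first minimum), which is what makes this route less arbitrary.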
thank you informing about g_rdf...
Is it possible to dump the structure with those average water molecules
interacting with the residues. I generated the hbond.log file which gives
the details but I need to generate a figure for this ??
On Mon, Nov 11, 2013 at 10:40 AM, Justin Lemkul
On 11/9/13 12:48 AM, pratibha wrote:
Sorry for the previous mistake. Instead of 53a7, the force field which I
used was 53a6.
53A6 is known to under-stabilize helices, so if a helix did not appear in a
simulation using this force field, it is not definitive proof that the structure
does
On 11/8/13 3:32 PM, Williams Ernesto Miranda Delgado wrote:
Greetings again
If I use a salt concentration for neutralizing the protein-ligand complex
and run MD using PME, and the ligand is neutral, do I perform ligand MD
simulation without adding any salt concentration? It could be relevant
Hi Justin,
I take it that both the sets of parameters should produce identical
macroscopic quantities.
For the GPU, is this a decent .mdp?
cutoff-scheme= Verlet
vdwtype = switch
rlist= 1.2
;rlistlong = 1.4 NOT USED IN GPU...IS
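For comparison, a hedged sketch of how CHARMM-style cutoffs are commonly expressed under the Verlet scheme (the values follow those quoted in this thread; vdw-modifier = force-switch only exists in newer GROMACS versions, so treat this as version-dependent, not definitive):

```
; CHARMM-style cutoffs with the Verlet scheme (sketch, version-dependent)
cutoff-scheme   = Verlet
coulombtype     = PME
rcoulomb        = 1.2
vdwtype         = cut-off
vdw-modifier    = force-switch    ; stands in for vdwtype = switch
rvdw-switch     = 1.0
rvdw            = 1.2
```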
On 11/9/13 4:16 PM, rajat desikan wrote:
Hi Justin,
I take it that both the sets of parameters should produce identical
macroscopic quantities.
For the GPU, is this a decent .mdp?
cutoff-scheme= Verlet
vdwtype = switch
rlist= 1.2
;rlistlong =
On Sat, 9 Nov 2013, Gianluca Interlandi wrote:
Just to chime in. Here is a paper that might be helpful in understanding
the role of cutoffs in the CHARMM force field:
Steinbach, PJ; Brooks, BR. New spherical-cutoff methods for long-range forces in
On 11/9/13 9:51 PM, Gianluca Interlandi wrote:
On Sat, 9 Nov 2013, Gianluca Interlandi wrote:
Just to chime in. Here is a paper that might be helpful in understanding the
role of cutoffs in the CHARMM force field:
Steinbach, PJ; Brooks, BR. New
Hi,
I used the command g_hbond to find h-bond between residues 115-118 and
water. Then I used g_analyze to find out the average and it gives the value
for the hbonds like this :-
std. dev.   relative deviation of
standard -
On 11/7/13 11:32 PM, Rajat Desikan wrote:
Dear All,
The settings that I mentioned above are from Klauda et al., for a POPE
membrane system. They can be found in charmm_npt.mdp in lipidbook (link
below)
http://lipidbook.bioch.ox.ac.uk/package/show/id/48.html
Is there any reason not to use their
Dear Kieu Thu
Thanks for your comment about free energy. Unfortunately, I could not send an
email to Paissoni Cristina in the Gromacs Forum.
Could you give me the email address of Paissoni Cristina? Finding a tool for
calculating MM/PBSA with Gromacs is very vital for me.
Best Regards
Kiana
Greetings again
If I use a salt concentration for neutralizing the protein-ligand complex
and run MD using PME, and the ligand is neutral, do I perform ligand MD
simulation without adding any salt concentration? It could be relevant for
LIE free energy calculation if I don't include salt in ligand
Sorry for the previous mistake. Instead of 53a7, the force field which I
used was 53a6.
On Fri, Nov 8, 2013 at 12:10 AM, Justin Lemkul [via GROMACS]
ml-node+s5086n5012325...@n6.nabble.com wrote:
On 11/7/13 12:14 PM, pratibha wrote:
My protein contains metal ions which are parameterized
First, there is no value in ascribing problems to the hardware if the
simulation setup is not yet balanced, or not large enough to provide enough
atoms and long enough rlist to saturate the GPUs, etc. Look at the log
files and see what complaints mdrun makes about things like PME load
balance, and
On Wed, Nov 6, 2013 at 4:07 PM, fantasticqhl fantastic...@gmail.com wrote:
Dear Justin,
I am sorry for the late reply. I still can't figure it out.
It isn't rocket science - your two .mdp files describe totally different
model physics. To compare things, change as few things as necessary to
Dear All,
Any suggestions?
Thank you.
Hi,
It's not easy to be explicit. CHARMM wasn't parameterized with PME, so the
original paper's coulomb settings can be taken with a grain of salt for use
with PME - others' success in practice should be a guideline here. The good
news is that the default GROMACS PME settings are pretty good for
My protein contains metal ions which are parameterized only in gromos force
field. Since I am a newbie to MD simulations, it would be difficult for me
to parameterize those myself.
Can you please guide me, as per my previous mail, on which of the two
simulations I should consider more
On 11/7/13 12:14 PM, pratibha wrote:
My protein contains metal ions which are parameterized only in gromos force
field. Since I am a newbie to MD simulations, it would be difficult for me
to parameterize those myself.
Can you please guide me as per my previous mail which out of the two
Hello
I performed MD simulations of several Protein-ligand complexes and
solvated ligands using PME for long-range electrostatics. I want to
calculate the binding free energy using the LIE method, but when using
g_energy I only get Coul-SR. How can I deal with Ligand-environment long
range
Thank you, Mark. I think that running it on CPUs is a safer choice at
present.
On Thu, Nov 7, 2013 at 9:41 PM, Mark Abraham mark.j.abra...@gmail.com wrote:
Hi,
It's not easy to be explicit. CHARMM wasn't parameterized with PME, so the
original paper's coulomb settings can be taken with a
Reasonable, but CPU-only is not 100% conforming either; IIRC the CHARMM
switch differs from the GROMACS switch (Justin linked a paper here with the
CHARMM switch description a month or so back, but I don't have that link to
hand).
Mark
On Thu, Nov 7, 2013 at 8:45 PM, rajat desikan
If the long-range component of your electrostatics model is not
decomposable by group (which it isn't), then you can't use that with LIE.
See the hundreds of past threads on this topic :-)
Mark
On Thu, Nov 7, 2013 at 8:34 PM, Williams Ernesto Miranda Delgado
wmira...@fbio.uh.cu wrote:
Hello
Hi Mark!
I think that this is the paper that you are referring to:
dx.doi.org/10.1021/ct900549r
Also for your reference, these are the settings that Justin recommended
using with CHARMM in gromacs:
vdwtype = switch
rlist = 1.2
rlistlong = 1.4
rvdw = 1.2
rvdw-switch = 1.0
rcoulomb = 1.2
As
Let's not hijack James' thread as your hardware is different from his.
On Tue, Nov 5, 2013 at 11:00 PM, Dwey Kauffman mpi...@gmail.com wrote:
Hi Szilard,
Thanks for your suggestions. I am indeed aware of this page. In an 8-core
AMD machine with 1 GPU, I am very happy with its performance. See
On Thu, Nov 7, 2013 at 6:34 AM, James Starlight jmsstarli...@gmail.com wrote:
I've come to the conclusion that simulations with 1 or 2 GPUs simultaneously give
me the same performance:
mdrun -ntmpi 2 -ntomp 6 -gpu_id 01 -v -deffnm md_CaM_test,
mdrun -ntmpi 2 -ntomp 6 -gpu_id 0 -v -deffnm
Thank you Mark
What do you think about making a rerun on the trajectories generated
previously with PME but this time using coulombtype: cut-off? Could you
suggest a cut off value?
Thanks again
Williams
I'd at least use RF! Use a cut-off consistent with the force field
parameterization. And hope the LIE correlates with reality!
Mark
On Nov 7, 2013 10:39 PM, Williams Ernesto Miranda Delgado
wmira...@fbio.uh.cu wrote:
Thank you Mark
What do you think about making a rerun on the trajectories
Dear All,
The settings that I mentioned above are from Klauda et al., for a POPE
membrane system. They can be found in charmm_npt.mdp in lipidbook (link
below)
http://lipidbook.bioch.ox.ac.uk/package/show/id/48.html
Is there any reason not to use their .mdp parameters for a membrane-protein
Hi Dwey,
On 05/11/13 22:00, Dwey Kauffman wrote:
Hi Szilard,
Thanks for your suggestions. I am indeed aware of this page. In an 8-core
AMD machine with 1 GPU, I am very happy with its performance. See below. My
intention is to obtain an even better one because we have multiple nodes.
### 8 core
On 11/5/13 7:14 PM, Stephanie Teich-McGoldrick wrote:
Message: 5
Date: Mon, 04 Nov 2013 13:32:52 -0500
From: Justin Lemkul jalem...@vt.edu
Subject: Re: [gmx-users] Analysis tools and triclinic boxes
To: Discussion list for GROMACS users gmx-users@gromacs.org
Message-ID: 5277e854.9000...@vt.edu
Dear Justin,
I am sorry for the late reply. I still can't figure it out.
Could you please send me the .mdp file which was used for your single-point
calculations?
I want to do some comparison and then solve the problem.
Thanks very much!
All the best,
Qinghua
I've come to the conclusion that simulations with 1 or 2 GPUs simultaneously give
me the same performance:
mdrun -ntmpi 2 -ntomp 6 -gpu_id 01 -v -deffnm md_CaM_test,
mdrun -ntmpi 2 -ntomp 6 -gpu_id 0 -v -deffnm md_CaM_test,
Could it be due to the small number of CPU cores or additional RAM? (this system has 32
Hi,
I am getting the following error while using the command -
[root@localhost INGT]# mpirun -np 24 mdrun_mpi -v -deffnm npt
Error -
/usr/bin/mpdroot: open failed for root's mpd conf file
mpiexec_localhost.localdomain (__init__ 1208): forked process failed; status=255
I compiled gromacs using
I have given my .mdp file,
; title = trp_drg
warning = 10
cpp = /usr/bin/cpp
define = -DPOSRES
constraints = all-bonds
integrator = md
dt = 0.002 ; ps !
nsteps = 100 ; total 2000.0 ps.
On 11/5/13 7:19 AM, Kalyanashis wrote:
I have given my .mdp file,
; title = trp_drg
warning = 10
cpp = /usr/bin/cpp
define = -DPOSRES
constraints = all-bonds
integrator = md
dt = 0.002 ; ps !
nsteps
Hi,
I need to replace an atom with another in the considered system.
I'd like to know if it is possible and, if so, what changes I need to make.
thanks
j.rahrow
On Thu, Oct 31, 2013 at 12:47 PM, J Alizadeh j.alizade...@gmail.com wrote:
Hi,
I need to replace an atom with another in the
29420 atoms with some tuning of the write-out and communication intervals:
nodes again: 2 x Xeon E5-2680v2 + 2 x NVIDIA K20X GPGPUs @ 4fs vsites
1 node 212 ns/day
2 nodes 295 ns/day
On 11/5/13 10:34 AM, J Alizadeh wrote:
Hi,
I need to replace an atom with another in the considered system.
I'd like to know if it is possible and, if so, what changes I need to make.
The coordinate file replacement is trivial. Just open the file in a text editor
and rename the atom. The
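The text-editor edit can also be scripted. A hedged sketch that edits only the fixed-width atom-name field (columns 11-15) of a .gro file; the atom names OW/OW1 and the file names are made-up examples:

```shell
# Rename atom "OW" to "OW1" in a .gro file, touching only the
# atom-name field (columns 11-15) so the fixed-width format survives.
awk 'NR > 2 && NF > 3 {
    name = substr($0, 11, 5)
    gsub(/ /, "", name)
    if (name == "OW")
        $0 = substr($0, 1, 10) sprintf("%5s", "OW1") substr($0, 16)
} { print }' conf.gro > conf_renamed.gro
```

Editing by column rather than with a plain search-and-replace avoids accidentally renaming residue names or substrings of other atom names. Remember the topology must be updated to match.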
You need to configure your MPI environment to do so (so read its docs).
GROMACS can only use whatever that makes available.
Mark
On Tue, Nov 5, 2013 at 2:16 AM, bharat gupta bharat.85.m...@gmail.com wrote:
Hi,
I have installed Gromacs 4.5.6 on a Rocks cluster 6.0 and my system has
32
Timo,
Have you used the default settings, that is one rank/GPU? If that is
the case, you may want to try using multiple ranks per GPU, this can
often help when you have 4-6 cores/GPU. Separate PME ranks are not
switched on by default with GPUs, have you tried using any?
Cheers,
--
Szilárd Páll
Hi Mike,
I have similar configurations except a cluster of AMD-based linux
platforms with 2 GPU cards.
Your suggestion works. However, the performance with 2 GPUs discourages
me because, for example, with 1 GPU our compute node can easily
obtain a simulation of 31 ns/day for a protein
Hi Timo,
Can you provide a benchmark with 1 Xeon E5-2680 with 1 Nvidia
k20x GPGPU on the same test of 29420 atoms ?
Are these two GPU cards (within the same node) connected by a SLI (Scalable
Link Interface) ?
Thanks,
Dwey
Hi Dwey,
First and foremost, make sure to read the
http://www.gromacs.org/Documentation/Acceleration_and_parallelization
page, in particular the Multiple MPI ranks per GPU section which
applies in your case.
Secondly, please do post log files (pastebin is your friend), the
performance table at
On Tue, Nov 5, 2013 at 9:55 PM, Dwey Kauffman mpi...@gmail.com wrote:
Hi Timo,
Can you provide a benchmark with 1 Xeon E5-2680 with 1 Nvidia
k20x GPGPU on the same test of 29420 atoms ?
Are these two GPU cards (within the same node) connected by a SLI (Scalable
Link Interface) ?
Hi Szilard,
Thanks for your suggestions. I am indeed aware of this page. In an 8-core
AMD machine with 1 GPU, I am very happy with its performance. See below. My
intention is to obtain an even better one because we have multiple nodes.
### 8 core AMD with 1 GPU,
Force evaluation time GPU/CPU: 4.006
Hi Szilard,
Thanks.
From Timo's benchmark,
1 node: 142 ns/day
2 nodes FDR14: 218 ns/day
4 nodes FDR14: 257 ns/day
8 nodes FDR14: 326 ns/day
It looks like an InfiniBand network is required in order to scale up when
running a task across nodes. Is that correct?
Dwey
Sent: Thursday, 31 October 2013 1:52 PM
To: Discussion list for GROMACS users
Subject: Re: [gmx-users] RE: Gibbs Energy Calculation and charges
I likely won't have much time to look at it tonight, but you can see
exactly what the option is doing to the topology. run gmxdump on the
tpr
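The gmxdump suggestion above can be sketched as follows (file names are assumptions; in GROMACS 4.x the tool is called gmxdump):

```shell
# Dump the processed topology from two run input files and diff them
# to see exactly what the option changed.
gmxdump -s with_option.tpr    > with_option.txt
gmxdump -s without_option.tpr > without_option.txt
diff with_option.txt without_option.txt | less
```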
Thanks for the suggestion Chris. Had a quick look and can't see easily how to
do this, but I think I am at a point now where it is not an issue and don't
have to actually do this.
Catch ya,
Dr. Dallas Warren
Drug Delivery, Disposition and Dynamics
Monash Institute of Pharmaceutical Sciences,
7. Re: Gentle heating with implicit solvent (Gianluca Interlandi)
--
Message: 1
Date: Mon, 4 Nov 2013 17:05:36 +0100
From: Mark Abraham mark.j.abra...@gmail.com
Subject: Re: [gmx-users] Re: Installation
Hi Szilárd and all,
Thanks very much for the information. I am more interested in getting
single simulations to go as fast as possible (within reason!) rather than
overall throughput. Would you expect that the more expensive dual
Xeon/Titan systems would perform better in this respect?
Cheers
Yes, that has been true for GROMACS for a few years. Low-latency
communication is essential if you want a whole MD step to happen in around
1ms wall time.
Mark
On Nov 5, 2013 11:24 PM, Dwey Kauffman mpi...@gmail.com wrote:
Hi Szilard,
Thanks.
From Timo's benchmark,
1 node: 142
Hi,
I want to know the exact way to calculate the density of water around
certain residues in my protein. I tried to calculate this by using
g_select, with the following command :-
g_select -f nvt.trr -s nvt.tpr -select '"Nearby water" resname SOL and
within 0.5 of resnr 115 to 118' -os water.xvg
Hi,
I am trying to install gromacs 4.5.7 on a Rocks cluster (6.0) and it works
fine until the ./configure command, but I am getting an error at the make command:
Error:
[root@cluster gromacs-4.5.7]# make
/bin/sh ./config.status --recheck
running CONFIG_SHELL=/bin/sh /bin/sh
just a small benchmark...
each node - 2 x Xeon E5-2680v2 + 2 x NVIDIA K20X GPGPUs
42827 atoms - vsites - 4fs
1 node: 142 ns/day
2 nodes FDR14: 218 ns/day
4 nodes FDR14: 257 ns/day
8 nodes FDR14: 326 ns/day
16 nodes FDR14: 391 ns/day (global warming)
best,
timo
Brad,
These numbers seem rather low for a standard simulation setup! Did
you use a particularly long cut-off or a short time-step?
Cheers,
--
Szilárd Páll
On Fri, Nov 1, 2013 at 6:30 PM, Brad Van Oosten bv0...@brocku.ca wrote:
I'm not sure of the prices of these systems any more; they are
On Mon, Nov 4, 2013 at 12:01 PM, bharat gupta bharat.85.m...@gmail.com wrote:
Hi,
I am trying to install gromacs 4.5.7 on a Rocks cluster (6.0) and it works
fine until the ./configure command, but I am getting an error at the make command:
Error:
[root@cluster gromacs-4.5.7]#
Hi,
I have installed Gromacs 4.5.6 on a Rocks cluster 6.0 and my system has
32 processors (CPUs). But while running the NVT equilibration step it uses
only 1 CPU and the others remain idle. I compiled Gromacs using the
enable-mpi option. How can I make mdrun use all 32 processors?
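An MPI-enabled build only uses the resources the MPI launcher hands it. A hedged sketch of the usual launch line (the binary name mdrun_mpi and the file names are assumptions that depend on how the build was installed):

```shell
# Start the MPI-enabled mdrun on all 32 processors via the MPI launcher
mpirun -np 32 mdrun_mpi -v -deffnm nvt
```

If this still runs on one core, the MPI environment itself (hostfile, daemon setup) needs configuring; see the MPI library's own documentation.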
That last procedure works. I really appreciate your help. The only other
question I have is related to the selection process. Is there a way to
select the oxygen atoms of water within a certain distance of a molecule, as
well as the corresponding hydrogen atoms on the water molecule? Right
On 11/3/13 7:12 AM, rankinb wrote:
That last procedure works. I really appreciate your help. The only other
question I have is related to the selection process. Is there a way to
select the oxygen atoms of water within a certain distance of a molecule, as
well as the corresponding hydrogen