On 11/5/13 7:37 AM, MUSYOKA THOMMAS wrote:
Dear Users,
I am running MD simulations of a protein-ligand system. Sometimes when I do
an mdrun, be it for the energy minimization, the NVT and NPT
equilibration, or the actual MD run step, the output files are
named in a very odd
Dear Dr Justin,
Much appreciation. You nailed it.
Kind regards.
On Tue, Nov 5, 2013 at 2:41 PM, Justin Lemkul jalem...@vt.edu wrote:
On 11/5/13 7:37 AM, MUSYOKA THOMMAS wrote:
Dear Users,
I am running MD simulations of a protein-ligand system. Sometimes when I
do
an mdrun, be it for the
On Oct 29, 2013 1:26 AM, Pavan Ghatty pavan.grom...@gmail.com wrote:
Now /afterok/ might not work since technically the job is killed due to
walltime limits - making it not ok.
Hence use -maxh!
Mark
So I suppose /afterany/ is a better
option. But I do appreciate your warning about spamming
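For concreteness, a minimal sketch of how -maxh fits into a walltime-limited job script (job names and times here are hypothetical; -maxh makes mdrun stop cleanly and write a checkpoint at about 99% of the given hours):
#PBS -l walltime=12:00:00
# stop cleanly, writing a checkpoint, well before the scheduler kills the job
mdrun -deffnm md -cpi md.cpt -maxh 11.5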
I need to collect 100 ns, but I can collect only ~1 ns (1000 steps) per
run. Since I don't have .trr files, I rely on .cpt files for restarts. For
example,
grompp -f md.mdp -c md_14.gro -t md_14.cpt -p system.top -o md_15
This runs into a problem when the run gets killed due to walltime limits.
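For reference, the single-.tpr alternative raised later in this thread (one .tpr spanning the full 100 ns, continued from its checkpoint on every resubmission) would look roughly like this; the file names and -maxh value are hypothetical:
grompp -f md.mdp -c md_0.gro -p system.top -o md_all   # nsteps in md.mdp covers the full 100 ns
mdrun -deffnm md_all -cpi -append -maxh 11.5           # resubmit this same line; it continues from md_all.cpt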
On Mon, Oct 28, 2013 at 4:27 PM, Pavan Ghatty pavan.grom...@gmail.com wrote:
I need to collect 100 ns, but I can collect only ~1 ns (1000 steps) per
run. Since I don't have .trr files, I rely on .cpt files for restarts. For
example,
grompp -f md.mdp -c md_14.gro -t md_14.cpt -p system.top -o
Mark,
The problem with one .tpr file set for 100ns is that when job number (say)
4 hits the wall limit, it crashes and never gets a chance to submit the
next job. So it's not really automated.
Now I could initiate job 5 before /mdrun/ in job 4's script and hold job 5
till job 4 ends. But the PBS
No this isn't a problem. You can use job names under the -hold_jid flag.
As long as you change the job name in the submit script between
submissions this isn't a problem. You could have a submit script for job 4
with -N md_job4 and -hold_jid md_job3 then change these to -N md_job5 and
-hold_jid
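A minimal sketch of that chaining, assuming SGE-style qsub flags as quoted above (script and job names hypothetical):
qsub -N md_job4 -hold_jid md_job3 submit_part4.sh
# edit the submit script for the next part, then:
qsub -N md_job5 -hold_jid md_job4 submit_part5.sh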
Aah yes of course. Thanks James.
On Mon, Oct 28, 2013 at 3:16 PM, jkrie...@mrc-lmb.cam.ac.uk wrote:
No this isn't a problem. You can use job names under the -hold_jid flag.
As long as you change the job name in the submit script between
submissions this isn't a problem. You could have a
You're welcome
On 28 Oct 2013, at 20:03, Pavan Ghatty pavan.grom...@gmail.com wrote:
Aah yes of course. Thanks James.
On Mon, Oct 28, 2013 at 3:16 PM, jkrie...@mrc-lmb.cam.ac.uk wrote:
No this isn't a problem. You can use job names under the -hold_jid flag.
As long as you change the
On Mon, Oct 28, 2013 at 7:53 PM, Pavan Ghatty pavan.grom...@gmail.com wrote:
Mark,
The problem with one .tpr file set for 100ns is that when job number (say)
4 hits the wall limit, it crashes and never gets a chance to submit the
next job. So it's not really automated.
That's why I
Now /afterok/ might not work since technically the job is killed due to
walltime limits - making it not ok. So I suppose /afterany/ is a better
option. But I do appreciate your warning about spamming the queue and yes I
will re-read PBS docs.
On Mon, Oct 28, 2013 at 5:11 PM, Mark Abraham
On 10/27/13 9:37 AM, Pavan Ghatty wrote:
Hello All,
Is there a way to make mdrun put out a .cpt file with the same frequency as a
.xtc or .trr file? From here
http://www.gromacs.org/Documentation/How-tos/Doing_Restarts I see that we
can choose how often (time in minutes) the .cpt file is written.
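For reference, the checkpoint interval is a wall-clock setting on mdrun itself; a hedged example, assuming the 4.x -cpt flag, which takes minutes:
mdrun -deffnm md -cpt 30   # write md.cpt every 30 minutes of wall time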
On 7/21/13 12:18 AM, Collins Nganou wrote:
Dear Users,
When trying to run the following command:
mdrun -v -deffnm protein-EM-solvated -c protein-EM-solvated.pdb
I have received the error below.
Reading file dna-EM-solvated.tpr, VERSION 4.5.5 (single precision)
Starting 2 threads
Perhaps you need a less prehistoric compiler. Or the affinity-setting bug
fix in 4.6.3. Or both.
On Jul 17, 2013 6:25 PM, Shi, Yu (shiy4) sh...@mail.uc.edu wrote:
Dear gmx-users,
My problem is weird.
My mdrun worked well using the old serial version 4.5.5 (about two years
ago). And I have
Hi,
I recommend running the regression tests. The simplest way is to build
GROMACS with cmake -DREGRESSIONTEST_DOWNLOAD, and run make check.
See
http://www.gromacs.org/Documentation/Installation_Instructions#.c2.a7_4.12._Testing_GROMACS_for_correctness
for more details.
Roland
On Thu, Jun 6,
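For concreteness, the sequence Roland describes would look roughly like this (build-directory layout hypothetical):
cmake .. -DREGRESSIONTEST_DOWNLOAD=ON   # fetch the matching regression test suite
make
make check                              # run the regression tests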
No, the residue names are those from the .top file. But that's not the
same as the moleculetypes. You have to change the residue names in the [
atoms ] section.
Cheers,
Tsjerk
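A sketch of where those names live, assuming a typical [ atoms ] layout (columns: nr, type, resnr, residue, atom, cgnr, charge); the residue and atom names here are hypothetical:
[ atoms ]
;  nr  type  resnr  residue  atom  cgnr  charge
    1  CH3       1      LIG    C1     1   0.000
    2  CH2       1      LIG    C2     2   0.000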
On Sun, May 26, 2013 at 12:57 AM, Mark Abraham mark.j.abra...@gmail.com wrote:
AFAIK, the residue names in the
Great, thank you, that did the trick. My fault for not realizing this
earlier.
Best,
Reid
On Sun, May 26, 2013 at 2:12 AM, Tsjerk Wassenaar tsje...@gmail.com wrote:
No, the residue names are those from the .top file. But that's not the
same as the moleculetypes. You have to change the
AFAIK, the residue names in the mdrun output .gro file are those of the
structure file you gave to grompp.
Mark
On Sun, May 26, 2013 at 12:31 AM, Reid Van Lehn rvanl...@gmail.com wrote:
Hello,
I am simulating a lipid bilayer and wish to apply position restraints to
only a subset of the
On 5/23/13 12:53 PM, mu xiaojia wrote:
Dear users,
I have used GROMACS for a while; however, sometimes when I run it on
supercomputer clusters, I see mdrun generate a lot of files with #,
which occupy a lot of space. Does anyone know why, and how to avoid it?
Thanks
example, my
Or take your backup life into your own hands and set the environment
variable GMX_MAXBACKUP=-1
Mark
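For example, in the shell or a job script:
export GMX_MAXBACKUP=-1   # disable the #...# backup copies entirely
mdrun -deffnm md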
On Thu, May 23, 2013 at 7:04 PM, Justin Lemkul jalem...@vt.edu wrote:
On 5/23/13 12:53 PM, mu xiaojia wrote:
Dear users,
I have used GROMACS for a while; however, sometimes when I run it on
On 5/13/13 6:41 AM, Francesco wrote:
Good morning all,
This morning I checked the output of an 8 ns (4 x 2 ns) simulation and I
noticed a strange behaviour:
the first two simulations (2 ns each) ended correctly and they both
took 2 h 06 min to finish.
The second two were still running when the
Thank you for the reply. I'm in contact with my admin and I hope that he
will tell me something soon.
One thing that I really don't understand is why only the last
nanoseconds are affected.
I ran the same simulation (with the same parameters) and I've never had
problems in the first 4 ns, only
On 5/6/13 9:39 PM, Andrew DeYoung wrote:
Hi,
I am running mdrun-gpu on Gromacs 4.5.5 (with OpenMM). This is my first
time using a GPU. I get the following error message when attempting to run
mdrun-gpu with my .tpr file:
---
Program
On 4/26/13 10:50 AM, Juliette N. wrote:
Hi all,
I am going to use the 4.6 version of gmx on GPU. I am not sure of the mdrun
command though. I used to use mpirun -np 4 mdrun_mpi -deffnm .. in 4.5.4.
Can I use the same command line as before for mdrun or other tools?
Please read through the
On Thu, Apr 11, 2013 at 6:17 AM, manara r. (rm16g09) rm16...@soton.ac.uk wrote:
Dear gmx-users,
I am having a problem with a periodic molecule and the domain
decomposition. I wish to use a high number of processors (circa 180, but
can obviously be reduced) and therefore need to use the -rdd
On 11/04/2013 11:17, manara r. (rm16g09) rm16...@soton.ac.uk wrote:
Dear gmx-users,
I am having a problem with a periodic molecule and the domain
decomposition. I wish to use a high number of processors (circa 180, but
can obviously be reduced) and therefore need to use the -rdd or -dds
flags
On 3/12/13 5:14 AM, l@utwente.nl wrote:
Hello Justin,
Thank you for your reply. I uploaded the images; please find the link
below,
start box:
...@gromacs.org] on behalf
of Justin Lemkul [jalem...@vt.edu]
Sent: Monday, March 11, 2013 10:06 PM
To: Discussion list for GROMACS users
Subject: Re: [gmx-users] mdrun WARNING and crash
On Monday, March 11, 2013, wrote:
Hello Justin,
Thank you for your comments.
Taking your suggestions, I set
Lemkul [jalem...@vt.edu]
Sent: Monday, March 11, 2013 10:06 PM
To: Discussion list for GROMACS users
Subject: Re: [gmx-users] mdrun WARNING and crash
On Monday, March 11, 2013, wrote:
Hello Justin,
Thank you for your comments.
Taking your suggestions, I set nrexcl=1, and commented out the [pairs] section
for GROMACS users
Subject: Re: [gmx-users] mdrun WARNING and crash
On 2/28/13 6:59 AM, l@utwente.nl wrote:
Hello Justin,
Thank you for your help. I have read the previous discussions on this topic,
which are very helpful.
The link is:
http://gromacs.5086.n6.nabble.com/What-is-the-purpose
Subject: Re: [gmx-users] mdrun WARNING and crash
On 2/28/13 6:59 AM, l@utwente.nl wrote:
Hello Justin,
Thank you for your help. I have read the previous discussions on this
topic, which are very helpful.
The link is:
http://gromacs.5086.n6.nabble.com/What-is-the-purpose-of-the-pairs
On 9/08/2012 3:47 PM, cuong nguyen wrote:
Dear Gromacs Users,
I am trying Gromacs/4.5.5-OpenMM on GPU with CUDA support.
When I ran grompp-gpu to generate the .tpr file, it worked well:
grompp-gpu -f input_min.mdp -o min.tpr -c box1.g96
However, when I then run mdrun-gpu: mdrun-gpu -s min -o min -c
On 8/9/12 1:40 PM, Shima Arasteh wrote:
Dear gmx users,
Would this error (as you see here) be a symptom of the system blowing up? Or
should just the .mdp options be changed?
Fatal error:
1 of the 16625 bonded interactions could not be calculated because some atoms
involved moved further apart
On Fri, Jul 6, 2012 at 1:13 AM, Mark Abraham mark.abra...@anu.edu.au wrote:
Possibly not. This might be another instance of the GROMACS team not having
put much effort into the EM code on the theory that it doesn't run for long
enough, so have enough time for developer effort to pay off in
On 6/07/2012 2:46 AM, Elton Carvalho wrote:
Dear gmx-people.
I know that if you send a KILL signal to an mdrun instance running
integrator = md it sets nsteps to the next NS step and exits
gracefully, but I don't see it happening in minimization runs.
Is it possible to send a signal to
On 7/3/12 5:40 AM, reising...@rostlab.informatik.tu-muenchen.de wrote:
Hi everybody,
I wanted to do a minimization with mdrun but the only output I get is:
3m71_minim.edr
3m71_minim.log
3m71_minim.trr
But no structure file like a .pdb.
There was no error in the step before, where I prepared
Information messages, such as those shown on the screen during mdrun, are
output to stderr. So if you want to capture them you should redirect as follows:
mdrun -v -s md.tpr 2> verbose.txt
In the case where you may need to get all output (from both stdout and
stderr) you should use:
mdrun -v -s
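For reference, capturing both streams in a POSIX shell looks like this (output file names hypothetical):
mdrun -v -s md.tpr > out.txt 2> err.txt   # stdout and stderr to separate files
mdrun -v -s md.tpr > all.txt 2>&1         # or both streams into one file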
On Mon, Jun 18, 2012 at 12:43 PM, Javier Cerezo j...@um.es wrote:
Information messages, such as those shown on the screen during mdrun, are
output to stderr. So if you want to capture them you should redirect as follows:
mdrun -v -s md.tpr 2> verbose.txt
In the case where you may need to get all
It actually depends on your shell/environment :-P
On Sun Grid Engine and derivatives, you can have the scheduler capture
the stdout and stderr output through the -o and -e parameters, respectively.
On 2012-06-18 05:28:11PM +0530, Chandan Choudhury wrote:
On Mon, Jun 18, 2012 at 12:43 PM,
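A sketch of what Peter describes, as SGE script directives (file names hypothetical):
#$ -o md.out    # scheduler captures stdout here
#$ -e md.err    # and stderr here
mdrun -v -deffnm md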
Thanks Peter for the clarification.
Chandan
--
Chandan kumar Choudhury
NCL, Pune
INDIA
On Tue, Jun 19, 2012 at 2:27 AM, Peter C. Lai p...@uab.edu wrote:
It actually depends on your shell/environment :-P
On Sun Grid Engine and derivatives, you can have the scheduler capture
the stdout
Hello all,
I am trying to exclude nonbonded interactions on the polymer chains using
grompp -f old.mdp -c old_em.gro -p nrexcl_new.top -o new.tpr
and the mdrun -rerun command. But when I issue the command above, grompp
takes many hours to finish and at the end grompp crashes (Killed).
This even
On 13/04/2012 10:44 AM, Juliette N. wrote:
Hello all,
I am trying to exclude nonbonded interactions on the polymer chains using
grompp -f old.mdp -c old_em.gro -p nrexcl_new.top -o new.tpr
and the mdrun -rerun command. But when I issue the command above, grompp
takes many hours to finish and at
Thanks Mark. I have several polymer chains (single polymer type) each
having 362 atoms. So in order to exclude all nonbonded interactions of
a chain with itself I need to add about 362 lines in the top file.
[exclusions]
1   2 3 ... 362
2   3 4 ... 362
3   4 5 ... 362
...
358 359 ... 362
...
360 361 362 (this
On 12/04/2012 3:30 PM, priya thiyagarajan wrote:
Hello sir,
Thanks for your kind reply.
I am performing the final MD run for 60 molecules.
After I submitted my job for 5 ns, when I analysed the result my run had
completed only 314 ps initially.
At this point, you should have looked at your
On 1 April 2012 20:17, Mark Abraham mark.abra...@anu.edu.au wrote:
On 2/04/2012 10:10 AM, Juliette N. wrote:
Hi all,
I have an enquiry regarding calculation of the heat of vaporization by
estimating intermolecular nonbonded energies using the mdrun -rerun option. mdrun
-rerun should break the total
Juliette N. wrote:
On 1 April 2012 20:17, Mark Abraham mark.abra...@anu.edu.au wrote:
On 2/04/2012 10:10 AM, Juliette N. wrote:
Hi all,
I have an enquiry regarding calculation of the heat of vaporization by
estimating intermolecular nonbonded energies using the mdrun -rerun option. mdrun
-rerun
On 2/04/2012 10:10 AM, Juliette N. wrote:
Hi all,
I have an enquiry regarding calculation of the heat of vaporization by
estimating intermolecular nonbonded energies using the mdrun -rerun option.
mdrun -rerun should break the total nonbonded energy coming from
nonbonded energy of (different molecules
On 1 April 2012 20:17, Mark Abraham mark.abra...@anu.edu.au wrote:
On 2/04/2012 10:10 AM, Juliette N. wrote:
Hi all,
I have an enquiry regarding calculation of the heat of vaporization by
estimating intermolecular nonbonded energies using the mdrun -rerun option. mdrun
-rerun should break the total
On 2/04/2012 11:16 AM, Juliette N. wrote:
On 1 April 2012 20:17, Mark Abraham mark.abra...@anu.edu.au wrote:
On 2/04/2012 10:10 AM, Juliette N. wrote:
Hi all,
I have an enquiry regarding calculation of the heat of vaporization by
estimating intermolecular nonbonded energies using mdrun -rerun
Thanks. One last question. So what's the new trr file provided by the -o
flag of mdrun -rerun below?
mdrun -rerun total_nonbonded.trr -s new.tpr -o new -c new -g new -e new
On 1 April 2012 20:17, Mark Abraham mark.abra...@anu.edu.au wrote:
On 2/04/2012 10:10 AM, Juliette N. wrote:
Hi all,
I
On 2/04/2012 12:05 PM, Juliette N. wrote:
Thanks. One last question. So what's the new trr file provided by the -o
flag of mdrun -rerun below?
mdrun -rerun total_nonbonded.trr -s new.tpr -o new -c new -g new -e new
If it even writes one, it will be identical to the -rerun file. There's
no way for
On 1 April 2012 22:07, Mark Abraham mark.abra...@anu.edu.au wrote:
On 2/04/2012 12:05 PM, Juliette N. wrote:
Thanks. One last question. So what's the new trr file provided by the -o
flag of mdrun -rerun below?
mdrun -rerun total_nonbonded.trr -s new.tpr -o new -c new -g new -e new
If it even
searching every configuration.
Mark
Best,
-- Forwarded message --
From: Juliette N. joojoojo...@gmail.com
Date: 1 April 2012 22:10
Subject: Re: [gmx-users] mdrun -rerun
To: Discussion list for GROMACS users gmx-users@gromacs.org
On 1 April 2012 22:07, Mark Abraham mark.abra
On Wed, 2012-03-07 at 12:33 +, Lara Bunte wrote:
Hi
After I used
grompp -f em.mdp -p topol.top -c solvated.gro -o em.tpr
to collect my files into one em.tpr file (which is the meaning of grompp, as
far as I understand it),
then I started mdrun for energy minimization with the
mdrun looks for topol.tpr by default. Specify -s em.tpr in your command
On Wed, Mar 7, 2012 at 8:33 PM, Lara Bunte lara.bu...@yahoo.de wrote:
Hi
After I used
grompp -f em.mdp -p topol.top -c solvated.gro -o em.tpr
to collect my files into one em.tpr file (which is the meaning of grompp as
On 8/03/2012 6:09 PM, Siew Wen Leong wrote:
mdrun looks for topol.tpr by default. Specify -s em.tpr in your command
Please look up what -deffnm is for. :-)
Mark
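For reference, the two equivalent invocations under discussion (the em names follow the thread):
mdrun -v -s em.tpr -o em.trr -c em.gro -g em.log -e em.edr
mdrun -v -deffnm em   # shorthand: read em.tpr and name all outputs em.*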
On Wed, Mar 7, 2012 at 8:33 PM, Lara Bunte lara.bu...@yahoo.de wrote:
Hi
After I used
general, but figured the former
and was not such a big deal, so never wrote here.
Stephan
Original Message
Date: Fri, 02 Mar 2012 17:31:49 +1100
From: Mark Abraham mark.abra...@anu.edu.au
To: Discussion list for GROMACS users gmx-users@gromacs.org
Subject: Re: [gmx-users
On 2/03/2012 10:15 AM, bo.shuang wrote:
Hello, all,
I am trying to do REMD simulation. So I used command:
mdrun -s t2T.tpr -multi 2 -replex 1000
And gromacs gives error report:
Fatal error:
mdrun -multi is not supported with the thread library.Please compile
GROMACS with MPI support
For more
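A hedged sketch of the fix the error asks for, assuming a 4.5-era cmake build (directory layout hypothetical):
cmake .. -DGMX_MPI=ON   # build an MPI-enabled binary, typically installed as mdrun_mpi
mpirun -np 2 mdrun_mpi -s t2T.tpr -multi 2 -replex 1000   # -multi 2 expects t2T0.tpr and t2T1.tpr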
On 2/03/2012 3:59 PM, rama david wrote:
Dear GROMACS Friends,
My MD run crashed. I used the following command:
mdrun -s protein_md.tpr -c protein_md.trr -e protein_md.edr -g
protein_md.log -cpi -append -v
The system responds with:
Fatal error:
The original run wrote a file
On 1/03/2012 12:31 AM, Steven Neumann wrote:
Dear Gmx Users,
I am trying to use the -pd option of mdrun for particle decomposition.
I used:
mpiexec mdrun -pd -deffnm nvt
I obtain:
apps/intel/ict/mpi/3.1.038/bin/mpdlib.py:37: DeprecationWarning: the
md5 module is deprecated; use hashlib instead
On Wed, Feb 29, 2012 at 1:47 PM, Mark Abraham mark.abra...@anu.edu.au wrote:
On 1/03/2012 12:31 AM, Steven Neumann wrote:
Dear Gmx Users,
I am trying to use the -pd option of mdrun for particle decomposition.
I used:
mpiexec mdrun -pd -deffnm nvt
I obtain:
On 28/02/2012 3:50 PM, priya thiyagarajan wrote:
Hello sir,
while performing a simulation for 30 ns, because of the queue time limit my
run terminated at 11.6 ns. Then I extended my simulation using mdrun
as you suggested.
While doing so I got the error:
Fatal error:
Failed to lock: md20.log.
On 26/02/2012 10:42 PM, nicolas prudhomme wrote:
Hi gmx-users,
I don't know why, but my mdrun suddenly started to use the 4.0.7
version while I have installed only the 4.5.4 version.
I have reinstalled GROMACS 4.5.4 but when I run mdrun it still wants to
use the 4.0.7 version and cannot
Hi,
sorry, I'm back to this thread after quite a long time, as I was trying
to solve other problems. Now I'm back to the reverse transformation
tutorial on the MARTINI webpage, and whenever I try to use the mdp
script provided there for the annealing I just end up with the same
error message, which
francesca vitalini wrote:
Hallo GROMACS users!
I'm trying to run a simple MD script after running an energy
minimization script on my system and I'm getting a weird error message:
Reading file dynamin_dimer_PR1.tpr, VERSION 3.3.1 (single precision)
Loaded with Money
Actually the directory is my own and I created it in my home directory,
so that shouldn't be a problem, as I have also created other files in the same
directory without any problems so far.
Other ideas?
Thanks
Francesca
On 31 Jan 2012, at 13:02, Justin A. Lemkul wrote
Francesca Vitalini wrote:
Actually the directory is my own and I created it in my home directory,
so that shouldn't be a problem, as I have also created other files in the same
directory without any problems so far.
Other ideas?
Thanks
Francesca
On 31 Jan 2012, at
Thank you Justin for your quick reply. Unfortunately I cannot use a more modern
version of GROMACS, as my topology and .gro files were first created for a
reverse transformation from CG to FG and thus required the 3.3.1 version and
some specific .itp files that are only present in that
Well, I'm still struggling with this script. Apparently the problem is in
using the integrator md with the GROMACS 3.3.1 version. In fact the same
.mdp file with integrator steep works, while with md it always gives the
error message that it cannot open the .xtc file.
How can I get around this
francesca vitalini wrote:
Well, I'm still struggling with this script. Apparently the problem is
in using the integrator md with the GROMACS 3.3.1 version. In fact the
same .mdp file with integrator steep works, while with md it always
gives the error message that it cannot open the .xtc
Hi,
Most of those are just warnings, the only error I see there comes from
the shell, probably an error in your script.
Cheers,
--
Szilárd
On Wed, Jan 18, 2012 at 12:27 PM, aiswarya pawar
aiswarya.pa...@gmail.com wrote:
Hi users,
I am running mdrun on GPU. I receive an error such as
On 5/12/2011 9:03 AM, Rongliang Wu wrote:
Dear all,
I have compiled the GROMACS 4.5.5 version on BlueGene/P, both on the
front end and BGP, using the following script,
with the front end as an example:
On 25/11/2011 2:28 AM, Vasileios Tatsis wrote:
Dear Gromacs Users,
I am using the -rerun option of mdrun to re-analyze a trajectory.
Thus, I tried to rerun the same trajectory (md.xtc) with exactly the
same md.tpr.
But the bonded interactions are not computed or written to the log
file or to
On Wed, Nov 16, 2011 at 4:11 PM, xianqiang xianqiang...@126.com wrote:
Hi, all
I just restarted a simulation with 'mpirun -np 8 mdrun -pd yes -s md_0_1.tpr
-cpi state.cpt -append'
However, the following error appears:
Output file appending has been requested,
but some output files
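One common workaround, an assumption on my part rather than advice from this thread: disable appending so the continuation writes numbered part files instead:
mpirun -np 8 mdrun -pd yes -s md_0_1.tpr -cpi state.cpt -noappend   # outputs get .part-numbered names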
Bert wrote:
Dear gmx-users,
When I continued a run on my x86_64 Linux clusters using the command
mdrun -deffnm prod -cpi prod.cpt -append, I got the error below:
Program mdrun, VERSION 4.5.4
Source code file: checkpoint.c, line: 1734
Fatal error:
The original run wrote a file called
H.Ghorbanfekr wrote:
Hi,
I'm using mdrun-gpu for testing gpubench demos.
But I've got this error message:
Fatal error:
reading tpx file (topol.tpr) version 73 with version 71 program
I installed different versions of GROMACS, from 4.5 to 4.5.4, and
gpu-gromacs-beta versions 1 and 2.
But it still
I used the newest version, GROMACS 4.5.4 (and did not install the beta GPU
version), so everything goes well.
Thanks for your reply.
On Sun, Aug 14, 2011 at 5:52 PM, Justin A. Lemkul jalem...@vt.edu wrote:
H.Ghorbanfekr wrote:
Hi,
I'm using mdrun-gpu for testing gpubench demos.
But I've got
On 12/08/2011 9:55 PM, Sebastian Breuers wrote:
Dear all,
searching for the mentioned error message, I found a bug report for
mdrun. It seemed to be fixed, but in my setup it appears again and I
am not sure if I can do something about it.
I did not attach the tpr file since it gets bounced
On Fri, Aug 12, 2011 at 7:55 PM, Sebastian Breuers
breue...@uni-koeln.de wrote:
Dear all,
searching for the mentioned error message, I found a bug report for mdrun. It
seemed to be fixed, but in my setup it appears again and I am not sure if I
can do something about it.
I did not attach
Hey,
thank you both for the response. I could at least restart the system,
and it is running beyond the crash point. Fingers crossed. :)
Kind regards
Sebastian
On 12.08.2011 15:41, lina wrote:
On Fri, Aug 12, 2011 at 7:55 PM, Sebastian Breuers
breue...@uni-koeln.de wrote:
Hello,
Just to share information: my parallel MD run also crashes (very rarely), but I
can always bypass the crash point using .cpt files.
dawei
On Fri, Aug 12, 2011 at 10:02 AM, Sebastian Breuers
breue...@uni-koeln.de wrote:
Hey,
thank you both for the response. I at least could restart the
Hsin-Lin Chiang wrote:
Hi,
I tried GROMACS 4.5.4 these days; the last version I used was 4.0.5.
I found that when I add --enable-threads during installation,
I can use mdrun -nc 12 to run on 12 CPUs together within one machine.
I assume you mean -nt?
It also amazed me that when I type top to check the
On most of my multi-core machines, an attempt is made to detect the
number of threads to start at run-time (there may be a check for the
MAXIMUM number of threads at compile-time, but a developer would need to
chime in to determine if this is the case). For instance, I have a dual
quadcore machine
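A sketch of the thread-count control being discussed, assuming the 4.5-era thread-MPI build (the md name is hypothetical):
mdrun -nt 12 -deffnm md   # start 12 threads; omit -nt to let mdrun auto-detect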
shivangi nangia wrote:
Dear gmx-users,
I have a cube (8 nm) of the system containing 1:1 water:methanol, a
polypeptide, Li ions, and 2,5-dihydroxybenzoic acid anions.
I am heating this system with no PBC (evaporation).
The md.mdp file is:
; Run parameters
integrator = md ;
shivangi nangia wrote:
Hello gmx users,
My system for NVT equilibration runs into a segmentation fault as soon as I
try to run it.
It does not give any warning or hint of what might be going wrong.
Since I am a new user I am having difficulty exploring the plausible
reasons.
System:
Hello gmx users,
My system for NVT equilibration runs into a segmentation fault as soon as
I try to run it.
It does not give any warning or hint of what might be going wrong.
Since I am a new user I am having difficulty exploring the plausible
reasons.
System: Protein (polyhistidine), 20
On 28/03/2011 7:06 PM, michael zhenin wrote:
Dear all,
I am trying to run dynamics (mdrun) on GROMACS 4.5.3 in parallel on 8
processors, but it crashes after a while and refuses to reach the end.
The error that pops up is:
1 particles communicated to PME node 4 are more than 2/3
Warren Gallin wrote:
Hi,
Using GROMACS 4.5.3 I tried to continue an mdrun from a checkpoint, and got an
error that I have never seen before, to wit:
Program mdrun_mpi, VERSION 4.5.3
Source code file:
/global/software/build/gromacs-4.5.3/gromacs/src/gmxlib/checkpoint.c, line: 1727
Fatal
zen...@graduate.org wrote on 2011-03-22 10:22:
When I type this command: szrun 2 2 mdrun_mpi -nice 0 -v -s pr.tpr -o
pr.trr -c b4md.pdb -g pr.log -e pr.edr
the process cannot run, and there is this problem:
t = 0.000 ps: Water molecule starting at atom 90777 can not be settled.
Check for bad