Dear All,
I have to verify whether some hydrophobic residues conserve their
interactions during the simulation and establish a cross-talk for the
receptor's transactivation.
Is g_mindist a good tool for this purpose? Do you have any other suggestions?
Moreover, should I use trjconv for PBC treatment before
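A minimal sketch of that kind of workflow (file names and the 0.45 nm contact
cutoff are assumptions, not from this thread):
trjconv -f traj.xtc -s topol.tpr -pbc mol -center -o traj_pbc.xtc
g_mindist -f traj_pbc.xtc -s topol.tpr -n index.ndx -od mindist.xvg -on numcont.xvg -d 0.45
Here -od writes the minimum distance between the two chosen groups over time,
and -on the number of contacts within the -d cutoff.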
Dear gmx users
I am performing a simulation with GROMACS. When building the ligand .itp file
with PRODRG, I get an error that the boron atom is not supported by this program.
Do you have any suggestion for solving this problem?
regards
--
Somayeh Alimohammadi
Ph.D. Student in Medical Nanotechnology
Shahid
Another option is g_hbond -contact.
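For example (a sketch; file and group names are assumptions):
g_hbond -f traj_pbc.xtc -s topol.tpr -n index.ndx -contact -r 0.4 -num contacts.xvg
With -contact it counts contacts within the -r cutoff instead of hydrogen bonds.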
On 8 Sep 2014, at 09:25, Ca C. devi...@hotmail.com wrote:
Dear All,
I have to verify whether some hydrophobic residues conserve their
interactions during the simulation and establish a cross-talk for the
receptor's transactivation.
Is g_mindist a good tool for this
Hello:
I am trying to use the following command in Gromacs-5.0.1:
mpirun -np 2 mdrun_mpi -v -s npt2.tpr -c npt2.gro -x npt2.xtc -g
npt2.log -gpu_id 01 -ntomp 10
but it always failed with messages:
2 GPUs detected on host cudaB:
#0: NVIDIA GeForce GTX 780 Ti, compute cap.: 3.5, ECC: no,
Hello:
I am trying to make two groups for my lipid system by g_select with
command line:
g_select_mpi -sf select.dat -f em.gro
here is the content of select.dat:
up = z > 80;
down = z < 80;
but it failed with messages:
Program: gmx select, VERSION 5.0.1
Source file:
hi
I am running a 5 ns simulation using the mdrun command, which will take a day
to complete. I want to know how to check the status of the simulation
during the run, to see whether it is going in the right direction or not.
thanks
regards
ankit
Hi,
On Mon, Sep 8, 2014 at 4:47 PM, Albert mailmd2...@gmail.com wrote:
I am trying to make two groups for my lipid system by g_select with
command line:
g_select_mpi -sf select.dat -f em.gro
here is the content of select.dat:
up = z > 80;
down = z < 80;
but it failed with messages: ...
Your
Hi everyone, I want to ask one question. In my .mdp file, if I use the md
integrator for energy minimisation, then the system is fine, but if I use the
steep integrator, my system gets an error about too large a force on one atom.
I am not clear why this is happening. Can anybody guide me please?
Regards
Make a temporary copy of the files (generally not necessary, but might
help) and observe whatever suits you.
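For instance (file names are assumptions; working on snapshot copies avoids
touching the files mdrun is still writing):
cp md.log md_check.log && tail -n 30 md_check.log
cp ener.edr ener_check.edr && g_energy -f ener_check.edr -o energies.xvg
The log shows the current step and performance, and g_energy lets you plot
e.g. the potential energy written so far.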
Mark
On Sep 8, 2014 3:50 PM, ankit agrawal aka...@gmail.com wrote:
hi
I am running a 5 ns simulation using the mdrun command, which will take a day
to complete. I want to know how
Hi... I think the best way is to check the log file. If I'm wrong, please do
correct me!
Regards
Lovika
On 8 Sep 2014 19:21, ankit agrawal aka...@gmail.com wrote:
hi
I am running a 5 ns simulation using the mdrun command, which will take a day
to complete. I want to know how to check the
The md integrator does MD, not EM...
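For EM, a typical steepest-descent .mdp fragment looks like this (values are
illustrative, not from this thread):
integrator  = steep
emtol       = 1000.0     ; stop when the maximum force drops below 1000 kJ/mol/nm
emstep      = 0.01       ; initial step size in nm
nsteps      = 50000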
Mark
On Sep 8, 2014 4:11 PM, Lovika Moudgil lovikamoud...@gmail.com wrote:
Hi everyone, I want to ask one question. In my .mdp file, if I use the md
integrator for energy minimisation, then the system is fine, but if I use the
steep integrator, my system gets
Hello Teemu:
thanks a lot for such helpful advice.
It works now. If I would like to select protein and z > 80, I use the
following select.dat file:
up protein and z > 80;
down protein and z < 80;
but it failed with messages:
In command-line option -sf
Error in parsing selections from file
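For comparison, a selection file in this form should parse (assuming the
default index group name "Protein" and that 80 nm is the intended z cutoff):
up = group "Protein" and z > 80;
down = group "Protein" and z < 80;
The group keyword refers to a named index group; a bare word like protein is
not a selection keyword, which may be what the parser is complaining about.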
I have found my mistake and hopefully this information is useful.
This is caused by pinning of OpenMP threads by MPI. By default, all OpenMP
threads belonging to an MPI rank will run on one core only on our cluster.
I didn't realize this partly because GROMACS's thread-MPI (which is
employed
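A sketch of the corresponding workaround (assuming a recent Open MPI; the
mdrun flags are the ones used earlier in the thread):
mpirun -np 2 --bind-to none mdrun_mpi -v -s npt2.tpr -c npt2.gro -x npt2.xtc -g npt2.log -gpu_id 01 -ntomp 10 -pin on
--bind-to none stops the MPI launcher from confining each rank to one core,
and mdrun's own -pin on then places the OpenMP threads.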
Ooh, thanks for guiding me, Mark!!!
Regards
Lovika
On 8 Sep 2014 19:45, Mark Abraham mark.j.abra...@gmail.com wrote:
The md integrator does MD, not EM...
Mark
On Sep 8, 2014 4:11 PM, Lovika Moudgil lovikamoud...@gmail.com wrote:
Hi everyone, I want to ask one question. In my .mdp file
Now the question is: how can we solve the problem on the GPU workstation and
make the two GPUs work together on one task?
thx
Albert
On 09/08/2014 04:18 PM, Da-Wei Li wrote:
I have found my mistake and hopefully this information is useful.
This is caused by pinning of OPENMP threads by MPI. By
Hello gmx users,
I am currently working on ion-dependent
persistence length calculations of RNA strands. I want to calculate it in the
presence of multivalent cations like Al3+ and Co3+. I guess in order to do
that we have to include the specifications of these ions
hi GMX users
I have simulated a protein-ligand complex with GROMACS. I've repeated the
simulation twice but I get very different results. In one of the simulations
the ligand separated from the protein and stayed in the center of the box.
I've checked all of the input files and the steps, but I did
On 2014-09-08 18:28, soumadwip ghosh wrote:
Hello gmx users,
I am currently working on ion-dependent
persistence length calculations of RNA strands. I want to calculate it in the
presence of multivalent cations like Al3+ and Co3+. I guess in order to do
that we have to
On 9/8/14 12:36 AM, Lyna Luo wrote:
Hi Justin,
The blank lines are just from the email format. I used only one window to see
if g_wham can read in my data, but I actually have 64 windows. Please see the
error message below. Thanks again! -Lyna
GROMACS: gmx wham, VERSION 5.0
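For reference, the usual umbrella-sampling invocation is along these lines
(the list-file names are assumptions):
gmx wham -it tpr-files.dat -if pullf-files.dat -o profile.xvg -hist histo.xvg
where tpr-files.dat and pullf-files.dat each list one file per window (64
lines in this case).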
On 9/8/14 12:30 PM, Mahboobeh Eslami wrote:
hi GMX users
I have simulated a protein-ligand complex with GROMACS. I've repeated the
simulation twice but I get very different results. In one of the simulations
the ligand separated from the protein and stayed in the center of the box.
I've checked
Hello,
I was trying to run a simulation with Gromacs-4.6.3, which has been compiled
without thread-MPI, on a BlueGene/Q system. The configuration per node is
as follows:
PowerPC A2, 64-bit, 1.6 GHz, 16 cores SMP, 4 threads per core
For running on 8 nodes I tried:
srun mdrun_mpi -ntomp 64
But,
Hello Yunlong:
thanks a lot for the reply.
It works in Gromacs-4.6.5, but it does NOT work in Gromacs-5.0.1. I used the
following command:
mpirun -np 2 mdrun_mpi -v -s npt2.tpr -c npt2.gro -x npt2.xtc -g
npt2.log -gpu_id 01 -ntomp 10
but it always failed with messages:
2 GPUs detected on host
Hello:
I am simulating a protein in a lipid bilayer and I am going to apply a 50 mV
voltage across the bilayer. I noticed this paper:
http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0056342
The author did it in GROMACS. I noticed that there is an Electric fields
section (electric field
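In the 4.6/5.0 .mdp format that section takes three numbers per direction; a
sketch for a field along z, where the amplitude is illustrative and should be
set to the voltage divided by the box height (e.g. 0.05 V / 8 nm ≈ 0.006 V/nm,
the 8 nm box being an assumption):
E_z    = 1 0.006 0    ; number of cosines (only 1 supported), amplitude in V/nm, phase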
Hi, Albert
I think the error message is very clear. You have one MPI rank per node,
but provide 2 GPUs per node. The -gpu_id argument is applied on each of the
nodes.
dawei
On Mon, Sep 8, 2014 at 2:38 PM, Albert mailmd2...@gmail.com wrote:
Hello Yunlong:
thanks a lot for the reply.
It works in
Hi Dawei:
Yes, it is.
I am running it on a workstation which has 1 CPU (20 cores) plus 2 GPUs.
It is not a server. That's why I use the additional option:
-ntomp 10
so that each MPI rank can use 10 CPU cores.
This works fine in Gromacs-4.6.5, but it doesn't work in 5.0.1.
thx
Albert
On
Hi, Albert
It is quite strange. Your log file should report how many MPI ranks and
how many OpenMP threads per rank are being used. Can you check that part to
find out how many MPI ranks there are?
best,
dawei
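Something like this should show it (the exact log wording varies a bit between
versions):
grep -iE "mpi (process|rank)|openmp thread" npt2.log
In 4.6/5.0 the log typically reports lines such as "Using 2 MPI processes" and
"Using 10 OpenMP threads per MPI process".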
On Mon, Sep 8, 2014 at 3:18 PM, Albert mailmd2...@gmail.com wrote:
Hi Dawei:
Yes, it is.
I am
Dear GMX Users
I have a question about the PME load when executing mdrun.
All my MD simulations (DNA-ligand interaction in a triclinic box) are computed
on an in-house Linux 64-bit Intel Core i7.
According to the GROMACS tutorial on Justin's web site
Hello,
We are happy to announce the 1.0 release of MDTraj.
MDTraj is a modern, lightweight and efficient software package for
analyzing molecular dynamics trajectories.
It reads and writes trajectory data from a wide variety of formats,
including those used by AMBER, GROMACS,
CHARMM, NAMD and
Hi,
It looks like you're starting two ranks and passing two GPU IDs, so it
should work. The only thing I can think of is that you are either
getting the two MPI ranks placed on different nodes or that for some
reason mpirun -np 2 is only starting one rank (MPI installation
broken?).
Does the same
Hi,
By default, there will be no separate PME ranks used with fewer than,
AFAIR, 12 ranks (i.e. the default with a small number of ranks is -npme
0). Without separate PME ranks (and without GPUs) there is no PP-PME
load balance to tweak, so the PME load is not very relevant from a
performance
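If you do want to experiment with separate PME ranks, they can be requested
explicitly, and g_tune_pme will scan the settings for you (the core counts
here are illustrative):
mpirun -np 16 mdrun_mpi -npme 4 -s topol.tpr
g_tune_pme -np 16 -s topol.tpr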
Same idea as Szilard.
How many nodes are you using?
On one node, how many MPI ranks do you have? The error is complaining that you
assigned two GPUs to only one MPI process on one node. If you spread your two
MPI ranks over two nodes, that means you only have one on each. Then you
can't
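On a single workstation, one way to sidestep the MPI launcher entirely is the
thread-MPI mdrun (assuming a non-MPI build is available), which maps the two
ranks and two GPUs itself:
mdrun -ntmpi 2 -ntomp 10 -gpu_id 01 -v -s npt2.tpr -c npt2.gro -x npt2.xtc -g npt2.log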
5th International Workshop on
Model-driven Approaches for Simulation Engineering
part of the Symposium on Theory of Modeling and Simulation
(SCS SpringSim 2015)
CALL
Hi,
Generally speaking, in the absence of accelerators, OpenMP as used in
GROMACS 4.6/5.0 is only useful as you get down to around a few hundred
atoms per core (details vary, but since you often can't get fewer than 512
cores of BG/Q the point is often moot there), and only at fairly low OpenMP
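Concretely, something in this direction is usually closer to the sweet spot on
BG/Q (launcher options are illustrative and depend on the site's Slurm/runjob
setup):
srun --ntasks-per-node=16 --cpus-per-task=4 mdrun_mpi -ntomp 4
i.e. more MPI ranks per node and only a few OpenMP threads per rank, rather
than 64 threads in a single rank.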
Thanks a lot for the replies, both Yunlong and Szilard.
I haven't set up a PBS system or nodes on the workstation. The GPU
workstation contains 1 CPU with 20 cores and two GPUs, so it is
similar to 1 node with 2 GPUs.
But I don't know why 4.6.5 works and 5.0.1 doesn't ...
Thx again
Hi,
I have included the link to my Dropbox where I have attached my
GROMACS topology files. Though I have included the cyclohexane .itp file in
the .top file, I still get the same error: NO SUCH MOLECULETYPE CHX. So I
kindly need help in this regard.
Thank you in advance.
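For what it's worth, the moleculetype name has to match in both places; a
minimal sketch (names and counts are assumptions):
; topol.top (fragment)
#include "cyclohexane.itp"   ; must define [ moleculetype ] named CHX
...
[ molecules ]
Protein   1
CHX       100
If the .itp declares the molecule under a different name (e.g. cyclohexane),
grompp reports exactly this "No such moleculetype" error for CHX.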