[gmx-users] trjconv - two chains are separated

2016-02-02 Thread Yunlong Liu

Hi Gromacs Users,

I used "gmx trjconv" (Gromacs 5.0.4) to remove the pbc of my 
trajectories. My protein has two chains (A and B) and they closely bind 
to each other. after running trjconv with "-pbc nojump", two chains are 
greatly separated by a certain distance. It is mostly likely that the 
pbc is not successfully removed and the program takes a chain from one 
unit cell and another from another unit cell.


Does anybody have any idea to solve the problem?
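For reference, one workflow that is often suggested for multi-chain complexes (a sketch only; the tpr/xtc file names are placeholders, and -center will prompt for a group such as Protein) is to make the molecules whole first, then remove jumps, then center:

gmx trjconv -s topol.tpr -f traj.xtc -pbc whole -o whole.xtc
gmx trjconv -s topol.tpr -f whole.xtc -pbc nojump -o nojump.xtc
gmx trjconv -s topol.tpr -f nojump.xtc -pbc mol -center -ur compact -o centered.xtc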

Best
Yunlong
--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


[gmx-users] Iodide ion LJ parameters

2015-05-28 Thread Yunlong Liu

Hi,

I am trying to simulate a protein with an iodide ion ligand. However, I 
can't find non-bonded force field parameters for the iodide ion. Does 
anybody have any suggestions, or know of a source for those parameters? 
Thank you.


Best
Yunlong




Re: [gmx-users] iodine non-bonded parameters for charmm36 force field

2015-05-28 Thread Yunlong Liu

Hi Justin,

I am sorry that I missed your email and sent another message to the mailing list.

I understand that there are no official parameters but is there any 
other source that I can look up?


Thank you.
Yunlong

On 5/27/15 4:37 PM, Justin Lemkul wrote:



On 5/27/15 4:32 PM, Yunlong Liu wrote:

Hi all,

I am trying to build up a model with an iodide ion, but I don't know 
where I can obtain the non-bonded parameters for this ion. I am using the 
CHARMM36 force field. Are there any suggestions for that? Thank you.



There aren't any official CHARMM parameters for iodide.

-Justin





[gmx-users] iodine non-bonded parameters for charmm36 force field

2015-05-27 Thread Yunlong Liu

Hi all,

I am trying to build up a model with an iodide ion, but I don't know where I 
can obtain the non-bonded parameters for this ion. I am using the CHARMM36 
force field. Are there any suggestions for that? Thank you.


Best
Yunlong
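For reference, if Lennard-Jones parameters for iodide are taken from the literature, they can be added to the force field as a new atom type plus a small ion topology. A sketch of the GROMACS format only - the type name IOD and the bracketed values are placeholders, not validated parameters:

[ atomtypes ]
; name  at.num       mass   charge  ptype     sigma    epsilon
  IOD       53   126.90447   0.000      A   <sigma>  <epsilon>   ; placeholder values from literature

[ moleculetype ]
; name  nrexcl
  IOD   1

[ atoms ]
;  nr  type  resnr  residue  atom  cgnr  charge       mass
    1   IOD      1      IOD   IOD     1   -1.00  126.90447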


Re: [gmx-users] Is it possible to generate lipid.itp with Charmm36 ff?

2015-04-07 Thread Yunlong Liu
Hi Justin,

I mean I would like to set it up with Charmm36. The provided lipid.itp files are for the 
OPLS and GROMOS force fields.

Yunlong

 On Apr 7, 2015, at 4:20 PM, Justin Lemkul jalem...@vt.edu wrote:
 
 
 
 On 4/7/15 4:18 PM, Yunlong Liu wrote:
 Hi,
 
 I am doing membrane-protein simulation with Charmm36 forcefield. I would 
 like to
 know whether I can generate the lipid force field with Charmm36 by some
 available tools?
 
 No need to generate anything.  We provide the whole force field here:
 
 http://mackerell.umaryland.edu/charmm_ff.shtml#gromacs
 
 -Justin
 
 -- 
 ==
 
 Justin A. Lemkul, Ph.D.
 Ruth L. Kirschstein NRSA Postdoctoral Fellow
 
 Department of Pharmaceutical Sciences
 School of Pharmacy
 Health Sciences Facility II, Room 629
 University of Maryland, Baltimore
 20 Penn St.
 Baltimore, MD 21201
 
 jalem...@outerbanks.umaryland.edu | (410) 706-7441
 http://mackerell.umaryland.edu/~jalemkul
 
 ==


[gmx-users] Is it possible to generate lipid.itp with Charmm36 ff?

2015-04-07 Thread Yunlong Liu

Hi,

I am doing membrane-protein simulation with Charmm36 forcefield. I would 
like to know whether I can generate the lipid force field with Charmm36 
by some available tools?


Yunlong


Re: [gmx-users] Is it possible to generate lipid.itp with Charmm36 ff?

2015-04-07 Thread Yunlong Liu
Oh, I see. I looked at the website this afternoon. Do you mean the package 
charmm36-nov2014? I checked the files inside but saw no DPPC or POPC entry. 
I just want to make sure it will work for a POPC membrane.

I ask this because I don't quite understand how lipid membrane force field 
topology files should be represented in GROMACS. I read your tutorial and saw 
that you talk about lipid.itp.

Thank you.
Yunlong
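For reference, with the downloaded CHARMM36 port there is no separate lipid.itp; the lipid parameters live inside the force field directory, and pdb2gmx writes the POPC molecule topology from the .rtp entries. A sketch of what the system topology might then look like (the directory name, moleculetype names, and counts are illustrative and depend on the download and on the pdb2gmx output):

#include "charmm36-nov2014.ff/forcefield.itp"
#include "topol_Protein.itp"
#include "topol_POPC.itp"
#include "charmm36-nov2014.ff/tip3p.itp"

[ system ]
Protein in a POPC bilayer

[ molecules ]
; counts are illustrative
Protein     1
POPC      128
SOL     10000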

 On Apr 7, 2015, at 4:37 PM, Justin Lemkul jalem...@vt.edu wrote:
 
 
 On 4/7/15 4:27 PM, Yunlong Liu wrote:
 Hi Justin,
 
 I mean I would like to set it up with Charmm36. The provided one is under 
 OPLS and gromos ff.
 
 Visit the link I posted.  Our force field files have nothing to do with OPLS 
 or GROMOS.  There is no such thing as lipid.itp for CHARMM36; that commonly 
 refers to the Berger parameters, as distributed by Peter Tieleman's group.
 
 -Justin
 
 Yunlong
 
 On Apr 7, 2015, at 4:20 PM, Justin Lemkul jalem...@vt.edu wrote:
 
 
 
 On 4/7/15 4:18 PM, Yunlong Liu wrote:
 Hi,
 
 I am doing membrane-protein simulation with Charmm36 forcefield. I would 
 like to
 know whether I can generate the lipid force field with Charmm36 by some
 available tools?
 
 No need to generate anything.  We provide the whole force field here:
 
 http://mackerell.umaryland.edu/charmm_ff.shtml#gromacs
 
 -Justin
 
 --
 ==
 
 Justin A. Lemkul, Ph.D.
 Ruth L. Kirschstein NRSA Postdoctoral Fellow
 
 Department of Pharmaceutical Sciences
 School of Pharmacy
 Health Sciences Facility II, Room 629
 University of Maryland, Baltimore
 20 Penn St.
 Baltimore, MD 21201
 
 jalem...@outerbanks.umaryland.edu | (410) 706-7441
 http://mackerell.umaryland.edu/~jalemkul
 
 ==
 
 -- 
 ==
 
 Justin A. Lemkul, Ph.D.
 Ruth L. Kirschstein NRSA Postdoctoral Fellow
 
 Department of Pharmaceutical Sciences
 School of Pharmacy
 Health Sciences Facility II, Room 629
 University of Maryland, Baltimore
 20 Penn St.
 Baltimore, MD 21201
 
 jalem...@outerbanks.umaryland.edu | (410) 706-7441
 http://mackerell.umaryland.edu/~jalemkul
 
 ==


[gmx-users] Performance drops when simulating protein with small ligands

2015-03-20 Thread Yunlong Liu

Hi,

I am running my protein with two ligands. Both ligands are small 
molecules like ATP. However, my simulation performance drops a lot after 
adding these two ligands, with all other parameters the same.


Previously I got 30 ns/day with 64 CPUs and 4 GPUs, but now I only 
get 17 ns/day with the same settings. I want to know whether this is a 
common phenomenon or whether I am doing something wrong.


Yunlong



Re: [gmx-users] Performance drops when simulating protein with small ligands

2015-03-20 Thread Yunlong Liu
Hi Justin,

I am running this simulation on Stampede/TACC. I don't think there are other 
processes running on the nodes assigned to me. This is a little weird. 

Yunlong
 On Mar 20, 2015, at 2:28 PM, Justin Lemkul jalem...@vt.edu wrote:
 
 
 
 On 3/20/15 1:13 PM, Yunlong Liu wrote:
 Hi,
 
 I am running my protein with two ligands. Both ligands are small molecules 
 like
 ATP. However, my simulation performance drops a lot by adding this two 
 ligands
 with the same set of other parameters.
 
 Previously with ligands, I have 30 ns/day with 64-cpus and 4gpus. But now I 
 can
 only gain 17 ns/day with the same setting. I want to know whether this is a
 common phenomenon or I do something stupid.
 
 Probably some other process is using resources and degrading your 
 performance, or you're using different run settings (the .log file is 
 definitive here).  The mere addition of ligands does not degrade performance.
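For reference, a quick way to compare the two runs is to pull the performance summary and any load-balance notes out of both log files (the log file names here are placeholders):

grep -E "Performance:|ns/day|imb" with_ligands.log without_ligands.log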
 
 -Justin
 
 -- 
 ==
 
 Justin A. Lemkul, Ph.D.
 Ruth L. Kirschstein NRSA Postdoctoral Fellow
 
 Department of Pharmaceutical Sciences
 School of Pharmacy
 Health Sciences Facility II, Room 629
 University of Maryland, Baltimore
 20 Penn St.
 Baltimore, MD 21201
 
 jalem...@outerbanks.umaryland.edu | (410) 706-7441
 http://mackerell.umaryland.edu/~jalemkul
 
 ==


[gmx-users] Gromacs API AnalysisData Class

2014-10-11 Thread Yunlong Liu

Hi,

I am developing some personal trajectory analysis code with Gromacs API. 
I have a question on using the AnalysisData class.


I would like to have access to the data stored in the AnalysisData 
object after running a single pass over all the frames in the 
trajectory. I don't know how to do it and where to find the stored data.


For example, I extract a position vector from each frame and use 
setPoint to store it in the AnalysisDataHandle inside analyzeFrame(). After my 
program has run through all the frames, how can I access those position 
vectors stored in my AnalysisData object?


Thank you.

Yunlong


[gmx-users] Re: membrane protein simulation - errors

2014-09-18 Thread Yunlong Liu
Hi Justin,

I built Gromacs 5.0.1 but it doesn't work even with pdb2gmx. Since my protein 
is patched with -NH3+ and -COO- at the N-terminus and C-terminus, I used the -ter 
flag.

But the program cannot recognize the termini and returns an error.

I switched back to Gromacs 5.0-rc1. It works fine with pdb2gmx and avoids that 
error, but I then got the previous "Non default U-B types" errors.

I don't know what the U-B types are or how they are defined in GROMACS.
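For reference: in CHARMM-type force fields, U-B refers to the Urey-Bradley term, a 1-3 distance term added on top of the harmonic angle. In GROMACS topologies it corresponds to angle function type 5; a sketch of the [angletypes] format, with placeholder atom types and bracketed placeholder constants:

[ angletypes ]
;    i     j     k  funct   theta0    k_theta      r13     k_UB
   CT1   CT2   CT3      5  <theta0>  <k_theta>   <r13>   <k_UB>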

Yunlong

From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
gromacs.org_gmx-users-boun...@maillist.sys.kth.se on behalf of Justin Lemkul 
jalem...@vt.edu
Sent: September 18, 2014 20:30
To: gmx-us...@gromacs.org
Subject: Re: [gmx-users] membrane protein simulation - errors

On 9/17/14 6:54 PM, Yunlong Liu wrote:
 Hi,


 I am trying to use Gromacs 5.0 to run membrane protein simulation. I built my 
 membrane protein system with POPC membrane and TIP3P water in VMD. Then I 
 used gmx pdb2gmx to build gromacs topology files.


 My force field is charmm22* and it contains a lipids.rtp entry. It can 
 successfully recognize the POPC molecules in my pdb and the water. But it 
 keeps complaining something like this :


 Processing chain 7 'L' (5360 atoms, 40 residues)
 Warning: Starting residue POPC1 in chain not identified as Protein/RNA/DNA.
 Warning: Starting residue POPC2 in chain not identified as Protein/RNA/DNA.
 Warning: Starting residue POPC3 in chain not identified as Protein/RNA/DNA.
 Warning: Starting residue POPC7 in chain not identified as Protein/RNA/DNA.
 Warning: Starting residue POPC8 in chain not identified as Protein/RNA/DNA.
 More than 5 unidentified residues at start of chain - disabling further 
 warnings.
 Problem with chain definition, or missing terminal residues.
 This chain does not appear to contain a recognized chain molecule.
 If this is incorrect, you can edit residuetypes.dat to modify the behavior.

 Then I modified the residuestypes.dat to add POPC and TIP3 into the file but 
 it still keeps complaining.

 I tried to ignore this and run grompp -f minim.mdp -c my.gro .. to generate 
 the tpr file but I got an error like this:

 ---
 Program gmx, VERSION 5.0-rc1
 Source code file: 
 /home/yunlong/Downloads/gromacs-5.0-rc1/src/gromacs/gmxpreprocess/toppush.c, 
 line: 2393

 Fatal error:
 Invalid Atomnr j: 5, b2-nr: 3

 For more information and tips for troubleshooting, please check the GROMACS
 website at http://www.gromacs.org/Documentation/Errors
 ---?


 I am wondering whether I can do membrane protein simulation in Gromacs 
 without building the system in Gromacs.


It's certainly possible, but I'd start with using an actual release version of
Gromacs, rather than 5.0-rc1.  Try again with 5.0.1.

-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==


[gmx-users] membrane protein simulation - errors

2014-09-17 Thread Yunlong Liu
Hi,


I am trying to use Gromacs 5.0 to run membrane protein simulation. I built my 
membrane protein system with POPC membrane and TIP3P water in VMD. Then I used 
gmx pdb2gmx to build gromacs topology files.


My force field is charmm22* and it contains a lipids.rtp entry. It can 
successfully recognize the POPC molecules in my pdb and the water. But it keeps 
complaining something like this :


Processing chain 7 'L' (5360 atoms, 40 residues)
Warning: Starting residue POPC1 in chain not identified as Protein/RNA/DNA.
Warning: Starting residue POPC2 in chain not identified as Protein/RNA/DNA.
Warning: Starting residue POPC3 in chain not identified as Protein/RNA/DNA.
Warning: Starting residue POPC7 in chain not identified as Protein/RNA/DNA.
Warning: Starting residue POPC8 in chain not identified as Protein/RNA/DNA.
More than 5 unidentified residues at start of chain - disabling further 
warnings.
Problem with chain definition, or missing terminal residues.
This chain does not appear to contain a recognized chain molecule.
If this is incorrect, you can edit residuetypes.dat to modify the behavior.

Then I modified residuetypes.dat to add POPC and TIP3 to the file, but it 
still keeps complaining.
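For reference, residuetypes.dat is a plain two-column file: residue name, then a class such as Protein, DNA, RNA, Water or Ion (other strings are treated generically). A sketch of the kind of lines one might add - whether this silences the warnings also depends on the rest of the chain setup:

POPC    Other
TIP3    Water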

I tried to ignore this and run grompp -f minim.mdp -c my.gro .. to generate the 
tpr file but I got an error like this:

---
Program gmx, VERSION 5.0-rc1
Source code file: 
/home/yunlong/Downloads/gromacs-5.0-rc1/src/gromacs/gmxpreprocess/toppush.c, 
line: 2393

Fatal error:
Invalid Atomnr j: 5, b2-nr: 3

For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---


I am wondering whether I can do membrane protein simulation in Gromacs without 
building the system in Gromacs.

Thanks.
Yunlong


Davis Yunlong Liu

BCMB - First Year PhD Candidate

School of Medicine

The Johns Hopkins University

E-mail: yliu...@jhmi.edu


Re: [gmx-users] GPU job failed

2014-09-08 Thread Yunlong Liu
Same idea as Szilárd.

How many nodes are you using, and how many MPI ranks do you have on each node? 
The error is complaining that you assigned two GPUs to a single MPI rank on one 
node. If you spread your two MPI ranks over two nodes, you only have one rank on 
each node, and then you can't assign two GPUs to a single MPI rank.

How many GPUs do you have on one node? If there are two, you can either launch 
two PP MPI ranks on that node and assign the two GPUs to them, or, if you only 
want to launch one MPI rank per node, assign only one GPU per node (with -gpu_id 0).

Yunlong

Sent from my iPhone
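For reference, a sketch of the two launch styles described above, reusing the file names from Albert's command (the per-node rank placement flag depends on the MPI launcher; -npernode is the Open MPI spelling):

# two PP ranks on one node, one GPU for each rank
mpirun -np 2 mdrun_mpi -ntomp 10 -gpu_id 01 -s npt2.tpr -deffnm npt2
# one PP rank per node, one GPU per rank
mpirun -np 2 -npernode 1 mdrun_mpi -ntomp 10 -gpu_id 0 -s npt2.tpr -deffnm npt2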

 On Sep 8, 2014, at 5:35 PM, Szilárd Páll pall.szil...@gmail.com wrote:
 
 Hi,
 
 It looks like you're starting two ranks and passing two GPU IDs, so it
 should work. The only thing I can think of is that you are either
 getting the two MPI ranks placed on different nodes or that for some
 reason mpirun -np 2 is only starting one rank (MPI installation
 broken?).
 
 Does the same setup work with thread-MPI?
 
 Cheers,
 --
 Szilárd
 
 
 On Mon, Sep 8, 2014 at 2:50 PM, Albert mailmd2...@gmail.com wrote:
 Hello:
 
 I am trying to use the following command in Gromacs-5.0.1:
 
 mpirun -np 2 mdrun_mpi -v -s npt2.tpr -c npt2.gro -x npt2.xtc -g npt2.log
 -gpu_id 01 -ntomp 10
 
 
 but it always failed with messages:
 
 
 2 GPUs detected on host cudaB:
  #0: NVIDIA GeForce GTX 780 Ti, compute cap.: 3.5, ECC:  no, stat:
 compatible
  #1: NVIDIA GeForce GTX 780 Ti, compute cap.: 3.5, ECC:  no, stat:
 compatible
 
 2 GPUs user-selected for this run.
 Mapping of GPUs to the 1 PP rank in this node: #0, #1
 
 
 ---
 Program mdrun_mpi, VERSION 5.0.1
 Source code file:
 /soft2/plumed-2.2/gromacs-5.0.1/src/gromacs/gmxlib/gmx_detect_hardware.c,
 line: 359
 
 Fatal error:
 Incorrect launch configuration: mismatching number of PP MPI processes and
 GPUs per node.
 mdrun_mpi was started with 1 PP MPI process per node, but you provided 2
 GPUs.
 For more information and tips for troubleshooting, please check the GROMACS
 website at http://www.gromacs.org/Documentation/Errors
 
 
 
 However, this command works fine in Gromacs-4.6.5, and I don't know why it
 failed in 5.0.1. Does anybody have any idea?
 
 thx a lot
 
 Albert


Re: [gmx-users] [gmx-developers] About dynamics loading balance

2014-08-24 Thread Yunlong Liu
Hi Szilard,

I would like to send you the log file, and I really need your help. Please trust 
me that I have tested this many times: when I turn on dlb, the GPU nodes 
report a "cannot allocate memory" error and all MPI processes shut down. I have 
to tolerate the large load imbalance (50%) to run my simulations. I wish I 
could figure out some way to make my simulations run on the GPUs with better 
performance.

Where can I post the log file? If I paste it here, it will be really long.
Yunlong


 On Aug 24, 2014, at 2:20 PM, Szilárd Páll pall.szil...@gmail.com wrote:
 
 On Thu, Aug 21, 2014 at 8:25 PM, Yunlong Liu yliu...@jh.edu wrote:
 Hi Roland,
 
 I just compiled the latest gromacs-5.0 version released on Jun 29th. I will
 recompile it as you suggested, using those flags. It seems like the high
 load imbalance doesn't actually affect the performance, which is weird.
 
 How did you draw that conclusion? Please show us log files of the
 respective runs; that will help to assess what is going on.
 
 --
 Szilárd
 
 Thank you.
 Yunlong
 
 On 8/21/14, 2:13 PM, Roland Schulz wrote:
 
 Hi,
 
 
 
 On Thu, Aug 21, 2014 at 1:56 PM, Yunlong Liu yliu...@jh.edu
 mailto:yliu...@jh.edu wrote:
 
Hi Roland,
 
The problem I am posting is not caused by trivial errors (like not
enough memory); I think it is a real bug inside the
gromacs GPU support code.
 
 It is unlikely a trivial error because otherwise someone else would have
 noticed. You could try the release-5-0 branch from git, but I'm not aware of
 any bugfixes related to memory allocation.
 The memory allocation which causes the error isn't the problem. The
 printed size is reasonable. You could recompile with PRINT_ALLOC_KB (add
 -DPRINT_ALLOC_KB to CMAKE_C_FLAGS) and rerun the simulation. It might tell
 you where the unusually large memory allocations happen.
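 (For reference, a sketch of such a rebuild from a fresh build directory, assuming the same options that were used for the original build, e.g. MPI and GPU support:

 cmake .. -DGMX_MPI=ON -DGMX_GPU=ON -DCMAKE_C_FLAGS="-DPRINT_ALLOC_KB"
 make -j 8 && make install
 )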
 
 PS: Please don't reply to an individual Gromacs developer. Keep all
 conversation on the gmx-users list.
 
 Roland
 
That is the reason why I post this problem to the developer
mailing-list.
 
My system contains ~240,000 atoms. It is a rather big protein. The
memory information of the node is :
 
top - 12:46:59 up 15 days, 22:18, 1 user,  load average: 1.13,
6.27, 11.28
Tasks: 510 total,   2 running, 508 sleeping,   0 stopped,   0 zombie
Cpu(s):  6.3%us,  0.0%sy,  0.0%ni, 93.7%id,  0.0%wa, 0.0%hi,
 0.0%si,  0.0%st
Mem:  32815324k total,  4983916k used, 27831408k free, 7984k
buffers
Swap:  4194296k total,0k used,  4194296k free,   700588k
cached
 
I am running the simulation on 2 nodes, 4 MPI ranks and each rank
with 8 OPENMP-threads. I list the information of their CPU and GPU
here:
 
c442-702.stampede(1)$ nvidia-smi
Thu Aug 21 12:46:17 2014
 NVIDIA-SMI 331.67 | Driver Version: 331.67
 GPU 0: Tesla K20m | Persistence-M: Off | Bus-Id: :03:00.0 | Disp.A: Off | Volatile Uncorr. ECC: 0
 Fan: N/A | Temp: 22C | Perf: P0 | Pwr: 46W / 225W | Memory-Usage: 172MiB / 4799MiB | GPU-Util: 0% | Compute M.: Default

 Compute processes (GPU memory):
   GPU 0   PID 113588   /work/03002/yliu120/gromacs-5/bin/mdrun_mpi   77MiB
   GPU 0   PID 113589   /work/03002/yliu120/gromacs-5/bin/mdrun_mpi   77MiB
 
c442-702.stampede(4)$ lscpu
Architecture:  x86_64
CPU op-mode(s):32-bit, 64-bit
Byte Order:Little Endian
CPU(s):16
On-line CPU(s) list:   0-15
Thread(s) per core:1
Core(s) per socket:8
Socket(s): 2
NUMA node(s):  2
Vendor ID: GenuineIntel
CPU family:6
Model: 45
Stepping:  7
CPU MHz:   2701.000
BogoMIPS:  5399.22
Virtualization:VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache:  256K
L3 cache:  20480K
NUMA node0 CPU(s): 0-7
NUMA node1 CPU(s): 8-15
 
I hope this information will help. Thank you.
 
Yunlong
 
 
 
 
 
 
On 8/21/14, 1:38 PM, Roland Schulz wrote:
 
Hi,
 
please don't use gmx-developers for user questions. Feel free to
use it if you want to fix the problem, and have

[gmx-users] Too much PME mesh wall time.

2014-08-23 Thread Yunlong Liu

Hi gromacs users,

I have a problem with too much PME mesh time in my simulation. The 
following is my time accounting. I am running the simulation on 2 nodes, 
each with 16 CPU cores and 1 Tesla K20m NVIDIA GPU.


My mdrun command is: ibrun 
/work/03002/yliu120/gromacs-5/bin/mdrun_mpi -pin on -ntomp 8 -dlb no 
-deffnm pi3k-wt-charm-4 -gpu_id 00


I manually turned off dlb because when it is turned on, the simulation 
crashes. I have reported that to both mailing lists and talked to Roland.


 R E A L   C Y C L E   A N D   T I M E   A C C O U N T I N G

On 4 MPI ranks, each using 8 OpenMP threads

 Computing:  Num   Num  CallWall time Giga-Cycles
 Ranks Threads  Count  (s) total sum%
-
 Domain decomp. 48 151592.099 137554.334   2.2
 DD comm. load  48751   0.057 4.947   0.0
 Neighbor search48 150001 665.072 57460.919   0.9
 Launch GPU ops.48   1502 967.023 83548.916   1.3
 Comm. coord.   487352488.263 214981.185   3.5
 Force  487517037.401 608018.042   9.8
 Wait + Comm. F 487513931.222 339650.132   5.5
* PME mesh   48 751   40799.9373525036.971  56.7*
 Wait GPU nonlocal  487511985.151 171513.300   2.8
 Wait GPU local 48751  68.365 5906.612   0.1
 NB X/F buffer ops. 48   29721229.406 106218.328   1.7
 Write traj.48830  28.245 2440.304   0.0
 Update 487512479.611 214233.669   3.4
 Constraints487517041.030 608331.635   9.8
 Comm. energies 48 150001  14.250 1231.154   0.0
 Rest1601.588 138374.139   2.2
-
 Total  71928.719 6214504.588 100.0
-
 Breakdown of PME mesh computation
-
 PME redist. X/F48   15028362.454 722500.151  11.6
 PME spread/gather  48   1502   14836.350 1281832.463  20.6
 PME 3D-FFT 48   15028985.776 776353.949  12.5
 PME 3D-FFT Comm.   48   15027547.935 652127.220  10.5
 PME solve Elec 487511025.249 88579.550   1.4
-

First, I would like to know whether this is a big problem, and second, 
how I can improve my performance.
Does it mean that my GPU is running too fast and the CPU is waiting? BTW, 
what does "Wait GPU nonlocal" refer to?
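For reference, one setting that is often varied in this situation is the rank/thread split, e.g. more MPI ranks with fewer OpenMP threads each, with all PP ranks on a node mapped to its single GPU (a sketch only; the number of digits in -gpu_id has to match the PP ranks per node, and the ranks-per-node count comes from the job script / ibrun setup):

ibrun /work/03002/yliu120/gromacs-5/bin/mdrun_mpi -pin on -ntomp 4 -dlb no -gpu_id 0000 -deffnm pi3k-wt-charm-4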


Thank you.
Yunlong

--


Yunlong Liu, PhD Candidate
Computational Biology and Biophysics
Department of Biophysics and Biophysical Chemistry
School of Medicine, The Johns Hopkins University
Email: yliu...@jhmi.edu
Address: 725 N Wolfe St, WBSB RM 601, 21205




[gmx-users] Recommendations on how to increase performance

2014-08-18 Thread Yunlong Liu
Hi Gromacs Users,


I got a time accounting table from a single run. Can anyone give me advice 
on how to further increase the performance?


 R E A L   C Y C L E   A N D   T I M E   A C C O U N T I N G

On 16 MPI ranks, each using 8 OpenMP threads

 Computing:  Num   Num  CallWall time Giga-Cycles
 Nodes Threads  Count  (s) total sum%
-
 Domain decomp.168  10001  42.603  14723.278   3.3
 DD comm. load 168   2001   0.034 11.807   0.0
 Neighbor search   168  10001  50.194  17346.533   3.9
 Comm. coord.  168 19  33.143  11454.105   2.6
 Force 168 21 530.787 183435.628  41.7
 Wait + Comm. F168 21  30.357  10491.253   2.4
 PME mesh  168 21 491.803 169963.077  38.7
 NB X/F buffer ops.168 580001  12.216   4221.874   1.0
 Write traj.   168  4   0.081 28.145   0.0
 Update168 21   5.612   1939.518   0.4
 Constraints   168 21  67.633  23373.282   5.3
 Comm. energies168  20001   2.636910.895   0.2
 Rest   4.649   1606.573   0.4
-
 Total   1271.748 439505.968 100.0
-
 Breakdown of PME mesh computation
-
 PME redist. X/F   168 42 135.738  46909.937  10.7
 PME spread/gather 168 42 119.991  41467.827   9.4
 PME 3D-FFT168 42  59.470  20552.340   4.7
 PME 3D-FFT Comm.  168 84 170.363  58876.121  13.4
 PME solve Elec168 21   5.465   1888.524   0.4
-
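For reference, with PME mesh work taking a large share of the wall time on a CPU-only run, one commonly tried change is to dedicate separate PME ranks with -npme (gmx tune_pme can scan for a good value). A sketch matching the 16-rank, 8-thread layout above, with the PME rank count and file name as placeholders:

mpirun -np 16 mdrun_mpi -ntomp 8 -npme 4 -deffnm md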


Thank you.

Yunlong



Davis Yunlong Liu

BCMB - First Year PhD Candidate

School of Medicine

The Johns Hopkins University

E-mail: yliu...@jhmi.edu


[gmx-users] Re: Questions on reducing large loading imbalance

2014-08-18 Thread Yunlong Liu
Thank you, Szilard.

I will try -dlb yes to see whether the situation gets better. 

Yunlong


From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
gromacs.org_gmx-users-boun...@maillist.sys.kth.se on behalf of Szilárd Páll 
pall.szil...@gmail.com
Sent: August 19, 2014 2:10
To: Discussion list for GROMACS users
Subject: Re: [gmx-users] Questions on reducing large loading imbalance

13% is not that large and as far as I can tell the dynamic load
balancing has not even kicked in (the above message would show the
min/average cell volume due to the domain rescaling).

You can try manually turning on load balancing with -dlb yes.
--
Szilárd


On Mon, Aug 18, 2014 at 7:14 PM, Yunlong Liu yliu...@jhmi.edu wrote:
 Hi Gromacs Users,


 I have experienced a problem of large loading imbalance for a long time.

 I usually got a log file like this:


 DD  step 9 load imb.: force 13.2%

Step   Time Lambda
  10  200.00.0

Energies (kJ/mol)
 U-BProper Dih.  Improper Dih.  CMAP Dih.  LJ-14
 5.17644e+041.49735e+043.26712e+03   -1.31906e+032.01143e+04
  Coulomb-14LJ (SR)  Disper. corr.   Coulomb (SR)   Coul. recip.
 2.55757e+054.04666e+05   -3.15240e+04   -3.74303e+062.30293e+04
  Position Rest.  PotentialKinetic En.   Total Energy  Conserved En.
 7.72298e+03   -2.99458e+066.15259e+05   -2.37932e+06   -3.10501e+06
 Temperature Pres. DC (bar) Pressure (bar)   Constr. rmsd
 3.09333e+02   -2.16110e+02   -2.79932e+013.19468e-05


 I don't know how to reduce the large load imbalance. Can anyone please 
 give me some advice on that? Thank you.


 Yunlong


[gmx-users] Re: Can't allocate memory problem

2014-07-18 Thread Yunlong Liu
Hi,

Thank you for your reply.
I am actually not doing anything unusual, just a common MD simulation of a 
protein. My system contains ~25 atoms, more or less depending on how many 
water molecules I put in it.

The way I called mdrun is 
ibrun mdrun_mpi_gpu -pin on -ntomp 8 -deffnm pi3k-wt-1 -gpu_id 00

I pinned 8 threads on 1 MPI task (this is the optimal way to run simulations on 
Stampede). This has not been a problem with other systems such as lysozyme, but my 
system is a little unusual and I don't really understand in what way it is unusual.

The system does fine if I use only the CPU to run the simulation, but as soon as I 
turn on the GPU, the simulation frequently fails. One of my guesses is that the 
GPU is more sensitive in dealing with the non-bonded interactions.

Thank you.
Yunlong


From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
gromacs.org_gmx-users-boun...@maillist.sys.kth.se on behalf of Mark Abraham 
mark.j.abra...@gmail.com
Sent: July 18, 2014 23:52
To: Discussion list for GROMACS users
Subject: Re: [gmx-users] Can't allocate memory problem

Hi,

That's highly unusual, and suggests you are doing something highly unusual,
like trying to run on huge numbers of threads, or very large numbers of
bonded interactions. How are you setting up to call mdrun, and what is in
your tpr?

Mark
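(For reference, the contents of a .tpr can be inspected with the dump tool - gmxdump in the 4.6.x series, gmx dump in 5.x; the file name below is a placeholder:

gmx dump -s pi3k-wt-1.tpr | less
)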
On Jul 17, 2014 10:13 PM, Yunlong Liu yliu...@jhmi.edu wrote:

 Hi,


 I am currently experiencing a Can't allocate memory problem on Gromacs
 4.6.5 with GPU acceleration.

 Actually, I am running my simulations on Stampede/TACC supercomputers with
 their GPU queue. My first experience is when the simulation length longer
 than 10 ns, the system starts to throw out the Can't allocate memory
 problem as follows:


 Fatal error:
 Not enough memory. Failed to realloc 1403808 bytes for f_t-f,
 f_t-f=0xa912a010
 (called from file
 /admin/build/admin/rpms/stampede/BUILD/gromacs-4.6.5/src/gmxlib/bondfree.c,
 line 3840)
 For more information and tips for troubleshooting, please check the GROMACS
 website at http://www.gromacs.org/Documentation/Errors
 ---

 These Gromacs Guys Really Rock (P.J. Meulenhoff)
 : Cannot allocate memory
 Error on node 0, will try to stop all the nodes
 Halting parallel program mdrun_mpi_gpu on CPU 0 out of 4

 ---
 Program mdrun_mpi_gpu, VERSION 4.6.5
 Source code file:
 /admin/build/admin/rpms/stampede/BUILD/gromacs-4.6.5/src/gmxlib/smalloc.c,
 line: 241

 Fatal error:
 Not enough memory. Failed to realloc 1403808 bytes for f_t-f,
 f_t-f=0xaa516e90
 (called from file
 /admin/build/admin/rpms/stampede/BUILD/gromacs-4.6.5/src/gmxlib/bondfree.c,
 line 3840)
 For more information and tips for troubleshooting, please check the GROMACS
 website at http://www.gromacs.org/Documentation/Errors
 ---

 Recently, this error occurs even I run a short NVT equilibrium. This
 problem also exists when I use Gromacs 5.0 with GPU acceleration. I looked
 up the Gromacs errors website to check the reasons for this. But it seems
 that none of those reasons will fit in this situation. I use a very good
 computer, the Stampede and I run short simulations. And I know gromacs use
 nanometers as unit. I tried all the solutions that I can figure out but the
 problem becomes more severe.

 Is there anybody that has an idea on solving this issue?

 Thank you.

 Yunlong








 Davis Yunlong Liu

 BCMB - Second Year PhD Candidate

 School of Medicine

 The Johns Hopkins University

 E-mail: yliu...@jhmi.edu


[gmx-users] Re: Can't allocate memory problem

2014-07-18 Thread Yunlong Liu
)
   -3.01747e+066.08815e+05   -2.40865e+063.09700e+02   -2.17744e+02
 Pressure (bar)   Constr. rmsd
   -1.15302e+013.22109e-05

DD  step 18 load imb.: force 13.4%

   Step   Time Lambda
 19  380.00.0

   Energies (kJ/mol)
U-BProper Dih.  Improper Dih.  CMAP Dih.  LJ-14
5.07811e+042.99541e+042.98628e+03   -7.39091e+031.97456e+04
 Coulomb-14LJ (SR)  Disper. corr.   Coulomb (SR)   Coul. recip.
2.01761e+054.03922e+05   -3.13225e+04   -3.71005e+062.19542e+04
  PotentialKinetic En.   Total EnergyTemperature Pres. DC (bar)
   -3.01766e+066.10133e+05   -2.40753e+063.10370e+02   -2.18064e+02
 Pressure (bar)   Constr. rmsd
   -2.96181e+013.28160e-05

If you want to see the full log file, please give me an email address that I 
could send it to.
Thank you.
Yunlong


From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
gromacs.org_gmx-users-boun...@maillist.sys.kth.se on behalf of Mark Abraham 
mark.j.abra...@gmail.com
Sent: July 18, 2014 23:52
To: Discussion list for GROMACS users
Subject: Re: [gmx-users] Can't allocate memory problem

Hi,

That's highly unusual, and suggests you are doing something highly unusual,
like trying to run on huge numbers of threads, or very large numbers of
bonded interactions. How are you setting up to call mdrun, and what is in
your tpr?

Mark
On Jul 17, 2014 10:13 PM, Yunlong Liu yliu...@jhmi.edu wrote:

 Hi,


 I am currently experiencing a Can't allocate memory problem on Gromacs
 4.6.5 with GPU acceleration.

 Actually, I am running my simulations on Stampede/TACC supercomputers with
 their GPU queue. My first experience is when the simulation length longer
 than 10 ns, the system starts to throw out the Can't allocate memory
 problem as follows:


 Fatal error:
 Not enough memory. Failed to realloc 1403808 bytes for f_t-f,
 f_t-f=0xa912a010
 (called from file
 /admin/build/admin/rpms/stampede/BUILD/gromacs-4.6.5/src/gmxlib/bondfree.c,
 line 3840)
 For more information and tips for troubleshooting, please check the GROMACS
 website at http://www.gromacs.org/Documentation/Errors
 ---

 These Gromacs Guys Really Rock (P.J. Meulenhoff)
 : Cannot allocate memory
 Error on node 0, will try to stop all the nodes
 Halting parallel program mdrun_mpi_gpu on CPU 0 out of 4

 ---
 Program mdrun_mpi_gpu, VERSION 4.6.5
 Source code file:
 /admin/build/admin/rpms/stampede/BUILD/gromacs-4.6.5/src/gmxlib/smalloc.c,
 line: 241

 Fatal error:
 Not enough memory. Failed to realloc 1403808 bytes for f_t-f,
 f_t-f=0xaa516e90
 (called from file
 /admin/build/admin/rpms/stampede/BUILD/gromacs-4.6.5/src/gmxlib/bondfree.c,
 line 3840)
 For more information and tips for troubleshooting, please check the GROMACS
 website at http://www.gromacs.org/Documentation/Errors
 ---

 Recently, this error occurs even I run a short NVT equilibrium. This
 problem also exists when I use Gromacs 5.0 with GPU acceleration. I looked
 up the Gromacs errors website to check the reasons for this. But it seems
 that none of those reasons will fit in this situation. I use a very good
 computer, the Stampede and I run short simulations. And I know gromacs use
 nanometers as unit. I tried all the solutions that I can figure out but the
 problem becomes more severe.

 Is there anybody that has an idea on solving this issue?

 Thank you.

 Yunlong








 Davis Yunlong Liu

 BCMB - Second Year PhD Candidate

 School of Medicine

 The Johns Hopkins University

 E-mail: yliu...@jhmi.edu


[gmx-users] Re: Re: Can't allocate memory problem

2014-07-18 Thread Yunlong Liu
Hi Szilard,

Thank you for your comments.
I really learned a lot from them. Can you please explain more about the -nb gpu_cpu 
option?
Also, as far as I know a Stampede node contains 16 Intel Xeon cores and only one 
Tesla K20m GPU, but you mention two Xeon CPUs. I am a little 
confused about this.

Yunlong
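For reference, -nb selects where the non-bonded interactions are computed; gpu_cpu is the hybrid mode in which the local non-bonded work goes to the GPU while the non-local part stays on the CPU, which can help when a single GPU cannot keep up with the CPU cores. A sketch reusing the same command shape as before:

ibrun mdrun_mpi_gpu -pin on -ntomp 8 -nb gpu_cpu -deffnm pi3k-wt-1 -gpu_id 00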


From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
gromacs.org_gmx-users-boun...@maillist.sys.kth.se on behalf of Szilárd Páll 
pall.szil...@gmail.com
Sent: July 19, 2014 2:41
To: Discussion list for GROMACS users
Subject: Re: [gmx-users] Re: Can't allocate memory problem

On Fri, Jul 18, 2014 at 7:31 PM, Yunlong Liu yliu...@jhmi.edu wrote:
 Hi,

 Thank you for your reply.
 I am actually not doing anything unusual, just common MD simulation of a 
 protein. My system contains ~25 atoms, more or less depend on how many 
 water molecules I put in it.

 The way I called mdrun is
 ibrun mdrun_mpi_gpu -pin on -ntomp 8 -deffnm pi3k-wt-1 -gpu_id 00

 I pinned 8 threads on 1 MPI task (this is the optimal way to run simulation 
 on Stampede).

FYI: that can't be universally true. The best run configuration will
always depend on at least the machine characteristics and the parallelization
capabilities and behavior of the software/algorithms used, as well as
often the settings/size of the input too (especially as different types of
runs may use different algorithms).

More concretely, GROMACS will not always perform best with 8
threads/rank - even though that's the number of cores per socket on
Stampede. My guess is that you'll be better off with 2-4 threads per
rank.

One thing you may have noticed is that the single K20 that Stampede's
visualization nodes seem to have (based on http://goo.gl/9fG7Vd) will
probably not be enough to keep up with two Xeon E5-2680s, and a
considerable amount of runtime will be lost as the CPU idles
while waiting for the GPU to complete the non-bonded calculation. You
may want to give the -nb gpu_cpu option a try.

Cheers,
--
Szilárd

 It has been problem with other systems like lysosome. But my system is a 
 little unusual and I don't really understand where is unusual.

 The systems are doing fine if I use CPU to run the simulation but as soon as 
 I turned on the GPU, the simulation sucks frequently. One of the guesses is 
 that GPU is more sensitvie in dealing with the non-bonded interactions.

 Thank you.
 Yunlong

 
 发件人: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
 gromacs.org_gmx-users-boun...@maillist.sys.kth.se 代表 Mark Abraham 
 mark.j.abra...@gmail.com
 发送时间: 2014年7月18日 23:52
 收件人: Discussion list for GROMACS users
 主题: Re: [gmx-users] Can't allocate memory problem

 Hi,

 That's highly unusual, and suggests you are doing something highly unusual,
 like trying to run on huge numbers of threads, or very large numbers of
 bonded interactions. How are you setting up to call mdrun, and what is in
 your tpr?

 Mark
 On Jul 17, 2014 10:13 PM, Yunlong Liu yliu...@jhmi.edu wrote:

 Hi,


 I am currently experiencing a Can't allocate memory problem on Gromacs
 4.6.5 with GPU acceleration.

 Actually, I am running my simulations on Stampede/TACC supercomputers with
 their GPU queue. My first experience is when the simulation length longer
 than 10 ns, the system starts to throw out the Can't allocate memory
 problem as follows:


 Fatal error:
 Not enough memory. Failed to realloc 1403808 bytes for f_t-f,
 f_t-f=0xa912a010
 (called from file
 /admin/build/admin/rpms/stampede/BUILD/gromacs-4.6.5/src/gmxlib/bondfree.c,
 line 3840)
 For more information and tips for troubleshooting, please check the GROMACS
 website at http://www.gromacs.org/Documentation/Errors
 ---

 These Gromacs Guys Really Rock (P.J. Meulenhoff)
 : Cannot allocate memory
 Error on node 0, will try to stop all the nodes
 Halting parallel program mdrun_mpi_gpu on CPU 0 out of 4

 ---
 Program mdrun_mpi_gpu, VERSION 4.6.5
 Source code file:
 /admin/build/admin/rpms/stampede/BUILD/gromacs-4.6.5/src/gmxlib/smalloc.c,
 line: 241

 Fatal error:
 Not enough memory. Failed to realloc 1403808 bytes for f_t-f,
 f_t-f=0xaa516e90
 (called from file
 /admin/build/admin/rpms/stampede/BUILD/gromacs-4.6.5/src/gmxlib/bondfree.c,
 line 3840)
 For more information and tips for troubleshooting, please check the GROMACS
 website at http://www.gromacs.org/Documentation/Errors
 ---

 Recently, this error occurs even I run a short NVT equilibrium. This
 problem also exists when I use Gromacs 5.0 with GPU acceleration. I looked
 up the Gromacs errors website to check the reasons for this. But it seems
 that none of those reasons will fit in this situation. I use a very good
 computer, the Stampede and I run short simulations. And I know gromacs use
 nanometers as unit. I tried all the solutions that I can

[gmx-users] Can't allocate memory problem

2014-07-17 Thread Yunlong Liu
Hi,


I am currently experiencing a "Can't allocate memory" problem with Gromacs 4.6.5 
and GPU acceleration.

Actually, I am running my simulations on the Stampede/TACC supercomputer in 
the GPU queue. My first experience was that when the simulation length is longer than 
10 ns, the system starts to throw the "Can't allocate memory" error as 
follows:


Fatal error:
Not enough memory. Failed to realloc 1403808 bytes for f_t->f, f_t->f=0xa912a010
(called from file 
/admin/build/admin/rpms/stampede/BUILD/gromacs-4.6.5/src/gmxlib/bondfree.c, 
line 3840)
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---

These Gromacs Guys Really Rock (P.J. Meulenhoff)
: Cannot allocate memory
Error on node 0, will try to stop all the nodes
Halting parallel program mdrun_mpi_gpu on CPU 0 out of 4

---
Program mdrun_mpi_gpu, VERSION 4.6.5
Source code file: 
/admin/build/admin/rpms/stampede/BUILD/gromacs-4.6.5/src/gmxlib/smalloc.c, 
line: 241

Fatal error:
Not enough memory. Failed to realloc 1403808 bytes for f_t->f, f_t->f=0xaa516e90
(called from file 
/admin/build/admin/rpms/stampede/BUILD/gromacs-4.6.5/src/gmxlib/bondfree.c, 
line 3840)
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---

Recently, this error occurs even when I run a short NVT equilibration. The problem 
also exists when I use Gromacs 5.0 with GPU acceleration. I looked up the 
GROMACS errors website to check the possible reasons, but it seems that none of 
them fits this situation: I am using a very good machine (Stampede), I run short 
simulations, and I know GROMACS uses nanometers as its unit. I have tried all the 
solutions I could figure out, but the problem has become more severe.

Is there anybody that has an idea on solving this issue?

Thank you.

Yunlong








Davis Yunlong Liu

BCMB - Second Year PhD Candidate

School of Medicine

The Johns Hopkins University

E-mail: yliu...@jhmi.edu