Re: [gmx-users] regarding MSD

2014-08-22 Thread Nidhi Katyal
Hello

I posted this query earlier but haven't received any reply, so I am reposting
it.

I have read a few papers that determine the transition temperature from a plot
of the average MSD of protein hydrogen atoms versus temperature. My question
is: at a particular temperature we get a linear curve of MSD versus time, so is
it reasonable to calculate the average MSD over all such time points? Is this
the average that is plotted in those papers (or is something missing)? My
concern is that this average will depend on the number of time points (because
of its linear growth), won't it?

I am actually trying to reproduce some published results. Although I obtain the
same transition temperature, the MSD values come out different (e.g., at a
particular temperature, averaging all the MSD values gives me 15000 whereas the
reported value is 1.5, both in angstrom squared).

Any help is highly appreciated.
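
A minimal sketch of the averaging being asked about, assuming the usual g_msd
.xvg layout (time in ps in column 1, MSD in nm^2 in column 2); note that g_msd
reports nm^2 whereas the papers quote angstrom^2 (1 nm^2 = 100 angstrom^2):

# average the MSD column over all time points, skipping xvg comment lines
awk '!/^[#@]/ { sum += $2; n++ } END { print sum/n, "nm^2" }' msd.xvg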



On Thu, Aug 21, 2014 at 9:56 PM, Nidhi Katyal nidhikatyal1...@gmail.com
wrote:

 Hello all

 I have read few papers that determine transition temperature from the plot
 of average MSD versus temperature. My question is:
 At a particular temperature, we get a linear curve for MSD versus time, is
 it reasonable to calculate average MSD over all such time points? Is this
 the average that is plotted in papers (or something is missing) ? My doubt
 is won't this average depend on the number of time points (due to its
 linear nature)?
 Any help is highly appreciated.

 Thanks
 Nidhi

-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


[gmx-users] Why we are not using GPU to solve FFT?

2014-08-22 Thread Theodore Si

Hi,

I wonder why we are using the CPU instead of the GPU to compute the FFT. Is it
possible to use a GPU FFT library, say cuFFT, to make the FFT used in PME
faster?


BR,
Theo
--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


[gmx-users] regarding g_hydorder

2014-08-22 Thread Nidhi Katyal
Hello all

I would like to calculate both the distance-based and the angle-based water
orientational order. I have made an index file containing all the water oxygen
atoms and ran g_hydorder as:
g_hydorder -f *.xtc -s *.tpr -n *.ndx -o file1_1 file2_1 -or file1_2 file2_2
How should I interpret the output files? Both output files file1_2 and file2_2
contain the same content. Shouldn't one contain the distance-based and the
other the angle-based orientational order parameter values?
I also expect the parameter values to be less than or equal to 1, but the
values in the file look like the following (all greater than 1):

 #Legend   #TBlock   #Xbin Ybin Z t  0   0 0 6.5  0 1 6.5  0 2 4.5  0 3 3.5
0 4 6.5  0 5 4.5  0 6 5.5  1 0 6.5  1 1 4.5  1 2 5.5  1 3 3.5  1 4 5.5  1 5
4.5  1 6 5.5  2 0 4.5  .
.


  Please help me in interpreting this file.
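
As an aside on the index-file step above, a minimal sketch of selecting all
water oxygens (the atom name OW and the file names are assumptions; adjust
them to your water model):

# pipe the selection commands into make_ndx: add a group of atoms named OW, then quit
printf 'a OW\nq\n' | make_ndx -f system.gro -o oxygens.ndx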

Thanks
Nidhi
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Why we are not using GPU to solve FFT?

2014-08-22 Thread Mark Abraham
On Fri, Aug 22, 2014 at 9:10 AM, Theodore Si sjyz...@gmail.com wrote:

 Hi,

 I wonder why we are using cpu instead of gpu to solve FFT?


While people continue to attach GPUs to nodes with tasty x86 cores, we'd
like to use them. Adding more work to the GPU while leaving the CPU idle
does not improve throughput.

Is it possible to use gpu fft library, say cuFFT to make the FFT used in
 PME faster?


Perhaps, on a single node, if someone were to write the code to do it. But
on multiple nodes on current-generation hardware, a 3DFFT that has to do
data transfer, kernel launch and then go back across PCI to get to
Infiniband to do the all-to-all would be horrendous. The latency of
all-to-all communication already limits scaling without, say, doubling the
length of that latency.

Mark


 BR,
 Theo
 --
 Gromacs Users mailing list

 * Please search the archive at http://www.gromacs.org/
 Support/Mailing_Lists/GMX-Users_List before posting!

 * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

 * For (un)subscribe requests visit
 https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
 send a mail to gmx-users-requ...@gromacs.org.

-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Can we set the number of pure PME nodes when using GPU+CPU?

2014-08-22 Thread Theodore Si

Hi Mark,

Could you tell me why, when we use GPU+CPU nodes as dedicated PME nodes, the
GPUs on such nodes will be idle?


Theo

On 8/11/2014 9:36 PM, Mark Abraham wrote:

Hi,

What Carsten said, if running on nodes that have GPUs.

If running on a mixed setup (some nodes with GPU, some not), then arranging
your MPI environment to place PME ranks on CPU-only nodes is probably
worthwhile. For example, all your PP ranks first, mapped to GPU nodes, then
all your PME ranks, mapped to CPU-only nodes, and then use mdrun -ddorder
pp_pme.
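
A minimal sketch of such a launch with an OpenMPI-style hostfile (hostnames,
slot counts and rank counts are hypothetical; other MPI libraries use a
different syntax):

# PP ranks fill the GPU nodes first, PME ranks land on the CPU-only node
cat > hosts <<EOF
gpu01 slots=2
gpu02 slots=2
cpu01 slots=2
EOF
mpirun -np 6 -hostfile hosts mdrun_mpi -npme 2 -ddorder pp_pme -s topol.tpr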

Mark


On Mon, Aug 11, 2014 at 2:45 AM, Theodore Si sjyz...@gmail.com wrote:


Hi Mark,

Here is the information about our cluster; could you give us some advice so
that we can make GMX run faster on our system?

Each CPU node has 2 CPUs and each GPU node has 2 CPUs and 2 Nvidia K20M


Device Name | Device Type | Specifications | Number
CPU Node | Intel H2216JFFKR Nodes | CPU: 2× Intel Xeon E5-2670 (8 cores, 2.6 GHz, 20 MB cache, 8.0 GT/s); Mem: 64 GB (8×8 GB) ECC Registered DDR3 1600 MHz Samsung | 332
Fat Node | Intel H2216WPFKR Nodes | CPU: 2× Intel Xeon E5-2670 (8 cores, 2.6 GHz, 20 MB cache, 8.0 GT/s); Mem: 256 GB (16×16 GB) ECC Registered DDR3 1600 MHz Samsung | 20
GPU Node | Intel R2208GZ4GC | CPU: 2× Intel Xeon E5-2670 (8 cores, 2.6 GHz, 20 MB cache, 8.0 GT/s); Mem: 64 GB (8×8 GB) ECC Registered DDR3 1600 MHz Samsung | 50
MIC Node | Intel R2208GZ4GC | CPU: 2× Intel Xeon E5-2670 (8 cores, 2.6 GHz, 20 MB cache, 8.0 GT/s); Mem: 64 GB (8×8 GB) ECC Registered DDR3 1600 MHz Samsung | 5
Computing Network Switch | Mellanox Infiniband FDR Core Switch | 648× FDR Core Switch MSX6536-10R, Mellanox Unified Fabric Manager | 1
Mellanox SX1036 40Gb Switch | 36× 40Gb Ethernet Switch SX1036, 36× QSFP Interface | 1
Management Network Switch | Extreme Summit X440-48t-10G 2-layer Switch | 48× 1Giga Switch Summit X440-48t-10G, authorized by ExtremeXOS | 9
Extreme Summit X650-24X 3-layer Switch | 24× 10Giga 3-layer Ethernet Switch Summit X650-24X, authorized by ExtremeXOS | 1
Parallel Storage | DDN Parallel Storage System | DDN SFA12K Storage System | 1
GPU | GPU Accelerator | NVIDIA Tesla Kepler K20M | 70
MIC | MIC | Intel Xeon Phi 5110P Knights Corner | 10
40Gb Ethernet Card | MCX314A-BCBT | Mellanox ConnectX-3 Chip 40Gb Ethernet Card, 2× 40Gb Ethernet ports, enough QSFP cables | 16
SSD | Intel SSD910 | Intel SSD910 Disk, 400 GB, PCIe | 80







On 8/10/2014 5:50 AM, Mark Abraham wrote:


That's not what I said; I said "You can set...".

-npme behaves the same whether or not GPUs are in use. Using separate
ranks
for PME caters to trying to minimize the cost of the all-to-all
communication of the 3DFFT. That's still relevant when using GPUs, but if
separate PME ranks are used, any GPUs on nodes that only have PME ranks
are
left idle. The most effective approach depends critically on the hardware
and simulation setup, and whether you pay money for your hardware.

Mark


On Sat, Aug 9, 2014 at 2:56 AM, Theodore Si sjyz...@gmail.com wrote:

  Hi,

You mean that whether or not we use GPU acceleration, -npme is just a
reference value?
Why can't we set it to an exact value?


On 8/9/2014 5:14 AM, Mark Abraham wrote:

  You can set the number of PME-only ranks with -npme. Whether it's useful

is
another matter :-) The CPU-based PME offload and the GPU-based PP
offload
do not combine very well.

Mark


On Fri, Aug 8, 2014 at 7:24 AM, Theodore Si sjyz...@gmail.com wrote:

   Hi,


Can we set the number manually with -npme when using GPU acceleration?


--
Gromacs Users mailing list

* Please search the archive at http://www.gromacs.org/
Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
send a mail to gmx-users-requ...@gromacs.org.


  --

Gromacs Users mailing list

* Please search the archive at http://www.gromacs.org/
Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
send a mail to gmx-users-requ...@gromacs.org.



--
Gromacs Users mailing list

* Please search the archive at http://www.gromacs.org/
Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
send a mail to gmx-users-requ...@gromacs.org.



--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 

[gmx-users] (no subject)

2014-08-22 Thread Balasubramanian Suriyanarayanan
Dear Users,

 Is there any online server facility available for running, say, a 10 ns
simulation? This would be helpful for people who do not have continuous access
to a server.

regards
Suriyanarayanan
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


[gmx-users] ligand is out side the box

2014-08-22 Thread RINU KHATTRI
hello gromacs users
I am working on a protein-ligand complex with a POPC membrane. Following
Justin's advice (from a previous mail), I built the system without the ligand;
after minimization and shrinking I pasted the ligand back in and edited the box
size. However, after solvation and adding water the ligand is outside the box.

grompp -f ions.mdp -c system_solv.gro -p topol.top -o ions.tpr

genion -s ions.tpr -o system_solv_ions.gro -p topol.top -pname NA
-nname CL -nn 14

grompp -f minim.mdp -c system_solv_ions.gro -p topol.top -o em.tpr

gmx mdrun -v -deffnm em

You can see the image I have uploaded; kindly help:
http://s48.photobucket.com/user/mittukhattri/media/pic_zpse022cdb3.png.html?filters[user]=140927090filters[recent]=1sort=1o=0
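
If the system itself is fine, the ligand may only appear outside the box
because of periodic wrapping; a minimal sketch of re-wrapping the minimized
structure for visualization (this is only an assumption about the cause, and
the file names follow the commands above):

# rewrap molecules whole into a compact unit cell for viewing
trjconv -f em.gro -s em.tpr -pbc mol -ur compact -o em_whole.gro
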
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


[gmx-users] pme ranks

2014-08-22 Thread xiexiao...@sjtu.edu.cn
Does anyone know what the PME ranks mean?



xiexiao...@sjtu.edu.cn
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] pme ranks

2014-08-22 Thread Carsten Kutzner
On 22 Aug 2014, at 12:48, xiexiao...@sjtu.edu.cn wrote:

 Does anyone know that what the pme ranks mean?
See for example [1] in the section
Multiple-Program, Multiple-Data PME Parallelization.

Best,
  Carsten



1.  Hess, B., Kutzner, C., van der Spoel, D. & Lindahl, E. GROMACS 4:
Algorithms for Highly Efficient, Load-Balanced, and Scalable Molecular
Simulation. J. Chem. Theory Comput. 4, 435-447 (2008).


 
 
 
 xiexiao...@sjtu.edu.cn
 -- 
 Gromacs Users mailing list
 
 * Please search the archive at 
 http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!
 
 * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
 
 * For (un)subscribe requests visit
 https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
 mail to gmx-users-requ...@gromacs.org.

-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Can we set the number of pure PME nodes when using GPU+CPU?

2014-08-22 Thread Mark Abraham
Hi,

Because no work will be sent to them. The GPU implementation can accelerate
domains from PP ranks on their node, but with an MPMD setup that uses
dedicated PME nodes, there will be no PP ranks on nodes that have been set
up with only PME ranks. The two offload models (PP work -> GPU; PME work ->
a CPU subset) do not work well together, as I said.

One can devise various schemes in 4.6/5.0 that could use those GPUs, but
they either require
* each node does both PME and PP work (thus limiting scaling because of the
all-to-all for PME, and perhaps making poor use of locality on multi-socket
nodes), or
* that all nodes have PP ranks, but only some have PME ranks, and the nodes
map their GPUs to PP ranks in a way that is different depending on whether
PME ranks are present (which could work well, but relies on the DD
load-balancer recognizing and taking advantage of the faster progress of
the PP ranks that have better GPU support, and requires that you get your
hands very dirty laying out PP and PME ranks onto hardware that will later match
the requirements of the DD load balancer, and probably that you balance
PP-PME load manually)

I do not recommend the last approach, because of its complexity.

Clearly there are design decisions to improve. Work is underway.
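
For the first scheme above, a minimal sketch is simply to run without any
separate PME ranks, so that every rank (and therefore every GPU) carries a
domain; the rank count, GPU ids and thread count are assumptions:

# 2 ranks per node, one per GPU; each rank does both PP and PME work
mpirun -np 4 mdrun_mpi -npme 0 -gpu_id 01 -ntomp 8 -s topol.tpr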

Cheers,

Mark


On Fri, Aug 22, 2014 at 10:11 AM, Theodore Si sjyz...@gmail.com wrote:

 Hi Mark,

 Could you tell me why that when we are GPU-CPU nodes as PME-dedicated
 nodes, the GPU on such nodes will be idle?


 Theo

 On 8/11/2014 9:36 PM, Mark Abraham wrote:

 Hi,

 What Carsten said, if running on nodes that have GPUs.

 If running on a mixed setup (some nodes with GPU, some not), then
 arranging
 your MPI environment to place PME ranks on CPU-only nodes is probably
 worthwhile. For example, all your PP ranks first, mapped to GPU nodes,
 then
 all your PME ranks, mapped to CPU-only nodes, and then use mdrun -ddorder
 pp_pme.

 Mark


 On Mon, Aug 11, 2014 at 2:45 AM, Theodore Si sjyz...@gmail.com wrote:

  Hi Mark,

 This is information of our cluster, could you give us some advice as
 regards to our cluster so that we can make GMX run faster on our system?

 Each CPU node has 2 CPUs and each GPU node has 2 CPUs and 2 Nvidia K20M


 Device Name | Device Type | Specifications | Number
 CPU Node | Intel H2216JFFKR Nodes | CPU: 2× Intel Xeon E5-2670 (8 cores, 2.6 GHz, 20 MB cache, 8.0 GT/s); Mem: 64 GB (8×8 GB) ECC Registered DDR3 1600 MHz Samsung | 332
 Fat Node | Intel H2216WPFKR Nodes | CPU: 2× Intel Xeon E5-2670 (8 cores, 2.6 GHz, 20 MB cache, 8.0 GT/s); Mem: 256 GB (16×16 GB) ECC Registered DDR3 1600 MHz Samsung | 20
 GPU Node | Intel R2208GZ4GC | CPU: 2× Intel Xeon E5-2670 (8 cores, 2.6 GHz, 20 MB cache, 8.0 GT/s); Mem: 64 GB (8×8 GB) ECC Registered DDR3 1600 MHz Samsung | 50
 MIC Node | Intel R2208GZ4GC | CPU: 2× Intel Xeon E5-2670 (8 cores, 2.6 GHz, 20 MB cache, 8.0 GT/s); Mem: 64 GB (8×8 GB) ECC Registered DDR3 1600 MHz Samsung | 5
 Computing Network Switch | Mellanox Infiniband FDR Core Switch | 648× FDR Core Switch MSX6536-10R, Mellanox Unified Fabric Manager | 1
 Mellanox SX1036 40Gb Switch | 36× 40Gb Ethernet Switch SX1036, 36× QSFP Interface | 1
 Management Network Switch | Extreme Summit X440-48t-10G 2-layer Switch | 48× 1Giga Switch Summit X440-48t-10G, authorized by ExtremeXOS | 9
 Extreme Summit X650-24X 3-layer Switch | 24× 10Giga 3-layer Ethernet Switch Summit X650-24X, authorized by ExtremeXOS | 1
 Parallel Storage | DDN Parallel Storage System | DDN SFA12K Storage System | 1
 GPU | GPU Accelerator | NVIDIA Tesla Kepler K20M | 70
 MIC | MIC | Intel Xeon Phi 5110P Knights Corner | 10
 40Gb Ethernet Card | MCX314A-BCBT | Mellanox ConnectX-3 Chip 40Gb Ethernet Card, 2× 40Gb Ethernet ports, enough QSFP cables | 16
 SSD | Intel SSD910 | Intel SSD910 Disk, 400 GB, PCIe | 80







 On 8/10/2014 5:50 AM, Mark Abraham wrote:

  That's not what I said You can set...

 -npme behaves the same whether or not GPUs are in use. Using separate
 ranks
 for PME caters to trying to minimize the cost of the all-to-all
 communication of the 3DFFT. That's still relevant when using GPUs, but
 if
 separate PME ranks are used, any GPUs on nodes that only have PME ranks
 are
 left idle. The most effective approach depends critically on the
 hardware
 and simulation setup, and whether you pay money for your hardware.

 Mark


 On Sat, Aug 9, 2014 at 2:56 AM, Theodore Si sjyz...@gmail.com wrote:

   Hi,

 You mean no matter we use GPU acceleration or not, -npme is just a
 reference?
 Why we can't set that to a exact value?


 On 8/9/2014 5:14 AM, Mark Abraham wrote:

   You can set the number of PME-only ranks with -npme. Whether it's
 useful

 is
 another matter :-) The CPU-based PME offload and the GPU-based PP
 offload
 do not combine very well.

 Mark


 On Fri, Aug 8, 2014 at 7:24 AM, Theodore Si sjyz...@gmail.com
 wrote:

Hi,

  Can we set the number 

Re: [gmx-users] Extending simulation problem.

2014-08-22 Thread Dawid das
2014-08-21 17:45 GMT+01:00 Mark Abraham mark.j.abra...@gmail.com:

  Well in my *log file nothing about state or *cpi files is mentioned.
 

 Sounds like perhaps your combination of circumstances (-append, no old
 output files provided for appending, maybe not even a checkpoint file
 provided) is leading to mdrun silently doing the only thing it can do,
 which is start again from either the -s or -cpi state. Your description of
 "I tried both state.cpt and prev_state.cpt and the beginning of new
 simulation looks exactly the same" doesn't help fully - same as what? The
 start of the first simulation, or the end of the first simulation?


It is exactly the same as the start of the first simulation. Like I said, I
did not have the appropriate files from the previous simulation in my scratch
directory, and I guess I reran it incorrectly.

Thank you for your detailed answer :). I think I will manage now to extend
this simulation properly.
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Extending simulation problem.

2014-08-22 Thread Dawid das
Can you just have a quick look at this?
When I run:
tpbconv -s npt-md.tpr -extend 95000 -o npt-md.tpr

to extend simulation by 95 ns this time I got this message at the end:

Extending remaining runtime of by 95000 ps (now 5000 steps)
Writing statusfile with starting step  0 and length   5000
steps...
 time  0.000 and length 10.000 ps


So it looks like a new status file is written and it says that the simulation
will start from the very beginning, or did I misunderstand something?
However when I check state.cpt with gmxdump it looks fine:

step = 250
t = 5000.00

as I expect.

Thank you,

Dawid


2014-08-22 13:15 GMT+01:00 Dawid das add...@googlemail.com:


 2014-08-21 17:45 GMT+01:00 Mark Abraham mark.j.abra...@gmail.com:

  Well in my *log file nothing about state or *cpi files is mentioned.
 

 Sounds like perhaps your combination of circumstances (-append, no old
 output files provided for appending, maybe not even a checkpoint file
 provided) is leading to mdrun silently doing the only thing it can do,
 which is start again from either the -s or -cpi state. Your description
 of I
 tried both state.cpt and prev_state.cpt and the beginning of new
 simulation
 looks exactly the same doesn't help fully - same as what? The start of
 the
 first simulation, or the end of the first simulation?


 It is exactly the same as the start of the first simulation. Like I said,
 I did not have appropriate files from previous simulation in my scratch
 and I rerun it incorrectly I guess.

 Thank you for your detailed answer :). I think I will manage now to extend
 this simulation properly.


-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Extending simulation problem.

2014-08-22 Thread Justin Lemkul



On 8/22/14, 8:49 AM, Dawid das wrote:

Can you just have a quick look at this?
When I run:
tpbconv -s npt-md.tpr -extend 95000 -o npt-md.tpr

to extend simulation by 95 ns this time I got this message at the end:

Extending remaining runtime of by 95000 ps (now 5000 steps)
Writing statusfile with starting step  0 and length   5000
steps...
  time  0.000 and length 10.000 ps


So it looks like new status file is written and it says that simulation
will start from the very beginning or did I misunderstand something?
However when I check state.cpt with gmxdump it looks fine:

step = 250
t = 5000.00



All of this is expected.  Using tpbconv doesn't change the start time of the 
.tpr file.  It says the new .tpr file has a new number of steps and the 
starting step is determined by the use of a checkpoint file on the mdrun command 
line.
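
A minimal sketch of the usual extension workflow consistent with this, reusing
the file names from the thread (writing the extended run input to a new name
is just an example; -cpi and -append are standard mdrun options):

tpbconv -s npt-md.tpr -extend 95000 -o npt-md-ext.tpr
mdrun -s npt-md-ext.tpr -cpi state.cpt -deffnm npt-md -append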


-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==
--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


[gmx-users] QM/MM simulation: point-charge, pbc and bOPT=yes problems

2014-08-22 Thread Bence Hégely
Dear Gromacs users!
I'm trying to interface GROMACS (5.0) with ORCA (2.9.1), but I have
encountered some problems. I'm testing a tripeptide system which I found
here: http://www.picb.ac.cn/~liwenjin/QMMM_simulations/index.html , but I'm
using a different .mdp file, with these parameters (I copied the only
working configuration):
integrator  = steep
dt  =  0.002; ps !
nsteps  =   1
nstlist =  10
ns_type =  grid
rlist   =  3.0
rcoulomb=  3.0
coulombtype = cut-off
vdwtype = cut-off
rvdw= 3.0
pbc = no
periodic_molecules  =  no
constraints = none
energygrps  = QMatoms MMatoms
cutoff-scheme   = group

; QM/MM calculation stuff
QMMM = yes
QMMM-grps = QMatoms
QMmethod = rhf
QMbasis = 3-21G
QMMMscheme = normal
QMcharge = 0
QMmult = 1
bOPT = no
bTS  = no
SH   = no

;
;   Energy minimizing stuff
;
emtol   =  60   ; minimization threshold (kJ/mol.nm-1); 1 hartree/bohr
; = 49614.75241 kJ/mol.nm-1; 1 kJ/mol.nm-1 = 2.01553e-5 hartree/bohr
emstep  =  0.01  ; minimization step in nm

with the .ORCAINFO file of
! LDA cc-(p)VDZ

%LJcoefficients peptide.LJ

%pointcharges peptide.pc

%geom

end

After I modified ffnonbonded.itp and atomtypes.atp for the dummy atom and
rebuilt GROMACS, the grompp and mdrun commands were executed as follows:
grompp -p peptide.top -c peptide.gro -f peptide.mdp -n peptide.ndx -o
peptide.tpr
mdrun -s peptide.tpr -c peptide.gro -o peptide.trr -e peptide.edr -g
peptide.log

The first thing is that the default neighbor-searching setting,
cutoff-scheme=verlet, does not produce any point charges for ORCA, and the
job ends with a segmentation fault, although the group cutoff scheme works
well. It is not clear to me why this is happening, but I suspect the problem
lies in the fact that the QM/MM subroutines are quite old (~2003?), while the
Verlet scheme was introduced much later, in GROMACS 4.6 (2013).

The second problem I have encountered is with the pbc options: if I use
pbc=xyz, the job terminates with a segmentation fault (core dumped) after
ORCA terminates normally. When I looked for some explanation about
coordinate updating in the qmmm.c source file, I saw comments that the
update_QMMMrec function does not work properly without pbc, even though it
works just fine with pbc=no. I am a little confused by that, although this
GROMACS build passed all the tests in the regression package.

The third problem is with the bOPT = yes option. When I pass the
optimization to ORCA, after the first optimization cycle (before ORCA
terminates normally) the tripeptide's QM region forms an unrealistic
geometry. In the input of the second ORCA run the QM region has scattered,
the atoms having moved roughly 10 angstroms apart from each other, and ORCA
gives the following error message:

The job nevertheless continues, and the third ORCA run gives the following:
Calling '/export/home/hegely/Programz/orca_2_9_1_linux_x86-64/orca
peptide.inp  peptide.out'


!!!FATAL ERROR ENCOUNTERED   !!!
!!!---   !!!
!!!  I/O OPERATION FAILED!!!

ABORTING THE RUN

and the job runs onward, giving the same error message over and over until
I kill it. No error messages appear in the GROMACS .log file, and I don't
really have a clue what's going on. Maybe the point charges destroy the
geometry of the peptide?

I think I checked all the QM/MM and ORCA search results in the gmx-users
list, but didn't find a solution. Sorry if I missed something, but I would
appreciate any help!

Bence Hégely
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Interpreting flooding results

2014-08-22 Thread Ricardo O. S. Soares
Hello Carsten, thanks for replying.

Yes, I'm using GMX 4.6.5, so I'm able to see the info in the .xvg file.

Best,

--- 
Biological Chemistry and Physics 
Faculty of Pharmaceutical Sciences at Ribeirão Preto 
University of São Paulo - Brazil 

- Original Message -
 From: Carsten Kutzner ckut...@gwdg.de
 To: gmx-us...@gromacs.org
 Sent: Friday, 22 August 2014, 2:47:40
 Subject: Re: [gmx-users] Interpreting flooding results
 
 Hi,
 
 On 21 Aug 2014, at 23:27, Ricardo O. S. Soares rsoa...@fcfrp.usp.br
 wrote:
 
  Dear users,
  
  could anyone give me some general guidelines or links to help me
  interpret the essential dynamics/flooding output xvg file from a
  flooding simulation?
 That depends a bit on which Gromacs Version you are using. If you are
 using
 a 4.6 or later version, look in the header of the .xvg file, there
 should be
 a short explanation about what is written to each column of the file.
 What columns are printed depend on what is switched on in your .edi
 file.
 
 Best,
   Carsten
 
 
  I followed Spiwok's tutorial
  (http://web.vscht.cz/~spiwokv/mtdec/index.html) and Langer's paper
  (http://onlinelibrary.wiley.com/doi/10.1002/jcc.20473/full) .
  
  Thanks,
  
  
  
  
  
  ---
  Biological Chemistry and Physics
  Faculty of Pharmaceutical Sciences at Ribeirão Preto
  University of São Paulo - Brazil
  --
  Gromacs Users mailing list
  
  * Please search the archive at
  http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
  posting!
  
  * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
  
  * For (un)subscribe requests visit
  https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users
  or send a mail to gmx-users-requ...@gromacs.org.
 
 
 --
 Dr. Carsten Kutzner
 Max Planck Institute for Biophysical Chemistry
 Theoretical and Computational Biophysics
 Am Fassberg 11, 37077 Goettingen, Germany
 Tel. +49-551-2012313, Fax: +49-551-2012302
 http://www.mpibpc.mpg.de/grubmueller/kutzner
 http://www.mpibpc.mpg.de/grubmueller/sppexa
 
 --
 Gromacs Users mailing list
 
 * Please search the archive at
 http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
 posting!
 
 * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
 
 * For (un)subscribe requests visit
 https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
 send a mail to gmx-users-requ...@gromacs.org.
 
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Extending simulation problem.

2014-08-22 Thread Mark Abraham
On Aug 22, 2014 2:51 PM, Justin Lemkul jalem...@vt.edu wrote:



 On 8/22/14, 8:49 AM, Dawid das wrote:

 Can you just have a quick look at this?
 When I run:
 tpbconv -s npt-md.tpr -extend 95000 -o npt-md.tpr

 to extend simulation by 95 ns this time I got this message at the end:

 Extending remaining runtime of by 95000 ps (now 5000 steps)
 Writing statusfile with starting step  0 and length   5000
 steps...
   time  0.000 and length 10.000
ps


 So it looks like new status file is written and it says that simulation
 will start from the very beginning or did I misunderstand something?
 However when I check state.cpt with gmxdump it looks fine:

 step = 250
 t = 5000.00


 All of this is expected.  Using tpbconv doesn't change the start time of
the .tpr file.  It says the new .tpr file has a new number of steps and
the starting step is determined by the use of a checkpoint file on the
mdrun command line.

... And falling back on the contents of the .tpr to start at the designated
start time, if there's no checkpoint provided.

Mark


 -Justin

 --
 ==

 Justin A. Lemkul, Ph.D.
 Ruth L. Kirschstein NRSA Postdoctoral Fellow

 Department of Pharmaceutical Sciences
 School of Pharmacy
 Health Sciences Facility II, Room 601
 University of Maryland, Baltimore
 20 Penn St.
 Baltimore, MD 21201

 jalem...@outerbanks.umaryland.edu | (410) 706-7441
 http://mackerell.umaryland.edu/~jalemkul

 ==
 --
 Gromacs Users mailing list

 * Please search the archive at
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

 * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

 * For (un)subscribe requests visit
 https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
send a mail to gmx-users-requ...@gromacs.org.
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] QM/MM simulation: point-charge, pbc and bOPT=yes problems

2014-08-22 Thread Mark Abraham
Hi,

Unfortunately the QM/MM interface is completely unmaintained, so the fact
that it doesn't set fire to your chair is something of a bonus! If you can
get the group scheme to do something useful, that's great, but I would
suggest you build an older version of GROMACS.

In practice, with the impending death of the group scheme, QM/MM support
might disappear entirely, unless someone steps up to do the work!

Mark
On Aug 22, 2014 3:23 PM, Bence Hégely hoemb...@gmail.com wrote:

 Dear Gromacs users!
 I'm trying to interface gromacs (5.0) with orca (2.9.1), although i
 encountered some problems. I'm testing a tripeptide system which i found
 here: http://www.picb.ac.cn/~liwenjin/QMMM_simulations/index.html , but
 i'm
 using a different .mdp file, with these parameters (i copied the only
 working configuration):
 integrator  = steep
 dt  =  0.002; ps !
 nsteps  =   1
 nstlist =  10
 ns_type =  grid
 rlist   =  3.0
 rcoulomb=  3.0
 coulombtype = cut-off
 vdwtype = cut-off
 rvdw= 3.0
 pbc = no
 periodic_molecules  =  no
 constraints = none
 energygrps  = QMatoms MMatoms
 cutoff-scheme   = group

 ; QM/MM calculation stuff
 QMMM = yes
 QMMM-grps = QMatoms
 QMmethod = rhf
 QMbasis = 3-21G
 QMMMscheme = normal
 QMcharge = 0
 QMmult = 1
 bOPT = no
 bTS  = no
 SH   = no

 ;
 ;   Energy minimizing stuff
 ;
 emtol   =  60   ; minimization thresold (kj/mol.nm-1)1
 hartree/bohr= 49614.75241 kj/mol.nm-1  1 kj/mol.nm-1=2.01553e-5
 hartree/bohr
 emstep  =  0.01  ; minimization step in nm

 with the .ORCAINFO file of
 ! LDA cc-(p)VDZ

 %LJcoefficients peptide.LJ

 %pointcharges peptide.pc

 %geom

 end

 After i modified the ffnonbonded.itp and atomtypes.atp for the dummy atom,
 rebuilt the gromacs and the grompp and mdrun commands were executed with
 the following:
 grompp -p peptide.top -c peptide.gro -f peptide.mdp -n peptide.ndx -o
 peptide.tpr
 mdrun -s peptide.tpr -c peptide.gro -o peptide.trr -e peptide.edr -g
 peptide.log

 The first thing is, that the default neighbor searching algorithm -
 cutoff-scheme=verlet - isn't producing any point charges for ORCA and the
 job ends with a segmentation fault, although the group cutoff scheme works
 well. It is not clear to me why this is happening, but i suspect the
 problem lies in that the qm/mm subrutines are quite old (~2003?), and the
 verlet scheme introduced much later in gromacs 4.6 (2013).

 The second problem that i have encountered is with the pbc options: if i'm
 using pbc=xyz the job terminates with a segmentation fault (core dumped)
 after orca terminates normally. After i looked for some explanation about
 coordinate updating on the qmmm.c source file I saw some comments that the
 update_QMMMrec function not working properly without pbc, even though it
 works just fine with pbc=no. Little bit confused of that, although the
 gromacs was tested on the regression package and all the tests passed.

 The third problem is with the bOPT = yes option. When i pass the
 optimization to the orca, after the first optimization cycle (before orca
 terminates normally)  the tripeptide's qm region forms an unrealistic
 geometry. At the input of the second orca run, the qm region scatters, the
 atoms move apart around 10 angströms from each other and gives the
 following error message:
 Error (ORCA_GSTEP): The lambda equations have not converged

 although, the job continues and at the third orca run gives the following:
 Calling '/export/home/hegely/Programz/orca_2_9_1_linux_x86-64/orca
 peptide.inp  peptide.out'

 
 !!!FATAL ERROR ENCOUNTERED   !!!
 !!!---   !!!
 !!!  I/O OPERATION FAILED!!!
 
 ABORTING THE RUN

 and the job runs onward, giving the same error message over and over until
 i kill it. In the gromacs .log file no error messages are presented and i
 don't really have a clue of what's going on. Maybe the point charges
 destroy the geometry of the peptide?

 I think i checked all the qm/mm, orca searching results in the gmx-users
 list, but didn't find the solutions. Sorry if i missed something out, but i
 would appreciate any help!

 Bence Hégely
 --
 Gromacs Users mailing list

 * Please search the archive at
 http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
 posting!

 * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

 * For (un)subscribe requests visit
 https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
 send a mail to gmx-users-requ...@gromacs.org.

-- 
Gromacs Users mailing list

* Please 

Re: [gmx-users] Numerical Stability of Gromacs Implementation

2014-08-22 Thread Mark Abraham
On Aug 22, 2014 4:49 PM, Johnny Lu johnny.lu...@gmail.com wrote:

 It has (relatively) many POWER7 cores and AIX 6.1.0.0 with no GPU, and I
 compiled single-precision GROMACS 4.6.6 with OpenMP on it.
 I tried xlc and mpcc 30+ times.

6.1 was released 7 years ago, but if you have a functional xlc, then you
should be able to build GROMACS. It won't run fast, though.

The gcc on it doesn't support OpenMP,
 and I compiled another one with OpenMP support, and also GNU CMake.
 The linker is AIX ld, with -lmass and -lm flags. I am not sure if the MASS
 library helped in the compilation of the FFTW3 library.

It doesn't.

Somehow, gcc can't find affinity support with AIX ld, so I specified that in
 the job script for the LoadLeveler queue.
 Maybe next time I will try the MASS SIMD library (-lmass_simdp7) or the
 vector library (-lmassvp4)
 (http://www-01.ibm.com/support/docview.wss?uid=swg27005375)

 Somehow, 24 Intel Xeon CPUs (compiled with the Intel compiler and the MKL
 library) seem 4 times faster than 32 POWER7 cores when I run GROMACS 4.6.6.

Not surprising, that Intel hardware has been the target of a lot of
optimization.

Mark

 On Thu, Aug 21, 2014 at 7:28 PM, Mark Abraham mark.j.abra...@gmail.com
 wrote:

  On Thu, Aug 21, 2014 at 7:59 PM, Johnny Lu johnny.lu...@gmail.com
wrote:
 
   Sorry for asking this. Is it possible for me to get some references
about
   the claims of instability of gromacs simulations, and their debunk?
  
 
  Not really. You can find a paper from some of the Desmond authors that
  correctly observes some issues in GROMACS 3.x. People occasionally
refer to
  it here as if it is current news. These issues are long fixed, but not
  worth writing about - journal articles should be about delivering
quality
  science. More commonly, an issue would be handled via private email,
though
  even these are rare. Overall, the biomolecular MD community is quite
good
  at finding problems with their own and each other's algorithms and
  implementations and getting them fixed constructively. People saying
  "someone said x was bad but gave me no details" need to talk to the
  someone, not just the authors of x ;-) If people start being evasive or
  secretive about possible problems with their code... be concerned.
 
  From the few papers that I read, I guess algorithms of molecular
dynamics
   do not treat all observables equally well.
   Some old papers say that the velocity in velocity verlet is not
  symplectic,
   but rather follows some shadow hamiltonian or generalized
equipartition
 
  theorem.
  
 
  This is common to all methods with a finite time step - see
  http://en.wikipedia.org/wiki/Energy_drift. There are certainly a variety
  of
  ways of estimating the velocity with common integrators, and they have
  different quality attributes. You can read about how GROMACS handles
this
  in the manual.
 
  Then another one mentioned force splitting can reduce the resonance
effect
   caused by integrator.
   That said, very few papers talk about this.
  
 
  There are lots of papers that discuss details of multiple time-stepping
  algorithms that seek to deal with this issue directly.
 
   I don't know much about the effect of MD on the observables that I try
to
   look at.
  
 
  It's not an easy topic - generating converged sampling to assess
whether an
  integration scheme correctly samples a complex observable is still a
  non-trivial matter. That needs to happen before questions of how much
  algorithmic energy drift is acceptable can be satisfactorily addressed.
   Until then, claims of "my energy conservation is better than yours"
 need to
   be considered alongside "my number of independent converged-ensemble
   samples is better than yours".
 
   And, sorry for replying this late, I have been installing gromacs on
aix
   for a week.
   Compiling gcc took 3 days of computer time.
  
 
  Seriously, don't bother. I don't think there is any system that would
have
  AIX, with no gcc package available, and which GROMACS 5.0 would run
  decently on (which would require SIMD support, which currently means
x86 or
  BlueGene/Q). I'd guess your laptop will get equivalent performance.
 
  Mark
 
 
  
   On Thu, Aug 14, 2014 at 2:29 PM, Mark Abraham 
mark.j.abra...@gmail.com
   wrote:
  
On Wed, Aug 13, 2014 at 1:19 PM, Johnny Lu johnny.lu...@gmail.com
   wrote:
   
 Hi again.

 Some of my friends said that gromacs had lower numerical stability
  than
 amber, and occasionally has mysterious error in energy.

   
Show us a result and we'll discuss fixing it ;-)
   
   
 Is that still true? Does the implementation of Integrator cause
more
 resonance effect?

   
Any numerical software can be used poorly. Any numerical software
can
   have
bugs. Give them the same input to two implementations of the same
   algorithm
and they should give results that are similar (whatever that means
in
  the
problem domain).
   
   
 I am trying to run NVE simulation with the single precision
version,
   and
 

Re: [gmx-users] Job Submission script for Multiple nodes GPUs

2014-08-22 Thread Xingcheng Lin
There will be an error if I do:
 mpiexec -np 4 mdrun_mpi -s run.tpr -cpi state.cpt -cpo state.cpt -noappend
-gpu_id 01 -nb gpu -ntomp 6

Incorrect launch configuration: mismatching number of PP MPI processes and
GPUs per node.
mdrun_mpi was started with 4 PP MPI processes per node, but you provided 2
GPUs.

Somehow the system still considers it as a single node.

Do you know how to solve it?


 Good afternoon,

 I am trying to use multiple nodes to do GPU simulation, each node has two
 GPUs and 12 CPUs mounted. Is there any submission script for doing that?

 For single node I used:

 mpiexec -np 2 mdrun_mpi -s run.tpr -cpi state.cpt -cpo state.cpt -noappend
 -gpu_id 01 -nb gpu -ntomp 6

 For 2 nodes I cannot use the script like

 mpiexec -np 4 mdrun_mpi -s run.tpr -cpi state.cpt -cpo state.cpt -noappend
 -gpu_id 0123 -nb gpu -ntomp 6
The -gpu_id string refers to the GPU ids _per node_, so you should also
use -gpu_id 01 on two or more of these nodes.

Best,
  Carsten
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Job Submission script for Multiple nodes GPUs

2014-08-22 Thread Carsten Kutzner

On 22 Aug 2014, at 18:09, Xingcheng Lin linxingcheng50...@gmail.com wrote:

 There will be  an error if I did
 mpiexec -np 4 mdrun_mpi -s run.tpr -cpi state.cpt -cpo state.cpt -noappend
 -gpu_id 01 -nb gpu -ntomp 6
 
 Incorrect launch configuration: mismatching number of PP MPI processes and
 GPUs per node.
 mdrun_mpi was started with 4 PP MPI processes per node, but you provided 2
 GPUs.
 
 Somehow the system still considers it as a single node.
Then your MPI processes are not started correctly across the two
nodes.

 
 Do you know how to solve it?
That depends on the MPI library you are using. Probably you need to specify
some kind of a hostfile or machinefile with the names of all nodes.
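
A minimal sketch for an OpenMPI-style launch across two such nodes (hostnames
and slot counts are hypothetical; other MPI libraries use a different
machinefile syntax):

cat > hosts <<EOF
node1 slots=2
node2 slots=2
EOF
mpiexec -np 4 -hostfile hosts mdrun_mpi -s run.tpr -cpi state.cpt -cpo state.cpt -noappend -gpu_id 01 -nb gpu -ntomp 6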

Carsten

 
 
 
 Good afternoon,
 
 I am trying to use multiple nodes to do GPU simulation, each node has two
 GPUs and 12 CPUs mounted. Is there any submission script for doing that?
 
 For single node I used:
 
 mpiexec -np 2 mdrun_mpi -s run.tpr -cpi state.cpt -cpo state.cpt -noappend
 -gpu_id 01 -nb gpu -ntomp 6
 
 For 2 nodes I cannot use the script like
 
 mpiexec -np 4 mdrun_mpi -s run.tpr -cpi state.cpt -cpo state.cpt -noappend
 -gpu_id 0123 -nb gpu -ntomp 6
  The -gpu_id string refers to the GPU ids _per node_, so you should also
  use -gpu_id 01 on two or more of these nodes.
 
 Best,
  Carsten
 -- 
 Gromacs Users mailing list
 
 * Please search the archive at 
 http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!
 
 * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
 
 * For (un)subscribe requests visit
 https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
 mail to gmx-users-requ...@gromacs.org.


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/grubmueller/kutzner
http://www.mpibpc.mpg.de/grubmueller/sppexa

-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


[gmx-users] Elastic Network Model

2014-08-22 Thread shivangi nangia
Hello All,

I have a quick question regarding Elastic Network model.

If the #define RUBBER_BANDS option is not stated in the .top file, is the .itp
file (with conditional sections for when the elastic network is on) the same as
if it had been made without any elastic network?

Please guide.

Thanks,
sxn
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Elastic Network Model

2014-08-22 Thread Tsjerk Wassenaar
Hi sxn,

Rubber bands... :) So you're talking about a CG topology. If RUBBER_BANDS
is not defined, the network is not active. However, the define can also be
set from the .mdp file. Furthermore, if you're talking about ElNeDyn, then
the model is not like regular Martini if you disable the elastic network.
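
A minimal sketch of how that usually looks in practice (the bond line is
illustrative only; martinize-style topologies wrap the elastic network in an
#ifdef block like this, and the .mdp define is the standard way to switch it
on):

; in the molecule .itp
#ifdef RUBBER_BANDS
[ bonds ]
; i   j   funct  length  force   (illustrative values)
  1   5   1      0.500   500
#endif

; in the .mdp
define = -DRUBBER_BANDS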

Hope it helps,

Tsjerk
On Aug 22, 2014 7:20 PM, shivangi nangia shivangi.nan...@gmail.com
wrote:

 Hello All,

 I have a quick question regarding Elastic Network model.

 If the define RUBBER BANDS option is not stated in the .top file, the .itp
 file (with conditions for if elastic network is on) is same as if it was
 made with without any elastic network?

 Please guide.

 Thanks,
 sxn
 --
 Gromacs Users mailing list

 * Please search the archive at
 http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
 posting!

 * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

 * For (un)subscribe requests visit
 https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
 send a mail to gmx-users-requ...@gromacs.org.

-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] (no subject)

2014-08-22 Thread Tsjerk Wassenaar
Hi Suriyanarayanan,

You can check out the WeNMR Gromacs portal.

Hope it helps,

Tsjerk
On Aug 22, 2014 10:29 AM, Balasubramanian Suriyanarayanan 
bsns...@gmail.com wrote:

 Dear Users,

  generally for running a 10 ns simulation is there any online server
 facility available. This will be helpful for people who do not have a
 continuous access to the server.

 regards
 Suriyanarayanan
 --
 Gromacs Users mailing list

 * Please search the archive at
 http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
 posting!

 * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

 * For (un)subscribe requests visit
 https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
 send a mail to gmx-users-requ...@gromacs.org.

-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


[gmx-users] build membrane system from Charmm-Gui

2014-08-22 Thread Xiang Ning
Hi All, 

I am using a POPC/POPG mixed membrane built with CHARMM-GUI. After removing the
ions and water from the CHARMM-GUI pdb, saving just the membrane pdb, and using
this pdb with pdb2gmx to get a top file (I used CHARMM36), I would like to know
how to add the water and ions back to the system. I read that the previous
solution was to modify [molecules] in the top file (adding the ion and water
information manually); is there any detailed explanation of how to do that?

Thanks very much!!

Best,
Ning
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] build membrane system from Charmm-Gui

2014-08-22 Thread Justin Lemkul



On 8/22/14, 4:36 PM, Xiang Ning wrote:

Hi All,

I am using POPC/POPG mixed membrane built by Charmm-gui. After remove ions and 
water from CHARMMGUI pdb and save just the membrane pdb, and use this pdb with 
pdb2gmx to get top file (I used charmm36), then I would like to know, how to 
add water and ions back to the system? I read the previous solution was to 
modify [molecules] in top file (add ions and waters information manually), are 
there any detailed explanation of how to do that?



For the coordinates, just paste the water and ion coordinates back into the 
membrane-only file.  For the topology, indeed all you need to do is modify 
[molecules] in the .top to reflect however many waters and ions there are in the 
reconstructed system.
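
A minimal sketch of what the end of such a .top might look like after
reconstruction (molecule names and counts are hypothetical and depend on the
force-field .itp files; the order must match the order in which the molecules
appear in the coordinate file):

[ molecules ]
; name    count
POPC        100
POPG         30
TIP3       4000
SOD          30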


-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==
--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.