Re: [gmx-users] High Pressure with Martini system

2014-04-25 Thread Mirco Wahab

On 25.04.2014 00:07, Ricardo O. S. Soares wrote:

I ran an NVT equilibration of a system of 24 million CG Martini atoms and detected the
formation of vacuum spots in the solvent.


CG Martini has many sphere types. Did you solvate your system
with 24x10^6 MARTINI-W beads?

MARTINI-W is a strongly associating LJ fluid. In large, inhomogeneous
systems (non-optimal box solvation) it can be expected to generate huge
forces if packed either too closely or not closely enough.


However, the log file shows a very large negative pressure (about -4e+28)
the whole time, right from the beginning.
Despite the negative pressure, when I turn pressure coupling ON, the very first
step causes the box to expand (!) far too much, and then the simulation crashes.


there you are ...


Things I tried without success: changing tau_p to several values (1 to 500);
changing the compressibility; changing ref_p from 1 to values closer to the
starting pressure; changing the time step from 0.02 down to 0.0005.
Additional info: I used the insane.py script to solvate the box with polarizable
water; there's a large (frozen) bilayer at the center of the box. It is, however,
smaller than the box sides, so it doesn't interact with itself via PBC.
I'm sure I could provide more info, so please ask me for the specifics and I'll
reply.


How large is the box, and is it cubic? How many lipids are in the bilayer?
What do you expect the frozen bilayer to do if you pressure-couple
your box dimensions?

Be warned that MARTINI lipids form bilayers that are
very floppy and spontaneously fold into spheres (vesicles) if
the aggregate has more than around 600 molecules.

BTW: what hardware are you on, if you expect to equilibrate
a system of that size (if the 24x10^6 figure is correct)?

Regards

M.

Disclaimer: I have simulated similar MARTINI systems of 1/10th the
size of yours.

--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] High Pressure with Martini system

2014-04-25 Thread Tsjerk Wassenaar
Hi Ricardo,

Insanely built systems with polarizable waters may be tricky, especially if
the system is large (not to say humongous). Maybe it's worth adding a stage
using normal Martini water: let that equilibrate (NVT and NpT), and then
add in the polarizable sites, letting the system equilibrate again (NVT only,
since the system size should by then pretty much match the density at the
equilibrium pressure).
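
A minimal sketch of that staged workflow, using GROMACS 4.x command names (all file names are placeholders, not from the original posts):

   # Stage 1: solvate with normal Martini W, then equilibrate NVT and NpT
   grompp -f nvt_w.mdp -c solvated_w.gro -p system_w.top -o nvt_w.tpr
   mdrun -deffnm nvt_w
   grompp -f npt_w.mdp -c nvt_w.gro -p system_w.top -o npt_w.tpr
   mdrun -deffnm npt_w

   # Stage 2: convert W to polarizable water (e.g. with a script such as
   # triple-w.py, mentioned later in this thread), then re-equilibrate under
   # NVT only: the box volume now already matches the target pressure
   grompp -f nvt_pw.mdp -c npt_w_pw.gro -p system_pw.top -o nvt_pw.tpr
   mdrun -deffnm nvt_pw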

Cheers,

Tsjerk



Re: [gmx-users] NPT simulation stage

2014-04-25 Thread Justin Lemkul



On 4/25/14, 2:36 AM, Mahboobeh Eslami wrote:

Hi GMX users,
Please help me. I want to simulate a protein-ligand complex with GROMACS 4.6.5.
When I repeat the NPT simulation stage many times, the simulation results are very
different for the pressure average. I sincerely thank you for your guidance.



Pressure is probably the most ill-defined quantity in MD simulations.  It can 
fluctuate dramatically.  Without the actual numbers, it's hard to comment, but 
issues related to pressure have been discussed extensively on the list in the 
past, and there is useful information at 
http://www.gromacs.org/Documentation/Terminology/Pressure.
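
To see what the average and spread actually look like, g_energy works; a minimal sketch (GROMACS 4.x tool, file names illustrative):

   echo Pressure | g_energy -f npt.edr -o pressure.xvg

This prints the average, RMSD, and drift of the pressure over the run; even for a well-behaved aqueous system the instantaneous values routinely swing by hundreds of bar, so averages from short repeated runs can differ noticeably.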


-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==


[gmx-users] MD simulations with Morse potential

2014-04-25 Thread fantasticqhl
Dear All,


I am trying to run simulations with the Morse potential for bonds. I can do the
minimization of the system, but I cannot run the MD without LINCS constraints on
bonds, and using the Morse potential with constraints is pointless.

The error shown was Segmentation fault (core dumped), and I did not find any
other information in the log file. The system was well minimized before starting
the MD. Here is my mdp file:

##
title            = OPLS Lysozyme NPT equilibration
;define          = -DPOSRES          ; position restrain the protein
; Run parameters
integrator       = md                ; leap-frog integrator
nsteps           = 50000             ; 2 * 50000 = 100 ps
dt               = 0.0005            ; 0.5 fs
; Output control
nstxout          = 2500              ; save coordinates every 0.2 ps
nstvout          = 2500              ; save velocities every 0.2 ps
nstenergy        = 2500              ; save energies every 0.2 ps
nstlog           = 2500              ; update log file every 0.2 ps
; Bond parameters
continuation     = no                ; restarting after NVT
;constraint_algorithm = lincs        ; holonomic constraints
;constraints     = all-bonds         ; all bonds (even heavy atom-H bonds) constrained
;lincs_iter      = 1                 ; accuracy of LINCS
;lincs_order     = 4                 ; also related to accuracy
; Neighborsearching
ns_type          = grid              ; search neighboring grid cells
nstlist          = 5                 ; 10 fs
rlist            = 1.2               ; short-range neighborlist cutoff (in nm)
rcoulomb         = 1.2               ; short-range electrostatic cutoff (in nm)
rvdw             = 1.2               ; short-range van der Waals cutoff (in nm)
; Electrostatics
coulombtype      = PME               ; Particle Mesh Ewald for long-range electrostatics
pme_order        = 4                 ; cubic interpolation
fourierspacing   = 0.16              ; grid spacing for FFT
; Temperature coupling is on
tcoupl           = V-rescale         ; modified Berendsen thermostat
tc-grps          = Protein Non-Protein   ; two coupling groups - more accurate
tau_t            = 0.1   0.1         ; time constant, in ps
ref_t            = 300   300         ; reference temperature, one for each group, in K
; Pressure coupling is on
pcoupl           = Parrinello-Rahman ; pressure coupling on in NPT
pcoupltype       = isotropic         ; uniform scaling of box vectors
tau_p            = 2.0               ; time constant, in ps
ref_p            = 1.0               ; reference pressure, in bar
compressibility  = 4.5e-5            ; isothermal compressibility of water, bar^-1
refcoord_scaling = com
; Periodic boundary conditions
pbc              = xyz               ; 3-D PBC
; Dispersion correction
DispCorr         = EnerPres          ; account for cut-off vdW scheme
; Velocity generation
gen_vel          = yes               ; velocity generation is on
gen_temp         = 300               ; temperature for Maxwell distribution
gen_seed         = 173529            ; generate a random seed
morse            = yes



Is there some special rule for MD simulations with the Morse potential? I did not
find one in the manual. Could someone give me some suggestions? Any
response will be highly appreciated!
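
For reference, when morse = yes grompp replaces each harmonic bond with a Morse potential; a sketch of the functional form:

   V(r) = D * (1 - exp(-beta * (r - b0)))^2,   beta = sqrt(k / (2*D))

where b0 is the equilibrium bond length, D the well depth, and beta is chosen so the curvature at the minimum reproduces the harmonic force constant k. The well is anharmonic, and because bonds involving hydrogen then vibrate freely with periods of roughly 10 fs, running without constraints generally requires a time step of about 0.5 fs or less, consistent with the dt chosen above.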


All the best,

Qinghua



--
View this message in context: 
http://gromacs.5086.x6.nabble.com/MD-simullations-with-Morse-potential-tp5016062.html
Sent from the GROMACS Users Forum mailing list archive at Nabble.com.


Re: [gmx-users] compilation issues with GCC/clang/CUDA on OSX

2014-04-25 Thread Anders Gabrielsson
Hi!
Note that the warnings only suggest that your build may give suboptimal 
performance (as in having no compatible GPU or a non-OpenMP-enabled compiler).
In my experience, you can actually ignore the first one about not finding GPUs. 
That never worked for me either (on Mac OS X).

Did you try proceeding to 'make' anything?
The OpenMP-related warning suggests to me that either cmake is confused about 
which compiler to use (clear out the build dir and try again?) or your gcc 
installation is somehow broken.

Happy compiling!
Anders


On 24 Apr 2014, at 21:41, Stephen N. Floor fl...@berkeley.edu wrote:

 Hello -
 
 I am trying to compile gromacs 4.6.5 with CUDA support for a new
 graphics card on OSX 10.8.5 with CUDA 6.0.37 using gcc 4.8.2 from fink
 (gcc-4 and g++-4) for OpenMP support and clang for CUDA as recommended
 previously.
 
 I am starting cmake like this:
 
 export CC=/sw/bin/gcc-4
 export CXX=/sw/bin/g++-4
 
 cmake -DGMX_GPU=ON -DCMAKE_C_COMPILER=${CC}
 -DCMAKE_CXX_COMPILER=${CXX} -DCUDA_HOST_COMPILER=/usr/bin/clang
 -DCUDA_PROPAGATE_HOST_FLAGS=OFF ..
 
 I am having two issues:
 
 1) cmake does not detect any video cards:
 
 -- Looking for NVIDIA GPUs present in the system
 -- Could not detect NVIDIA GPUs
 -- Found CUDA: /usr/local/cuda (found suitable version 6.0, minimum
 required is 3.2)
 
 While I can likely continue with compilation, this seems to suggest an
 underlying issue that may rear its head later.
 
 2) OpenMP is not detected
 
 { snipped many openmp flag test results that are all failures }
 
 CMake Warning at CMakeLists.txt:273 (message):
  The compiler you are using does not support OpenMP parallelism.  This might
  hurt your performance a lot, in particular with GPUs.  Try using a more
  recent version, or a different compiler.  For now, we are proceeding by
  turning off OpenMP.
 
 compiler info:
 
 [snf@waipio gromacs/gromacs-4.6.5/build-cmake-gpu]$ gcc-4 --version -v
 Using built-in specs.
 COLLECT_GCC=gcc-4
 COLLECT_LTO_WRAPPER=/sw/lib/gcc4.8/libexec/gcc/x86_64-apple-darwin12.5.0/4.8.2/lto-wrapper
 gcc-4 (GCC) 4.8.2
 Copyright (C) 2013 Free Software Foundation, Inc.
 This is free software; see the source for copying conditions.  There is NO
 warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
 
 
 Target: x86_64-apple-darwin12.5.0
 Configured with: ../gcc-4.8.2/configure --prefix=/sw
 --prefix=/sw/lib/gcc4.8 --mandir=/sw/share/man
 --infodir=/sw/lib/gcc4.8/info
 --enable-languages=c,c++,fortran,lto,objc,obj-c++,java --with-gmp=/sw
 --with-libiconv-prefix=/sw --with-isl=/sw --with-cloog=/sw
 --with-mpc=/sw --with-system-zlib --x-includes=/usr/X11R6/include
 --x-libraries=/usr/X11R6/lib --program-suffix=-fsf-4.8
 Thread model: posix
 gcc version 4.8.2 (GCC)
 COLLECT_GCC_OPTIONS='-mmacosx-version-min=10.8.5' '--version' '-v'
 '-mtune=core2'
 /sw/lib/gcc4.8/libexec/gcc/x86_64-apple-darwin12.5.0/4.8.2/cc1 -quiet
 -v -D__DYNAMIC__ help-dummy -fPIC -quiet -dumpbase help-dummy
 -mmacosx-version-min=10.8.5 -mtune=core2 -auxbase help-dummy -version
 --version -o /var/folders/bj/fl_w4q152yx6cg787vx5c12hgq/T//ccQjwwGf.s
 GNU C (GCC) version 4.8.2 (x86_64-apple-darwin12.5.0)
compiled by GNU C version 4.8.2, GMP version 5.1.3, MPFR
 version 3.1.2, MPC version 1.0.2
 GGC heuristics: --param ggc-min-expand=100 --param ggc-min-heapsize=131072
 COLLECT_GCC_OPTIONS='-mmacosx-version-min=10.8.5' '--version' '-v'
 '-mtune=core2'
 as -arch x86_64 -force_cpusubtype_ALL --version -o
 
 Thanks for any insight!
 -Stephen
 
 --
 Stephen N. Floor
 HHWF Postdoctoral Fellow, Doudna Group
 http://doudna.berkeley.edu



Re: [gmx-users] MD simulations with Morse potential

2014-04-25 Thread Justin Lemkul



On 4/25/14, 10:00 AM, fantasticqhl wrote:



The first problem I suspect is the use of the Parrinello-Rahman barostat while 
generating velocities.  That combination often results in instability for any 
system.  Try NVT equilibration before moving to NPT.
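
A minimal sketch of that split, as mdp fragments (everything else as in the file above; the values are illustrative, not prescriptive):

   ; nvt.mdp - generate velocities, no barostat
   gen_vel      = yes
   gen_temp     = 300
   pcoupl       = no
   continuation = no

   ; npt.mdp - continue from the NVT run, now with pressure coupling
   gen_vel      = no
   pcoupl       = Parrinello-Rahman
   continuation = yes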


-Justin



Re: [gmx-users] compilation issues with GCC/clang/CUDA on OSX

2014-04-25 Thread Stephen N. Floor
Hi, and thanks -

I did end up ignoring the warning about no compatible GPUs.  The non-OpenMP 
compiler warning went away once I finally convinced cmake to actually use gcc 4.8.2 
instead of clang - embarrassingly, it seems I was getting those errors because 
I was running cmake repeatedly without wiping the directory in between.  Making 
a fresh build dir and running cmake as listed below does recognize OpenMP 
compatibility via gcc, and the CUDA parts build fine using clang.  After ignoring 
the GPU warning it builds fine and does detect a GPU when I run 
simulations.

However, I ran into the Malformed Mach-o file issue that was mentioned by a 
previous poster.  After make finishes, I run make install and receive a huge 
number of errors like this:

CMake Error:
  Error evaluating generator expression:

    $<LINK_ONLY:-Wl,-rpath>

  $<LINK_ONLY> expression requires exactly one parameter.


CMake Warning (dev) in src/gmxlib/gpu_utils/CMakeLists.txt:
  Policy CMP0022 is not set: INTERFACE_LINK_LIBRARIES defines the link
  interface.  Run "cmake --help-policy CMP0022" for policy details.  Use the
  cmake_policy command to set the policy and suppress this warning.

  Static library target "gpu_utils" has a INTERFACE_LINK_LIBRARIES property.
  This should be preferred as the source of the link interface for this
  library.  Ignoring the property and using the link implementation as the
  link interface instead.
This warning is for project developers.  Use -Wno-dev to suppress it.

If I repeat make install, then these errors do not reappear, and it does install 
to the specified prefix.  However, executables installed in this manner are 
broken:

[snf@waipio ]$ nice -n 19 mdrun -deffnm npt-10ps
nice: mdrun: Malformed Mach-o file

A Mach-o file is an OS X executable format, similar to ELF.  This error occurs 
for any GROMACS executable that I have tried.  However, if I go into 
the freshly compiled src tree I can run mdrun or anything else just fine, which 
suggests to me that something in the make install process is garbling things up.

Anyway, I found a workaround for this by just using the executables in the 
src tree of the build directory, but thought I should share.
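
For anyone following along, a sketch of the recipe that ended up working (directory names are placeholders, and the location of mdrun inside the build tree may differ between versions):

   mkdir build-fresh && cd build-fresh   # fresh dir so cmake re-detects compilers
   export CC=/sw/bin/gcc-4 CXX=/sw/bin/g++-4
   cmake -DGMX_GPU=ON -DCMAKE_C_COMPILER=${CC} -DCMAKE_CXX_COMPILER=${CXX} \
         -DCUDA_HOST_COMPILER=/usr/bin/clang -DCUDA_PROPAGATE_HOST_FLAGS=OFF ..
   make -j4
   ./src/kernel/mdrun -version   # run from the build tree, skipping 'make install'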

Thanks,
-Stephen


Re: [gmx-users] High Pressure with Martini system

2014-04-25 Thread Ricardo O. S. Soares
Hello Mirco, thanks for your reply. Please check my notes below. 

Ricardo. 

From: Mirco Wahab mirco.wa...@chemie.tu-freiberg.de 

 To: gromacs org gmx-users
 gromacs.org_gmx-users@maillist.sys.kth.se
 Sent: Friday, 25 April 2014 3:03:18
 Subject: Re: [gmx-users] High Pressure with Martini system

 On 25.04.2014 00:07, Ricardo O. S. Soares wrote:
  I ran an NVT equilibration of a system of 24 million CG Martini atoms and
  detected the formation of vacuum spots in the solvent.

 CG Martini has many sphere types. Did you solvate your system
 with 24x10^6 MARTINI-W beads?

The whole system has about 24x10^6 atoms/beads. Since I used PW (polarizable 
water, which has 3 'atoms' -- WM, WP, W), I have about 7.5x10^6 solvent molecules, 
and therefore about 22.5x10^6 solvent atoms. 

 MARTINI-W is a strongly associating LJ fluid. In large,
 inhomogeneous
 systems (non-optimal box solvation) it can be expected to generate huge
 forces if packed either too closely or not closely enough.

  However, the log file shows a very large negative pressure (about
  -4e+28) the whole time, right from the beginning.
  Despite the negative pressure, when I turn pressure coupling ON,
  the very first step causes the box to expand (!) far too much and then
  the simulation crashes.

 there you are ...

I'm just failing to understand why a negative pressure would cause the box to 
expand, instead of contract... 
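
One way to see why the sign hardly matters here (a sketch, assuming a Berendsen-style first-order barostat; Parrinello-Rahman diverges even more violently because it integrates box velocities): each step scales the box by a factor

   mu = [1 - (dt/tau_p) * kappa * (P_ref - P)]^(1/3)

where kappa is the compressibility. With P around -4e+28 bar, the term kappa*(P_ref - P) is astronomically large, so the very first update lies far outside the linear-response regime in which this scaling is meaningful; the box dimensions change violently and the run crashes. An instantaneous virial computed from a configuration with vacuum pockets is simply not a pressure worth coupling to.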

  Things I tried without success: changing tau_p to several values (1
  to 500); changing the compressibility; changing ref_p from 1 to
  values closer to the starting pressure; changing the time step from
  0.02 down to 0.0005.
  Additional info: I used the insane.py script to solvate the box with
  polarizable water; there's a large (frozen) bilayer at the center
  of the box. It is, however, smaller than the box sides, so it
  doesn't interact with itself via PBC.
  I'm sure I could provide more info, so please ask me for the
  specifics and I'll reply.

 How large is the box, and is it cubic? How many lipids are in the bilayer?

The box dimensions are 153.32965 x 102.32965 x 62.0, and I have XXX lipids. 

 What do you expect from the frozen bilayer if you pressure-couple
 your box dimensions?

The bilayer is smaller than the XY dimensions of the box, so I'd expect it to 
remain frozen until the desired pressure and solvation are reached; then I'd 
remove the freezing code and probably add some weak restraints along the Z axis 
for further equilibration. 

 Be warned that MARTINI lipids form bilayers that are
 very floppy and spontaneously fold into spheres (vesicles) if
 the aggregate has more than around 600 molecules.

That interests me; do you have a reference for this information? Thanks. 

 BTW: what hardware are you on, if you expect to equilibrate
 a system of that size (if the 24x10^6 figure is correct)?

For this system I'm using a BlueGene/P and an SGI Altix XE 1300. 

 Regards

 M.

 Disclaimer: I have simulated similar MARTINI systems of 1/10th the
 size of yours.

I did simulate a similar, smaller (500K atoms) system before with no problems, 
but then again, 24x10^6 isn't 500K ;) 



Re: [gmx-users] High Pressure with Martini system

2014-04-25 Thread Ricardo O. S. Soares
Hello Tsjerk, 

You're right; when insanely solvating, I sometimes put the water too close or 
too far. Do you know if 0.47 is a good value for the -sold parameter? 
From my experience with Martini W beads and a small bilayer, I usually find 
that the system tends to freeze. 
However, I think that happens only after the pressure reaches a reasonable 
value, so at that point I could use the triple-w.py script to replace W with 
PW, right? 
And then, like you said, perform NVT and NpT equilibration... 

Thanks for your input, 

Best, 

Ricardo. 

--- 
Biological Chemistry and Physics 
Faculty of Pharmaceutical Sciences at Ribeirão Preto 
University of São Paulo - Brazil 


Re: [gmx-users] High Pressure with Martini system

2014-04-25 Thread Tsjerk Wassenaar
Hi Ricardo,

I think 0.47 should be fine. For normal W you can also increase -solr to
get a bit less ordering, although you need to be careful with large
systems. It should also be possible to add a certain number of antifreeze
particles, but I think you should be fine converting to PW after a short
equilibration with W.
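
For illustration, a hypothetical insane.py invocation along these lines (check insane.py -h for the exact option names in your version; file names and box sizes are placeholders):

   python insane.py -f bilayer.gro -o solvated.gro -p system.top \
          -x 15 -y 10 -z 6 -sol W -sold 0.47 -solr 0.1

Here -sold sets the solvent spacing discussed above, and a nonzero -solr adds a random displacement to the initial water positions, which reduces the lattice-like ordering (and thus the freezing tendency) of plain W.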

Cheers,

Tsjerk

Re: [gmx-users] MD simulations with Morse potential

2014-04-25 Thread fantasticqhl
Dear Justin,

Thanks very much for your reply!


I guess the use of the Parrinello-Rahman barostat should not be the problem,
because the run completes successfully if I turn off the Morse potential.
Actually, I had already done the NVT equilibration without the Morse potential
before I moved to the NPT run with it, and the error was the same for NVT with
the Morse potential. I just do not know why the Morse potential is not working.
Do you have any other ideas about what may cause the problem? Thanks very much!


All the best,
Qinghua

--
View this message in context: 
http://gromacs.5086.x6.nabble.com/MD-simullations-with-Morse-potential-tp5016062p5016072.html
Sent from the GROMACS Users Forum mailing list archive at Nabble.com.


Re: [gmx-users] MD simulations with Morse potential

2014-04-25 Thread Justin Lemkul



On 4/25/14, 5:37 PM, fantasticqhl wrote:



Well, Morse potentials aren't the functional form with which any of the 
biomolecular force fields in Gromacs were parametrized, so either (1) there's a 
bug that needs to be fixed or (2) you're doing something incompatible and the 
model physics is breaking down.


-Justin



Re: [gmx-users] MD simulations with Morse potential

2014-04-25 Thread Tsjerk Wassenaar
Hi Qinghua,

Can you run NVT equilibration? Does it work with nstlist set to 1?
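
A minimal diagnostic fragment along those lines (mdp excerpt; values illustrative):

   ; debug.mdp - short NVT run for isolating the Morse crash
   integrator = md
   dt         = 0.0005
   nsteps     = 1000
   nstlist    = 1         ; rebuild the neighbor list every step
   tcoupl     = V-rescale
   pcoupl     = no        ; no barostat while debugging
   nstxout    = 1         ; write every frame to see where it blows up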

Cheers,

Tsjerk