[gmx-users] glibc detected *** g_sas_d

2013-07-18 Thread Rasoul Nasiri
Hello all,

I'm trying to determine how the surface area of a nano-drop changes during
evaporation in vacuum.

When I filter the trajectory down to the non-evaporated molecules with trjconv and use
g_sas to calculate their surface area, it usually crashes (I'm using version
4.5.5).

Does this issue still exist in 4.6.*? How can it be resolved?


Best
Rasoul
-- 
gmx-users mailing list    gmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the 
www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists


Re: [gmx-users] glibc detected *** g_sas_d

2013-07-18 Thread Justin Lemkul



On 7/18/13 6:52 AM, Rasoul Nasiri wrote:

Hello all,

I'm trying to determine how the surface area of a nano-drop changes during
evaporation in vacuum.

When I filter the trajectory down to the non-evaporated molecules with trjconv and use
g_sas to calculate their surface area, it usually crashes (I'm using version
4.5.5).



What's the error message?  What's your command?


Does this issue still exist in 4.6.*? How can it be resolved?



Have you tried version 4.6?  That's the quickest way to know.  If there are 
still problems, you'll need to provide more information.


-Justin

--
==

Justin A. Lemkul, Ph.D.
Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441

==


Re: [gmx-users] glibc detected *** g_sas_d

2013-07-18 Thread Rasoul Nasiri
Below are the commands and the error message:

1- trjconv_d -f traj.xtc -n maxclust.ndx -o traj_out.xtc

2- g_sas_d -f traj_out.xtc -n maxclust.ndx -o surface.xvg -nopbc


glibc detected *** g_sas_d: malloc(): memory corruption: 0x016dfcd0


Rasoul
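For reference, g_sas prompts for two index groups (one for the surface
calculation and one for output), so a fully non-interactive sketch of the same
two steps could look like the lines below; the group name MaxClust and the
topol.tpr file are assumptions, to be adjusted to whatever maxclust.ndx and the
run actually contain:

  # pipe the group selections in on stdin instead of answering the prompts
  echo MaxClust | trjconv_d -f traj.xtc -n maxclust.ndx -o traj_out.xtc
  echo "MaxClust MaxClust" | g_sas_d -f traj_out.xtc -s topol.tpr -n maxclust.ndx -o surface.xvg -nopbc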





Re: [gmx-users] glibc detected *** g_sas_d

2013-07-18 Thread Rasoul Nasiri
Justin,

I just ran this calculation on version 4.6-GPU-dev-20120501-ec56c and I
will let you know about the outcome.

Rasoul




Re: [gmx-users] glibc detected *** g_sas_d

2013-07-18 Thread Justin Lemkul



On 7/18/13 8:04 AM, Rasoul Nasiri wrote:

Justin,

I just ran this calculation on version 4.6-GPU-dev-20120501-ec56c and I
will let you know about the outcome.



The outcome of 4.6.3 would be more interesting than an outdated development 
version.

-Justin



--
==

Justin A. Lemkul, Ph.D.
Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441

==


Re: [gmx-users] meaning of results of g_hbond -ac

2013-07-18 Thread Erik Marklund
* Time
* Ac(hbond) with correction for the fact that a finite system is being 
simulated.
* Ac(hbond) without correction
* Cross correlation between hbonds and contacts (see the papers by 
Luzar & Chandler and van der Spoel that are mentioned in the stdout from g_hbond)
* Derivative of second column.
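For a quick visual check of which curve is which, all four data columns can be
plotted against time in one go; a minimal sketch, assuming the ihbac2.xvg file
from the quoted command below:

  # -nxy reads one x column followed by several y columns (s0-s3, in the order listed above)
  xmgrace -nxy ihbac2.xvg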

On 18 Jul 2013, at 04:51, Wu Chaofu xiaowu...@gmail.com wrote:

 Dear gmxers,
 By running the command g_hbond -ac, a resulting .xvg file is generated,
 which is attached below. In that file there are five columns. I guess
 that the first column is time and the second is the HB autocorrelation function,
 but what are the other columns, denoted s1, s2, s3? Thanks a lot for any
 reply.
 Yours sincerely,
 Chaofu Wu
 
 # This file was created Wed Jul 17 09:49:34 2013
 # by the following command:
 # g_hbond -f iconf.xtc -s conf.tpr -n -ac ihbac2.xvg
 #
 # g_hbond is part of G R O M A C S:
 #
 # GROtesk MACabre and Sinister
 #
 @title Hydrogen Bond Autocorrelation
 @xaxis  label Time (ps)
 @yaxis  label C(t)
 @TYPE xy
 @ view 0.15, 0.15, 0.75, 0.85
 @ legend on
 @ legend box on
 @ legend loctype view
 @ legend 0.78, 0.8
 @ legend length 2
 @ s0 legend Ac\sfin sys\v{}\z{}(t)
 @ s1 legend Ac(t)
 @ s2 legend Cc\scontact,hb\v{}\z{}(t)
 @ s3 legend -dAc\sfs\v{}\z{}/dt
  0    1           1            -5.79228e-11   0.902311
  1    0.0995577   0.100186      0.134557      0.455234
  2    0.0895326   0.0901676     0.126435      0.00815676
  3    0.0832442   0.0838836     0.123311      0.00541426
  4    0.0787041   0.0793466     0.117936      0.00376486
  5    0.0757145   0.0763591     0.116489      0.00198866
  6    0.0747267   0.0753721     0.112655      0.00177937
  7    0.0721557   0.0728029     0.109565      0.00255146
  8    0.0696238   0.0702727     0.107288      0.00241577
  9    0.0673242   0.0679747     0.106287      0.00157028
 10    0.0664832   0.0671343     0.104746      0.0014926
 11    0.064339    0.0649916     0.101947      0.00148105
 12    0.0635211   0.0641743     0.0991141     0.00131835
 13    0.0617023   0.0623567     0.101322      0.00147757
 14    0.060566    0.0612212     0.0965892     0.000719808
 .



Re: [gmx-users] glibc detected *** g_sas_d

2013-07-18 Thread Rasoul Nasiri
The error message using 4.6:
--
*** glibc detected *** g_sas_d: malloc(): memory corruption:
0x0065c8b0 ***
=== Backtrace: =
/lib64/libc.so.6(+0x75018)[0x7f3b97442018]
/lib64/libc.so.6(+0x77fff)[0x7f3b97444fff]
/lib64/libc.so.6(__libc_calloc+0xc8)[0x7f3b974466b8]
/usr/local/packages/gmx/4.6.0-phase3/lib/libgmx_d.so.6(save_calloc+0x46)[0x7f3b97e97dc6]
/usr/local/packages/gmx/4.6.0-phase3/lib/libgmxana_d.so.6(sas_plot+0x4e1)[0x7f3b990ecb61]
/usr/local/packages/gmx/4.6.0-phase3/lib/libgmxana_d.so.6(gmx_sas+0x4be)[0x7f3b990f0a7e]
g_sas_d(main+0x9)[0x400899]
/lib64/libc.so.6(__libc_start_main+0xe6)[0x7f3b973ebbc6]
g_sas_d[0x4007c9]
=== Memory map: 
0040-00401000 r-xp  00:19 2829145631
/nfs01/y07/y07/gmx/4.6.0-phase3/bin/g_sas_d
0060-00601000 r--p  00:19 2829145631
/nfs01/y07/y07/gmx/4.6.0-phase3/bin/g_sas_d
00601000-00602000 rw-p 1000 00:19 2829145631
/nfs01/y07/y07/gmx/4.6.0-phase3/bin/g_sas_d
00602000-00691000 rw-p  00:00 0
[heap]
7f3b9000-7f3b90021000 rw-p  00:00 0
7f3b90021000-7f3b9400 ---p  00:00 0
7f3b96da-7f3b96db6000 r-xp  00:0f 2285586
/lib64/libgcc_s.so.1
7f3b96db6000-7f3b96fb5000 ---p 00016000 00:0f 2285586
/lib64/libgcc_s.so.1
7f3b96fb5000-7f3b96fb6000 r--p 00015000 00:0f 2285586
/lib64/libgcc_s.so.1
7f3b96fb6000-7f3b96fb7000 rw-p 00016000 00:0f 2285586
/lib64/libgcc_s.so.1
7f3b96fb7000-7f3b971b7000 rw-p  00:00 0
7f3b971b7000-7f3b971cc000 r-xp  00:0f 2285660
/lib64/libz.so.1.2.3
7f3b971cc000-7f3b973cb000 ---p 00015000 00:0f 2285660
/lib64/libz.so.1.2.3
7f3b973cb000-7f3b973cc000 r--p 00014000 00:0f 2285660
/lib64/libz.so.1.2.3
7f3b973cc000-7f3b973cd000 rw-p 00015000 00:0f 2285660
/lib64/libz.so.1.2.3
7f3b973cd000-7f3b97522000 r-xp  00:0f 2285709
/lib64/libc-2.11.1.so
7f3b97522000-7f3b97721000 ---p 00155000 00:0f 2285709
/lib64/libc-2.11.1.so
7f3b97721000-7f3b97725000 r--p 00154000 00:0f 2285709
/lib64/libc-2.11.1.so
7f3b97725000-7f3b97726000 rw-p 00158000 00:0f 2285709
/lib64/libc-2.11.1.so
7f3b97726000-7f3b9772b000 rw-p  00:00 0
7f3b9772b000-7f3b97742000 r-xp  00:0f 2285826
/lib64/libpthread-2.11.1.so
7f3b97742000-7f3b97942000 ---p 00017000 00:0f 2285826
/lib64/libpthread-2.11.1.so
7f3b97942000-7f3b97943000 r--p 00017000 00:0f 2285826
/lib64/libpthread-2.11.1.so
7f3b97943000-7f3b97944000 rw-p 00018000 00:0f 2285826
/lib64/libpthread-2.11.1.so
7f3b97944000-7f3b97948000 rw-p  00:00 0
7f3b97948000-7f3b9799d000 r-xp  00:0f 2285808
/lib64/libm-2.11.1.so
7f3b9799d000-7f3b97b9c000 ---p 00055000 00:0f 2285808
/lib64/libm-2.11.1.so
7f3b97b9c000-7f3b97b9d000 r--p 00054000 00:0f 2285808
/lib64/libm-2.11.1.so
7f3b97b9d000-7f3b97b9e000 rw-p 00055000 00:0f 2285808
/lib64/libm-2.11.1.so
7f3b97b9e000-7f3b97ba r-xp  00:0f 2285806
/lib64/libdl-2.11.1.so
7f3b97ba-7f3b97da ---p 2000 00:0f 2285806
/lib64/libdl-2.11.1.so
7f3b97da-7f3b97da1000 r--p 2000 00:0f 2285806
/lib64/libdl-2.11.1.so
7f3b97da1000-7f3b97da2000 rw-p 3000 00:0f 2285806
/lib64/libdl-2.11.1.so
7f3b97da2000-7f3b98233000 r-xp  00:19 1503619150
/nfs01/y07/y07/gmx/4.6.0-phase3/lib/libgmx_d.so.6
7f3b98233000-7f3b98433000 ---p 00491000 00:19 1503619150
/nfs01/y07/y07/gmx/4.6.0-phase3/lib/libgmx_d.so.6
7f3b98433000-7f3b9843a000 r--p 00491000 00:19 1503619150
/nfs01/y07/y07/gmx/4.6.0-phase3/lib/libgmx_d.so.6
7f3b9843a000-7f3b98445000 rw-p 00498000 00:19 1503619150
/nfs01/y07/y07/gmx/4.6.0-phase3/lib/libgmx_d.so.6
7f3b98445000-7f3b98446000 rw-p  00:00 0
7f3b98446000-7f3b98598000 r-xp  00:0f 3760217
/usr/lib64/libxml2.so.2.7.6
7f3b98598000-7f3b98797000 ---p 00152000 00:0f 3760217
/usr/lib64/libxml2.so.2.7.6
7f3b98797000-7f3b9879f000 r--p 00151000 00:0f 3760217
/usr/lib64/libxml2.so.2.7.6
7f3b9879f000-7f3b987a1000 rw-p 00159000 00:0f 3760217
/usr/lib64/libxml2.so.2.7.6
7f3b987a1000-7f3b987a2000 rw-p  00:00 0
7f3b987a2000-7f3b98989000 r-xp  00:0f 8565593
/opt/fftw/3.3.0.0/x86_64/lib/libfftw3.so.3.3.0
7f3b98989000-7f3b98b89000 ---p 001e7000 00:0f 8565593
/opt/fftw/3.3.0.0/x86_64/lib/libfftw3.so.3.3.0
7f3b98b89000-7f3b98b9a000 rw-p 001e7000 00:0f 8565593
/opt/fftw/3.3.0.0/x86_64/lib/libfftw3.so.3.3.0
7f3b98b9a000-7f3b98dbc000 r-xp  00:19 3521599651
/nfs01/y07/y07/gmx/4.6.0-phase3/lib/libmd_d.so.6
7f3b98dbc000-7f3b98fbb000 ---p 00222000 00:19 3521599651
/nfs01/y07/y07/gmx/4.6.0-phase3/lib/libmd_d.so.6
7f3b98fbb000-7f3b98fbc000 r--p 00221000 00:19 3521599651
/nfs01/y07/y07/gmx/4.6.0-phase3/lib/libmd_d.so.6
7f3b98fbc000-7f3b98fbe000 rw-p 00222000 00:19 3521599651
/nfs01/y07/y07/gmx/4.6.0-phase3/lib/libmd_d.so.6
7f3b98fbe000-7f3b99203000 r-xp  00:19 2309614127

Re: [gmx-users] glibc detected *** g_sas_d

2013-07-18 Thread Justin Lemkul



On 7/18/13 11:52 AM, Rasoul Nasiri wrote:

 The error message using 4.6:
 --
 *** glibc detected *** g_sas_d: malloc(): memory corruption:
 0x0065c8b0 ***
 [backtrace and memory map snipped]

[gmx-users] segfault with an otherwise stable system when I turn on FEP (complete decoupling)

2013-07-18 Thread Christopher Neale
Dear Users:

I have a system with water and a drug (54 total atoms; 27 heavy atoms). The 
system is stable when I simulate it for 1 ns. However, once I add the following 
options to the .mdp file, the run dies after a few ps with a segfault.

free-energy = yes
init-lambda = 1
couple-lambda0 = vdw-q
couple-lambda1 = none
couple-intramol = no
couple-moltype = drug

I do not get any step files or any lincs warnings. If I look at the .xtc and 
.edr files, there is no indication of something blowing up before the segfault. 
I have also verified that the drug runs without any problem in vacuum. I get 
the same behaviour if I remove constraints and use a timestep of 0.5 fs. The 
segfault is reproducible with v4.6.1 and v4.6.3. I am using the charmm FF, but 
I converted all UB angles in my drug to type-1 angles and still got the 
segfault. I also get the segfault with particle decomposition and/or while 
running a single thread. I am currently using the SD integrator, but I get the 
same segfault with md and md-vv. Couple-intramol=yes doesn't resolve it, nor 
does using separate T-coupling groups for the water and drug, nor does turning 
off pressure coupling.

Here is the .mdp file that works fine, but gives me a segfault when I add the 
free energy stuff (above):

constraints = all-bonds
lincs-iter =  1
lincs-order =  6
constraint_algorithm =  lincs
integrator = sd
dt = 0.002
tinit = 0
nsteps = 10
nstcomm = 1
nstxout = 0
nstvout = 0
nstfout = 0
nstxtcout = 500
nstenergy = 500
nstlist = 10
nstlog=0 ; reduce log file size
ns_type = grid
vdwtype = cut-off
rlist = 0.8
rvdw = 0.8
rcoulomb = 0.8
coulombtype = cut-off
tc_grps =  System
tau_t   =  1.0
ld_seed =  -1
ref_t = 310
gen_temp = 310
gen_vel = yes
unconstrained_start = no
gen_seed = -1
Pcoupl = berendsen
pcoupltype = isotropic
tau_p = 4 
compressibility = 4.5e-5 
ref_p = 1.0 

I do realize that some of these settings are not ideal for a production run. I 
started with the real CHARMM cutoffs + PME, etc. (which also give the 
segfault), but this is what I am using right now for quick testing.

The only thing keeping me from filing a redmine issue is that if I remove my 
drug and do the FEP on one of the water molecules (using the FEP code listed 
above), I have no segfault. Therefore it is clearly related to the drug, whose 
parameters I built so I may have caused the problem somehow. Nevertheless, the 
drug runs fine in water and in vacuum without the FEP code, so I can't imagine 
what could be causing this segfault (also, the fact that it's a segfault means 
that I don't get any useful info from mdrun as to what might be going wrong).

Thank you,
Chris.
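One way to squeeze more information out of a segfault like this is to run the
failing step under a debugger and capture a stack trace; a minimal sketch,
assuming a single-node run and placeholder file names (fep_test.*):

  # let the crash produce a core and a backtrace; -nt 1 keeps mdrun on one thread
  ulimit -c unlimited
  gdb -ex run -ex bt --args mdrun -nt 1 -deffnm fep_test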


Re: [gmx-users] segfault with an otherwise stable system when I turn on FEP (complete decoupling)

2013-07-18 Thread Michael Shirts
Chris, can you post a redmine on this so I can look at the files?

Also, does it crash immediately, or after a while?



[gmx-users] segfault with an otherwise stable system when I turn on FEP (complete decoupling)

2013-07-18 Thread Christopher Neale
Dear Michael:

I have uploaded them as http://redmine.gromacs.org/issues/1306

It does not crash immediately. The crash is stochastic, giving a segfault 
between 200 and 5000 integration steps. That made me think it was a simple 
exploding system problem, but there are other things (listed in my original 
post) that make me think otherwise. Most notably, the drug is fine in both 
water and vacuum. I have also built numerous systems and get the crash in each 
one. My actual production system also has a protein, but during debugging I 
found that the error persists in a simple and small water solution.

Thank you for your assistance.
Chris.

-- original message --

Chris, can you post a redmine on this so I can look at the files?

Also, does it crash immediately, or after a while?



[gmx-users] Multi-level parallelization: MPI + OpenMP

2013-07-18 Thread Éric Germaneau

Dear all,

I'm not a GROMACS user myself; I've installed GROMACS 4.6.3 on our cluster 
and am running some tests.

Each node of our machine has 16 cores and 2 GPUs.
I'm trying to figure out how to submit efficient multi-node LSF jobs 
that use the maximum of the resources.
After reading the documentation on Acceleration and parallelization 
(http://www.gromacs.org/Documentation/Acceleration_and_parallelization#Locking_threads_to_physical_cores) 
I got confused, so I'd like to ask for some help.

I'm just wondering whether someone has experience with this.
I thank you in advance,

Éric.
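For what it's worth, a common pattern on nodes like these is one MPI rank per
GPU, with the remaining cores used as OpenMP threads. A rough sketch for two
such nodes follows, assuming Open MPI-style mpirun options; the queue name,
file names and rank placement flags are assumptions to adapt to your site:

  #!/bin/sh
  #BSUB -n 32                    # 2 nodes x 16 cores
  #BSUB -R "span[ptile=16]"      # keep all 16 cores of a node on one host
  #BSUB -q normal                # queue name is a placeholder
  # 2 MPI ranks per node, each driving one GPU and 8 OpenMP threads
  mpirun -np 4 -npernode 2 mdrun_mpi -ntomp 8 -gpu_id 01 -deffnm topol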

--
"Be the change you wish to see in the world"
 --- Mahatma Gandhi ---

Éric Germaneau http://hpc.sjtu.edu.cn/index.htm

Shanghai Jiao Tong University
Network & Information Center
room 205
Minhang Campus
800 Dongchuan Road
Shanghai 200240
China

View Éric Germaneau's profile on LinkedIn 
http://cn.linkedin.com/pub/%C3%A9ric-germaneau/30/931/986


Please, if possible, don't send me MS Word or PowerPoint attachments.
Why? See: http://www.gnu.org/philosophy/no-word-attachments.html



[gmx-users] g_hbond for trajectory without having box information

2013-07-18 Thread bipin singh
Hello all,

I was using g_hbond to calculate H-bonds for a trajectory assembled from several
individual snapshots from an MD simulation, but because this trajectory does
not contain simulation-box information, g_hbond gives
the following error:

Fatal error:
Your computational box has shrunk too much.
g_hbond_mpi can not handle this situation, sorry.


Please let me know if there is any way to rectify this error.


-- 
---
Thanks and Regards,
Bipin Singh


Re: [gmx-users] g_hbond for trajectory without having box information

2013-07-18 Thread David van der Spoel

On 2013-07-19 06:26, bipin singh wrote:

Hello all,

I was using g_hbond to calculate H-bonds for a trajectory assembled from several
individual snapshots from an MD simulation, but because this trajectory does
not contain simulation-box information, g_hbond gives
the following error:

Fatal error:
Your computational box has shrunk too much.
g_hbond_mpi can not handle this situation, sorry.


Please let me know if there is any way to rectify this error.



you can add a box to your trajectory using trjconv.
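A minimal sketch of that, assuming the concatenated snapshots sit in
snapshots.xtc, a matching conf.tpr exists, and a 10 nm cubic box is large
enough (all three are assumptions):

  # rewrite the frames with a 10x10x10 nm cubic box attached
  trjconv -f snapshots.xtc -s conf.tpr -box 10 10 10 -o snapshots_box.xtc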

--
David van der Spoel, Ph.D., Professor of Biology
Dept. of Cell & Molec. Biol., Uppsala University.
Box 596, 75124 Uppsala, Sweden. Phone: +46184714205.
sp...@xray.bmc.uu.se    http://folding.bmc.uu.se


Re: [gmx-users] g_hbond for trajectory without having box information

2013-07-18 Thread bipin singh
Thanks a lot Prof. David. I will try this.






-- 
---
Thanks and Regards,
Bipin Singh