solvent accessible surface area?
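Something like the following should work, e.g. g_sas for the solvent accessible surface area and g_gyrate for the radius of gyration suggested below (the file names are only placeholders; check g_sas -h and g_gyrate -h for the options):

  g_sas -f traj.xtc -s topol.tpr -o area.xvg
  g_gyrate -f traj.xtc -s topol.tpr -o gyrate.xvg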
On Mon, May 18, 2009 at 11:09 AM, Tsjerk Wassenaar tsje...@gmail.comwrote:
stretching != swelling, e.g.
On Mon, May 18, 2009 at 10:46 AM, Bhanu bhanui...@gmail.com wrote:
How about checking the radius of gyration???
2009/5/18 Tsjerk Wassenaar tsje...@gmail.com
There is a log attached in the first e-mail.
My first reaction was the same. Why 14 :).
Marius
On Mon, Apr 6, 2009 at 1:10 PM, Justin A. Lemkul jalem...@vt.edu wrote:
Pawan Kumar wrote:
Hi,
You should add 14 Cl- ions.
Why 14? The number of Cl- ions that are necessary is pretty much determined by the net charge of your system.
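If you just want to neutralize the system rather than pick a number by hand, genion can work it out for you. Roughly, with the file names and the ion name being examples that depend on your force field:

  grompp -f em.mdp -c solvated.gro -p topol.top -o ions.tpr
  genion -s ions.tpr -p topol.top -o solvated_ions.gro -nname CL- -neutral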
for Gromacs 4.1.
Cheers,
Erik
On Mar 3, 2009, at 8:14 PM, David van der Spoel wrote:
Marius Retegan wrote:
Hello
Since I was unable to get a working version of Gromacs 4.0.4 on an
Itanium 2 machine with the ia64 nonbonded kernel, I was wondering what
would be the loss in speed if I would
Yes
On Wed, Mar 4, 2009 at 3:45 PM, David van der Spoel
sp...@xray.bmc.uu.se wrote:
Marius Retegan wrote:
Hi,
Thank you for the suggestions. I've managed to get version 4.0.4
compiled by using --enable-fortran and --disable-ia64-asm.
Nevertheless, some of the kernel tests still fail.
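In case it helps anyone else, the configure invocation was roughly the following (other options such as --prefix or MPI settings are omitted and will depend on your setup):

  ./configure --enable-fortran --disable-ia64-asm
  make
  make install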
Hi Jacopo,
Find the file qm_cpm.c in src/mdlib and add your parameters at line
353. You will need some parameters for zinc.
I've used the following parameters, but I haven't tested their influence
on the results:
case 30: {
strncpy(qmmm_data->atomdata[i].atomstr, "Zn",
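To be explicit, the completed case would look roughly like this; the strncpy length and the break/brace placement are my guesses, so check them against your copy of qm_cpm.c (the case labels appear to be atomic numbers, hence 30 for zinc):

  case 30: {
      /* element label handed over to CPMD; the length argument is assumed */
      strncpy(qmmm_data->atomdata[i].atomstr, "Zn", 3);
      break;
  }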
Hello
Since I was unable to get a working version of Gromacs 4.0.4 on an
Itanium 2 machine with the ia64 nonbonded kernel, I was wondering what
the loss in speed would be if I disabled the assembly loops.
Thanks,
Marius
David van der Spoel sp...@xray.bmc.uu.se wrote:
Marius Retegan wrote:
Hello
Since I was unable to get a working version of Gromacs 4.0.4 on an
Itanium 2 machine with the ia64 nonbonded kernel, I was wondering what
the loss in speed would be if I disabled the assembly loops.
About a factor of two
Hello,
I have a comment regarding the Gromacs manual. For equation (3.62) in
Chapter 3.8, \epsilon_i is called the friction constant. This is incorrect,
since in that equation \epsilon is actually the collision frequency and should
be represented by the letter \gamma. The product of the collision
frequency and the particle mass gives the friction constant.
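To make the distinction explicit, in the usual Langevin/SD notation (just a sketch; the exact form of eq. (3.62) in the manual may differ):

  m_i dv_i/dt = -m_i \gamma_i v_i + F_i(r) + sqrt(2 m_i \gamma_i k_B T) \eta_i(t),

where \gamma_i has dimensions of inverse time (a collision frequency) and the friction constant is the product m_i \gamma_i.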
How did you compile Gromacs? Did you use the --with-qmmm-cpmd flag?
On Feb 6, 2008 6:35 PM, [EMAIL PROTECTED] wrote:
Hi
I tried to use the CPMD-GMX interface with the files in the
qmmm-examples folder. I'm able to run grompp, but when I try to run
mdrun I get this message: CPMD
I had similar memory-related problems. Try compiling Gromacs with no
optimization and rerun grompp.
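Something along these lines should do it; --with-qmmm-cpmd is the flag mentioned earlier in this thread, and the remaining options depend on your setup:

  export CFLAGS="-O0"
  ./configure --with-qmmm-cpmd
  make
  make install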
On Feb 7, 2008 7:45 AM, David van der Spoel [EMAIL PROTECTED] wrote:
Chris Neale wrote:
I have a large system of 0.7 million atoms. This system runs fine on
Opterons with 4 GB of RAM.
Please give us more detail about your problem. Try posting the input files
for your calculation.
Did you test the two programs independently?
On Feb 3, 2008 2:29 PM, [EMAIL PROTECTED] wrote:
Hi, I'm trying to use the CPMD/GROMACS QM/MM interface, but when I try to
run one of the examples (the H2O dimer)
Please send the output of your calculation to the list.
Marius Retegan
On Jan 18, 2008 12:25 PM, Andrey V Golovin [EMAIL PROTECTED]
wrote:
Dear all,
We successfully passed all examples in GMX-CPMD and some other systems with
amberff with common atoms, but since we are trying to deal with K+ (potassium)
in QM
on an IBM machine, and if everything runs
smoothly, start using compiler optimizations and test again.
With respect
Marius Retegan
On Oct 31, 2007 4:36 PM, Marius Retegan [EMAIL PROTECTED] wrote:
I have approx. 81000 atoms. The system worked on an Itanium 2 cluster.
On the IBM machine I've
0.1150
NE2   opls_511   -0.564   -0.4900
C     opls_235    0.500    0.5000
O     opls_236   -0.500   -0.5000
Thank you
Marius Retegan
Problem solved.
You were right. The charges in Gromacs are an updated version of OPLS/AA.
I've compared them with parameters from Impact and they are the same.
Sorry for the inconvenience.
Marius Retegan
On Nov 13, 2007 2:09 PM, Mark Abraham [EMAIL PROTECTED] wrote:
Marius Retegan wrote:
Hello
32 GB on each node of the cluster.
Maybe I should add that I've also run CPMD and CP2K jobs on the
cluster, but I've never had memory problems.
Marius Retegan
On 10/30/07, David van der Spoel [EMAIL PROTECTED] wrote:
Marius Retegan wrote:
Dear Gromacs users
I'm having some trouble running
I have approx. 81000 atoms. The system worked on an Itanium 2 cluster.
On the IBM machine I've used the IBM compilers.
I'm going to give it a try with gcc.
Thank you
Marius Retegan
On 10/31/07, David van der Spoel [EMAIL PROTECTED] wrote:
Marius Retegan wrote:
32 GB on each node of the cluster.
Thank you
Marius Retegan
Does anyone have an idea on how to solve this problem?
Any help would be greatly appreciated.
Thank you
Marius Retegan
Hello
I have a problem with Gromacs v3.3.1 installed in my $HOME on an SGI Altix
350 with 10 Itanium 2 64-bit processors running SuSE Linux. I've tried to
run a QM/MM optimization (with CPMD for the QM part), but even before CPMD is
called the memory starts to go crazy (I saw the memory usage