[gmx-users] Gibbs free energy

2018-03-05 Thread dina dusti
Dear Gromacs users
I have a question about obtaining the Gibbs free energy. I have a
micellization system in the NPT ensemble. Can I obtain the Gibbs free energy from
G = H - TS, where H is obtained from g_energy (selecting the enthalpy term) and S is
obtained from g_covar + g_anaeig? May I ask you to guide me, please?
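
For reference, a minimal sketch of that workflow with the current gmx wrapper
commands (file names and the temperature are placeholders; the quasi-harmonic
entropy from a covariance analysis is only an approximation for a micellar system):

# enthalpy: select the "Enthalpy" term when prompted
gmx energy -f md.edr -o enthalpy.xvg

# quasi-harmonic / Schlitter entropy estimate from a covariance analysis
gmx covar  -f md.xtc -s md.tpr -o eigenval.xvg -v eigenvec.trr
gmx anaeig -v eigenvec.trr -s md.tpr -entropy -temp 300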
Dina

[gmx-users] Minimal PCI Bandwidth for Gromacs and Infiniband?

2018-03-05 Thread Daniel Bauer
Hello,

In our group we have multiple identical Ryzen 1700X / Nvidia GeForce GTX 1080
compute nodes and are thinking about interconnecting them via InfiniBand.

Does anyone have information on what bandwidth GROMACS requires for
communication over InfiniBand (MPI + trajectory writing) and how it scales
with the number of nodes?

The mainboards we are currently using can only run one PCIe slot with 16
lanes. When both PCIe slots are used (GPU + InfiniBand), they run in dual x8
mode, so the bandwidth for both the GPU and the InfiniBand adapter drops from
roughly 16 GB/s to 8 GB/s. We now wonder whether the reduced bandwidth will
hurt GROMACS performance through bottlenecks in CPU/GPU communication and/or
communication over InfiniBand. If this is the case, we might have to upgrade
to new mainboards with dual x16 support.
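
One way to check what link width and generation the cards actually negotiate
(a sketch; the query fields depend on the driver version, and <bus-id> is the
GPU's PCI address from "lspci | grep VGA"):

# current PCIe generation and lane count as seen by the NVIDIA driver
nvidia-smi --query-gpu=name,pcie.link.gen.current,pcie.link.width.current --format=csv

# the same information from the kernel; LnkSta shows the negotiated speed/width
sudo lspci -vv -s <bus-id> | grep LnkSta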


Best regards,

Daniel

-- 
Daniel Bauer, M.Sc.

TU Darmstadt
Computational Biology & Simulation
Schnittspahnstr. 2
64287 Darmstadt
ba...@cbs.tu-darmstadt.de

Don't trust atoms, they make up everything.




[gmx-users] Additional Ph.D. and Post-doc positions in MD simulations of enzymes

2018-03-05 Thread Jan Brezovsky

Dear colleagues,

Unexpectedly, we have additional openings for 1 Ph.D. student and 1
Post-doc in our laboratory, starting in October 2018. For details please
see: http://labbit.eu/2018/02/23/
Thank you, and sincere apologies to those not interested.


Best regards,
Jan

--
__

Jan Brezovsky, Ph.D.
Prof. UAM & IIMCB
Laboratory of Biomolecular Interactions and Transport
__

Institute of Molecular Biology and Biotechnology
Faculty of Biology
Adam Mickiewicz University
Umultowska 89, 61-614 Poznan, Poland

and

International Institute of Molecular and Cell Biology
Trojdena 4, 02-109 Warsaw, Poland

phone: +48 61 829 5839
e-mails: jan...@amu.edu.pl, jbrezov...@iimcb.gov.pl
http://labbit.eu
__



Re: [gmx-users] problems with the output of pullx

2018-03-05 Thread Alfredo E. Cardenas
Hi all,
I want to update my own post for any user who might run into similar issues in
the future. The issue I described before was that large spikes appeared in the
values reported in the pullx file, but when I calculated the same restrained
distance using “gmx distance” no such spikes were observed. The problem was
the pbcatom (the reference atom for the treatment of PBC inside a group). I thought
I didn't have that problem because I was only restraining the z coordinate and
the peptide I was pulling inside the membrane never got near the walls in the
z direction. The problem was with the pbcatom chosen for the membrane group.
The pbcatom chosen by GROMACS was a hydrogen in the choline region, which
certainly increases the chance that some lipids in the other leaflet end up on
the wrong side of the box and create havoc during the pulling calculation. Once
I explicitly assigned a different pbcatom in the mdp file (for example, the
terminal methyl carbon of one of the lipids), the spikes in the pullx file no
longer show up.
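
A minimal sketch of the corresponding mdp change (the group number and the
atom index below are placeholders, not the actual values from my system):

; membrane pull group with an explicitly chosen PBC reference atom
pull-group2-name    = Membrane
pull-group2-pbcatom = 2563    ; e.g. the terminal methyl carbon of one lipid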
By the way, the problem was described in an earlier post:
https://mailman-1.sys.kth.se/pipermail/gromacs.org_gmx-developers/2010-April/004198.html
 


Thanks,
Alfredo



> On Feb 24, 2018, at 4:43 PM, alfredo  wrote:
> 
> Hi Mark,
> 
> Thanks for your comment. No, that is not the problem. At that location the 
> center of mass of the peptide is deep inside the membrane, the separation between 
> the two pulling groups is 0.4 nm, and the dimension of the cell along z is 
> more than 10 nm. I am only pulling along the z direction. The puzzle to me is 
> that when I extract the center-of-mass separation along z between the same 
> two groups using gmx traj, those spikes don't show up at the times when they 
> appear in the pullx file.
> 
> Alfredo
> 
> 
> 
> On 2018-02-24 11:57, Mark Abraham wrote:
>> Hi,
>> My (thoroughly uneducated) guess is that the spikes are related to the pull
>> distance approaching half of the dimensions of the cell. Not all flavours
>> of pulling can handle this. Might that be the issue?
>> Mark
>> On Sat, Feb 24, 2018, 17:55 alfredo  wrote:
>>> Hi,
>>> Updating my post. The problem has been observed on two different machines
>>> (most recently on the Skylake nodes at TACC). I assume it has to be some
>>> communication bug affecting coordinates and forces in the pull part of the
>>> code, probably seen in my case because of the large size of the peptide I
>>> am pulling inside the membrane. For now I am thinking of extracting
>>> coordinates from the trr file and computing the pulling harmonic forces
>>> from them, but that is not an ideal solution.
>>> Thanks
>>> Alfredo
>>> On 2018-02-22 10:17, Alfredo E. Cardenas wrote:
>>> > Hi,
>>> > I am using gromacs to get the PMF of a peptide of about 20 amino
>>> > acids moving inside a bilayer membrane. After pulling the peptide
>>> > into the membrane, I am now using pull-coord1-type = umbrella
>>> > and pull-coord1-geometry = distance to sample configurations in each
>>> > window of the umbrella simulations along the z axis (the axis
>>> > perpendicular to the membrane surface). Runs finish OK, with no error
>>> > messages. The problem is that when I looked at the contents of the
>>> > pullx file I observed spikes (up to 5 Å or more) in the z
>>> > coordinate separating the center of mass of the peptide from the
>>> > membrane center. But when I extract the z coordinates of the centers of
>>> > mass of the two groups and compute the difference, the values look
>>> > reasonable, with no spikes.
>>> >
>>> > Here an example (it starts good):
>>> >   time (ps)   from pullx  from traj analysis
>>> >
>>> >200000.000  0.475923002  0.475919992
>>> >200010.000  0.498394012  0.498389989
>>> >200020.000  0.527589977  0.527589977
>>> >200030.000  0.491834015  0.493739992
>>> >200040.000  0.485377997  0.485379994
>>> >200050.000  0.488474995  0.488469988
>>> >200060.000  0.507991016  0.507990003
>>> >200070.000  0.475095987  0.475100011
>>> >200080.000  0.465889990  0.465889990
>>> >200090.000  0.515878975  0.515879989
>>> >200100.000  0.501435995  0.501429975
>>> >200110.000  0.505191982  0.505190015
>>> >
>>> > Here a bad section:
>>> >
>>> >214000.000  0.427343011  0.601450026
>>> >214010.000  0.484564990  0.545799971
>>> >214020.000  0.530139029  0.603110015
>>> >214030.000  0.176231995  0.650319993
>>> >214040.000  0.342045009  0.637109995
>>> >214050.000  0.181202993  0.636659980
>>> >214060.000  0.338808000  0.595300019
>>> >214070.000  0.442301005  0.547529995
>>> >

[gmx-users] problem with inflategro

2018-03-05 Thread kordzadeh
Hi all

When I used InflateGRO in the KALP-15 in DPPC tutorial, I had no problem.

But in my system I have a receptor in a bilayer with 1152 DPPC. When I use
InflateGRO, some lines in the inflated .gro file are duplicated; this happens
for about 200 DPPC, as follows:

35DPPC  0  1716.798  16.173   4.117

   35DPPCC1 1701  20.679  15.409   8.337

   35DPPC  01717.798  16.343   4.156

   35DPPCC2 1702  20.849  15.448   8.504

   35DPPC  01718.798  16.296   4.317

   35DPPCC3 1703  20.802  15.609   8.336

   35DPPC  01719.798  16.306   4.173

   35DPPCN4 1704  20.812  15.465   8.363

  

What is wrong? What can I do? Should I remove the duplicated lines manually?
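
A quick sanity check (a sketch; the file name is a placeholder for your inflated
structure) is to compare the atom count declared on line 2 of the .gro file with
the number of coordinate lines actually present:

# .gro format: line 1 = title, line 2 = atom count, last line = box vectors
awk 'NR==2 {declared=$1} END {print "declared:", declared, " actual:", NR-3}' system_inflated.gro

If the two numbers differ, the extra lines have to be removed before the file is used.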

Thanks in advance

Regards

Azadeh



Re: [gmx-users] Parmbsc1 force-field

2018-03-05 Thread Dan Gil
Hi Dr. Lindahl,

Thanks for putting the force-field together! It is helping my research very
much. Right now I am just checking over everything to make sure I didn't
make any silly mistakes, and that everything is justified.

I might just be confusing myself, but the parmbsc1 paper cites this
paper (http://aip.scitation.org/doi/pdf/10.1063/1.466363) for the Na+
parameters, whereas the citation for the force-field you put together is
this one (https://pubs.acs.org/doi/abs/10.1021/ja00131a018). They have
different values for the 12-6 sigma and epsilon.

Same authors, but the one you chose is a more recent work than the other. Is
that why you chose the parameters from that paper?
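
As a quick check that this is not just a unit mismatch, here is a small
conversion of the ffnonbonded.itp values (nm, kJ/mol) to the units used in the
papers (this assumes the .itp lists sigma/epsilon rather than C6/C12):

awk 'BEGIN {
    sigma_nm = 0.2584; eps_kj = 0.4184
    printf "sigma = %.3f Angstrom, epsilon = %.4f kcal/mol\n", sigma_nm*10, eps_kj/4.184
}'
# prints: sigma = 2.584 Angstrom, epsilon = 0.1000 kcal/mol

So the values really do differ from the 2.350 Å / 0.1300 kcal/mol quoted below,
not just in units.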

Best Regards,

Dan

On Mon, Mar 5, 2018 at 8:32 AM, Viveca Lindahl 
wrote:

> Hi Dan,
>
> I'm the author (together with Alessandra Villa). I hope providing the
> parameters on the website helps others, but as Mark said, it's up to you
> to double-check them. If you do find actual errors, I'm interested in hearing
> about it :)
>
> --
> Viveca
>
>
> On Fri, Mar 2, 2018 at 5:19 PM, Dan Gil  wrote:
>
> > Hello, update here.
> >
> > I think there is a possibility that the parmbsc1 force-field updated on
> the
> > gromacs website has some incorrect values.
> >
> > In the parmbsc1 paper (https://www.nature.com/articles/nmeth.3658.pdf)
> > they
> > say they use Na+ parameters from this paper (
> > http://aip.scitation.org/doi/pdf/10.1063/1.466363).
> >
> >        sigma (Å)   epsilon (kcal/mol)
> > Na+    2.350       0.1300
> >
> > Here is what I find in the GROMACS force-field.
> >
> >        sigma (nm)   epsilon (kJ/mol)
> > Na+    0.2584       0.4184
> >
> > I would like to directly contact the author, but I have no means at the
> > moment.
> >
> > Best Regards,
> >
> > Dan
> >
> > On Thu, Mar 1, 2018 at 7:00 PM, Dan Gil  wrote:
> >
> > > Hi,
> > >
> > > I am using the parmbsc1 force-field (http://www.gromacs.org/@api/d
> > > eki/files/260/=amber99bsc1.ff.tgz) in GROMACS. I am looking for the
> > > original paper where the Na+ and Cl- ion 12-6 Lennard-Jones are coming
> > > from, but I am having trouble finding them.
> > >
> > > The Amber17 manual suggests that this paper (
> > https://pubs.acs.org/doi/pdf/
> > > 10.1021/ct500918t) is the source for monovalent ions. But, the values
> > > from the GROMACS parmbsc1 force-field (ffnonbonded.itp) does not match
> > the
> > > values from the paper, I think.
> > >
> > > Could you point me to the right direction? Citing the original paper is
> > > something important to me, but I have apparently hit a dead end.
> > >
> > > Best Regards,
> > >
> > > Dan Gil
> > > PhD Student
> > > Department of Chemical and Biomolecular Engineering
> > > Case Western Reserve University
> > >

Re: [gmx-users] Parmbsc1 force-field

2018-03-05 Thread Viveca Lindahl
Hi Dan,

I'm the author (together with Alessandra Villa). I hope providing the
parameters on the website helps others, but as Mark said, it's up to you
to double-check them. If you do find actual errors, I'm interested in hearing
about it :)

--
Viveca


On Fri, Mar 2, 2018 at 5:19 PM, Dan Gil  wrote:

> Hello, update here.
>
> I think there is a possibility that the parmbsc1 force-field updated on the
> gromacs website has some incorrect values.
>
> In the parmbsc1 paper (https://www.nature.com/articles/nmeth.3658.pdf)
> they
> say they use Na+ parameters from this paper (
> http://aip.scitation.org/doi/pdf/10.1063/1.466363).
>
>        sigma (Å)   epsilon (kcal/mol)
> Na+    2.350       0.1300
>
> Here is what I find in the GROMACS force-field.
>
>        sigma (nm)   epsilon (kJ/mol)
> Na+    0.2584       0.4184
>
> I would like to directly contact the author, but I have no means at the
> moment.
>
> Best Regards,
>
> Dan
>
> On Thu, Mar 1, 2018 at 7:00 PM, Dan Gil  wrote:
>
> > Hi,
> >
> > I am using the parmbsc1 force-field (http://www.gromacs.org/@api/d
> > eki/files/260/=amber99bsc1.ff.tgz) in GROMACS. I am looking for the
> > original paper where the Na+ and Cl- ion 12-6 Lennard-Jones are coming
> > from, but I am having trouble finding them.
> >
> > The Amber17 manual suggests that this paper (
> https://pubs.acs.org/doi/pdf/
> > 10.1021/ct500918t) is the source for monovalent ions. But, the values
> > from the GROMACS parmbsc1 force-field (ffnonbonded.itp) does not match
> the
> > values from the paper, I think.
> >
> > Could you point me to the right direction? Citing the original paper is
> > something important to me, but I have apparently hit a dead end.
> >
> > Best Regards,
> >
> > Dan Gil
> > PhD Student
> > Department of Chemical and Biomolecular Engineering
> > Case Western Reserve University
> >

Re: [gmx-users] 2018: large performance variations

2018-03-05 Thread Szilárd Páll
Hi,

Please keep the conversation on the gmx-users list.

On Sun, Mar 4, 2018 at 2:58 PM, Michael Brunsteiner 
wrote:

>
> Also: in the meantime I tried "-notunepme -dlb yes", and in all cases I have
> tried so far this gave performance comparable to the best performance with
> tunepme. In fact I do not quite understand why "dlb yes" (instead of tunepme)
> is not the default setting; or does DLB come with such a large overhead?
>

"DLB" and "PME tuning" a two entirely different things:
- DLB is the domain-decomposition dynamic load balancing (scaled domain
size to balance load among ranks)
- "PME tuning" is load balancing between the short- and long-range
electrostatics (i.e. shifting work from CPU to GPU or from PME ranks to PP
ranks).
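
For reference, the two mechanisms are switched independently on the mdrun
command line (a sketch; -deffnm and the run name are placeholders):

gmx mdrun -deffnm mcz1 -dlb yes  -notunepme   # DLB on, PP-PME load shifting off
gmx mdrun -deffnm mcz1 -dlb auto -tunepme     # the defaults: DLB auto, PME tuning on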


> PS: different topic: I assume you have some experience with graphics cards.
> I just bought four new GTX 1060s and ran memtestG80 ... it turns out at least
> one, probably two, of the 4 cards have damaged memory ... I wonder, am I just
> unlucky, or was this to be expected? Also, in spite of memtestG80 showing
> errors, GROMACS seems to run without hiccups on these cards ...
> does this mean I can ignore the errors reported by memtestG80?
>

Even if mdrun does not crash and seemingly produces correct results, I
would not trust cards that produce errors. GROMACS generally uses a fairly
small amount of GPU memory and puts only a moderate load on it.
That's not to say however that you won't get corruption when you e.g. run
for longer, warm the GPUs up more, etc.

I'd strongly suggest doing a thorough burn-in test and avoid using GPUs
that are known to be unstable.
I'd also recommend the cuda-memtest tool (instead of the AFAIK
outdated/unmaintained memtestG80).
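
A sketch of such a per-card check (the exact cuda-memtest invocation depends on
the build you compile; CUDA_VISIBLE_DEVICES is used here only to expose one
card at a time):

for dev in 0 1 2 3; do
    echo "=== testing GPU $dev ==="
    CUDA_VISIBLE_DEVICES=$dev ./cuda_memtest
done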

Cheers,
--
Szilárd




>
>
>
> === Why be happy when you could be normal?
>
>
> --
> *From:* Szilárd Páll 
> *To:* Discussion list for GROMACS users ; Michael
> Brunsteiner 
> *Sent:* Friday, March 2, 2018 7:29 PM
> *Subject:* Re: [gmx-users] 2018: large performance variations
>
> BTW, we have considered adding a warmup delay to the tuner; would you be
> willing to help with testing (or even contribute such a feature)?
>
> --
> Szilárd
>
> On Fri, Mar 2, 2018 at 7:28 PM, Szilárd Páll 
> wrote:
>
> Hi Michael,
>
> Can you post full logs, please? This is likely related to a known issue
> where CPU cores (and in some cases GPUs too) may take longer to clock up
> and get a stable performance than the time the auto-tuner takes to do a few
> cycles of measurements.
>
> Unfortunately we do not have a good solution for this, but what you can do
> to make runs more consistent is:
> - try "warming up" the CPU/GPU before production runs (e.g. stress -c or
> just a dummy 30 sec mdrun run)
> - repeat the benchmark a few times, see which cutoff / grid setting is
> best, set that in the mdp options and run with -notunepme
>
> Of course the latter may be too tedious if you have a variety of
> systems/inputs to run.
>
> Regarding tune_pme: that issue is related to resetting timings too early
> (for -resetstep see mdrun -h -hidden); not sure if we have a fix, but
> either way tune_pme is better suited for tuning the separate PME rank
> count of parallel runs.
>
> Cheers,
>
> --
> Szilárd
>
> On Thu, Mar 1, 2018 at 7:11 PM, Michael Brunsteiner 
> wrote:
>
> Hi,
>
> I ran a few MD runs with identical input files (the SAME tpr file; mdp
> included below) on the same computer with gmx 2018 and observed rather large
> performance variations (~50%), as in:
>
> grep Performance */mcz1.log
> 7/mcz1.log:Performance:    98.510    0.244
> 7d/mcz1.log:Performance:  140.733    0.171
> 7e/mcz1.log:Performance:  115.586    0.208
> 7f/mcz1.log:Performance:  139.197    0.172
>
> It turns out the load-balancing effort that is done at the beginning gives
> quite different results:
>
> grep "optimal pme grid" */mcz1.log
> 7/mcz1.log:   optimal pme grid 32 32 28, coulomb cutoff 1.394
> 7d/mcz1.log:  optimal pme grid 36 36 32, coulomb cutoff 1.239
> 7e/mcz1.log:  optimal pme grid 25 24 24, coulomb cutoff 1.784
> 7f/mcz1.log:  optimal pme grid 40 36 32, coulomb cutoff 1.200
>
> Next I tried tune_pme, as in:
>
> gmx tune_pme -mdrun 'gmx mdrun' -nt 6 -ntmpi 1 -ntomp 6 -pin on -pinoffset 0 -s mcz1.tpr -pmefft cpu -pinstride 1 -r 10
>
> which didn't work ... in some log file it says:
>
> Fatal error:
> PME tuning was still active when attempting to reset mdrun counters at step
> 1500. Try resetting counters later in the run, e.g. with gmx mdrun -resetstep.
>
> I found no documentation regarding "-resetstep" ...
>
> I could of course optimize the PME grid manually, but since I plan to run a
> large number of jobs with different systems and sizes this would be a lot of
> work, and if possible I'd like to avoid that.
> is

[gmx-users] Tabulated potential between atoms and virtual sites

2018-03-05 Thread Thomas Tarenzi
Dear Gromacs users,

I would like to use a tabulated potential between the C-alphas of a protein and 
the virtual sites corresponding to the center of mass of the water molecules. I 
am also using a tabulated potential to define the interactions between these 
virtual sites of the solvent. In the mdp file I added the following lines:

energygrps      = B CA atoms    ; B is the water virtual site; "atoms" includes all the rest of the system
energygrp_table = B B B CA

However, I see in the log file that only the file table_B_B.xvg is read, but
not the file table_B_CA.xvg (and in fact, I see from the simulation that I
don’t have those interactions). By contrast, if I apply the same table to the
interactions between CA and the atoms of the water molecules (not the virtual
sites), it works. Does this mean that I cannot apply tabulated potentials
between atoms and virtual sites, or that I should change something in the
definition of the virtual site?

They are defined in an itp file in this way:

[ moleculetype ]
; molname  nrexcl
SOL 2

[ atoms ]
; id  at type  res nr  residue name  at name  cg nr  charge
   1  ow       1       SOL           ow       1      -0.8476
   2  hw       1       SOL           hw1      1       0.4238
   3  hw       1       SOL           hw2      1       0.4238
   4  B        1       SOL           B        2       0.0000

[ virtual_sites3 ]
; site  ai  aj  ak   funct   a            b
4 1 2 3 1 0.05595E+00 0.05595E+00

#ifdef FLEXIBLE
[ bonds ]
; i j   funct   length  force.c.
  1 2   1   0.1 345000
  1 3   1   0.1 345000
  
[ angles ]
; i  j  k   funct   angle   force.c.
  2  1  3   1   109.47  383

#else
[ settles ]
; i     funct   doh     dhh
  1     1       0.1     0.1633
#endif

[ exclusions ]
1   2   3   4
2   1   3   4
3   1   2   4

In the topology I added these lines:

[ atomtypes ]
; name    mass        charge    ptype   C6             C12
  ow      15.99940    -0.8476   A       2.617065E-03   2.633456E-06
  hw       1.00800     0.4238   A       0.000000E+00   0.000000E+00
  B        0.00000     0.0000   V       0.000000E+00   0.000000E+00

[ nonbond_params ]
; i   j   func   c6             c12
  B   B   1      0.000000E+00   1.000000E+00
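
As a side note, a quick way to confirm which pair tables mdrun actually finds
and reads (a sketch; file and run names are placeholders, and this only checks
the bookkeeping, not the virtual-site question itself):

# the pair tables must be named after the energy groups and be where mdrun looks for them
ls table.xvg table_B_B.xvg table_B_CA.xvg

# after a short run, check which table files are mentioned in the log
grep -i "table" md.log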

-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.