Re: [gmx-users] Videocard selection

2019-03-12 Thread pbuscemi
Roughly 3500 cores vs. 4500 cores. See

http://hwbench.com/vgas/geforce-rtx-2080-ti-vs-geforce-gtx-titan-xp

and

https://gpu.userbenchmark.com/Compare/Nvidia-Titan-Xp-vs-Nvidia-RTX-2080-Ti/m265423vs4027

However, if you must have support, then it has to be the Titan.

-Original Message-
From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
 On Behalf Of Benson Muite
Sent: Tuesday, March 12, 2019 8:57 AM
To: gromacs.org_gmx-users@maillist.sys.kth.se
Subject: Re: [gmx-users] Videocard selection

Hi!

For most applications single precision performance is most important - you may 
want to check whether this will be fine for your workflow.

An older study is at:

https://arxiv.org/pdf/1507.00898.pdf

Regards,

Benson

On 3/12/19 2:18 PM, Никита Шалин wrote:
> Dear Gromacs users,
>
> I would like to buy a video card for GPU calculations and am choosing between 
> the RTX 2080 Ti and the Titan Xp. Please tell me which card to choose, and 
> which characteristics of the card matter most for the calculations.
>
> I am modeling a copolymer system with an applied electric field.
>
> Thank you in advance!
>
--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Pore size calculation

2019-03-12 Thread pbuscemi
If you can actually model water in the pore, then run a few experiments
changing the VDW radius of the water oxygen.  This will give you the
limiting size of the pore, which is the parameter you actually need to
know.
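As a rough sketch of that idea (not from the original thread; the layout and
values depend on your force field - the numbers below are the usual SPC/E
parameters): edit the OW sigma in a local copy of the force field's nonbonded
parameter file, re-run grompp, and repeat with a few different sigmas:

[ atomtypes ]
; name  at.num  mass      charge   ptype  sigma (nm)  epsilon (kJ/mol)
  OW    8       15.99940  0.00000  A      0.316557    0.650194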

-Original Message-
From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se
 On Behalf Of RAHUL
SURESH
Sent: Tuesday, March 12, 2019 9:09 AM
To: gmx-us...@gromacs.org
Subject: Re: [gmx-users] Pore size calculation

Hi Mr. Paul.
It might not have been clear in my previous mail. Sorry for the
inconvenience.

I have simulated a GPCR membrane protein in a POPC bilayer. I need to calculate
the pore size in the protein structure, not in the lipid bilayer. Specifically,
I intend to calculate water permeation through the pathway in the protein
structure, so I want to measure the pore radius of the protein and the
permeation of water through that pathway.
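One hedged way to get a first estimate of water occupancy (file names, the
index group "Pore", and the 0.6 nm cutoff below are only placeholders): count
water oxygens near the pore-lining residues per frame with gmx select, e.g.

gmx select -s md.tpr -f md.xtc -n index.ndx \
    -select 'name OW and within 0.6 of group "Pore"' \
    -os pore_water_count.xvg

For an actual radius profile along the permeation pathway, a dedicated tool
such as HOLE is normally used.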

Thank you


On Tue, Mar 12, 2019 at 6:56 PM  wrote:

> Rahul,
>
> What do you mean by pore size?  A hole in the membrane caused by 
> external tension ?  The space occupied by the protein ? Does lipid 
> occupy the walls of the pore ?
>
> Paul
>
> -Original Message-
> From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se
>  On Behalf Of RAHUL 
> SURESH
> Sent: Tuesday, March 12, 2019 6:23 AM
> To: gmx-us...@gromacs.org
> Subject: [gmx-users] Pore size calculation
>
> Hi. I have a Protein_POPC bilayer system, simulated for 1000ns.
>
> Is it possible to calculate the pore size of the membrane protein?
> I tried the tool gmx_hole, but ended up with an unidentified error.
>
> Can anyone help me with this?
>
> Thanks in advance
> --
> *Regards,*
> *Rahul *
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before 
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or 
> send a mail to gmx-users-requ...@gromacs.org.
>
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before 
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or 
> send a mail to gmx-users-requ...@gromacs.org.
>


--
*Regards,*
*Rahul *
--
Gromacs Users mailing list

* Please search the archive at
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a
mail to gmx-users-requ...@gromacs.org.



Re: [gmx-users] Pore size calculation

2019-03-12 Thread pbuscemi
Rahul,

What do you mean by pore size?  A hole in the membrane caused by external
tension ?  The space occupied by the protein ? Does lipid occupy the walls
of the pore ?

Paul

-Original Message-
From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se
 On Behalf Of RAHUL
SURESH
Sent: Tuesday, March 12, 2019 6:23 AM
To: gmx-us...@gromacs.org
Subject: [gmx-users] Pore size calculation

Hi. I have a Protein_POPC bilayer system, simulated for 1000ns.

Is it possible to calculate the pore size of the membrane protein?
I tried the tool gmx_hole, but ended up with an unidentified error.

Can anyone help me with this?

Thanks in advance
-- 
*Regards,*
*Rahul *
-- 
Gromacs Users mailing list

* Please search the archive at
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a
mail to gmx-users-requ...@gromacs.org.



Re: [gmx-users] Itp for a longer molecule out of a shorter one

2019-03-06 Thread pbuscemi
Alex,

Having the itp for the shorter molecule, you have most of what you need. Use
gmx x2top to create the top file for the longer molecule. If necessary, adjust
the atomname2type.n2t file in the force-field directory to create any required
atom types, being sure to select the proper force field. Charges and bond
lengths can be taken from the existing pdb and itp where needed. Use Avogadro
for a quick reference to model parameters. I've made various models of Pebax
and nylon up to 100k's MW using this method.
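A minimal sketch of that x2top step (file and molecule names are placeholders;
the chosen force field must provide an atomname2type.n2t file):

gmx x2top -f long_polymer.pdb -o long_polymer.top -ff oplsaa -name POLY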

Also, ATB can generate the itp for polymers up to 600 atoms if you use the
GROMOS 54A7 force field.

Hope this helps

Paul Buscemi, Ph.D.
UMN BICB

-Original Message-
From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se
 On Behalf Of Alex
Sent: Wednesday, March 06, 2019 2:54 PM
To: gmx-us...@gromacs.org
Subject: [gmx-users] Itp for a longer molecule out of a shorter one

Dear all,
I have the itp file for a molecule (OH-[PPE]1-[PPO]2-[PPE]1-H, a short
surfactant). From that itp, I am trying to create an itp file for a longer
molecule of the form OH-[PPE]2-[PPO]16-[PPE]2-H, where the PPE and PPO parts
are repeated 2 and 16 times, respectively. For each extra PPO, 10 atoms, 10
bonds, 24 pairs, 19 angles and 4 dihedral entries would have to be added to
the itp file. Doing that by hand for a longer molecule is very tedious, so I
wonder whether anybody already has a script or tool for doing this?
I would really appreciate it.
Regards,
Alex
--
Gromacs Users mailing list

* Please search the archive at
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a
mail to gmx-users-requ...@gromacs.org.



Re: [gmx-users] Performance of GROMACS on GPU's on ORNL Titan?

2019-02-13 Thread pbuscemi
For what it is worth: on our AMD 2990WX 32-core machine with 2 x 2080 Ti we can
run 100k atoms at ~100 ns/day NVT and ~150 ns/day NPT, so 8-10 days to get that
microsecond. I'm curious to learn what kind of results you might obtain
from Oak Ridge and whether the cost/clock-time analysis makes it worthwhile.

Paul

-Original Message-
From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se
 On Behalf Of Michael
Shirts
Sent: Wednesday, February 13, 2019 1:28 PM
To: Discussion list for GROMACS users ; Michael R
Shirts 
Subject: [gmx-users] Performance of GROMACS on GPU's on ORNL Titan?

Does anyone have experience running GROMACS on GPU's on Oak Ridge National
Labs Titan or Summit machines, especially parallelization over multiple
GPUs? I'm looking at applying for allocations there, and am interested in
experiences that people have had. We're probably mostly looking at systems
in the 100-200K atoms range, but we need to get to long timescales (multiple
microseconds, at least) for some of the phenomena we are looking at.

Thanks!
--
Gromacs Users mailing list

* Please search the archive at
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a
mail to gmx-users-requ...@gromacs.org.



Re: [gmx-users] Gromacs 2018.5 with CUDA

2019-01-31 Thread pbuscemi


-Original Message-
From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
 On Behalf Of Szilárd Páll
Sent: Thursday, January 31, 2019 7:06 AM
To: Discussion list for GROMACS users 
Subject: Re: [gmx-users] Gromacs 2018.5 with CUDA

On Wed, Jan 30, 2019 at 5:14 PM  wrote:
>
> Vlad,
>
> 390 is an 'old' driver now.  Try something simple like installing CUDA 410.x 
> and see if that resolves the issue.  If you need to update the compiler, g++-7 
> may not work, but g++-6 does.

It is worth checking compatibility first. The GROMACS log file notes the CUDA 
driver compatibility version and that has to be >= than the CUDA toolkit 
version.

> Do NOT  install the video driver from the CUDA toolkit however.  If 
> necessary, do that separately from the PPA repository.

Why not? I'd prefer if we avoid strong advice on the list without an 
explanation and without ensuring that it is the best advice for all use-cases.


Not certain who responded, but your comments are well taken and I apologize for 
the dearth of information. If you use the driver installation from the CUDA 
toolkit, it will remove your current - and probably newer - driver and force you 
to go through a rather arduous process of blacklisting the Nouveau driver, 
temporarily terminating X windows, etc. to install the driver; see for example 
https://gist.github.com/wangruohui/df039f0dc434d6486f5d4d098aa52d07. It is far 
easier to use the PPA repository.


The driver that comes with a CUDA toolkit may often be a bit old, but there is 
little reason to not use it and you can always download a slightly newer 
version from the same series (e.g. the CUDA 9.2 toolkit came with 396.26 but 
the latest version available from the same series is 396.54) from the official 
website:
https://www.nvidia.com/Download/index.aspx

When you search on the above site it generally spits out the latest version 
compatible with the hardware and OS selected, but if you want to stick to the 
same series, you can always get a full list of supported drivers under the 
"Beta and Older Drivers" link.

My experience with many systems, lots of CUDA installs and versions is this:
As long as you use one and only one source for your drivers, no matter which 
one you pick in the majority of the cases it just works (as long as you use a 
compatible CUDA toolkit).
If you install from a repository, keep using that and do _not_ try to install 
from another source (be it another repo or the binary blobs) without fully 
uninstalling first. Same goes for the binary blob
drivers: upgrading from one version to the other using the NVIDIA binary 
installer is generally fine; however, especially if you are downgrading or want 
to install from a repository, always run the "nvidia-uninstall" script first.


> Paul
>
> -Original Message-
> From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
>  On Behalf Of 
> Benson Muite
> Sent: Wednesday, January 30, 2019 10:05 AM
> To: gmx-us...@gromacs.org
> Subject: Re: [gmx-users] Gromacs 2018.5 with CUDA
>
> Hi,
>
> Do you get the same build errors with Gromacs 2019?
>
> What operating system are you using?
>
> What GPU do you have?
>
> Do  you have a newer version of version of GCC?
>
> Benson
>
> On 1/30/19 5:56 PM, Владимир Богданов wrote:
Hi,

Yes, I think so, because it seems to be working with NAMD-CUDA right now:
>
> Wed Jan 30 10:39:34 2019
> +-+
> | NVIDIA-SMI 390.77 Driver Version: 390.77|
> |---+--+--+
> | GPU  NamePersistence-M| Bus-IdDisp.A | Volatile Uncorr. ECC |
> | Fan  Temp  Perf  Pwr:Usage/Cap| Memory-Usage | GPU-Util  Compute M. |
> |===+==+==|
> |   0  TITAN XpOff  | :65:00.0  On |  N/A |
> | 53%   83CP2   175W / 250W |   2411MiB / 12194MiB | 47%  Default |
> +---+--+--+
>
> +-+
> | Processes:   GPU Memory |
> |  GPU   PID   Type   Process name Usage  |
> |=|
> |0  1258  G   /usr/lib/xorg/Xorg40MiB |
> |0  1378  G   /usr/bin/gnome-shell  15MiB |
> |0  7315  G   /usr/lib/xorg/Xorg   403MiB |
> |0  7416  G   /usr/bin/gnome-shell 284MiB |
> |0 12510  C   ..._2.11_Linux-x86_64-multicore-CUDA/namd2   235MiB |
> |0 12651  C   ..._2.11_Linux-x86_64-multicore-CUDA/namd2   235MiB |

Re: [gmx-users] (no subject)

2019-01-31 Thread pbuscemi
Run the tutorials

-Original Message-
From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se
 On Behalf Of Satya
Ranjan Sahoo
Sent: Wednesday, January 30, 2019 11:19 PM
To: gmx-us...@gromacs.org
Subject: [gmx-users] (no subject)

Sir,
I am a beginner with GROMACS. I was unable to understand how to create all of
the ions.mdp, md.mdp, mdout.mdp, minim.mdp, newbox.mdp, npt.mdp, nvt.mdp,
posre.itp and topol.top input files for a molecular simulation of my molecule.
Please teach me how I can generate or create all of the above-mentioned input
files for my molecule.
--
Gromacs Users mailing list

* Please search the archive at
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a
mail to gmx-users-requ...@gromacs.org.



Re: [gmx-users] Gromacs 2018.5 with CUDA

2019-01-30 Thread pbuscemi
Vlad,

390 is an 'old' driver now.  Try something simple like installing CUDA 410.x and 
see if that resolves the issue.  If you need to update the compiler, g++-7 may 
not work, but g++-6 does.

Do NOT install the video driver from the CUDA toolkit, however.  If necessary, 
do that separately from the PPA repository.
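For reference, the driver-only PPA route on Ubuntu is typically something like
the following (a sketch - it assumes the common graphics-drivers PPA, and the
package name changes with the driver series):

sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt update
sudo apt install nvidia-driver-410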

Paul

-Original Message-
From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
 On Behalf Of Benson Muite
Sent: Wednesday, January 30, 2019 10:05 AM
To: gmx-us...@gromacs.org
Subject: Re: [gmx-users] Gromacs 2018.5 with CUDA

Hi,

Do you get the same build errors with Gromacs 2019?

What operating system are you using?

What GPU do you have?

Do  you have a newer version of version of GCC?

Benson

On 1/30/19 5:56 PM, Владимир Богданов wrote:
Hi,

Yes, I think so, because it seems to be working with NAMD-CUDA right now:

Wed Jan 30 10:39:34 2019
+-+
| NVIDIA-SMI 390.77 Driver Version: 390.77|
|---+--+--+
| GPU  NamePersistence-M| Bus-IdDisp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap| Memory-Usage | GPU-Util  Compute M. |
|===+==+==|
|   0  TITAN XpOff  | :65:00.0  On |  N/A |
| 53%   83CP2   175W / 250W |   2411MiB / 12194MiB | 47%  Default |
+---+--+--+

+-+
| Processes:   GPU Memory |
|  GPU   PID   Type   Process name Usage  |
|=|
|0  1258  G   /usr/lib/xorg/Xorg40MiB |
|0  1378  G   /usr/bin/gnome-shell  15MiB |
|0  7315  G   /usr/lib/xorg/Xorg   403MiB |
|0  7416  G   /usr/bin/gnome-shell 284MiB |
|0 12510  C   ..._2.11_Linux-x86_64-multicore-CUDA/namd2   235MiB |
|0 12651  C   ..._2.11_Linux-x86_64-multicore-CUDA/namd2   235MiB |
|0 12696  C   ..._2.11_Linux-x86_64-multicore-CUDA/namd2   235MiB |
|0 12737  C   ..._2.11_Linux-x86_64-multicore-CUDA/namd2   235MiB |
|0 12810  C   ..._2.11_Linux-x86_64-multicore-CUDA/namd2   235MiB |
|0 12868  C   ..._2.11_Linux-x86_64-multicore-CUDA/namd2   235MiB |
|0 20688  C   ..._2.11_Linux-x86_64-multicore-CUDA/namd2   251MiB |
+-+

After the unsuccessful GROMACS run, I ran NAMD.

Best,

Vlad


30.01.2019, 10:59, "Mark Abraham" 
:

Hi,

Does nvidia-smi report that your GPUs are available to use?

Mark

On Wed, 30 Jan 2019 at 07:37 Владимир Богданов wrote:


 Hey everyone!

 I need help, please. When I try to run MD with GPU I get the next error:

 Command line:

 gmx_mpi mdrun -deffnm md -nb auto



 Back Off! I just backed up md.log to ./#md.log.4#

 NOTE: Detection of GPUs failed. The API reported:

 GROMACS cannot run tasks on a GPU.

 Reading file md.tpr, VERSION 2018.2 (single precision)

 Changing nstlist from 20 to 80, rlist from 1.224 to 1.32



 Using 1 MPI process

 Using 16 OpenMP threads



 Back Off! I just backed up md.xtc to ./#md.xtc.2#



 Back Off! I just backed up md.trr to ./#md.trr.2#



 Back Off! I just backed up md.edr to ./#md.edr.2#

 starting mdrun 'Protein in water'

 3000 steps, 6.0 ps.

 I built gromacs with MPI=on and CUDA=on and the compilation process looked  
good. I ran gromacs 2018.2 with CUDA 5 months ago and it worked, but now it  
doesn't work.

 Information from *.log file:

 GROMACS version: 2018.2

 Precision: single

 Memory model: 64 bit

 MPI library: MPI

 OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 64)

 GPU support: CUDA

 SIMD instructions: AVX_512

 FFT library: fftw-3.3.8-sse2-avx-avx2-avx2_128-avx512

 RDTSCP usage: enabled

 TNG support: enabled

 Hwloc support: disabled

 Tracing support: disabled

 Built on: 2018-06-24 02:55:16

 Built by: vlad@vlad [CMAKE]

 Build OS/arch: Linux 4.13.0-45-generic x86_64

 Build CPU vendor: Intel

 Build CPU brand: Intel(R) Core(TM) i7-7820X CPU @ 3.60GHz

 Build CPU family: 6 Model: 85 Stepping: 4

 Build CPU features: aes apic avx avx2 avx512f avx512cd avx512bw avx512vl  
clfsh cmov cx8 cx16 f16c fma hle htt intel lahf mmx msr nonstop_tsc pcid  
pclmuldq pdcm pdpe1gb popcnt pse 

Re: [gmx-users] how can I get two parallel lysin in the same box?

2019-01-29 Thread pbuscemi
Giuseppe,

Use Avogadro: import or construct the four (if I understand your model)
molecules in the desired orientations and save them as a pdb. Then use
editconf to construct the box.
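A minimal sketch of that last step (file names and the 1.2 nm margin are
placeholders):

gmx editconf -f two_lys30.pdb -o two_lys30_box.gro -bt cubic -d 1.2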

Paul

-Original Message-
From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se
 On Behalf Of Giuseppe R
Del Sorbo
Sent: Tuesday, January 29, 2019 1:15 PM
To: gromacs.org_gmx-users@maillist.sys.kth.se
Subject: [gmx-users] how can I get two parallel lysin in the same box?


Dear all,

I am using GROMACS 5.1.2 and I am running simulations with lysine and a
surfactant.

Now, in the same box, I want two lys30 chains (two identical lysines) oriented
once parallel and once orthogonal to each other.

I tried using gmx insert-molecules ... but that gives a random orientation.

I also tried with -ip file.dat, but I still didn't get what I want.

Do you have any suggestions?

Thanks

Best,

Giuseppe
--
Gromacs Users mailing list

* Please search the archive at
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a
mail to gmx-users-requ...@gromacs.org.



[gmx-users] modeling evaporation NVP NVE

2019-01-29 Thread pbuscemi
 

Dear Users,

 

I am trying to understand a model of water evaporation and am asking for a
few suggestions, or just a pointer to a link.

 

So far I have modeled 1 molecules of SPC/E water and equilibrated to a
density near 1 g/cc. I then double the size of the box and run with

pcoupl = Parrinello-Rahman,

pcoupltype = surface-tension or semiisotropic,

compressibility = 4.5e-5 4.5e-5,

ref_p = 10,

ref_t = 300 or 400 K, for 0.5 ns.

 

What occurs is that a few molecules from the cube form (apparently) a gas
phase. What I expected was the water to expand to fill the enlarged box. My
run time may be too short, so I'll run longer times later.

But are my expectations incorrect - should not all the water expand? I used
an NPT-type mdp - should I use NVE instead, and if so, how does that differ
from NPT other than setting tcoupl = no and pcoupl = no? I can - and will -
try this, but it may be an incorrect approach as well.
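For comparison, a bare-bones NVE setup differs from the NPT .mdp mainly in the
coupling lines (just a sketch; good NVE energy conservation also depends on
cutoff and constraint settings):

integrator = md
tcoupl     = no    ; no thermostat
pcoupl     = no    ; no barostat, so the box volume stays fixed
gen_vel    = no    ; continue from an equilibrated configuration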

 

Paul 

 

-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] multiple GPU usage for simulation

2019-01-29 Thread pbuscemi
I am not an expert on this subject, but I have recently gone through the
exercise...

Firstly, does nvidia-smi indicate both cards are active ?

Secondly,  for the nvt or npt runs  have you tried mdrun commands similar to
: 

mdrun -deffnm  file  -nb gpu  -gpu_id 01
or
mdrun -deffnm  file  -nb gpu -pme  gpu -ntomp 5 -ntmpi 10 -npme 1 -gputasks
1
or
mdrun -deffnm  file  -nb gpu -pme  gpu -ntomp 5 -ntmpi 10 -npme 1 -gpu_id 01

this may help select both 

hope it helps
Paul

-Original Message-
From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se
 On Behalf Of praveen
kumar
Sent: Tuesday, January 29, 2019 10:59 AM
To: gromacs.org_gmx-users@maillist.sys.kth.se
Subject: [gmx-users] multiple GPU usage for simulation

Dear gromacs users

I am working on molecular simulations using GROMACS 2018.4; we have a new GPU
machine which has two GPU cards.

I have a new workstation with 20 cores (Intel i9 processors) at 3.3 GHz

and two Nvidia GeForce GTX 1080 Ti cards.

I have tried running a simulation using the command  gmx mdrun -v -deffnm test

and I get this message:

Using 1 MPI process
Using 10 OpenMP threads

1 GPU auto-selected for this run.
Mapping of GPU IDs to the 2 GPU tasks in the 1 rank on this nodes:

It seems the simulation makes use of only one GPU instead of two; I have
checked this using nvidia-smi.

A similar issue was already reported in this thread:
https://mailman-1.sys.kth.se/pipermail/gromacs.org_gmx-users/2018-April/119915.html

I have tried all the options given by Mark and others, and I am wondering
whether this issue can be solved, i.e. whether one simulation run can make use
of two GPU cards.

I would be really thankful if anyone can help me in this regard.

Thanks
Praveen


--
Thanks & Regards
Dr. Praveen Kumar Sappidi,
National Post Doctoral Fellow.
Computational Nanoscience Laboratory,
Chemical Engineering Department,
IIT Kanpur, India-208016



-- 
Thanks & Regards
Praveen Kumar Sappidi,
National Post Doctoral Fellow.
Computational Nanoscience Laboratory,
Chemical Engineering Department,
IIT Kanpur, India-208016
-- 
Gromacs Users mailing list

* Please search the archive at
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a
mail to gmx-users-requ...@gromacs.org.



Re: [gmx-users] use of DPOSRES without pdb2gmx

2019-01-17 Thread pbuscemi
Justin,
Thanks,
Bartimaeus

-Original Message-
From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se
 On Behalf Of Justin
Lemkul
Sent: Thursday, January 17, 2019 2:44 PM
To: gmx-us...@gromacs.org
Subject: Re: [gmx-users] use of DPOSRES without pdb2gmx



On 1/17/19 3:33 PM, pbusc...@q.com wrote:
> Dear Users,
>
> Suppose you do not use pdb2gmx  and therefore do not use  the -I 
> option for all constraints. Suppose further you do not generate a 
> restraint file for the non-protein molecules in the model.
>
> Then what effect, if any,  does setting constraints = all-bonds or 
> h-bonds have ?

Constraints and restraints are totally different. If you tell mdrun to
constrain bonds, it will do precisely what you tell it.
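To make the distinction concrete (a generic sketch, not taken from this
thread): bond constraints are an .mdp setting, while position restraints live
in the topology and are only switched on by a define:

; in the .mdp - converts the selected bonds into holonomic constraints
constraints = h-bonds
; in the .mdp - activates position restraints defined in the topology
define      = -DPOSRES

; in the topology - restraint section, used only when POSRES is defined
#ifdef POSRES
#include "posre.itp"
#endif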

-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==

--
Gromacs Users mailing list

* Please search the archive at
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a
mail to gmx-users-requ...@gromacs.org.



[gmx-users] use of DPOSRES without pdb2gmx

2019-01-17 Thread pbuscemi
Dear Users,

Suppose you do not use pdb2gmx  and therefore do not use  the -I option for
all constraints. Suppose further you do not generate a restraint file for
the non-protein molecules in the model.

Then what effect, if any,  does setting constraints = all-bonds or h-bonds
have ?  

 

Thanks

Paul

-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Results of villin headpiece with AMD 8 core

2019-01-13 Thread pbuscemi
Mirco,

Here are the results for three runs of the million-atom DPPC system.
=== 8-core 2700X, 1080 Ti ===
gmx mdrun -deffnm dppc.md -nb gpu -pme gpu -ntmpi 4 -ntomp 4 -npme 1 -gputasks 

Core t (s) Wall t (s) (%)
Time: 5286.270 330.392 1600.0
(ns/day) (hour/ns)
Performance: 5.126 4.682

==

On Jan 12 2019, at 5:42 pm, paul buscemi  wrote:
>
> Mirco,
> on the modification - nicely done.
> On the system speed: running Maestro-Desmond (one core) the 1080ti is pegged 
> and usually at 90% power. Them folks at Schrodinger know what they are doing. 
> So the base speed is apparently sufficient; it's some other factor, e.g. the 
> workload distribution, that is not optimized.
>
> I’ll work with your files tomorrow and let you know how it turns out— thanks
> Have a great weekend
> Paul
> > On Jan 12, 2019, at 3:11 PM, Wahab Mirco 
> >  wrote:
> > Hi Paul,
> > thanks for your reply.
> > On 11.01.2019 23:20, paul buscemi wrote:
> > > Getting the ion and SOL concentration correct in the top is trickier ( 
> > > for me ) than it should have been, If you happen to reuse both solvate 
> > > and genion during the build keeping track of the top is like using a 
> > > digital rubics cube..! The charge the villin was +1 because after I 
> > > downloaded it from the pdb I removed all other water and ions - it just 
> > > made pdb2gmx easier to work with.
> > >
> >
> > I simply hand-edited the .gro by making up two ions and put them
> > somewhere near the corners and added a short energy minimization.
> > Then, I added one line in the .top for the ions.
> >
> > > The 1080 scaled nicely with the 1080 ti, these are really nice pieces of 
> > > hardware. and you are correct, given the choice of increased processors 
> > > vs faster processors - choose the latter. I have the AMD OC to 4.0 GH and 
> > > it runs the same model almost as fast as as 32 core AMD at 3.7 GHz.
> > Your system is possibly too slow to saturate the 1080Ti at this small
> > system size. In a much larger system, the lead of the 1080 Ti over the
> > 1080 may possibly reach the theoretical expectation.
> >
> > > I've run 300k DPPC models ( ~300 DPPC molecules ) and they run at ~15 
> > > ns/day in NPT. And yes, if you can send the pdb, top, and itps I’t would 
> > > be interesting to compare the two AMDs.
> >
> > I did upload the stuff here + a readme-file. This system is much too
> > large for a single box + GPU (for productive runs), but maybe in 5 years
> > or so we can watch capillary waves through connected IMD/VMD in real-
> > time ;)
> >
> > => http://suwos.gibtsfei.net/d.dppc.4096.zip
> > Regards
> > Mirco
> > --
> > Gromacs Users mailing list
> >
> > * Please search the archive at 
> > http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!
> > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> > * For (un)subscribe requests visit
> > https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send 
> > a mail to gmx-users-requ...@gromacs.org.
>
>
> --
> Gromacs Users mailing list
>
> * Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
> mail to gmx-users-requ...@gromacs.org.
>

-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

[gmx-users] Results of villin headpiece with AMD 8 core

2019-01-11 Thread pbuscemi


Dear Users,
For those of you considering a workstation build and wonder about AMD 
processors I have the following results using the included npt and log intro 
for the villin headpiece in ~ 8000 atoms spc/e. The npt was run from a similar 
nvt ( 10 steps ) . The best results were achieved with the simplest command 
line - letting Gromacs choose threads.
The system became unstable at dt =0.005 ns step. Note the close correspondence 
between rcoulomb, rvdw and cutoffswitch. Results compare favorably with the 
E5-2690+GTX Titan demo
http://on-demand.gputechconf.com/gtc/2013/webinar/gromacs-kepler-gpus-gtc-express-webinar.pdf
 

Core t (s) Wall t (s) (%)
Time: 112.643 14.080 800.0
(ns/day) (hour/ns)
Performance: 1288.622 0.019

define = -DPOSRES ; position restrain the protein and ligand
; Run parameters
integrator = md ; leap-frog integrator
nsteps = 5 ; 2 * 5 = 100 ps
dt = 0.0042 ; ps
; Output control
nstenergy = 500 ; save energies every 1.0 ps
nstlog = 500 ; update log file every 1.0 ps
nstxout-compressed = 500 ; save coordinates every 1.0 ps
; Bond parameters
continuation = yes ; continuing from NVT
constraint_algorithm = lincs ; holonomic constraints
constraints = h-bonds
lincs_iter = 1 ; accuracy of LINCS
lincs_order = 4 ; also related to accuracy
lincs-warnangle = 35

; Neighbor searching and vdW
cutoff-scheme = Verlet
ns_type = grid ; search neighboring grid cells
nstlist = 20 ; largely irrelevant with Verlet
rlist = 1.51
vdwtype = cutoff
vdw-modifier = force-switch
rvdw-switch = 1.0
rvdw = 1.1 ; short-range van der Waals cutoff (in nm)

; Electrostatics
coulombtype = PME ; Particle Mesh Ewald for long-range electrostatics
rcoulomb = 1.11
pme_order = 4 ; cubic interpolation
fourierspacing = .12 ; grid spacing for FFT

; Temperature coupling
tcoupl = V-rescale ; modified Berendsen thermostat
tc-grps = Protein Water_and_ions ; two coupling groups - more accurate
tau_t = 0.1 0.1 ; time constant, in ps
ref_t = 300 300 ; reference temperature, one for each group, in K
; Pressure coupling
pcoupl = Berendsen ; pressure coupling is on for NPT
pcoupltype = isotropic ; uniform scaling of box vectors
tau_p = 2.0 ; time constant, in ps
ref_p = 1.0 ; reference pressure, in bar
compressibility = 4.5e-5 ; isothermal compressibility of water, bar^-1
refcoord_scaling = com
; Periodic boundary conditions
pbc = xyz ; 3-D PBC
; Dispersion correction is not used for proteins with the C36 additive FF
DispCorr = no
; Velocity generation
gen_vel = no ; velocity generation off after NVT

=== log ===
GROMACS: gmx mdrun, version 2018.3
Executable: /usr/local/gromacs/bin/gmx
Data prefix: /usr/local/gromacs
Working dir: /home/pb/Desktop/villin
Command line: gmx mdrun -deffnm villin.md5

GROMACS version: 2018.3
Precision: single
Memory model: 64 bit
MPI library: thread_mpi
OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 64)
GPU support: CUDA
SIMD instructions: AVX2_128
FFT library: fftw-3.3.8-sse2-avx-avx2-avx2_128-avx512
RDTSCP usage: enabled
TNG support: enabled
Hwloc support: disabled
Tracing support: disabled
Built on: 2018-11-01 17:44:10
Built by: pb@Ryzen [CMAKE]
Build OS/arch: Linux 4.15.0-20-generic x86_64
Build CPU vendor: AMD
Build CPU brand: AMD Ryzen 7 2700X Eight-Core Processor
Build CPU family: 23 Model: 8 Stepping: 2
Build CPU features: aes amd apic avx avx2 clfsh cmov cx8 cx16 f16c fma htt lahf 
misalignsse mmx msr nonstop_tsc pclmuldq pdpe1gb popcnt pse rdrnd rdtscp sha 
sse2 sse3 sse4a sse4.1 sse4.2 ssse3
C compiler: /usr/bin/gcc-6 GNU 6.4.0
C compiler flags: -march=core-avx2 -O3 -DNDEBUG -funroll-all-loops 
-fexcess-precision=fast
C++ compiler: /usr/bin/g++-6 GNU 6.4.0
C++ compiler flags: -march=core-avx2 -std=c++11 -O3 -DNDEBUG -funroll-all-loops 
-fexcess-precision=fast
CUDA compiler: /usr/bin/nvcc nvcc: NVIDIA (R) Cuda compiler driver;Copyright 
(c) 2005-2017 NVIDIA Corporation;Built on Fri_Nov__3_21:07:56_CDT_2017;Cuda 
compilation tools, release 9.1, V9.1.85
CUDA compiler 
flags:-gencode;arch=compute_30,code=sm_30;-gencode;arch=compute_35,code=sm_35;-gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_52,code=sm_52;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_70,code=compute_70;-use_fast_math;-D_FORCE_INLINES;;
 
;-march=core-avx2;-std=c++11;-O3;-DNDEBUG;-funroll-all-loops;-fexcess-precision=fast;
CUDA driver: 10.0
CUDA runtime: 9.10

Running on 1 node with total 8 cores, 16 logical cores, 1 compatible GPU
Hardware detected:
CPU info:
Vendor: AMD
Brand: AMD Ryzen 7 2700X Eight-Core Processor
Family: 23 Model: 8 Stepping: 2
Features: aes amd apic avx avx2 clfsh cmov 

[gmx-users] AMD 32 core TR

2019-01-03 Thread pbuscemi
Dear users,

 

I had trouble getting suitable performance from an AMD 32-core TR. By
updating all the CUDA drivers and runtime to v10, moving gcc/g++ from v5 to
v6 -- I did try gcc-7, but CUDA 10 did not appreciate the attempt -- and in
particular removing the CUDA v7 runtime, I was able to improve a 300k atom
nvt run from 8 ns/day to 26 ns/day. I replicated as far as possible the
GROMACS ADH benchmark with 137000 atoms in SPC/E water and could achieve an
md rate of 49.5 ns/day. I do not have a firm grasp of whether this is
respectable or not (comments?) but it appears at least OK. The input command
was simply mdrun ADH.md -nb gpu -pme gpu (not using -ntomp or -ntmpi, which
in my hands degraded performance). To run the ADH benchmark I replaced the
two ZN ions in the ADH file from the PDB (2ieh.pdb) with CA ions, since ZN
was not found in the OPLS database when using pdb2gmx.

 

The points being: (1) GROMACS appears reasonably happy with the 8-core and
32-core Ryzen, although (again, in my hands) for these smallish systems there
is only about a 10% improvement between the two, and (2), as often suggested
in the GROMACS literature, use the latest drivers possible.

 

 

-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Large system

2019-01-01 Thread pbuscemi
As suggested, compare the number of molecules/atoms implied in your top
file against that in your gro/pdb file. This is one aspect that GROMACS
never gets wrong.
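A quick, hedged way to check (file name is a placeholder): the atom count a
.gro file declares is on its second line, and molecule counts can be grepped
and compared with the [ molecules ] section of the .top, e.g.

awk 'NR==2' system.gro     # declared atom count (line 2 of a .gro file)
grep -c SOL system.gro     # SOL atom lines (3-4 per water, depending on model)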

-Original Message-
From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se
 On Behalf Of Bratin
Kumar Das
Sent: Tuesday, January 01, 2019 5:22 AM
To: gmx-us...@gromacs.org
Subject: Re: [gmx-users] Large system

There must be something wrong in your pdb or gro file. Please give the exact
error message; the way you have described your problem, no one can understand it.

On 01-Jan-2019 11:41 AM, "Anuj Ray"  wrote:

Respected Sir

I am trying to prepare a minimization input file (.tpr) for a system having
153216 atoms, but the grompp command is reading only up to 9 atoms. What
is the procedure for preparing the input for such a system?

Regards
Anuj
--
Gromacs Users mailing list

* Please search the archive at
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] error on opening gmx_mpi

2018-12-19 Thread pbuscemi
Thank you  - both - very much  again.

The "mpir_run  -npx  gmx -mdrun." command   was lifted from a Feb 2018
response from Szilard , to a multi gpu, user which he used as an example.

I'll crank on your pointers right now.

Paul

-Original Message-
From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se
 On Behalf Of Justin
Lemkul
Sent: Wednesday, December 19, 2018 10:47 AM
To: Discussion list for GROMACS users 
Subject: Re: [gmx-users] error on opening gmx_mpi

On Wed, Dec 19, 2018 at 11:44 AM p buscemi  wrote:

> Shi,
>
> reinstalling the mpi version using gmx 18.4 did not help... any ideas?
> hms@rgb2 ~/Desktop/PVP20k $ mpirun -np 8 mdrun_mpi -deffnm PVP20k1.em
> :-) GROMACS - mdrun_mpi, VERSION 5.1.2 (-:
>
>
You're just calling the same (incorrect) command again. You installed
"gmx_mpi" version 2018.4 but then your command uses "mdrun_mpi" instead.
Apparently you have version 5.1.2 (perhaps from a package manager, based on
the fact that it's installed in /usr/bin instead of /usr/local/gromacs/bin)
that is being found in your PATH.

If you've got multiple versions installed, either source GMXRC properly or
use full PATH information in your commands. Above all, use the right binary
name to start :)

-Justin

-- 

==

Justin A. Lemkul, Ph.D.

Assistant Professor

Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com


==
--
Gromacs Users mailing list

* Please search the archive at
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a
mail to gmx-users-requ...@gromacs.org.



Re: [gmx-users] using dual CPU's

2018-12-17 Thread pbuscemi
Thank you again,  "I'll be back" when I sort all this out.

Paul

-Original Message-
From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
 On Behalf Of Szilárd Páll
Sent: Monday, December 17, 2018 1:16 PM
To: Discussion list for GROMACS users 
Subject: Re: [gmx-users] using dual CPU's

On Thu, Dec 13, 2018 at 8:39 PM p buscemi  wrote:

> Carsten
>
> thanks for the suggestion.
> Is it necessary to use the MPI version for gromacs when using multdir? 
> - now have the single node version loaded.
>

Yes.


> I'm hammering out the first 2080ti with the 32 core AMD. results are 
> not stellar. slower than an intel 17-7000 But I'll beat on it some 
> more before throwing in the hammer.
>

The AMD Threadrippers are tricky, and the 32-core version is an even more
unusual beast than the 16-core one. Additionally, there can be CPU-GPU
communication issues that might require the latest firmware updates (and even
then, with some boards this aspect is not optimal).

You should definitely avoid single runs that span across more than 8 cores, as
performance with such wide ranks will not be great on the CPU side.
However, in throughput mode with GPUs, e.g. using 4-way multi-runs (or with
DD), a 16-core Threadripper should match the performance of 12 Skylake cores.
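As a concrete, hedged example of such a 4-way throughput run with the MPI
build (directory names are placeholders):

mpirun -np 4 gmx_mpi mdrun -multidir run1 run2 run3 run4 -ntomp 8 -deffnm md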



> Paul
>
> On Dec 13 2018, at 4:33 am, Kutzner, Carsten  wrote:
> > Hi,
> >
> > > On 13. Dec 2018, at 01:11, paul buscemi  wrote:
> > > Carsten,THanks for the response.
> > > my mistake - it was the GTX 980 from fig 3. … I was recalling from
> memory….. I assume that similar
> > There we measured a 19 percent performance increase for the 80k atom
> system.
> >
> > > results would be achieved with the 1060’s
> > If you want to run a small system very fast, it is probably better 
> > to
> put in one
> > strong GPU instead of two weaker ones. What you could do with your 
> > two
> 1060, though,
> > is to maximize your aggregate performance by running two (or even 4)
> simulations
> > at the same time using the -multidir argument to mdrun. For the 
> > science,
> probably
> > several independent trajectories are needed anyway.
> > >
> > > No I did not reset ,
> > I would at least use the -resethway mdrun command line switch, this 
> > way your measured performances will be more reliable also for
> shorter runs.
> >
> > Carsten
> > > my results were a compilation of 4-5 runs each under slightly
> different conditions on two computers. All with the same outcome - 
> that is ugh!. Mark had asked for the log outputs indicating some 
> useful conclusions could be drawn from them.
> > > Paul
> > > > On Dec 12, 2018, at 9:02 AM, Kutzner, Carsten 
> wrote:
> > > > Hi Paul,
> > > > > On 12. Dec 2018, at 15:36, pbusc...@q.com wrote:
> > > > > Dear users ( one more try )
> > > > > I am trying to use 2 GPU cards to improve modeling speed. The
> computer described in the log files is used to iron out models and am 
> using to learn how to use two GPU cards before purchasing two new RTX 2080 
> ti's.
> The CPU is a 8 core 16 thread AMD and the GPU's are two GTX 1060; 
> there are
> 5 atoms in the model
> > > > > Using ntpmi and ntomp settings of 1: 16, auto ( 4:4) and 2: 8 
> > > > > (
> and any other combination factoring to 16) the rating for ns/day are 
> approx. 12-16 and for any other setting ~6-8 i.e adding a card cuts 
> efficiency by half. The average load imbalance is less than 3.4% for 
> the multicard setup .
> > > > > I am not at this point trying to maximize efficiency, but only 
> > > > > to
> show some improvement going from one to two cards. According to a 2015 
> paper form the Gromacs group “ Best bang for your buck: GPU nodes for 
> GROMACS biomolecular simulations “ I should expect maybe (at best ) 
> 50% improvement for 90k atoms ( with 2x GTX 970 )
> > > > We did not benchmark GTX 970 in that publication.
> > > >
> > > > But from Table 6 you can see that we also had quite a few cases 
> > > > with
> out 80k benchmark
> > > > where going from 1 to 2 GPUs, simulation speed did not increase
> much: E.g. for the
> > > > E5-2670v2 going from one to 2 GTX 980 GPUs led to an increase of 
> > > > 10
> percent.
> > > >
> > > > Did you use counter resetting for the benchnarks?
> > > > Carsten
> > > >
> > > > > What bothers me in my initial attempts is that my simulations
> became slower by adding the second GPU - it is frustrating to say the 
> least. It's like swimming backwards.
> > > > > I know am missing - as a minimum - the correct setup for mdrun 
> > > > > and
> suggestions would be welcome
> > > > > The output from the last section of the log files is included
> below.
> > > > > === ntpmi 1 ntomp:16
> ==
> > > > > <== ### 

Re: [gmx-users] using dual CPU's

2018-12-13 Thread pbuscemi
Carsten,

A possible issue...

I compiled gmx 18.3 with gcc-5 (CUDA 9 seems to run normally). Should I 
recompile with gcc-6.4?

Paul

-Original Message-
From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
 On Behalf Of p buscemi
Sent: Thursday, December 13, 2018 1:38 PM
To: gmx-us...@gromacs.org
Cc: gmx-us...@gromacs.org
Subject: Re: [gmx-users] using dual CPU's

Carsten

thanks for the suggestion.
Is it necessary to use the MPI version of GROMACS when using -multidir? I now 
have the single-node version loaded.

I'm hammering out the first 2080 Ti with the 32-core AMD. The results are not 
stellar - slower than an Intel i7-7000 - but I'll beat on it some more before 
throwing in the hammer.
Paul

On Dec 13 2018, at 4:33 am, Kutzner, Carsten wrote:
> Hi,
>
> > On 13. Dec 2018, at 01:11, paul buscemi  wrote:
> > Carsten,THanks for the response.
> > my mistake - it was the GTX 980 from fig 3. … I was recalling from 
> > memory….. I assume that similar
> There we measured a 19 percent performance increase for the 80k atom system.
>
> > results would be achieved with the 1060’s
> If you want to run a small system very fast, it is probably better to 
> put in one strong GPU instead of two weaker ones. What you could do 
> with your two 1060, though, is to maximize your aggregate performance 
> by running two (or even 4) simulations at the same time using the 
> -multidir argument to mdrun. For the science, probably several independent 
> trajectories are needed anyway.
> >
> > No I did not reset ,
> I would at least use the -resethway mdrun command line switch, this 
> way your measured performances will be more reliable also for shorter runs.
>
> Carsten
> > my results were a compilation of 4-5 runs each under slightly different 
> > conditions on two computers. All with the same outcome - that is ugh!. Mark 
> > had asked for the log outputs indicating some useful conclusions could be 
> > drawn from them.
> > Paul
> > > On Dec 12, 2018, at 9:02 AM, Kutzner, Carsten  wrote:
> > > Hi Paul,
> > > > On 12. Dec 2018, at 15:36, pbusc...@q.com wrote:
> > > > Dear users ( one more try )
> > > > I am trying to use 2 GPU cards to improve modeling speed. The 
> > > > computer described in the log files is used to iron out models and am 
> > > > using to learn how to use two GPU cards before purchasing two new RTX 
> > > > 2080 ti's. The CPU is a 8 core 16 thread AMD and the GPU's are two GTX 
> > > > 1060; there are 5 atoms in the model Using ntpmi and ntomp settings 
> > > > of 1: 16, auto ( 4:4) and 2: 8 ( and any other combination factoring to 
> > > > 16) the rating for ns/day are approx. 12-16 and for any other setting 
> > > > ~6-8 i.e adding a card cuts efficiency by half. The average load 
> > > > imbalance is less than 3.4% for the multicard setup .
> > > > I am not at this point trying to maximize efficiency, but only 
> > > > to show some improvement going from one to two cards. According 
> > > > to a 2015 paper form the Gromacs group “ Best bang for your 
> > > > buck: GPU nodes for GROMACS biomolecular simulations “ I should 
> > > > expect maybe (at best ) 50% improvement for 90k atoms ( with 2x 
> > > > GTX 970 )
> > > We did not benchmark GTX 970 in that publication.
> > >
> > > But from Table 6 you can see that we also had quite a few cases 
> > > with out 80k benchmark where going from 1 to 2 GPUs, simulation 
> > > speed did not increase much: E.g. for the
> > > E5-2670v2 going from one to 2 GTX 980 GPUs led to an increase of 10 
> > > percent.
> > >
> > > Did you use counter resetting for the benchnarks?
> > > Carsten
> > >
> > > > What bothers me in my initial attempts is that my simulations became 
> > > > slower by adding the second GPU - it is frustrating to say the least. 
> > > > It's like swimming backwards.
> > > > I know am missing - as a minimum - the correct setup for mdrun 
> > > > and suggestions would be welcome The output from the last section of 
> > > > the log files is included below.
> > > > === ntpmi 1 ntomp:16 
> > > > == <== ### ==> < 
> > > > A V E R A G E S > <== ### ==>
> > > >
> > > > Statistics over 29301 steps using 294 frames Energies (kJ/mol) 
> > > > Angle G96Angle Proper Dih. Improper Dih. LJ-14
> > > > 9.17533e+05 2.27874e+04 6.64128e+04 2.31214e+02 8.34971e+04
> > > > Coulomb-14 LJ (SR) Disper. corr. Coulomb (SR) Coul. recip.
> > > > -2.84567e+07 -1.43385e+05 -2.04658e+03 1.33320e+07 1.59914e+05 
> > > > Position Rest. Potential Kinetic En. Total Energy Temperature
> > > > 7.79893e+01 -1.40196e+07 1.88467e+05 -1.38312e+07 3.00376e+02 
> > > > Pres. DC (bar) Pressure (bar) Constr. rmsd
> > > > -2.88685e+00 3.75436e+01 0.0e+00
> 

Re: [gmx-users] using dual CPU's

2018-12-13 Thread pbuscemi
Szilard,

I get an "unknown command " gpustasks  in :

'mdrun -ntmpi N -npme 1 -nb gpu -pme gpu -gpustasks TASKSTRING 

where > typically N = 4, 6, 8 are worth a try (but N <= #cores) and the > 
TASKSTRING should have N digits with either N-1 zeros and the last 1 
> or N-2 zeros and the last two 1, i.e..
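Read literally, for N = 4 that would be something like the line below
(illustrative only; note that the mdrun option is spelled -gputasks in
GROMACS 2018, which is probably why the flag was not recognized):

gmx mdrun -ntmpi 4 -npme 1 -nb gpu -pme gpu -gputasks 0001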

Would you please complete the i.e...

Thanks again,
Paul



-Original Message-
From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
 On Behalf Of paul buscemi
Sent: Tuesday, December 11, 2018 5:56 PM
To: gmx-us...@gromacs.org
Subject: Re: [gmx-users] using dual CPU's

Szilard,

Thank you very much for the information, and I apologize for how the text appeared - 
internet demons at work.

The computer described in the log files is a basic test rig which we use to 
iron out models. The workhorse is a many-core AMD with one, and hopefully soon 
two, 2080 Ti's. It will have to handle several 100k particles, and at the 
moment I do not think the simulation could be divided. These are essentially 
simulations of multi-component ligand adsorption from solution onto a 
substrate, including evaporation of the solvent.

I saw from a 2015 paper from your group, "Best bang for your buck: GPU nodes 
for GROMACS biomolecular simulations", that I should expect maybe a 50% 
improvement for 90k atoms (with 2x GTX 970). What bothered me in my initial 
attempts was that my simulations became slower by adding the second GPU - it 
was frustrating to say the least.

I’ll give your suggestions a good workout, and report on the results when I 
hack it out..

Best
Paul

> On Dec 11, 2018, at 12:14 PM, Szilárd Páll  wrote:
> 
> Without having read all details (partly due to the hard to read log 
> files), what I can certainly recommend is: unless you really need to, 
> avoid running single simulations with only a few 10s of thousands of 
> atoms across multiple GPUs. You'll be _much_ better off using your 
> limited resources by running a few independent runs concurrently. If 
> you really need to get maximum single-run throughput, please check 
> previous discussions on the list on my recommendations.
> 
> Briefly, what you can try for 2 GPUs is (do compare against the 
> single-GPU runs to see if it's worth it):
> mdrun -ntmpi N -npme 1 -nb gpu -pme gpu -gpustasks TASKSTRING where 
> typically N = 4, 6, 8 are worth a try (but N <= #cores) and the 
> TASKSTRING should have N digits with either N-1 zeros and the last 1 
> or N-2 zeros and the last two 1, i.e..
> 
> I suggest to share files using a cloud storage service like google 
> drive, dropbox, etc. or a dedicated text sharing service like 
> paste.ee, pastebin.com, or termbin.com -- especially the latter is 
> very handy for those who don't want to leave the command line just to 
> upload a/several files for sharing (i.e. try "echo "foobar" | nc 
> termbin.com )
> 
> --
> Szilárd
> On Tue, Dec 11, 2018 at 2:44 AM paul buscemi  wrote:
>> 
>> 
>> 
>>> On Dec 10, 2018, at 7:33 PM, paul buscemi  wrote:
>>> 
>>> 
>>> Mark, attached are the tail ends of three  log files for the same 
>>> system but run on an AMD 8  Core/16 Thread 2700x, 16G ram In 
>>> summary:
>>> for ntpmi:ntomp of 1:16 , 2:8, and auto selection (4:4) are 12.0, 8.8 , and 
>>> 6.0 ns/day.
>>> Clearly, I do not have a handle on using 2 GPU's
>>> 
>>> Thank you again, and I'll keep probing the web for more understanding.
>>> I’ve propbably sent too much of the log, let me know if this is the 
>>> case
>> Better way to share files - where is that friend ?
>>> 
>>> Paul

Re: [gmx-users] using dual CPU's

2018-12-12 Thread pbuscemi
Dear users  ( one more try ) 

I am trying to use two GPU cards to improve modeling speed.  The computer 
described in the log files is used to iron out models, and I am using it to learn 
how to run on two GPU cards before purchasing two new RTX 2080 Ti's.  The CPU is 
an 8-core/16-thread AMD and the GPUs are two GTX 1060s; there are 5 atoms in 
the model.

Using ntmpi:ntomp settings of 1:16 the rate is approximately 12-16 ns/day; with 
auto selection (4:4), 2:8, or any other combination factoring to 16 it drops to 
~6-8 ns/day, i.e. adding a card cuts performance roughly in half.  The average 
load imbalance is less than 3.4% for the multi-card setup.

I am not at this point trying to maximize efficiency, but only to show some 
improvement going from one to two cards.  According to a 2015 paper from the 
Gromacs group, "Best bang for your buck: GPU nodes for GROMACS biomolecular 
simulations", I should expect maybe (at best) a 50% improvement for 90k atoms 
(with 2x GTX 970).  What bothers me in my initial attempts is that my 
simulations became slower when I added the second GPU - it is frustrating to say 
the least.  It's like swimming backwards.

I know I am missing - as a minimum - the correct setup for mdrun, and suggestions 
would be welcome.
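
For concreteness, the runs were launched roughly as follows (a sketch; the run 
name is just a placeholder for my own tpr):

  gmx mdrun -deffnm npt -ntmpi 1 -ntomp 16
  gmx mdrun -deffnm npt -ntmpi 2 -ntomp 8
  gmx mdrun -deffnm npt                     (auto selection, ends up as 4:4)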

The output from the last section of the log files is included below.

=== ntmpi 1  ntomp:16 ==

<==  ###  ==>
<  A V E R A G E S  >
<==  ###  ==>

Statistics over 29301 steps using 294 frames

   Energies (kJ/mol)
  Angle   G96AngleProper Dih.  Improper Dih.  LJ-14
9.17533e+052.27874e+046.64128e+042.31214e+028.34971e+04
 Coulomb-14LJ (SR)  Disper. corr.   Coulomb (SR)   Coul. recip.
   -2.84567e+07   -1.43385e+05   -2.04658e+031.33320e+071.59914e+05
 Position Rest.  PotentialKinetic En.   Total EnergyTemperature
7.79893e+01   -1.40196e+071.88467e+05   -1.38312e+073.00376e+02
 Pres. DC (bar) Pressure (bar)   Constr. rmsd
   -2.88685e+003.75436e+010.0e+00

   Total Virial (kJ/mol)
5.27555e+04   -4.87626e+021.86144e+02
   -4.87648e+024.04479e+04   -1.91959e+02
1.86177e+02   -1.91957e+025.45671e+04

   Pressure (bar)
2.22202e+011.27887e+00   -4.71738e-01
1.27893e+006.48135e+015.12638e-01
   -4.71830e-015.12632e-012.55971e+01

 T-PDMS T-VMOS
2.99822e+023.32834e+02


M E G A - F L O P S   A C C O U N T I N G

 NB=Group-cutoff nonbonded kernelsNxN=N-by-N cluster Verlet kernels
 RF=Reaction-Field  VdW=Van der Waals  QSTab=quadratic-spline table
 W3=SPC/TIP3p  W4=TIP4p (single or pairs)
 V=Potential and force  V=Potential only  F=Force only

 Computing:   M-Number M-Flops  % Flops
-
 Pair Search distance check2349.753264   21147.779 0.0
 NxN Ewald Elec. + LJ [F]   1771584.591744   116924583.05596.6
 NxN Ewald Elec. + LJ [V]   17953.091840 1920980.827 1.6
 1,4 nonbonded interactions5278.575150  475071.763 0.4
 Shift-X 22.173480 133.041 0.0
 Angles4178.908620  702056.648 0.6
 Propers879.909030  201499.168 0.2
 Impropers5.2741801097.029 0.0
 Pos. Restr. 42.1934402109.672 0.0
 Virial  22.186710 399.361 0.0
 Update2209.881420   68506.324 0.1
 Stop-CM 22.248900 222.489 0.0
 Calc-Ekin   44.3469601197.368 0.0
 Lincs 4414.639320  264878.359 0.2
 Lincs-Mat   100297.229760  401188.919 0.3
 Constraint-V  8829.127980   70633.024 0.1
 Constraint-Vir  22.147020 531.528 0.0
-
 Total   121056236.355   100.0
-
 R E A L   C Y C L E   A N D   T I M E   A C C O U N T I N G
On 1 MPI rank, each using 16 OpenMP threads

 Computing:  Num   Num  CallWall time Giga-Cycles
 Ranks Threads  Count  (s) total sum%
-
 Neighbor search1   16294   2.191129.485   1.0
 Launch GPU ops.1   16  58602   4.257251.544   2.0
 Force  

Re: [gmx-users] generic hardware assembling for gromacs simulation

2018-11-19 Thread pbuscemi
Seke,

Yes, you can do a build with the components you have. The i5 (4760?) with
4 cores and no additional threads is not particularly fast, but it should work.
The 1050 has some 640 or 768 cores depending on the version and will give
roughly a 5x speedup over the CPU alone.
You will need at least 8 GB of RAM.

After you install Linux, run the following to get the latest packages and
save some later headaches:
'sudo apt-get update'  then
'sudo apt-get upgrade' - this will take some time - go get
some coffee.


Install CUDA following
https://linoxide.com/linux-how-to/install-cuda-ubuntu/ - this will probably
install CUDA 9.1, which is sufficient.  You do not need to use the Nvidia web
site as the installation guide - unless you want to be tortured. CUDA is now
part of the ppa repository, which is a major blessing.  Check the CUDA
install with 'nvcc --version'.

Install the latest graphics-driver repository (PPA):
https://itsfoss.com/ubuntu-official-ppa-graphics/

Rerun update  BUT NOT UPGRADE

Then install the appropriate driver for the 1050 with 'sudo apt install
nvidia-390'.  I do not know if the latest Nvidia 400-series drivers will work
with the 1050, but v390 probably will; you may have to use v384.  You do not
need to uninstall the prior driver - the newer installation will take care of
that task.
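
A condensed sketch of the steps so far (package names are whatever the Ubuntu
repositories carry at the time; adjust versions to your card and distribution):

  sudo apt-get update
  sudo apt-get upgrade
  sudo apt-get install nvidia-cuda-toolkit    # CUDA from the distribution repository
  nvcc --version                              # confirm the CUDA install
  sudo add-apt-repository ppa:graphics-drivers/ppa
  sudo apt-get update                         # update, but do not upgrade again
  sudo apt install nvidia-390                 # driver for the 1050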

Then READ and follow the basic instructions from Gromacs for installation.
The essential items beyond the basic installation are:
add the option to build with GPU support, and
check that the versions of g++ and gcc match those installed with
CUDA.  You can have all the versions g++-5 to g++-7 in the same lib
location; CUDA and Gromacs will find them if you state the location during
the build.
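
As an illustration only, the cmake line looks something like this (the compiler
versions are whatever matches your CUDA install, and the paths are placeholders):

  cmake .. -DGMX_GPU=ON -DGMX_BUILD_OWN_FFTW=ON \
           -DCMAKE_C_COMPILER=gcc-6 -DCMAKE_CXX_COMPILER=g++-6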

There are many web sites with detailed instructions, but if you are diligent
(and a bit lucky) you can go from installing Linux to running Gromacs in
about 2 hours.

Good luck
Paul


-Original Message-
From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se
 On Behalf Of Seketoulie
Keretsu
Sent: Monday, November 19, 2018 7:25 AM
To: gmx-us...@gromacs.org
Subject: [gmx-users] generic hardware assembling for gromacs simulation

Dear Users.

I apologise that this is not exactly a GROMACS simulation question.

I am a student and I am currently trying to build a Linux system for gromacs
simulation. I have seen some materials about utilizing GPUs and
multiprocessors, but I can't fully understand some of the issues. I have a system
available with the configuration below:

GPU: Zotac 1050 Ti 4 GB

Processor: i5 quad core 3.10 GHz
RAM: 8 GB DDR4 Corsair
Storage: 250 GB HDD
[also Gigabyte motherboard, 650 W power supply, 500 GB external]

Would it be possible to utilize this GPU to enhance the MD simulation
performance? If possible, would you suggest/hint how to go about this?
Would it be possible to maximise the use of the resources if the OS is
installed with the proper configuration?

Thanking you.

Sincerely,
Seke


[gmx-users] FW: Re: Re: liquid-solid/liquid-air interface simulations

2018-10-16 Thread pbuscemi
Dear gmx users,

I ran across this 2016 response  mentioning titania.
 
https://mailman-1.sys.kth.se/pipermail/gromacs.org_gmx-users/2016-May/105969.html

I am familiar with Gromacs, having used it to model protein adsorption onto
polymers.  Now I need to look at a film drying on TiO2.

What I have done to date for the air-liquid interface is to create a tall box and
use editconf to layer, say, an acrylate in alcohol in the bottom region near a
polymer surface and N2 (as air) in the top of the box.  Results "look"
reasonable in that the organic generally adsorbs and the alcohol and N2
mix.  But this does not approach evaporation (the alcohol dissipates, the N2
hangs around).  Do I need to make a much larger box?
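
For reference, the layering is done roughly like this (box sizes and file names
are only placeholders):

  gmx editconf -f liquid.gro -box 6 6 24 -center 3 3 4  -o bottom.gro
  gmx editconf -f n2.gro     -box 6 6 24 -center 3 3 18 -o top.gro

after which the two coordinate sets are concatenated into a single gro file
(keeping one box line) and the [molecules] counts in the topology are updated
to match.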

For polymer surfaces I modify the n2t file to add any atom types not
included in the oplsaa or 54a7 ff and then follow with x2top.  But I've not
gotten very far with this method for oxides.  This is where I could use a
few pointers.

Any hints or bullet points on creating an oxide surface, and perhaps
pitfalls to avoid in mimicking an air-liquid interface, would be appreciated.

Regards,
Paul

Paul Buscemi, Ph.D.
UMN BICB




Re: [gmx-users] force field not found

2018-10-01 Thread pbuscemi
Alex, Justin,

I've managed to make and run polymers using Avogadro, modifying the n2t, then 
creating the top using x2top under the 54a7 ff.  The method may be useful for 
others, but before presenting it to the user group it should be reviewed so 
that glaring mistakes/misconceptions are corrected.  If you think it worthwhile, 
would either of you be agreeable to reviewing the process?

Thanks
Paul

-Original Message-
From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
[mailto:gromacs.org_gmx-users-boun...@maillist.sys.kth.se] On Behalf Of Alex
Sent: Sunday, September 30, 2018 12:44 AM
To: gmx-us...@gromacs.org
Subject: Re: [gmx-users] force field not found

Yeah, if it is missing bonded parameters, you can always try to find 
something similar, at least with OPLS-AA -- don't really know about the 
other ff.

Alex


On 9/29/2018 8:58 PM, paul buscemi wrote:
> Alex,
>
> I wanted to practice some more with x2top using a simple CH3-(CH2)14-CH3
> pdb model.  oplsaa works fine, but the 54a7 FF generates the error "cannot
> find forcefield for C".  The two CH3's do not cause the error; the
> fourteen CH2's do.
>
> In the ffbonded.itp bond angle types I see CH2-S-CH3, C-CH2-C and CH2-S-S,
> but not C-CH2-C.  Can I add a new angle type by analogy, or should I hunt for
> the correct parameters?  (I'm trying this now.)
> I am assuming that neither the n2t nor the rtp has to be modified, since x2top
> does not rely on the rtp.  This is a fairly basic but essential task, and I
> would surely like to master it.
>
> Thanks,
> Paul
>
>
>
>
>> On Sep 27, 2018, at 5:47 PM, Alex  wrote:
>>
>> Never dealt with TiO2, but the path to parameterizing forcefields for
>> solid-state structures in MD is becoming more and more straightforward,
>> e.g., J. Phys. Chem. C 2017. 121(16): p. 9022-9031.
>>
>> Alex
>>
>> On Thu, Sep 27, 2018 at 4:11 PM paul buscemi  wrote:
>>
>>> Alex,
>>>
>>> There are so many important reactions/applications in which protein-
>>> polymer interactions play a role that the ability to generate polymers
>>> should be part of the gromacs repertoire.  I'll keep plugging away on
>>> this and report to the community if I can break the code - other than
>>> using the very good but terribly expensive commercial programs.  I would
>>> not doubt that many have already accomplished this task, but it is not
>>> well tracked within this group.
>>>
>>> I might not approach a molybdenum sulfide/nitride substrate (making turbine
>>> blades??), but TiO2 is indeed another surface very popular with proteins.
>>> Nearly every nitinol surface is essentially TiO2.  If you have some pointers
>>> on that, I'm listening.
>>>
>>> Thank you again for the assist.
>>>
>>> Regards
>>> Paul
>>>
>>>

Re: [gmx-users] atom types not found

2018-09-27 Thread pbuscemi
Alex, 
This pertains to the prior correspondence about building a polymer and is the process 
I've been developing.

To date I can obtain an ITP and pdb from ATB for a monomer.  From there, with the 
information in those files, it is relatively easy to construct the n2t file to 
use in x2top.  (I'd be happy to provide an example as a 'tutorial' of sorts.) 
x2top provides the monomer rtp for use in pdb2gmx; it has all the atom type 
information.  Thanks for the guidance on that.
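
As an illustration, the monomer step looks roughly like this.  The file names,
the residue name, and the n2t entry are placeholders sketched from the ATB
charges and masses, not reviewed parameters:

  # hypothetical atomnames2types.n2t entry for the amide nitrogen:
  # element  type      charge  mass    nbonds  bonded element / length pairs
  N          opls_xxx  -0.50   14.007  3       C 0.134   C 0.147   H 0.101

  # generate the monomer topology and rtp:
  gmx x2top -f monomer.pdb -o monomer.top -r monomer.rtp -ff oplsaa -name NY12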

The hang-ups are not associated with the rtp but, of all things, with producing the 
pdb of the polymer: specifically positioning it along, say, the x axis, and more 
importantly producing a polymer pdb that uses the same atom labels as the 
original pdb of the monomer.  In the PE example from gromacs there are 3 mers of 
2 atoms, so it is easy to keep track of the names manually, but not if you have 
1000 mers.  Avogadro renames the added mers.

Since gromacs can build proteins, and I can tell gmx that the monomer is a 
protein  ( it wants to think that it is anyway),  I will try to use the same 
logic to build the  polymer.  More to come.

Paul



-Original Message-
From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
[mailto:gromacs.org_gmx-users-boun...@maillist.sys.kth.se] On Behalf Of paul 
buscemi
Sent: Monday, September 24, 2018 9:52 PM
To: gmx-us...@gromacs.org
Subject: Re: [gmx-users] atom types not found

Thank you for the really rapid reply.  I’ll work on it some more and report the 
outcome
Paul

> On 24,Sep 2018, at 9:04 PM, Alex  wrote:
> 
> I use x2top a whole lot, so here's an example to be considered in the
> context of what Justin just wrote:
> 
> CJ   opls_xxx   0.0   12.011   3   CJ 0.142   CJ 0.142   CJ 0.142
> 
> The total number of bonds is 3, then just list them in pairs of
> element-bond entries. If I want a different type assigned to an atom that
> only has two nearest neighbors, it'd look like:
> 
> CJ   opls_yyy   0.0   12.011   2   CJ 0.142   CJ 0.142
> 
> and so on. A very useful utility for doing solid-state stuff with gmx. Hope
> this helps.
> 
> Alex
> 
> 
> 
> The
> 
> On Mon, Sep 24, 2018 at 7:48 PM Justin Lemkul  wrote:
> 
>> 
>> 
>> On 9/24/18 9:42 PM, paul buscemi wrote:
>>> This is a version of a very old question
>>> 
>>> Using Avogadro, I've built an all-atom version of nylon-12 (45 atoms) and
>>> converted it to a gro file with editconf.  I want to generate the rtp so I can
>>> construct a polymer.  Using x2top, I've tried both the gromos 54a7 ff
>>> and the oplsaa ff.  There are two outcomes:
>>> 
>>> 1) If trying 54a7, I am warned that the atomnames2types.n2t is not
>>> found (and indeed it is not present in the ff subfolder).  I've done
>>> what I think is an extensive search (e.g. github, etc.), but have not found
>>> an n2t for 54a7.  I tried to construct one following the one found in oplsaa,
>>> but that has not worked out - yet.  Does the 54a7 ff require an n2t file, and
>>> if so, what is the format?
>> 
>> x2top requires an .n2t file for any force field.
>> 
>> Sadly, my wiki page on .n2t files was somehow lost, so I will try to
>> repeat it here, in column numbers:
>> 
>> 1. Element (e.g. first character of the atom name)
>> 2. Atom type to be assigned
>> 3. Charge to be assigned
>> 4. Mass
>> 5. Number of bonds the atom forms
>> 6-onward. The element and reference bond length for N bonds (where N is
>> specified in column 5); x2top will assign a bond if the detected
>> interatomic distance is within 10% of the reference bond length
>> specified here.
>> 
>>> 2) In trying oplsaa, I am warned that only 44 of 45 atom types are found.
>>> It turns out that the nitrogen is the culprit.  If I convert the
>>> nitrogen to carbon in the gro file, the top and rtp are completed.  It's
>>> hard to believe that an amide nitrogen is not in the force field.  Thinking
>>> it may be my model, I downloaded arginine from "aminoacidsguide.com" to
>>> avoid Avogadro.  With arginine only 19 of 26 atoms were found in the
>>> oplsaa ff.  What?  I can't make an rtp for arginine without modifying the
>>> ffbonded or n2t for oplsaa.  Is x2top simply not the right tool?
>> 
>> It's not that the atom type isn't found, it's that x2top can't assign an
>> atom type because a given atom does not satisfy all of the requirements
>> of the .n2t file listed above. That means a bond tolerance likely isn't
>> being satisfied.
>> 
>> -Justin
>> 
>>> Note if I  submit the nylon pdb to ATB get back a usable itp,  and it is
>> possible to generate a small polymer this way, ( 20 mers or so ).  But I
>> should be able to construct a polymer similar to the example given  for PE
>> some 9 years ago using a beginning, middle mers.  But I need the rtp.
>>> 
>>> Thanks for any responses
>>> Paul Buscemi, Ph.D.
>>> UMN
>>> 
>>> 
>>> 
>>> 
>> 
>> --
>> ==
>> 
>> Justin A. Lemkul, Ph.D.
>> Assistant Professor
>> Virginia Tech Department of Biochemistry
>> 
>> 303 Engel Hall
>> 340 

Re: [gmx-users] (no subject)

2018-07-21 Thread pbuscemi
Have you tried the  "insert-chemicals-after-md" command ?

PB

> On Jul 11, 2018, at 4:50 AM, Mark Abraham  wrote:
> 
> Hi,
> 
> Are you trying to observe something about the transition, or merely the
> different end points?
> 
> Mark
> 
>> On Tue, Jul 10, 2018 at 4:12 PM Soham Sarkar  wrote:
>> 
>> Dear all,
>> I am planning to do a simulation where, after 50 ns of simulation, I want to
>> add some other chemicals to the system and continue it for another 50 ns, so
>> that I can see the effect of those chemicals exclusively before and after
>> adding them to the system.  Is it at all possible?  If yes, please tell me the
>> protocol/commands or give me some references where this type of simulation
>> is used.  Thanks in advance.
>> -Soham