3500 cores vs. 4500 cores.
See
http://hwbench.com/vgas/geforce-rtx-2080-ti-vs-geforce-gtx-titan-xp
and
https://gpu.userbenchmark.com/Compare/Nvidia-Titan-Xp-vs-Nvidia-RTX-2080-Ti/m265423vs4027
However, if you must have support, then it must be the Titan.
-Original Message-
From:
If you actually can model water in the pore, then run a few experiments
changing the VDW radius of the water oxygen. This will give you the
limiting size of the pore and this is the parameter you actually need to
know.
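The sweep suggested above can be scripted. A minimal sketch, assuming the water oxygen's LJ sigma sits in the last-but-one field of an [atomtypes] line in a copy of the force-field file (the OW line and its values here are illustrative, not authoritative):

```python
def scale_sigma(atomtype_line, factor):
    """Return an [atomtypes] line with the LJ sigma scaled by `factor`.
    Assumes the layout: name ... sigma epsilon as the last two fields."""
    fields = atomtype_line.split()
    fields[-2] = "%.5e" % (float(fields[-2]) * factor)
    return "  ".join(fields)

# Hypothetical SPC/E-like oxygen entry: sigma then epsilon as last two fields
line = "OW  8  15.99940  -0.8476  A  3.16557e-01  6.50194e-01"
for f in (0.9, 1.0, 1.1, 1.2):
    print(scale_sigma(line, f))
```

Re-run the pore simulation with each modified topology; the factor at which water stops entering brackets the limiting pore radius.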
-Original Message-
From:
Rahul,
What do you mean by pore size? A hole in the membrane caused by external
tension? The space occupied by the protein? Does lipid occupy the walls
of the pore?
Paul
-Original Message-
From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se
On Behalf Of RAHUL
SURESH
Sent:
Alex,
Having the .itp for the shorter molecule, you have most of what you need. Use
x2top to create the top file for the longer molecule. Adjust, if necessary,
the atomname2type.n2t file in the force-field directory to create any necessary
atom types, being sure to select the proper ff. Charges, bond lengths
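For reference, each atomname2type.n2t entry pairs an element with a type, charge, mass, the number of bonds, and each bonded element with its distance. A small parser sketch (the example line is hypothetical, not from any shipped force field):

```python
def parse_n2t(line):
    """Parse one atomname2type.n2t entry:
    elem  type  charge  mass  nbonds  (bonded_elem  distance_nm) * nbonds"""
    f = line.split()
    nbonds = int(f[4])
    bonds = [(f[5 + 2 * i], float(f[6 + 2 * i])) for i in range(nbonds)]
    return {"elem": f[0], "type": f[1], "charge": float(f[2]),
            "mass": float(f[3]), "bonds": bonds}

# Hypothetical backbone carbon bonded to two carbons at 0.153 nm
entry = parse_n2t("C  CH2  0.000  14.0270  2  C 0.153  C 0.153")
print(entry["type"], entry["bonds"])
```

Checking candidate lines through a parser like this before running x2top catches field-count mistakes early.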
For what it is worth: on our AMD 2990WX 32-core, 2 x 2080 Ti, we can run 100k
atoms at ~100 ns/day NVT, ~150 ns/day NPT, so 8-10 days to get that
microsecond. I'm curious to learn what kind of results you might obtain
from Oak Ridge and whether the cost/clock-time analysis makes it worthwhile.
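The wall-clock estimate above is simple arithmetic, and a quick check brackets the quoted 8-10 days:

```python
def days_to_target(target_ns, ns_per_day):
    """Days of wall-clock time to reach a target simulated length."""
    return target_ns / ns_per_day

# 1 microsecond = 1000 ns at the quoted NVT/NPT rates
print(days_to_target(1000, 100))  # 10.0 days at 100 ns/day (NVT)
print(days_to_target(1000, 150))  # ~6.7 days at 150 ns/day (NPT)
```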
Paul
-Original Message-
From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se
On Behalf Of Szilárd Páll
Sent: Thursday, January 31, 2019 7:06 AM
To: Discussion list for GROMACS users
Subject: Re: [gmx-users] Gromacs 2018.5 with CUDA
On Wed, Jan 30, 2019 at 5:14 PM wrote:
>
> Vlad,
>
>
Run the tutorials
-Original Message-
From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se
On Behalf Of Satya
Ranjan Sahoo
Sent: Wednesday, January 30, 2019 11:19 PM
To: gmx-us...@gromacs.org
Subject: [gmx-users] (no subject)
Sir,
I am a beginner to GROMACS. I was unable to understand
Vlad,
390 is an 'old' driver now. Try something simple like installing the CUDA 410.x
driver and see if that resolves the issue. If you need to update the compiler,
g++-7 may not work, but g++-6 does.
Do NOT install the video driver from the CUDA toolkit, however. If necessary,
do that separately from
Giuseppe,
Use Avogadro: import or construct the four (if I understand your model)
molecules in the desired orientations. Save as a pdb. Use editconf to
construct the box.
Paul
-Original Message-
From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se
On Behalf Of Giuseppe R
Del
Dear Users,
I am trying to understand a model of water evaporation and am asking for a
few suggestions, or just a pointer to a link.
So far I have modeled 1 molecules of SPC/E water and equilibrated to a
density near 1 g/cc. I then doubled the size of the box and then, using
I am not expert on this subject but have recently gone through the
exercise...
Firstly, does nvidia-smi indicate both cards are active?
Secondly, for the NVT or NPT runs, have you tried mdrun commands similar
to:
mdrun -deffnm file -nb gpu -gpu_id 01
or
mdrun -deffnm file -nb gpu -pme gpu
Justin,
Thanks,
Bartimaeus
-Original Message-
From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se
On Behalf Of Justin
Lemkul
Sent: Thursday, January 17, 2019 2:44 PM
To: gmx-us...@gromacs.org
Subject: Re: [gmx-users] use of DPOSRES without pdb2gmx
On 1/17/19 3:33 PM,
Dear Users,
Suppose you do not use pdb2gmx and therefore do not use the -I option for
all constraints. Suppose further that you do not generate a restraint file for
the non-protein molecules in the model.
Then what effect, if any, does setting constraints = all-bonds or h-bonds
have?
Thanks
Mirco,
Here are the results for three runs of the million-atom DPPC benchmark:
=== 8-core 2700X, 1080 Ti ===
gmx mdrun -deffnm dppc.md -nb gpu -pme gpu -ntmpi 4 -ntomp 4 -npme 1 -gputasks
               Core t (s)    Wall t (s)    (%)
       Time:     5286.270       330.392    1600.0
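The (%) column in mdrun's timing summary is just Core t over Wall t times 100; checking it against the numbers above:

```python
core_t, wall_t = 5286.270, 330.392  # seconds, from the mdrun log
utilization = 100.0 * core_t / wall_t  # 16 threads fully busy -> 1600%
print(round(utilization))  # 1600
```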
Dear Users,
For those of you considering a workstation build and wondering about AMD
processors, I have the following results using the included npt and log intro
for the villin headpiece in ~8000 atoms SPC/E. The NPT was run from a similar
NVT (10 steps). The best results were achieved
Dear users,
I had trouble getting suitable performance from an AMD 32-core TR. By
updating all the CUDA drivers and runtime to v10, updating gcc/g++ from v5
to v6 -- I did try gcc-7 but CUDA 10 did not appreciate the attempt -- and
in particular removing the CUDA v7 runtime, I was able to
As suggested, compare the number of molecules/atoms implied in your top
file against that in your gro/pdb file. This is one aspect that GROMACS
never gets wrong.
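That comparison is easy to script. A sketch, assuming a standard .gro (total atom count on line 2) and a [ molecules ] section listing name/count pairs; the sample data below is made up:

```python
def gro_atom_count(gro_lines):
    """Line 2 of a .gro file holds the total atom count."""
    return int(gro_lines[1].split()[0])

def top_molecule_counts(top_lines):
    """Collect the per-molecule counts from the [ molecules ] section."""
    counts, in_section = {}, False
    for line in top_lines:
        s = line.split(";")[0].strip()  # drop comments
        if s.startswith("["):
            in_section = s.replace(" ", "") == "[molecules]"
        elif in_section and s:
            name, n = s.split()
            counts[name] = counts.get(name, 0) + int(n)
    return counts

gro = ["Water box", "  6", "...atom lines...", "  2.0 2.0 2.0"]
top = ["[ molecules ]", "; name  count", "SOL  2"]
print(gro_atom_count(gro), top_molecule_counts(top))  # 6 {'SOL': 2}
```

Here 2 SOL x 3 atoms = 6 matches the .gro total; a mismatch at this step is the usual cause of grompp's atom-count error.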
-Original Message-
From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se
On Behalf Of Bratin
Kumar Das
Sent:
Thank you - both - very much again.
The "mpirun -np X gmx mdrun" command was lifted from a Feb 2018
response from Szilard to a multi-GPU user, which he used as an example.
I'll crank on your pointers right now.
Paul
-Original Message-
From:
Thank you again, "I'll be back" when I sort all this out.
Paul
-Original Message-
From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se
On Behalf Of Szilárd Páll
Sent: Monday, December 17, 2018 1:16 PM
To: Discussion list for GROMACS users
Subject: Re: [gmx-users] using dual
Carsten,
A possible issue...
I compiled gmx 18.3 with gcc-5 (CUDA 9 seems to run normally). Should I
recompile with gcc-6.4?
Paul
-Original Message-
From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se
On Behalf Of p buscemi
Sent: Thursday, December 13, 2018 1:38 PM
To:
Szilard,
I get an "unknown command " gpustasks in :
'mdrun -ntmpi N -npme 1 -nb gpu -pme gpu -gpustasks TASKSTRING
where > typically N = 4, 6, 8 are worth a try (but N <= #cores) and the >
TASKSTRING should have N digits with either N-1 zeros and the last 1
> or N-2 zeros and the last two
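Note that the flag in GROMACS 2018 is spelled -gputasks (not -gpustasks), which is the likely source of the "unknown command" error. The quoted rule for building the task string can be sketched as follows (assuming two GPUs with ids 0 and 1):

```python
def taskstrings(n):
    """Candidate -gputasks strings for N ranks on two GPUs:
    N-1 ranks on GPU 0 with the last (PME) rank on GPU 1,
    or N-2 on GPU 0 and the last two on GPU 1."""
    return ["0" * (n - 1) + "1", "0" * (n - 2) + "11"]

for n in (4, 6, 8):
    print(n, taskstrings(n))
```

For N = 4 this yields '0001' and '0011', i.e. each digit assigns one rank's GPU task to a device id.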
Dear users (one more try),
I am trying to use 2 GPU cards to improve modeling speed. The computer
described in the log files is used to iron out models, and I am using it to
learn how to use two GPU cards before purchasing two new RTX 2080 Ti's. The
CPU is an 8-core, 16-thread AMD and the GPUs
Seke,
Yes, you can do a build with the components you have. The i5 (4760?) with
4 cores and no hyper-threading is not particularly fast but should work.
The 1050 has some 640 or 768 cores depending on the version and will produce
approximately a 5x speedup over the CPU alone.
You will need at
Dear gmx users,
I ran across this 2016 response mentioning titania.
https://mailman-1.sys.kth.se/pipermail/gromacs.org_gmx-users/2016-May/105969.html
I am familiar with GROMACS, having used it to model protein adsorption onto
polymers. Now I need to look at a film drying on TiO2.
What I have
Alex, Justin,
I've managed to make and run polymers using Avogadro, modifying the n2t, then
creating the top using x2top under the 54a7 ff. The method may be useful for
others, but before presenting it to the user group, it should be reviewed so
that glaring mistakes/misconceptions are corrected.
Alex,
This pertains to the prior correspondence on building a polymer and is the
process I've been developing.
To date I can obtain an ITP and pdb from ATB for a monomer. From there, with
the information in those files, it is relatively easy to construct the n2t
file to use in x2top. ( I’d be
Have you tried the "insert-chemicals-after-md" command ?
PB
> On Jul 11, 2018, at 4:50 AM, Mark Abraham wrote:
>
> Hi,
>
> Are you trying to observe something about the transition, or merely the
> different end points?
>
> Mark
>
>> On Tue, Jul 10, 2018 at 4:12 PM Soham Sarkar wrote:
>>