t NIST is having the same or similar problems with
> POWER9/V100.
>
> Jon
>
> From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se <
> gromacs.org_gmx-users-boun...@maillist.sys.kth.se> on behalf of Kevin
> Boyd
> Sent: Thursday, April 23, 2
Hi,
Can you post the full log for the Intel system? I typically find the real
cycle and time accounting section a better place to start debugging
performance issues.
A couple of quick notes, but I'd need a side-by-side comparison for more useful
analysis, and these points may apply to both systems so
suggestions but the problem is the difference
> between my answer and GROMACS in calculated MSD. I performed 6 ns
> simulation just for checking my MSD results and I'm not going to calculate
> the diffusion coefficient from it.
>
> On Sun, 19 Apr 2020 at 02:33, Kevin Boyd wrote
I saved positions every 10 ps for a 6000
> ps simulation. should I lower this or is there another way for using more
> trajectories?
>
> On Sun, 19 Apr 2020 at 00:10, Kevin Boyd wrote:
>
> > Hi,
> >
> > Are you talking about the reported diffusion coefficient or th
Hi,
Are you talking about the reported diffusion coefficient or the MSD vs lag
plot? You should be very careful about where you fit. By default, Gromacs
calculates MSDs at much longer lag times than you typically have good data
for. Use the -beginfit and -endfit options to restrict the fit to the
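For example, a rough sketch (file names and the fit window are placeholders; pick a window where your MSD is clearly linear):
    gmx msd -f traj.xtc -s topol.tpr -n index.ndx -beginfit 1000 -endfit 4000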
d be any
> additional speed boost if we also used AMD GPUs.
>
> Nope, haven't seen the paper, but quite interested in checking it out.
> Is this the latest version?
> https://onlinelibrary.wiley.com/doi/abs/10.1002/jcc.26011
>
> Thank you,
>
> Alex
>
> On 4/17/2020 6
Hi,
AMD CPUs work fine with Nvidia GPUs, so feel free to use AMD as a base
regardless of the GPUs you end up choosing. In my experience AMD CPUs have
had great value.
A ratio of ~4 cores/GPU shouldn't be a problem. 256 GB of RAM is very much
overkill, but perhaps you have other uses for the
Hi,
I've had problems in the past with syntax requirements for
CMAKE_PREFIX_PATH. Try putting the path in quotes and separating with a
semicolon instead of a colon.
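Something like this (the paths are just placeholders for wherever your libraries actually live):
    cmake .. -DCMAKE_PREFIX_PATH="/path/to/fftw;/path/to/other/lib"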
Kevin
On Sat, Apr 4, 2020 at 1:40 PM Wei-Tse Hsu wrote:
>
>
> Dear gmx users,
>
Hi -
> This setting is using 16 MPI processes and 2 OpenMP threads per MPI
process
With ntomp 1 you should only be getting one OpenMP thread, not sure why
that's not working. Can you post a link to a log file?
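For reference, a sketch of what I'd expect to work (assuming the built-in thread-MPI build; with an external MPI library the rank count would come from mpirun instead):
    gmx mdrun -deffnm md -ntmpi 16 -ntomp 1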
For a small system like that and a powerful GPU, you're likely going to
have some
Hi,
Yes, that's a reasonable approach. Check out gmx select: if, say, you know
your membrane center's z location, you can select for phosphates above that
center point, which will give you your top leaflet.
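A sketch, assuming your phosphates are named P and the membrane center sits at z = 5 nm (both placeholders):
    gmx select -s md.tpr -f md.xtc -select 'name P and z > 5' -on top_leaflet.ndx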
Kevin
On Tue, Mar 17, 2020 at 12:49 PM Poncho Arvayo Zatarain <
poncho_8...@hotmail.com>
Hi,
A CPU:GPU ratio of 4:1 is fairly well balanced these days (depending on the
quality of the hardware), so you should expect to roughly double your
throughput by adding a second GPU to your current system. However, that
doesn't mean
your single simulation performance will double - it's a lot more
Hi,
A few groups have done things like this to shape membranes. See
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5700167/
and
https://pubs.acs.org/doi/10.1021/acs.jctc.8b00765
I was involved in the second publication, so feel free to contact me about
implementation details if they're unclear.
Hi,
Can you send us the output of gmx_mpi --version?
I typically see illegal instructions when I compile gromacs on one
architecture but accidentally try to run it on another.
Kevin
On Thu, Feb 6, 2020 at 6:38 AM Seketoulie Keretsu
wrote:
> Dear Sir/Madam,
>
> We just installed gromacs 2019
Hi,
Can you share more information? Please upload your starting configuration
and a log file.
On Fri, Jan 3, 2020 at 10:29 AM Namit Chaudhary
wrote:
> Hi,
>
> Sorry. I didn't realize that attachments aren't uploaded. Below is a link
> for the files mentioned in the original mail.
>
>
Hi,
A few things besides any Ryzen-specific issues. First, your pinoffset for
the second simulation should be 16, not 17. The way yours is set up, the
first simulation runs on cores 0-15, and Gromacs will detect that the second
simulation's parameters are invalid (with an offset of 17 it would need
cores 17-32, and core 32 does not exist).
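As a sketch for a 32-core machine (adjust -ntomp to however many threads you're actually giving each run):
    gmx mdrun -deffnm sim1 -ntomp 16 -pin on -pinoffset 0 &
    gmx mdrun -deffnm sim2 -ntomp 16 -pin on -pinoffset 16 &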
Hi,
I also wrote up some examples on optimizing for multiple simulations on the
same node, see
https://mailman-1.sys.kth.se/pipermail/gromacs.org_gmx-users/2019-July/126007.html
On Wed, Dec 4, 2019 at 9:36 AM Christian Blau wrote:
> Hi Matt,
>
>
> Here are a few bullet points that might help
> For GROMACS, I think the emtol
> value should be reported, but this varies by personal preference in most
> papers, unfortunately.
I had thought that emtol had no particular significance for minimizations
used to set up typical MD simulations, as long as the system was
sufficiently minimized as
Note that the solution Dallas suggested will work (along with changing the
resulting box dimensions), but that it may lead to clashes at periodic
boundaries. You may need to re-minimize (perhaps with soft-core potentials
if there are serious clashes) and re-equilibrate, which would probably
defeat
e a
> subset of CPU/GPU to one run, and start another run later using another
> unsubset of yet-unallocated CPU/GPU). Also, could you elaborate on the
> drawbacks of the MPI compilation that you hinted at?
> Gregory
>
> From: Kevin Boyd <kevin.b...@uconn.edu>
> Sent:
Hi,
I've done a lot of research/experimentation on this, so I can maybe get you
started - if anyone has any questions about the essay to follow, feel free
to email me personally, and I'll link it to the email thread if it ends up
being pertinent.
First, there are some more internet resources to
Hi,
You can accomplish that using pull-coord1-geometry=direction and an
appropriate vector (0 1 0 for the positive y axis, etc).
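A minimal sketch of the relevant mdp section (group names, rate, and force constant are placeholders for your own setup):
    pull                 = yes
    pull-ngroups         = 2
    pull-ncoords         = 1
    pull-group1-name     = reference_group
    pull-group2-name     = pulled_group
    pull-coord1-type     = umbrella
    pull-coord1-groups   = 1 2
    pull-coord1-geometry = direction
    pull-coord1-vec      = 0 1 0
    pull-coord1-rate     = 0.01
    pull-coord1-k        = 1000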
Kevin
On Mon, Jun 24, 2019 at 3:15 PM Reza Esmaeeli
wrote:
> Dear Gromacs Users,
> Is there any way to specify the direction of the pull along a certain axis
> in
Hi,
That totally depends on your forcefield. Most classical all-atom
forcefields are recommended to be run with a 2 femtosecond timestep.
Some coarse-grained forcefields like Martini can do 20+ fs, but you’ll
want to check the literature.
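In mdp terms (the values here are just the typical choices, not a recommendation for your specific system):
    integrator = md
    dt         = 0.002   ; 2 fs, typical for constrained all-atom force fields
    ; dt       = 0.020   ; on the order of 20 fs for coarse-grained models like Martini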
As for better sampling, you simply sample longer times
>
> but Szilard assured me that it wasn't much of an issue. Indeed, the
> build worked fine with our "usual" simulations. This one experiencing
> issues (minor with 2018 and catastrophic with 2019) is new, and this
> setup isn't expec
h insert-molecules.
>
> Alex
>
> On 5/26/2019 8:31 AM, Kevin Boyd wrote:
> > Hi,
> >
> > Which version are you using? As of 2019 gmx solvate should support
> nonwater solvents and topology updating.
> >
> > If it’s not working with 2019, can you open up an
Hi,
Which version are you using? As of 2019 gmx solvate should support nonwater
solvents and topology updating.
If it’s not working with 2019, can you open up an issue on redmine.gromacs.org
and upload your use files? I can take a look.
Thanks,
Kevin
> On May 26, 2019, at 9:44 AM, Jones de
vin
On Wed, May 1, 2019 at 6:33 PM Alex wrote:
> Of course, i am not. This is the EM. ;)
>
> On Wed, May 1, 2019, 4:30 PM Kevin Boyd wrote:
>
> > Hi,
> >
> > In addition to what Mark said (and I've also found pinning to be critical
> > for performance), you're als
Hi,
In addition to what Mark said (and I've also found pinning to be critical
for performance), you're also not using the GPUs with "-pme cpu -nb cpu".
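Something along these lines should put both the nonbondeds and PME on the GPU (assuming GROMACS 2018 or later and a single GPU):
    gmx mdrun -deffnm md -nb gpu -pme gpu -pin on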
Kevin
On Wed, May 1, 2019 at 5:56 PM Alex wrote:
> Well, my experience so far has been with the EM, because the rest of the
> script (with
Hi,
We can't help without more information. Have you checked the log file to
make sure the GPUs are being seen/used? Can you post a link to a sample log
file?
Kevin
On Thu, Mar 14, 2019 at 11:57 AM 이영규 wrote:
> Dear gromacs users,
>
> I installed gromacs 2019 today. When I run gromacs, it is
Hi,
If it was something fundamentally wrong, you'd see an issue before this.
Martini is just inherently a teensy bit unstable - but this is where the
non-reproducibility of simulations comes in handy; restarting from far
enough away will likely avoid a transiently high energy event.
Kevin
On
Hi,
Your log file will definitely tell you whether PME was offloaded.
The performance gains depend on your hardware, particularly the CPU/GPU
balance. There have been a number of threads on this forum discussing this
topic, if you search back through the gmx_user archives. The gist of it is
that
Hi,
We can't help you unless you're more specific. What error is occurring?
Kevin
On Tue, Feb 5, 2019 at 1:30 AM Deepanshi wrote:
> Hello,
> I am trying to equilibrate a bilayer vesicle which I have prepared using
> martini maker of charmm-GUI. The vesicle is made up of POPC and has around
>
Hi,
The hard-coded SOL reference that Justin mentioned has been fixed in Gromacs
2018.3 and 2019. If you upgrade your gromacs version, gmx solvate should work
as intended.
Kevin
> On Jan 20, 2019, at 3:11 PM, Justin Lemkul wrote:
>
>
>
>> On 1/20/19 3:04 PM, ZHANG Cheng wrote:
>> In the
To add to Mark's comments, it's commonly the case that you want to apply
restraints based on the starting configuration, e.g. for restraining
protein positions at the beginning of a run. As the warning message says,
you can pass the same file to both -c and -r for these cases.
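For example (file names are placeholders):
    gmx grompp -f npt.mdp -p topol.top -c start.gro -r start.gro -o npt.tpr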
Also, you're
Hi,
First, with those associated errors I wouldn't say that those differences
are significant.
More to your question, Gromacs simulations with default parameters are not
generally reproducible. See the last 2 points in this section of the
reference manual:
Hi,
If you're reporting a diffusion coefficient, they're probably looking for
you to justify that you're out of the short-time subdiffusive regime. My
experience is in bilayer simulations, where the MSD typically doesn't become
diffusive (linear) until lag times of ~10-20 ns.
For a qualitative estimate of
Hi,
For membrane systems you typically want to use semi-isotropic pressure
coupling. If instead you want to simulate *one* lipid (as a ligand) with a
protein in solution, you should stick to isotropic pressure coupling. I've
never heard of any anisotropic pressure coupling protocols in
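For the membrane case, the relevant mdp lines look something like this (the coupling time and compressibilities are common defaults, not a prescription):
    pcoupl          = Parrinello-Rahman
    pcoupltype      = semiisotropic
    tau-p           = 5.0
    ref-p           = 1.0 1.0
    compressibility = 4.5e-5 4.5e-5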
Hi,
Can you send an edr and log file?
I believe your constraints are nonstandard, iirc for charmm36 constraints
should just be h-bonds, but I doubt that would cause this.
Kevin
On Tue, Oct 30, 2018 at 2:50 PM, Ramon Guixà wrote:
> Hi Mark, thanks for the quick response
>
> No transition
Hi,
Depends on what kind of restraint you are trying to apply. If you're just
trying to restrain head groups during the initial equilibration period, you
can feed -r your actual structure, since the restraints should be relative
to current positions.
If instead you're following the pore-forming
Hi,
I think everyone should have edit permissions on redmine issues. I just
checked that I could edit the same post. Sure you were logged in?
Anecdotally, I once had the same issue - it ended up being because I had
logged in on one tab with a redmine page open but was trying to edit a
different
Hi,
You need to regenerate your tpr with GROMACS-LS. The trr format is stable
between versions: you’d only need a new trajectory if you didn’t save
velocities.
Kevin
> On Oct 19, 2018, at 10:37 AM, Candy Deck wrote:
>
> Dear Gromacs Users,
>
> I am actually working with gromacs 5.
> I
Hi
If your membrane is curved, do you mean to ask how you can enforce membrane
*shape* rather than size? If so, fixing the box dimensions may not help
maintain shape, depending on the curvature morphology involved.
Kevin
On Thu, Oct 18, 2018 at 5:20 AM lorenaz wrote:
> Hi all,
>
> I am simulating
Hi,
I'm not exactly sure what you're asking. In both cases (with protein or
without protein), the charmm-gui provides you with files which need to be
minimized and equilibrated. Typically, the steps recommended in the README
file are sufficient. The only requirement of these steps is to make your
Hi,
You should just use charmm-gui's built in functionality to insert the
protein, unless you have a good reason not to.
Kevin.
On Sun, Oct 7, 2018 at 7:23 AM Olga Press wrote:
> Dear all,
> I'm new in the field of simulating membrane-protein system in gromacs by
> using charmm36-ff.
> I've
Hi,
We don't currently support energy groups on GPUs with the Verlet cutoff
scheme - see the table linked below.
http://manual.gromacs.org/documentation/2018/user-guide/cutoff-schemes.html
To enable the simulation to run on GPUs, remove the energy groups line (and
energy group exclusions, if
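Concretely, that means dropping lines like these from your mdp (the group names here are just examples):
    energygrps     = Protein Membrane
    energygrp-excl = Protein Protein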
Hi,
There's a hacked version of Gromacs 4.5.5 that can calculate lateral
pressure profiles in a rerun. See:
https://mdstress.org/index.php/gromacs-ls/
Keep in mind that you'll need positions AND velocities saved to do the
analysis properly, and read their documentation carefully.
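In practice that means having both outputs enabled in the mdp used for the original run (the interval is a placeholder; check the mdstress.org documentation for what they recommend):
    nstxout = 1000   ; positions
    nstvout = 1000   ; velocities, at the same interval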
Kevin
On
Hi!
There are two key points here.
1) Index groups don't have to be mutually exclusive. So, you can have your
20+ indices, but also have an inclusive index that encompasses most or all
of your atoms, which you can then use for tc (or comm removal, etc.). The
only necessary thing is that the
Hi,
In general, I'd say that 20 ns is far too short for a membrane simulation,
but how long of a simulation is needed depends on what you're trying to
calculate - lipid tail dynamics are quite fast, but head group dynamics are
significantly slower. For some membrane-protein interactions, slow
Hi,
The backup files are the exact same files you originally had, just renamed -
they can be analyzed like any other gromacs files. I'd suggest renaming them
again (for instance back to their original names) to avoid any confusion, but
the contents of e.g. #md300.trr.1# are identical to what
Hi,
This isn't a problem. Thread-mpi is the built-in mpi parallelization
packaged with gromacs. What that message is saying is that Gromacs will use
the openmpi library on your system instead, which is what you want when
running on multiple nodes.
Kevin
On Wed, Aug 15, 2018 at 1:50 PM, Jost,
Hi,
Can you post a link to your log file?
Also, what version of gromacs are you using? Make sure that the
documentation you are following corresponds to the right version of
gromacs. If you are using v 5.1 (as the link suggests), strongly consider
upgrading to 2018.
Kevin
On Fri, Aug 10, 2018
Hi,
You can play around with the -radius and -scale parameters if you're
getting clashes you don't like.
However, it seems like you really should be using gmx solvate. You could
accomplish your goal with
"gmx solvate -cs tip4p.gro -box 10.1103 10.34753 3.958 -maxsol 13853"
Kevin
On Fri,
Hi,
In general you're not supposed to mix C compilers. I've had linking
errors in the past, e.g. with using different versions of GCC between
the -DCMAKE_C_COMPILER and -DCUDA_HOST_COMPILER flags.
See this post for a discussion.
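A sketch of keeping everything on one toolchain (the paths are placeholders for wherever your GCC actually lives):
    cmake .. -DCMAKE_C_COMPILER=/usr/bin/gcc-7 \
             -DCMAKE_CXX_COMPILER=/usr/bin/g++-7 \
             -DCUDA_HOST_COMPILER=/usr/bin/gcc-7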
Hi,
One source of poor performance is certainly that you don't have SIMD
enabled. Try recompiling with SIMD enabled (the log file suggests
AVX_128_FMA). If you are compiling gromacs on the same node
architecture that you plan to run gromacs on (and you really should be
doing this), it should
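That would look something like this (AVX_128_FMA taken from the log file suggestion above; use whatever your own log recommends):
    cmake .. -DGMX_SIMD=AVX_128_FMA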
Hi,
To apply the restraints in the topology you need to use the define
field in the mdp file. For your case, the option would be "define =
-DPOSRES_A". Otherwise that #ifdef statement will evaluate to false
and the restraints won't be included. Also, are there end quotes
around
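As a sketch (posre_A.itp stands in for whatever restraint file your topology actually includes):
    ; in the .mdp file:
    define = -DPOSRES_A

    ; in the topology:
    #ifdef POSRES_A
    #include "posre_A.itp"
    #endif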
Hi,
Why would you want to "ruin" a perfectly good nanoparticle? :)
Could this be a visualization artifact? What kind of treatment of
periodic boundary conditions are you applying prior to visualization?
If you use a PBC option that makes your nanoparticle contiguous and
not split across
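Something like this makes molecules whole before visualization (file names are placeholders; -pbc mol or -pbc cluster may suit better depending on the system):
    gmx trjconv -s md.tpr -f md.xtc -pbc whole -o md_whole.xtc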
Hi,
Did you have a previous install of gromacs 5.1.2? If so, it’s potentially a
case of you having two installations of gromacs, and the first one found by
your OS when you try to run gmx [command] is 5.1.
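A quick way to check (assuming a Unix-like shell):
    which gmx
    gmx --version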
Kevin
> On May 27, 2018, at 12:10 AM, Ali Ahmed wrote:
>
> Dear
> Your equilibrations are probably too short. There are some pretty slow
> processes in lipid membranes.
The original poster stated that the system crashed after microseconds
of simulation, so this is not the case.
The pressure fluctuation message could be a red herring, with a system
explosion
CHARMM36m is the most recent release of the CHARMM forcefield, with
improved protein dynamics. You can find the force field files here:
http://mackerell.umaryland.edu/charmm_ff.shtml#gromacs
That webpage also has a list of relevant publications for you to look over.
Alternatively, the
e, May 8, 2018 at 10:01 AM, Mark Abraham <mark.j.abra...@gmail.com> wrote:
> On Tue, May 8, 2018 at 3:54 PM Kevin Boyd <kevin.b...@uconn.edu> wrote:
>
>> Thanks for the reply.
>>
>> The distro is actually relatively up to date, from what I can tell gcc
>&
Hi,
I've been trying to install gromacs 2018 on a cluster running CentOS 7.
In keeping with the guidelines for maximizing performance, I'm
compiling with a recent (7.3.0) GCC version. However, CUDA 9.0 on
CentOS 7.x needs to be compiled with GCC 4.8.5, so my cmake command
included
Hi Chris,
My experience has been that GPUs do significantly increase performance
in Martini simulations, perhaps not quite as much as all-atom
simulations but typically at least ~2x the speed of the same system on
cpus alone. What combination of gromacs version/mdp options/hardware
are you
Hi Alex,
I think by default cmake selects the first C-compiler it runs across
while searching, and the cmake default search path may not coincide
with the order of your PATH environment variable, so when you have
multiple versions it might not select the one you want. You can
manually set the
Hi,
I believe flat bottom potentials had a bug that affected GPUs on multiple
ranks that was fixed in version 2016.4.
http://manual.gromacs.org/documentation/2018.1/release-notes/2016/2016.4.html
If you need to use Gromacs v5, I think the patch was applied to v5.1.5 as
well.