Hi,
Do check out gmx do_dssp -h and make sure your dssp binary is the right
version, with executable permissions, in the right place (or alternatively
the appropriate environment variable is set, so that do_dssp can find dssp)
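For example, a minimal sketch (the path is only a placeholder for wherever
your dssp executable actually lives):
  chmod +x /usr/local/bin/dssp
  export DSSP=/usr/local/bin/dssp
  gmx do_dssp -s topol.tpr -f traj.xtc -o ss.xpm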
Mark
On Fri, 1 May 2020 at 18:47, Iman Katouzian wrote:
> Good
Hi,
No, some kind of breakage, e.g. a filesystem disappeared, or a file
transfer was incomplete or the file was edited with some inappropriate tool.
Mark
On Sat, 18 Apr 2020 at 11:46, Mijiddorj B wrote:
> Dear Justin,
>
> Thank you very much for your reply. I see. However, I have one more
>
Hi,
Based on what is in your system, what do you think the behavior should be?
A change reflects that one of the versions may be wrong, but not that the
new one is necessarily it.
Mark
On Fri, 10 Apr 2020 at 08:17, Parvez Mh wrote:
> Hello All,
>
> I am wondering if gromacs-2020 is buggy? or
... and next time you might be better served by using gmx mdrun -cpi and
gmx convert-tpr -extend, so that you have your contiguous trajectory by
construction!
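For example, a minimal sketch of extending a finished run by 10 ns (file
names are placeholders):
  gmx convert-tpr -s md.tpr -extend 10000 -o md_ext.tpr
  gmx mdrun -s md_ext.tpr -cpi md.cpt -deffnm md
With -cpi and unchanged output names, mdrun appends to the existing files,
so the trajectory stays contiguous.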
Mark
On Thu., 9 Apr. 2020, 14:17 Qinghua Liao, wrote:
> Hello,
>
> There is an option of -settime for gmx trjcat, with which you can set
>
Hi,
On Sat., 28 Mar. 2020, 04:04 Guilherme Carneiro Queiroz da Silva, <
gcarnei...@pos.iq.ufrj.br> wrote:
> Hi all,
>
> I look on google for any answers for such question in this maillist, and I
> found related questions but no final answer.
>
> I wish to compute the heat flux for my system
Hi,
Intel doesn't supply a standard library; it always gets one from either a
gcc or MSVC installed on the system. Which versions of gcc's libstdc++ are
supported varies with the Intel compiler version, so you could follow the advice at
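For example, a minimal sketch of one common setup (the module name and paths
are assumptions about your cluster):
  module load gcc/7.3.0
  source /opt/intel/bin/compilervars.sh intel64
  CC=icc CXX=icpc cmake ..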
Hi,
There could certainly be a bug with -ddorder pp_pme, as there's no testing
done of that. If you can reproduce it with a recent version of GROMACS, please
do file a bug report. (Though this week we're moving to new infrastructure,
so leave it for a day or two before trying to report it!)
Mark
On
Hi,
That's benign, and probably indicates some kind of mismatch in how hwloc
was linked (e.g. headers from one version and libraries from another) or
compiled (different compiler or version). mdrun will probably say the same
thing, but 2019 has fallback code that means you don't need hwloc
Hi,
That's entirely normal for condensed-phase systems. Negative PE means the
system doesn't want to fly apart. Conventionally, zero PE is when every
particle is infinitely separated - look at the form of the expression for
energy in electrostatics (and gravity)
Mark
On Wed, 18 Mar 2020 at
Hi,
On Wed, 18 Mar 2020 at 07:02, Anh Vo wrote:
> Hi everyone,
>
> I'm trying to reconstruct in GROMACS some LAMMPS simulation to compare
> between the two. I'm simulating a lipid bilayer system with 72 POPC lipids
> and 9072 water molecules, as well as a single lipid system (1 POPC
> molecule)
Hi,
On Thu, 12 Mar 2020 at 20:39, Daniel Burns wrote:
> Hello,
>
> I'm using 2 replicas to test exchange conditions. I restarted a run and
> included the checkpoint file name (which is identical in each directory
> provided under the -multidir option). After the restart, I'm getting 100%
>
Hi,
It's very likely a good idea to get the velocity distribution right, for
the situation you're trying to model. Check out
https://www.livecomsjournal.org/article/5957-best-practices-for-foundations-in-molecular-simulations-article-v1-0
(and the other great LiveComsJ material!)
Mark
On Thu,
Hi,
Sure. With a GPU-enabled build, you can just continue running almost all
kinds of simulations.
Mark
On Wed, 11 Mar 2020 at 16:33, Snehasis Chatterjee <
snehasis.chatterje...@gmail.com> wrote:
> Dear Gromacs users,
>
> I am trying to perform a simulation for a relatively large system using
Hi,
Depends when you see them. From mdrun, you're generally blowing up (see
user guide and FAQ list). From tools, maybe your input data is somehow bad
(or bad in part, so try different parts), or you ran into a silent size
limitation in the code (see if a smaller data set works and if so file a
Hi,
The code is tested extensively on a range of compilers, so we believe it is
correct and compliant. In particular, you're using gcc 6.1.0 and GROMACS
tests with 6.4.0, so the issue might have been fixed in the meantime. As
newer versions of gcc and cuda will give better performance, I suggest
Hi,
Check out the GROMACS FAQs - this question is a very frequent one! :-)
Mark
On Mon, 9 Mar 2020 at 11:42, Sadaf Rani wrote:
> Dear Gromacs users
>
> I am running a free energy test calculation and using position restraints
> for all of the system. At the end of pressure equilibration I am
Hi,
On Sat, 22 Feb 2020 at 14:07, ZHANG Cheng <272699...@qq.com> wrote:
> Thank you Mark! Sorry could you please explain the details of "stdin" of
> "pdb2gmx"? Is there a link for it?
>
The stdin stream is a fundamental concept in how unix terminals work. The
echo tool fills that stream for
Hi,
Echo works just fine - you just need a way to script what input it injects
into the stdin of pdb2gmx. That's logic you need for any solution, so echo is
probably easiest. The expect tool may be another option.
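For example, a minimal sketch (the echoed numbers are placeholders for
whatever pdb2gmx would otherwise ask you interactively, e.g. the force field
and water model choices):
  printf '6\n1\n' | gmx pdb2gmx -f protein.pdb -o conf.gro -p topol.top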
Mark
On Fri., 21 Feb. 2020, 21:41 ZHANG Cheng, <272699...@qq.com> wrote:
> I want to run
Hi,
The .gro format doesn't have fields for forces. Either write out a .g96 file,
or use gmx traj (better).
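For example, a minimal sketch (assuming your .trr was written with forces):
  gmx traj -s topol.tpr -f traj.trr -of forces.xvg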
Mark
On Thu., 20 Feb. 2020, 16:43 Justin Lemkul, wrote:
>
>
> On 2/20/20 12:34 AM, 고연주 wrote:
> >
> >
> >Hello!
> >
> >I have a question about gmx trjconv -force option because it
Hi,
No. Why do you ask?
Mark
On Tue., 18 Feb. 2020, 01:57 Myunggi Yi, wrote:
> Dear users,
>
> As the same as the title,
>
> Are Gromos force fields not recommended in Gromacs?
Hi,
That could be caused by configuration of the parallel file system or MPI on
your cluster. If only one file descriptor is available per node to an MPI
job, then your symptoms are explained. Some kinds of compute jobs follow
such a model, so maybe someone optimized something for that.
Mark
On
Hi,
When they succeed, the doxygen targets produce files like
$builddir/docs/html/doxygen/html-*/index.xhtml
which you can open in your browser. The doxygen for the released versions
is on the web however, so it's much easier to just use or refer to that.
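For example, a minimal sketch (the exact target and output directory names
are assumptions and vary between versions):
  make doxygen-all
  xdg-open docs/html/doxygen/html-lib/index.xhtml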
Mark
On Wed, 12 Feb 2020 at 16:30,
MACS was built, how the job is
being run, or how the job is using the GROMACS version.
Mark
> On Sun, Feb 9, 2020 at 2:30 PM Mark Abraham
> wrote:
>
> > Hi,
> >
> > First, make sure you can run a normal single-replica simulation with MPI
> on
> > this
Hi,
First, make sure you can run a normal single-replica simulation with MPI on
this machine, so that you know you have the mechanics right. Follow the
cluster's documentation for setting up the scripts and calling MPI. I
suspect your problem starts here, perhaps with having a suitable working
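For example, a minimal sketch (scheduler and MPI launch syntax vary between
clusters, so treat the details as placeholders):
  mpirun -np 4 gmx_mpi mdrun -deffnm single_test
and only once that works, something like
  mpirun -np 8 gmx_mpi mdrun -multidir rep0 rep1 rep2 rep3 -replex 500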
Hi,
Sounds like at least one replica isn't stable in its ensemble. Try a multi
run without replica exchange and see.
Mark
On Fri., 24 Jan. 2020, 21:38 Daniel Burns, wrote:
> Hi,
>
> I have a system containing a dimer totaling about 600 residues solvated
> with 32,000 water molecules. I have
That tells you what your CPU is capable of. We need to know whether cmake
has found the compiler that can issue the instructions. Follow the
suggestions Roland and I have made.
Mark
On Wed, 15 Jan 2020 at 10:45, Shlomit Afgin
wrote:
>
>
> I ran:
>
> cat /proc/cpuinfo | grep -i avx512
>
>
>
>
On Wed, 15 Jan 2020 at 11:07, Tru Huynh wrote:
>
> gcc is /opt/rh/devtoolset-6/root/usr/bin/gcc
>
Good, so is CMake using it? (e.g. remove the build dir and run cmake again
to see what it first reports, or inspect $builddir/CMakeCache.txt for the
compiler setting)
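For example, a quick check from the build directory:
  grep CMAKE_CXX_COMPILER CMakeCache.txt
or, to force the choice in a fresh build directory (assuming the matching g++
sits alongside the gcc reported above):
  cmake .. -DCMAKE_C_COMPILER=/opt/rh/devtoolset-6/root/usr/bin/gcc \
           -DCMAKE_CXX_COMPILER=/opt/rh/devtoolset-6/root/usr/bin/g++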
Mark
Hi,
On Fri, 10 Jan 2020 at 10:49, João M. Damas wrote:
> Hi all,
>
> I am trying to build GROMACS in a similar fashion to what is described in
> section 2.3.5 of the manual (weblink
> <
> http://manual.gromacs.org/documentation/2019.5/install-guide/index.html#testing-gromacs-for-correctness
>
Hi,
Warnings mean your physics is probably broken, unless you can explain why
it isn't. Suggestions for fixing them are in the warnings.
Mark
On Tue., 7 Jan. 2020, 03:39 변진영, wrote:
> Dear everyone, Happy New year!
> I have gone through the Justin Lemkul tutorial for Umbrella Sampling.
>
Hi,
I expect that's a hydrogen in an NH3+ group being added as a terminus to
your protein. You might want to tell pdb2gmx to stop trying to help with
the termini, with -ter or -noter.
Mark
On Sun, 29 Dec 2019 at 13:02, ali khamoushi wrote:
> Hello everyone
> I have PDB file which have
Hi,
The FFTW build you are trying to do uses gcc by default, and your gcc is so
old that AVX512 didn't exist yet, so it can't compile for it. Since you're
using the Intel compiler, it's easiest to also use its FFT library with
cmake -DGMX_BUILD_OWN_FFTW=OFF -DGMX_FFT_LIBRARY=mkl
Harder is to
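For example, a minimal sketch (assuming the Intel compiler environment is
already sourced):
  CC=icc CXX=icpc cmake .. -DGMX_BUILD_OWN_FFTW=OFF -DGMX_FFT_LIBRARY=mkl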
Hi,
On Fri., 20 Dec. 2019, 09:59 Sina Omrani, wrote:
> Hi,
> I have a problem when I use CO2 model with virtual sites during energy
> minimization. I used itp file in the md tutorial and beside the fact that
> some of my water molecules can not be settled, I get a positive potential
> energy
Hi,
On Thu., 19 Dec. 2019, 18:41 Suvardhan Jonnalagadda,
wrote:
> Hi All,
>
> *"GROMACS: VERSION 4.5.5; Precision: single"*
>
This software is nearly a decade old and is no longer supported. Please
update.
> I have performed an md simulation for 1 time step, on a single molecule
> with 17
Hi,
Yes that's expected. If you want to run two simulations in parallel then
you need to follow the advice in the user guide. Two plain calls to gmx
mdrun cannot work usefully.
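For example, a minimal sketch of two independent runs sharing one
workstation (core counts and GPU ids are assumptions about your hardware):
  gmx mdrun -deffnm run1 -ntmpi 1 -ntomp 8 -pin on -pinoffset 0 -gpu_id 0 &
  gmx mdrun -deffnm run2 -ntmpi 1 -ntomp 8 -pin on -pinoffset 8 -gpu_id 1 &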
Mark
On Fri., 13 Dec. 2019, 11:22 Nikhil Maroli, wrote:
> Initially, I tried to run 2+ jobs in my workstation with
Hi,
Those commands are listed in the user guide, please look there :-)
Mark
On Fri., 13 Dec. 2019, 10:10 Pragati Sharma, wrote:
> Hi Paul,
>
> The option -pme gpu works when I give pme order = 4 in mdp file instead of
> 3. but it gives me an increase of 6-7 ns/day.
>
> @Dave M : I am also
Hi,
If you use -DOpenCL_LIBRARY then it has to indicate the location of
libOpenCL.so because you've short-circuited any attempt to find the
library. Check out the advice at
http://manual.gromacs.org/documentation/2019/install-guide/index.html#opencl-gpu-acceleration.
If you've installed system
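For example, a minimal sketch for a 2019-era OpenCL build (the library path
is an assumption about your system):
  cmake .. -DGMX_GPU=ON -DGMX_USE_OPENCL=ON \
           -DOpenCL_LIBRARY=/usr/lib/x86_64-linux-gnu/libOpenCL.so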
Hi,
Look at the log file, mdrun already reported what it was able to detect.
For example, if it can't link to the driver at run time then that's where
to investigate.
Mark
On Fri., 13 Dec. 2019, 13:18 Albert, wrote:
> Hello,
>
> I have compiled my Gromacs 2019v3 with the following command
Hi,
On Thu., 12 Dec. 2019, 20:27 Marcin Mielniczuk,
wrote:
> Hi,
>
> I'm running Gromacs on a heterogenous cluster, with one node
> significantly faster than the other. Therefore, I'd like to achieve the
> following setup:
> * run 2 or 3 PP processes and 1 PME process on the faster node (with a
Hi,
On Thu, 12 Dec 2019 at 15:03, Paul Buscemi wrote:
> What does nvidia-smi tell you?
>
That won't help - GROMACS isn't saying it can't find GPUs. It's saying it
can't run on them because something Rahul asked for isn't implemented.
Mark
PB
>
> > On Dec 12, 2019, at 7:30 AM, John
Hi,
On Thu, 12 Dec 2019 at 14:35, Mateusz Bieniek wrote:
> Hi Gromacs,
>
> A small digression: Ideally Gromacs would make it more clear in the error
> message explaining which part is not implemented for the GPUs.
>
Indeed, and in this case it is supposed to have already been written to the
log file.
Hi,
I suspect that you have multiple versions of hwloc on your system, and
somehow the environment is different at cmake time and make time (e.g.
different modules loaded?). If so, don't do that. Otherwise, cmake
-DGMX_HWLOC=off will work well enough. I've proposed a probable fix for a
future 2019
Hi,
That sounds very much like a bug, but it's hard to say where it comes from.
Can you please open an issue at https://redmine.gromacs.org/ and attach
your .tpr files plus a log file from a failing run and the above stack
trace?
Mark
On Thu, 12 Dec 2019 at 08:37, Dave M wrote:
> Hi All,
>
>
Hi,
The trajectory forces do not include the constraint forces. You can verify
that by comparing them with those from mdrun -rerun from 2019.x, because
constraints are not applied in a rerun.
Mark
On Wed, 11 Dec 2019 at 17:39, Jacob Monroe wrote:
> Hi all,
>
> I'm working on calculating
> #8 0x2aaab1039549 in gmx::CommandLineModuleManager::run(int, char**) ()
> from /afs/cad.njit.edu/linux/gromacs/intel/2019.4/bin/../lib64/libgromacs.so.4
> #9 0x00407928 in m
OK, that looks fine.
What does the stack trace of the segfault look like? e.g.
gdb --args /the/gmx grompp -whatever -args
Mark
On Fri, 6 Dec 2019 at 15:29, Glenn (Gedaliah) Wolosh
wrote:
>
>
> > On Dec 6, 2019, at 9:12 AM, Mark Abraham
> wrote:
> >
> > Hi,
not
available in the environment.
Mark
On Thu, 5 Dec 2019 at 21:36, Glenn (Gedaliah) Wolosh
wrote:
> Thanks for the quick response. I do have the environment properly prepare
> via modules.
>
> GW
>
>
> > On Dec 5, 2019, at 3:24 PM, Mark Abraham
> wrote:
> >
Hi,
If, for example, you inserted 465 polymer molecules each with charge -.001
then the most appropriate path forward is to make the total charge of each
molecule be neutral. But we don't know enough about what you're doing yet.
Mark
On Thu, 5 Dec 2019 at 23:34, SAKO MIRZAIE wrote:
> Dear
Hi,
A checkpoint restart is indeed not supported. But you can be creative with
the old version of GROMACS, e.g. gmx grompp -f -p -c -t old.cpt -o new.tpr,
and then that .tpr file has the content of the .cpt in a format
that the newer gmx will be able to run (unless we actually removed
necessary
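For example, a minimal sketch with the old version (file names are
placeholders):
  gmx grompp -f md.mdp -p topol.top -c confout.gro -t old.cpt -o new.tpr
and then new.tpr can be given to the newer gmx mdrun.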
Hi,
I suspect you need to prepare the Intel environment at run time like you
did before compilation. Probably you (or a module you loaded) sourced a
compilervars.sh file then, and that's what you need here too.
Mark
On Thu, 5 Dec 2019 at 20:22, Glenn (Gedaliah) Wolosh
wrote:
>
> Also posted
Hi,
What driver version is reported in the respective log files? Does the error
persist if mdrun -notunepme is used?
Mark
On Mon., 2 Dec. 2019, 21:18 Chenou Zhang, wrote:
> Hi Gromacs developers,
>
> I'm currently running gromacs 2019.4 on our university's HPC cluster. To
> fully utilize the
Hi,
gmx solvate shouldn't segfault based on a minor change to the input
coordinates, so it's probably a bug to look into. Please open an issue at
https://redmine.gromacs.org and attach inputs with a failing gmx command,
and someone can likely provide some insight!
Mark
On Thu, 14 Nov 2019 at
Hi,
gmx can only detect devices that are visible to it. Your use of slurm is
making only one device visible, so gmx can't understand what you mean with
-gpu_id 1. But you don't need to manage the same thing twice. If gmx can
only see one device and --gres won't allocate a previously allocated
Hi,
SLURM and OpenMPI do different things. SLURM is a resource manager, from
which you might request multiple compute nodes. OpenMPI is a parallelism
library that allows a program to run on those nodes. GROMACS is the
program, and it doesn't care which MPI library is in use, or which resource
Hi,
You can make a version of your topology without the bonded interactions,
regenerate the tpr and use gmx mdrun -rerun. But there is no force field
for which this is known to be useful for any purpose (ie correlate with
something physical). Additive force fields are not built to be decomposable
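For example, a minimal sketch (assuming topol_nobonded.top is your topology
with the bonded interaction sections removed):
  gmx grompp -f md.mdp -p topol_nobonded.top -c conf.gro -o nobonded.tpr
  gmx mdrun -s nobonded.tpr -rerun traj.trr -deffnm nobonded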
should use -t option. Is it like -t option ensures to use the last position
> and coordinates even though one mentions
> gen_vel as yes in .mdp file?
>
> Sincerely,
>
> Prabir
>
> On Thu, Sep 19, 2019 at 12:08 PM Mark Abraham
> wrote:
>
> > Hi,
> >
> >
Hi,
If grompp sees that your .mdp file asks for it to generate velocities, it
does so and reports it in the terminal output. You will see that for your
NVT grompp and not for your NPT grompp.
You can also use gmx dump -s the.tpr to observe whether the velocities
match the input you gave to the
Hi,
It's likely that using the same index file is the problem. The numbers it
contains are interpreted relative to the tpr file, so if you make a subset
of the tpr file, then it's on you to understand whether the necessary
indices have changed or not.
Mark
On Tue, 17 Sep 2019 at 14:58, Martin
Hi,
The file formats read by grompp haven't changed, so you can prepare your
system with a modern version of GROMACS, and call grompp for that old one.
But I have a hard time imagining a method that was implemented in 3.3.1
still being useful. Even an enhanced sampling method would have to
> > >
> > >> Hi Mark,
> > >>
> > >> Thanks!
> > >>
> > >> Interactively I could do it without any error. However, this error
> only
> > >> arises whenever I have tried to use batch mode.
> > >>
> > >
What happens when you do it interactively?
Mark
On Fri, 13 Sep 2019 at 14:21, Rajib Biswas wrote:
> Dear All,
>
> I am trying to use the post-processing tools in batch mode. I am using the
> following commands
>
> echo 18 0 | gmx_mpi energy -f traj.edr -o temperature
>
> Getting the following
Hi,
There will have been reports by cmake about whether the detection program
compiled and/or ran successfully, which would be useful diagnostic
information. Please run cmake in a fresh build directory and look for that.
It is possible to run that program individually, if the issue is that it
Hi,
Those are kernels for legacy code that never use such SIMD anywhere.
Mark
On Thu., 12 Sep. 2019, 07:16 Tatsuro MATSUOKA,
wrote:
> On GROMACS 2019.3, GROMACS cannot be built with AVX2.
>
> In gmxSimdFlags.cmake
>
> SIMD_AVX2_C_FLAGS SIMD_AVX2_CXX_FLAGS
>
> >> [truncated energy-term table: Conserved En. -2.97263e+06, among other
> >> terms, in kJ/mol]
> >> As can be seen above the Flat-bottom posres energy is just zero during
> >> the simulation; and even stiffening the force constant from 4184
> KJ/(mo
Hi,
Putting the mdrun_mpi command into a variable makes sense, but the rest of
the command line is best given directly, so that you get the shell expansions
you might intend.
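For example, a minimal sketch of what I mean (names are placeholders):
  MDRUN="mpirun -np 16 mdrun_mpi"
  $MDRUN -deffnm md -maxh 23.5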
Mark
On Wed, 11 Sep 2019 at 08:53, Suman Chakrabarty
wrote:
> Dear all,
>
> I have managed to resolve this issue, which was a
Hi,
Thanks for the report - but it's probably fixed already (
http://manual.gromacs.org/documentation/2019.2/release-notes/2019/2019.2.html#fix-segmentation-fault-when-preparing-simulated-annealing-inputs)
so I suggest you get the latest 2019.x release?
Mark
On Tue, 10 Sep 2019 at 17:15,
Hi,
Thanks for the report. Do the tests pass? Particularly the simd-test binary.
Mark
On Tue, 10 Sep 2019 at 09:36, Tatsuro MATSUOKA
wrote:
> In gmxSimdFlags.cmake, it is described :
> # no AVX2-specific flag for MSVC yet
>
> However, at least MSVC 2017 and 2019, /arch:AVX2 is added.
> If I
Hi,
On Mon, 9 Sep 2019 at 12:01, Bratin Kumar Das <177cy500.bra...@nitk.edu.in>
wrote:
> Dear all,
> I am running REMD simulation with 65 replicas. When the
> simulation is running , I checked the .log file for every replica. In some
> replicas I am getting more than 10% load
ut.gro is pausing the script.
> Can this be a possible reason?
>
> On Sat, 7 Sep 2019 at 20:06, Mark Abraham
> wrote:
>
> > Hi,
> >
> > Such errors do lead to a normal gmx mdrun aborting. So the question is
> more
> > what is in your script that might affect
90 ns/day. However,
> when I don't assign the GPU but let all GPU work by:
> >gmx mdrun -v -deffnm md
> > The simulation performance is only 2 ns/day.
> >> So what is correct command to make a full use of all GPUs and achieve
> the best performance (which I expect should be much higher than 90 ns/day
Hi,
The potential energy from violating the restraints is reported, so you
should see that there is an appropriate contribution there, and probably
plan to stiffen the force constant.
Mark
On Fri., 6 Sep. 2019, 17:23 Alex, wrote:
> Dear all,
> Using the flat-bottom (K=4184 KJ/(mol. nm^2))
Hi,
Without a statistical error estimation and a target pressure, the question
can't be answered. Pressure takes time to measure, and you need to do so
only after equilibration. 1.39 +/- 3 bar might be fine
Mark
On Sat., 7 Sep. 2019, 07:48 Bratin Kumar Das, <177cy500.bra...@nitk.edu.in>
wrote:
Hi,
Such errors do lead to a normal gmx mdrun aborting. So the question is more
what is in your script that might affect that?
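For example, a minimal sketch - a script only stops on such errors if you
ask it to (the run names are placeholders):
  set -e                 # abort the script at the first failing command
  gmx mdrun -deffnm run1
  gmx mdrun -deffnm run2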
Mark
On Sat., 7 Sep. 2019, 07:39 rajat punia, wrote:
> Hi, I am trying to run multiple (1000) md simulations using a shell script.
> Some of the simulations (say
Hi,
On Wed, 4 Sep 2019 at 12:54, sunyeping wrote:
> Dear everyone,
>
> I am trying to do simulation with a workstation with 72 core and 8 geforce
> 1080 GPUs.
>
72 cores, or just 36 cores each with two hyperthreads? (it matters because
you might not want to share cores between simulations,
Hi,
On Wed, 4 Sep 2019 at 10:47, Bratin Kumar Das <177cy500.bra...@nitk.edu.in>
wrote:
> Respected Mark Abraham,
> The command-line and the job
> submission script is given below
>
> #!/bin/bash
> #SBATCH -n 130 # Number
Hi,
We need to see your command line in order to have a chance of helping.
Mark
On Wed, 4 Sep 2019 at 05:46, Bratin Kumar Das <177cy500.bra...@nitk.edu.in>
wrote:
> Dear all,
> I am running one REMD simulation with 65 replicas. I am using
> 130 cores for the simulation. I am
Hi,
Please do check out the install guide for advice on how to build for the
hardware you plan to run on. The defaults will just do the right thing for
you :-)
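For example, a minimal sketch (the SIMD value is only an illustration; if
you configure on the node type you will run on, you can leave it unset and
the right value is detected):
  cmake .. -DGMX_SIMD=AVX2_256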
Mark
On Thu, 29 Aug 2019 at 14:26, Quin K wrote:
> Hi
> When I do a mdrun it gives me a message saying,
>
> Compiled SIMD: SSE2, but
Hi,
Typical force fields will have negative energies at minimized
configurations, but my guess is that your initial configuration has some
serious problem. Did you visualise the minimization start and end points?
Mark
On Tue., 27 Aug. 2019, 18:52 Dhrubajyoti Maji, wrote:
> Dear gromacs users,
led: invalid argument)
>
> . Its running on CPU, but not on GPU. Work of 3 days will take 15 days.
> Already completed simulation for other complexes. This complex creating
> problem due to vsites3. PLEASE HELP DEADLINE IS APPROACHING.
>
> On Tue, Aug 27, 2019 at 12:23 AM M
og -v -dlb yes
> -gcom 1 -nb gpu -npme 44 -ntomp 4 -ntomp_pme 6 -tunepme yes
>
> would you please help me choose a correct combinations of -npme and ...
> to get a better performance, according to the attached case.log file in my
> previous email?
> Regards,
> Alex
>
Hi,
The answer is still the same - if gbsa.itp has the right contents, is it
being included at the right time?
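For example, a minimal sketch of a topol.top layout (the force-field name and
the placement are assumptions; the point is that gbsa.itp must be included
after the force-field parameters and before the molecule definitions that use
them):
  #include "amber99sb.ff/forcefield.itp"
  #include "gbsa.itp"
  #include "protein.itp"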
Mark
On Wed, 28 Aug 2019 at 12:57, Vedat Durmaz wrote:
> Sorry, typo: "I can NOT get my system grompped" ...
>
>
> Am 28.08.19 um 12:03 schrieb Vedat Durmaz:
> > Hi everybody,
> >
>
Hi,
There's lots of documentation and examples available. See
http://manual.gromacs.org/documentation/current/user-guide/cmdline.html#selection-syntax-and-usage
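For example, a minimal sketch (the selection text and file names are
placeholders):
  gmx select -s topol.tpr -f traj.xtc -on solvshell.ndx \
      -select 'resname SOL and within 0.5 of group "Protein"'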
Mark
On Wed, 28 Aug 2019 at 07:14, Omkar Singh wrote:
> Hi everyone
> Can anyone help me regarding "gmx select" command making a
e atom numbers."as I was using 2018.4. So I switched to
> version-16. But now trapped in this HtoD cudaMemcpyAsync failed: invalid
> argumenterror.
>
> On Tue, Aug 27, 2019 at 12:09 AM Mark Abraham
> wrote:
>
> > Hi,
> >
> > You're running 2016.x which ha
Hi,
You should follow the error message instructions "... they should not have
the same chain ID as the adjacent protein chain" - you already know that
chain ID is X. Make the protein have a different chain ID from the rest.
Mark
On Mon, 26 Aug 2019 at 19:27, Ayesha Fatima
wrote:
> Dear Justin,
> Thank
Hi,
All versions of icc require a standard library from an installation of
gcc. There are various dependencies between them, and your system admins
should have an idea of which one is known to work well in your case. If you
need to help the GROMACS build find the right one, do check out the GROMACS
Hi,
You already have the topology for it :-) Chirality is about spatial
configuration, topologies are about connectivity. There's no need to try to
encode the spatial configuration because there is no accessible path for
the conversion, given the existence of the bonded parameters.
Hi,
As mdrun has the "feature" that, when the short-range nonbondeds are on the
GPU, any energy groups have zero values output each step, you can continue
happily, but of course you won't have a useful analysis. If you need those
energy groups, then you'll need to plan to do a rerun (and then recent
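For example, a minimal sketch of such a rerun (file names are placeholders;
-nb cpu keeps the short-range nonbondeds off the GPU so the energy-group
terms actually get computed):
  gmx mdrun -s withgroups.tpr -rerun traj.xtc -nb cpu -deffnm rerun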
Hi,
There's a thread oversubscription warning in your log file that you should
definitely have read and acted upon :-) I'd be running more like one PP
rank per gpu and 4 PME ranks, picking ntomp and ntomp_pme according to what
gives best performance (which could require configuring your MPI
Hi,
You can use only GROMACS versions that gromos++ supports. Hopefully they've
documented which ones those are :-)
Mark
On Tue, 20 Aug 2019 at 15:47, Johannes Hermann
wrote:
> Hi Justin,
>
> Thanks! So this is a gromos++ problem? The gromos++ developers should
> update the linking during
Hi,
You're doing a rerun from a trajectory file that probably doesn't have
velocities in it. mdrun can compute potential energies from the position
coordinates, but these other quantities can't be computed from just the
position coordinates. mdrun can't know what you're expecting to be correct,
Hi,
To which tutorial are you referring?
Mark
On Mon., 19 Aug. 2019, 19:09 Alex Mathew, wrote:
> Can anyone tell me which thermodynamics cycle was used in the tutorial? for
> FEP.
> --
> Gromacs Users mailing list
>
> * Please search the archive at
>
Hi,
There's lots of this kind of information reported in the log file,
including the number of nodes that the MPI environment has made available
to mdrun.
Mark
On Sat, 10 Aug 2019 at 14:15, Searle Duay wrote:
> Hello!
>
> I am trying to run a simulation on two nodes. Each node has 2 GPUs and
Hi,
On Thu, 8 Aug 2019 at 18:41, Neena Susan Eappen <
neena.susaneap...@mail.utoronto.ca> wrote:
> Hello gromacs users,
>
> I am using Nose-hoover thermostat, is it correct to say that tau_t has to
> be less than or equal to nstxout/ nstvout?
No. tau_t describes the rigidity of the coupling.
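For illustration, a typical (hypothetical) .mdp fragment - tau_t is the
coupling time constant and is independent of the output intervals:
  tcoupl  = nose-hoover
  tc-grps = Protein Non-Protein
  tau_t   = 1.0  1.0
  ref_t   = 300  300
  nstxout = 5000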
Hi,
Unfortunately that version of GROMACS hasn't been tested or supported in
well over five years, so probably it is simply incompatible with modern
GPUs. You could try explicit solvent in modern GROMACS, which might be
comparably fast with that old version :-) Or AMBER if you really need
Hi,
We don't have any useful information to go on, but I'll guess that you've
edited the file e.g. on Windows using a not-very-suitable editor and the
line endings are no longer recognizable elsewhere. Try converting the file,
e.g. with the dos2unix utility.
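For example (assuming the suspect file is your topology):
  file topol.top      # reports "CRLF line terminators" for a DOS-format file
  dos2unix topol.top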
Mark
On Mon, 5 Aug 2019 at 16:13,
Hi,
We can't tell whether or what the problem is without more information.
Please upload your .log file to a file sharing service and post a link.
Mark
On Fri, 2 Aug 2019 at 01:05, Maryam wrote:
> Dear all
> I want to run a simulation of approximately 12000 atoms system in gromacs
> 2016.6 on
Hi,
What does ls -l return when run from your working directory?
Mark
On Fri, 2 Aug 2019 at 01:49, Mohammed I Sorour
wrote:
> Hello Dr. Dallas,
>
> Yes, the nvt.tpr was created and I had it in the local directory ready for
> the mdrun.
> Yes, those copy/pastes of the commands I used.
> Since