Dear gmx users,
Could you suggest a correct md launch configuration? I am
using a university cluster and would like to use 3 nodes (3 * 20 = 60 MPI
processes) and NVIDIA Tesla K20Xm GPUs (2 GPUs). Kindly suggest the optimal
way to get a fast md run. I tried the following bash script, but it ends
Take a look at how the python code does EXP. The information you want is
in the dhdl.xvg. Look at the documentation for this file, and read the
header. Come back with specific questions about the file if the header and
documentation are not enough.
On Fri, Feb 10, 2017 at 6:47 AM, gozde ergin
Hi Li,
What does nvidia-smi return on the machine where you try to run GROMACS? Also
is CUDA_VISIBLE_DEVICES set? I.e. what does
echo ${CUDA_VISIBLE_DEVICES?"CUDA_VISIBLE_DEVICES is unset"}
return?
Even if there are GPUs in a system, they could be masked out by setting
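For example, a minimal sketch of how that masking works (the device IDs are illustrative assumptions):

```shell
# CUDA_VISIBLE_DEVICES controls which GPUs CUDA applications can see.
export CUDA_VISIBLE_DEVICES=0,1   # expose GPUs 0 and 1 to CUDA apps
export CUDA_VISIBLE_DEVICES=""    # hide all GPUs; mdrun would detect none
```

Note that nvidia-smi still lists the physical devices in both cases, since it talks to the driver directly.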
Hi,
The error is pretty clear about the issue: GROMACS can't detect GPUs
either because there are none or because you are not able to access
them (exclusive mode, you did not request them, you disabled them via
CUDA_VISIBLE_DEVICES, etc.).
I suggest that you contact your admin as most likely an
PS: A promising path for AMD hardware is the ROCm stack
(http://gpuopen.com/compute-product/rocm/) which is the new fully open
source compute driver/runtime/compiler suite. OpenCL support is very
preliminary in v1.4 and I have not tried it myself yet, so I cannot
recommend it, but hopefully soon
Dear GMX users,
I have a problem with GROMACS jobs run on a GPU node. I have the output
in the log file attached below. Does anyone know if CUDA and GROMACS are
compiled correctly according to this output?
If so, why can't the job run on the GPU? Thank you very much!
gmx_mpi mdrun -deffnm
Hi,
I have no knowledge of the instability/crash with fglrx; with
AMDGPU-PRO I have seen strange hangs which *seem* to be kernel-space
issues because the machine becomes unresponsive for seconds to minutes
(but it typically recovers). However, I had no time to investigate
Given that the extensive
Dear Michael,
Thanks for the reply. I have already used this python code; however, I would like
to calculate it myself using the potential energies,
because I need to reweight the free energy using the exponential re-weighting
technique.
> On 10 Feb 2017, at 14:44, Michael Shirts
https://github.com/MobleyLab/alchemical-analysis
Takes GROMACS dhdl.xvg output and calculates free energies by many different
methods, including BAR, MBAR and Zwanzig. See
http://www.alchemistry.org/wiki/Main_Page for more information.
On Fri, Feb 10, 2017 at 5:26 AM, gozde ergin
Hi Amit,
You patch plumed onto the gromacs source and then yes you need to
recompile gromacs. It may help to look at this thread from the plumed
google group
https://groups.google.com/forum/#!topic/plumed-users/stlK9-kaa6A where I
worked out how to do it. I was using plumed 2.1 which had some
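For context, the overall workflow was roughly the sketch below; the GROMACS version and build steps are assumptions, so see the linked thread for the details that matter for your setup:

```
cd gromacs-5.0.7          # unpacked GROMACS source tree (version assumed)
plumed patch -p           # interactively pick the matching GROMACS engine
mkdir build && cd build
cmake ..                  # reconfigure with your usual options
make && make install      # recompile GROMACS, now with PLUMED support
```

The key point is that the patch modifies the GROMACS sources, so a full rebuild is unavoidable.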
Dear all,
I ran a thermodynamic integration simulation with gromacs and got the free energy
via the g_bar command.
I also would like to estimate this free energy by using the Zwanzig relationship,
\DeltaG = -RT ln <exp(-\DeltaU/RT)>_i
Here U is the potential energy, right?
However the results
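For reference, a minimal pure-Python sketch of the exponential (Zwanzig) estimator from per-frame potential-energy differences; the function name, the kJ/mol units, and the 300 K default are assumptions for illustration, not anything from the thread:

```python
import math
import random

R = 8.314462618e-3  # gas constant in kJ/(mol K)

def zwanzig_free_energy(delta_u, temperature=300.0):
    """\DeltaG = -RT ln <exp(-\DeltaU/RT)>_i over frames sampled in state i.

    delta_u: iterable of U_target - U_reference per frame, in kJ/mol.
    """
    beta = 1.0 / (R * temperature)
    xs = [-beta * du for du in delta_u]
    # log-mean-exp trick for numerical stability of the exponential average
    m = max(xs)
    log_avg = m + math.log(sum(math.exp(x - m) for x in xs) / len(xs))
    return -log_avg / beta

# Toy check with Gaussian \DeltaU (mean 2.0, sd 1.0 kJ/mol): the analytic
# EXP answer is mu - beta*sigma^2/2, roughly 1.8 kJ/mol at 300 K.
rng = random.Random(0)
dG = zwanzig_free_energy([rng.gauss(2.0, 1.0) for _ in range(20000)])
```

Note that the EXP estimator is dominated by the rare low-energy tail, which is exactly why BAR/MBAR from the alchemical-analysis script are usually preferred.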
Dear Nikhil,
It depends, but generally no electrostatic cut-offs since there is no
dielectric medium, making the effective range of electrostatics very long. And
no pbc. If you are doing NVE with constraints you might want to increase
lincs-iter and -order from their default values.
Kind
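The advice above could be expressed as an .mdp fragment; this is only a sketch, and every value is an assumption to adapt:

```
; in-vacuo sketch (all values are assumptions to adapt)
pbc              = no
cutoff-scheme    = group     ; group scheme allows infinite cut-offs
coulombtype      = cut-off
rlist            = 0         ; 0 means no cut-off (all pairs)
rcoulomb         = 0
rvdw             = 0
comm-mode        = angular   ; also remove rotation in vacuum
lincs-iter       = 2         ; raised from the default 1
lincs-order      = 6         ; raised from the default 4
```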
Hi,
What are the best mdp file options for a production run of a polymer in
vacuum simulation?
--
Gromacs Users mailing list
* Please search the archive at
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!
* Can't post? Read
Hi,
I appreciate your answer.
Simulation time is 200 ns per window.
Our pulling rate for the initial pulling simulation is 0.01, but the
interval between subsequent umbrella windows is 0.1 nm. When I try to add
additional windows in the region around 1.3 nm, they drift away from that
position;
NVT error:
5.0.7
ff 43A1
error:
Invalid T coupling input: 1 groups, 2 ref-t values and 2 tau-t values
I read on the gromacs mailing list: "Precisely what it says. You specify one group to
be coupled (System), but then
provide coupling information for two groups." I have no idea how to correct my own
NVT
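For reference, the fix the quoted answer describes is to make the number of ref-t and tau-t values match the number of tc-grps; a sketch, where the group names and values are assumptions:

```
; one coupled group, one value each:
tc-grps = System
tau-t   = 0.1
ref-t   = 300
; or two groups, two values each:
; tc-grps = Protein Non-Protein
; tau-t   = 0.1 0.1
; ref-t   = 300 300
```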
Hello,
I have installed plumed 2.3, but I have a doubt about the patching process.
Where should I patch the executable? Do I need to recompile gromacs after
patching ?
Thanks
Amit Behera
Hi Zeynep,
When you exclude the window, you lose the local information. But even the excluded
PMF profile does not seem that 'smooth'.
What is your simulation time? And maybe you could think of pulling slower, like
0.01 nm spacing.
best (good luck with your work)
> On 10 Feb 2017, at 08:57, ZEYNEP ABALI