Hi,
You can safely ignore the errors, as these are caused by properties of your
hardware that the test scripts do not deal with well enough -- though,
admittedly, two of the three errors should be avoided, together with a
message similar to this:
"Mdrun cannot use the requested (or automatic) number of OpenMP
Hi Cameron,
My strong suspicion is that the NVIDIA OpenCL driver/compiler simply does
not support Turing, or is buggy on it. I've just checked an OpenCL build
with the latest 418 drivers, and it also fails tests on Volta (which is
similar to the Turing architecture), but it passes on Pascal.
You
OK, thanks, I'll try this.
Regards
On Mon, Apr 29, 2019 at 5:23 AM Justin Lemkul wrote:
>
>
> On 4/25/19 8:49 AM, neelam wafa wrote:
> > Hi!
> > I have run a 5 ns simulation of a protein-ligand complex and got its RMSD plot
> > using gmx rms. How can I get the average RMSD value for this simulation? Is
>
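One quick way to get that number (a sketch, assuming gmx rms wrote its
output to rmsd.xvg; gmx analyze prints the average and standard deviation
of each data set in the file):

  gmx analyze -f rmsd.xvg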
Oh, cool -- thanks! I guess we will be replacing the older builds, then.
Funnily enough, you may not recall it, but the hardware for this particular
box was purchased on your own advice, which served us very well up until
now. :)
Again, thank you.
Alex
On 5/1/2019 5:07 AM, Szilárd Páll wrote:
Well, my experience so far has been with the EM, because the rest of the
script (with all the dynamic things) needed that to finish. And it
"finished" by hitting the wall. However, your comment does touch on what
to do about thread pinning, and I will try setting '-pin on' throughout to see
if
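For reference, setting '-pin on' throughout would look something like the
sketch below (the file names are placeholders; -pin on asks mdrun to lock
its threads to cores instead of letting them migrate):

  gmx mdrun -deffnm em -pin on
  gmx mdrun -deffnm md -pin on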
Dear all,
Does anybody know how to calculate the Flory-Huggins parameter? The engine
for the all-atom MD simulations is GROMACS, and the system of interest is an
emulsion of epoxy resin and surfactant in water.
Thank you.
Best regards,
Alexander
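One common route, for what it's worth (a sketch assuming the Hildebrand
solubility-parameter approximation, not anything specific to this system:
\delta_i comes from the cohesive energy density of component i computed in
MD, and V_{ref} is a reference segment volume):

  \delta_i = \sqrt{E_{coh,i}/V_i}, \qquad
  \chi = \frac{V_{ref}\,(\delta_1 - \delta_2)^2}{R T}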
Hi,
In addition to what Mark said (and I've also found pinning to be critical
for performance), you're also not using the GPUs with "-pme cpu -nb cpu".
Kevin
On Wed, May 1, 2019 at 5:56 PM Alex wrote:
> Well, my experience so far has been with the EM, because the rest of the
> script (with
Hi,
> Of course, I am not. This is the EM. ;)
I haven't looked back at the code, but IIRC EM can use GPUs for the
nonbondeds, just not the PME. I just double-checked on one of my systems
with 10 cores and a GTX 1080 Ti; offloading to the GPU more than doubled
the minimization speed.
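In line with that, a minimal EM invocation that offloads the nonbondeds
while keeping PME on the CPU might look like this (a sketch; the file name
is a placeholder):

  gmx mdrun -deffnm em -nb gpu -pme cpu -pin on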
Kevin
On
Of course, I am not. This is the EM. ;)
On Wed, May 1, 2019, 4:30 PM Kevin Boyd wrote:
> Hi,
>
> In addition to what Mark said (and I've also found pinning to be critical
> for performance), you're also not using the GPUs with "-pme cpu -nb cpu".
>
> Kevin
>
> On Wed, May 1, 2019 at 5:56 PM
Well, unless something important has changed within a year, I distinctly
remember being advised here not to offload anything to the GPU for EM. Not
that we ever needed to, to be honest...
In any case, we appear to be dealing with build issues here.
Alex
On 5/1/2019 5:09 PM, Kevin Boyd wrote:
Hi all,
Our institution decided to be all fancy, so now we have a bunch of POWER9
nodes, each with 80 cores + 4 Volta GPUs. Everything is managed by Slurm.
Today I did a simple EM ('gmx mdrun -ntomp 4 -ntmpi 4 -pme cpu -nb cpu') and
the performance is abysmal -- I would guess 100 times slower than on
Hi,
As with x86, GROMACS uses SIMD intrinsics on POWER9 and is thus fairly
insensitive to the compiler's vectorisation abilities. GCC is the only
compiler we've tested, as xlc can't compile simple C++11. As everywhere,
you should use the latest version of gcc, as IBM spent quite some years
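For completeness, a configure sketch along those lines (IBM_VSX is the SIMD
flavor GROMACS uses on POWER8/9; the GPU setting is a placeholder for your
setup):

  cmake .. -DCMAKE_C_COMPILER=gcc -DCMAKE_CXX_COMPILER=g++ \
        -DGMX_SIMD=IBM_VSX -DGMX_GPU=on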