her tool.
Please update this thread if you have further findings.
Cheers,
--
Szilárd
On Fri, Apr 24, 2020 at 10:52 PM Szilárd Páll
wrote:
>
> The following lines are found in md.log for the POWER9/V100 run:
>
> Overriding thread affinity set outside gmx mdrun
> Pinning threads with an auto-selected logical core stride of 128
> NOTE: Thread affinity was not set.
>
> The full md.log is available here:
> https://github.com/jdh4/running_gr
On Fri, Apr 24, 2020 at 5:55 AM Alex wrote:
> Hi Kevin,
>
> We've been having issues with Power9/V100 very similar to what Jon
> described and basically settled on what I believe is sub-par
> performance. We tested it on systems with ~30-50K particles and threads
> simply cannot be pinned.
What
Using a single thread per GPU as the linked log files show is not
sufficient for GROMACS (and any modern machine should have more cores than
that anyway), but I infer from your mail that this was only meant to debug
the performance instability?
Your performance variations with Power9 may be related to the fact that you are
Hi,
Note that the new generation Ryzen2-based CPUs perform even better than
those we benchmarked for that paper. The 3900-series Threadrippers are great
for workstations; unless you need the workstation form factor, you are
better off with servers like the TYAN GA88B8021. If so, an EPYC 1P is what
Configure with -DGMX_EXTERNAL_BLAS=OFF -DGMX_EXTERNAL_LAPACK=OFF
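Combining this with the configure command quoted in the question, a full invocation might look like the sketch below (flags are the ones from this thread; the extra BLAS/LAPACK switches make GROMACS build its internal versions instead of picking up MKL):

```shell
# Sketch: build with bundled FFTW and internal BLAS/LAPACK so MKL is not used
cmake .. -DGMX_BUILD_OWN_FFTW=ON -DGMX_GPU=on -DGMX_FFT_LIBRARY=fftw3 \
         -DGMX_EXTERNAL_BLAS=OFF -DGMX_EXTERNAL_LAPACK=OFF
```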
Cheers,
--
Szilárd
On Fri, Apr 17, 2020 at 2:07 PM Mahmood Naderan
wrote:
> Hi
> How can I disable MKL while building gromacs? With this configure command
>
> cmake .. -DGMX_BUILD_OWN_FFTW=ON -DGMX_GPU=on -DGMX_FFT_LIBRARY=fftw3
On Fri, Mar 27, 2020 at 8:30 PM Leandro Bortot
wrote:
> Dear users,
>
> I'm trying to optimize the execution of a system composed by 10
> million atoms on a multi-GPU machine with GROMACS 2020.1.
> I've followed the instructions given at
>
> https://devblogs.nvidia.com/creating-faster-m
On Sat, Apr 4, 2020 at 10:41 PM Wei-Tse Hsu wrote:
> Dear gmx users,
> Recently I've been trying to install GROMACS 2020.1. However, I encounter a
> compilation error while using the make command. The error is as follows:
>
>
>
>
>
> */usr/bin/ld: cannot find /lib/libpthread.so.0/usr/bin/ld: cann
PME gather 12 1 2502001 2303.327 79965.968 13.6
> PME 3D-FFT 12 1 5004002 2119.410 73580.828 12.5
> PME 3D-FFT Comm. 12 1 5004002 918.318 31881.804 5.4
> PME solve Elec 12 1 2502001 584.446 20290.548 3.5
>
>
On Sun, Mar 29, 2020 at 3:56 AM Miro Astore wrote:
> Hi everybody. I've been experimenting with REMD for my system running
> on 48 cores with 4 gpus (I will need to scale up to 73 replicas
> because this is a complicated system with many DOF I'm open to being
> told this is all a silly idea).
>
Hi,
Please use the users' mailing list for questions not related to GROMACS
development.
By default, the "-gpu_id" option takes a sequence of digits corresponding
to the numeric identifiers of GPUs. In cases where there are >10 GPUs in a
system, a comma-separated string should be used, see
http:/
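As a sketch (device ids here are hypothetical), both forms select GPUs 0 and 1, while the comma-separated form is needed once ids go beyond a single digit:

```shell
# Digit-string form: selects GPUs 0 and 1
gmx mdrun -gpu_id 01 -deffnm run
# Comma-separated form, required when ids reach 10 or above
gmx mdrun -gpu_id 0,1,10 -deffnm run
```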
> On 27.02.20 17:59, Szilárd Páll wrote:
>
> On Thu, Feb 27, 2020 at 1:08 PM Andreas Baer wrote:
>
>> Hi,
>>
>> On 27.02.20 12:34, Szilárd Páll wrote:
>> > Hi
>> >
>> > On Thu, Feb 27, 2020 at 11:31 AM Andreas Baer
>> wrote:
>&
On Thu, Feb 27, 2020 at 1:08 PM Andreas Baer wrote:
> Hi,
>
> On 27.02.20 12:34, Szilárd Páll wrote:
> > Hi
> >
> > On Thu, Feb 27, 2020 at 11:31 AM Andreas Baer
> wrote:
> >
> >> Hi,
> >>
> >> with the link below, additio
.1/release-notes/2018/major/features.html#dual-pair-list-buffer-with-dynamic-pruning)
which has additional performance benefits.
Cheers,
--
Szilárd
> I know, about the nstcalcenergy, but
> I need it for several of my simulations.
Cheers,
> Andreas
>
> On 26.02.20 16:50, Szilárd
Hi,
Can you please check the performance when running on a single GPU 2019 vs
2020 with your inputs?
Also note that you are using some peculiar settings that will have an
adverse effect on performance (like manually set rlist disallowing the dual
pair-list setup, and nstcalcenergy=1).
Cheers,
-
Hi,
Indeed, there is an issue with the GPU detection code's consistency checks
that trip and abort the run if any of the detected GPUs behaves in
unexpected ways (e.g. runs out of memory during checks).
This should be fixed in an upcoming release, but until then as you have
observed, you can alwa
Hi,
Whether investing in one of the fastest or two medium-high end GPU depends
on your workload: system size, type of run, single or multiple simulations,
etc. If you have multiple simulations you can run independently or coupled
only weakly in ensemble runs (e.g. using -multidir), multiple mid-ti
Hi.
On Tue, Feb 18, 2020 at 5:11 PM Jimmy Chen wrote:
>
> Hi,
>
> When set -pme gpu in mdrun, only one rank can be set for pme, -npme 1. What
> is the reason about only one rank for pme if use gpu to offload. Is it the
> limitation or somehow?
This is a limitation of the implementation, currently
Hi Oliver,
Does this affect an installation of GROMACS? In previous reports we have
observed that the issue is only present when running "make check" in the
build tree, but not in the case of an installed version.
Cheers,
--
Szilárd
On Mon, Feb 17, 2020 at 7:58 PM Oliver Dutton wrote:
> Hello
Hi Dan,
What you describe is not expected behavior and it is something we should
look into.
What GROMACS version were you using? One thing that may help diagnosing the
issue is: try to disable replica exchange and run -multidir that way. Does
the simulation proceed?
Can you please open an iss
Hi,
If I understand correctly your jobs stall, what is in the log output? What
about the console? Does this happen without PLUMED?
--
Szilárd
On Tue, Feb 11, 2020 at 7:56 PM Daniel Burns wrote:
> Hi,
>
> I continue to have trouble getting an REMD job to run. It never makes it
> to the point
nstall gcc g++ build-essentials. Then I used gcc-5 g++-5 and
> specified the version in the build step, which failed. after taking that
> out and running sudo apt-get install gcc-9 g++-9 it passes "CMAKE" but
> fails in "make". Based on your suggestions I ran the command
=/usr/:/usr/local/cuda/ cmake ../
> -DGMX_GPLUSPLUS_PATH=/usr/bin/g++-5 -DCUDA_HOST_COMPILER=gcc-5
> -DCMAKE_CXX_COMPILER=g++-5 -DCMAKE_C_COMPILER=/usr/bin/gcc-5
> -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_GPU=ON
> -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda/ -DCMAKE_BUILD_TYPE=
Hi Ryan,
The issue you linked has been worked around in the build system, so my
guess is that the issue you are seeing is not related.
I would recommend that you update your software stack to the latest version
(both CUDA 9.1 and gcc 5 are a few years old). On Ubuntu 18.04 you should
be able to g
Hi,
What hardware are you targeting? Unless you need AVX512 support, you could
just manually specify the appropriate setting in GMX_SIMD, e.g.
-DGMX_SIMD=AVX2_256 would be appropriate for most cases where AVX512 is not
supported.
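A minimal sketch of such a configure line (the GPU/FFTW flags are assumptions, mirroring the builds discussed elsewhere in this thread; only the GMX_SIMD setting is the point here):

```shell
# Force the AVX2 SIMD kernels instead of letting cmake pick AVX512
cmake .. -DGMX_SIMD=AVX2_256 -DGMX_BUILD_OWN_FFTW=ON -DGMX_GPU=ON
```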
Cheers,
--
Szilárd
On Wed, Jan 15, 2020 at 9:51 AM Shlomit Afgin
Good catch Kevin, that is likely an issue -- at least part of it.
Note that you can also use the mdrun -multidir functionality to avoid
having to manually manage mdrun process placement and pinning.
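As a sketch of that -multidir launch (directory names are hypothetical; -multidir requires a real-MPI build, i.e. the gmx_mpi binary):

```shell
# Each directory holds its own tpr; mdrun handles rank placement and pinning
mpirun -np 4 gmx_mpi mdrun -multidir sim1 sim2 sim3 sim4 -deffnm run
```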
Another aspect is that if you leave half of the CPU cores unused, the cores
in use can boost to a
On Wed, Jan 8, 2020 at 5:00 PM Jimmy Chen wrote:
> Hi,
>
> is GPU peer access(RDMA) supported with inter-node and gmx2020 mpi version
> on NVidia GPU?
>
No, that is currently not implemented.
Cheers,
--
Szilárd
or just work only in single-node with threadMPI via Nvidia GPU direct?
>
> Thanks,
tests did have the potential energy jump issue and they were
> running on 5 different nodes.
> So I tend to believe this issue happens on any of those nodes.
>
> On Wed, Dec 4, 2019 at 1:14 PM Szilárd Páll
> wrote:
>
> > The fact that you are observing errors alo the energies
The fact that you are observing errors along with energies that are off by
so much, and that it reproduces with multiple inputs, suggests that this may
not be a code issue. Did you do all runs that failed on the same hardware? Have
you excluded the option that one of those GeForce cards may be flaky?
--
Szilá
> Artem
>
>
> On Sat, Oct 26, 2019 at 1:50 AM Szilárd Páll
> wrote:
>
> > Hi,
> >
> > This is an issue in one of pre-detection checks that trips due to
> > encountering exclusive / prohibited mode devices.
> >
> > You can work around this by en
Dear Bidhan Chandra Garain,
Please share the log files of your benchmarks, that will help us better
identify if there is an issue and what the issue is.
Thanks,
--
Szilárd
On Mon, Oct 28, 2019 at 8:51 AM Bidhan Chandra Garain
wrote:
> Respected Sir,
> In my lab we have recently installed a GP
Hi,
This is an issue in one of pre-detection checks that trips due to
encountering exclusive / prohibited mode devices.
You can work around this by entirely disabling the detection using the
GMX_DISABLE_GPU_DETECTION environment variable.
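A sketch of that workaround (for GROMACS environment variables of this kind, it is the presence of the variable that matters, not its value):

```shell
# Skip GPU detection entirely; the run will then not use any GPUs
export GMX_DISABLE_GPU_DETECTION=1
gmx mdrun -deffnm run
```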
Cheers,
--
Szilárd
On Thu, Oct 17, 2019 at 5:01 PM Arte
Hi,
Can you please file an issue on redmine.gromacs.org with the description
you gave here?
Thanks,
--
Szilárd
On Thu, Oct 17, 2019 at 5:01 PM Artem Shekhovtsov
wrote:
> Hello!
> Problem: The launch of mdrun that does not require video cards exits with a
> fatal error if at least one video card
Hi,
Please direct GROMACS usage questions to the users' list. Replying there,
make sure you are subscribed and continue the conversation there.
The issue is that you requested static library detection, but the hwloc
library dependencies are not correctly added to the GROMACS link
dependencies. Th
On Fri, Oct 18, 2019 at 4:36 PM wrote:
> On Thu, Oct 17, 2019 at 10:34:39AM +, Kutzner, Carsten wrote:
> >
> > is it intended that the thread-MPI version of mdrun 2018 does pin to its
> core
> > if started with -nt 1 -pin auto?
>
No, I don't think that's intended.
>
> I think I have a (par
Hi,
The issue is an internal error triggered by the domain decomposition not
liking the 14 cores in your CPU, which leads to a prime rank count.
To ensure the tests pass I suggest trying to force only one device to be
used in make check, e.g. CUDA_VISIBLE_DEVICES=0 make check; alternatively
you can run
>
> http://manual.gromacs.org/documentation/2020-beta1/release-notes/2019/2019.4.html
>
> anyway, I will have a try on 2019.4 later.
>
> looking forward to check new feature which will be on 2/3 beta release of
> 2020.
>
> Best regards,
> Jimmy
>
>
> Szilárd Páll 於 2019年1
Hi,
Can you please share your log files? We may be able to help with spotting
performance issues or bottlenecks.
However, note that NVIDIA themselves are the best source to aid you with
reproducing their benchmark numbers, we
Scaling across multiple GPUs requires some tuning of command line options,
ple
Hi,
Good to know your system instability issues were resolved.
(As a side-note you could have tried to use elrepo which has newer kernels
for CentOS.)
The SIMD detection should however not be failing; can you please file an
issue on redmine.gromacs.org with cmake invocation, CMakeCache.txt and
C
Hi,
I strongly recommend the Quadro RTX series, 6000 or 5000. These should not
be a lot more expensive, but will be a lot faster than the Pascal
generation cards. For comparisons see our recent paper:
https://doi.org/10.1002/jcc.26011
Cheers,
--
Szilárd
On Thu, Sep 19, 2019, 09:50 Matteo Tibert
On Thu, Sep 12, 2019 at 8:58 AM Stefano Guglielmo
wrote:
>
> I apologize for the mistake, there was a typo in the object that could be
> misleading, so I re-post with the correct object,
> sorry.
>
> -- Forwarded message -
> Da: Stefano Guglielmo
> Date: mer 11 set 2019 alle ore 1
On Thu, Sep 12, 2019 at 9:29 AM Tatsuro MATSUOKA wrote:
>
> >Those are kernels for legacy code that never use such simd anywhere
> Doy you mean that gmxSimdFlags.cmake is not used for simd detection ?
gmxSimdFlags.cmake detects the _flags_ necessary for a SIMD build. It
is gmxDetectSimd.cmake / g
Dear Tatsuro,
Thanks for the contributions!
Do the builds work out cleanly on cygwin? Are there any additional
instructions we should consider including in our installation guide?
Cheers,
--
Szilárd
On Fri, Sep 6, 2019 at 5:46 AM Tatsuro MATSUOKA wrote:
>
> I have prepared gromacs binaries for
Hi,
What does the log file detection output contain? You might have linked
against a CUDA release not compatible with your drivers (e.g. too
recent).
Cheers,
--
Szilárd
On Sun, Sep 8, 2019 at 5:17 PM Mahmood Naderan wrote:
>
> Hi
> With the following config command
> cmake .. -DGMX_GPU=on -DCMA
sting:
https://github.com/ComputationalRadiationPhysics/cuda_memtest
and for memory stress testing:
https://github.com/ComputationalRadiationPhysics/cuda_memtest
Cheers,
--
Szilárd
>
> Any opinion is appreciated,
>
> thanks
>
> Il giorno mercoledì 21 agosto 2019, Szilárd Páll
&g
Hi,
You have 2x Xeon Gold 6150 which is 2x 18 = 36 cores; Intel CPUs
support 2 threads/core (HyperThreading), hence the 72.
https://ark.intel.com/content/www/us/en/ark/products/120490/intel-xeon-gold-6150-processor-24-75m-cache-2-70-ghz.html
You will not be able to scale efficiently over 8 GPUs i
e 1.
If you compare the log files of the two, you should notice that the
former used a pin stride of 2, resulting in the use of 28 cores, while the
latter used only 14 cores; the likely reason for only a small
difference is that there is not enough CPU work to scale to 28 cores
and additionally, these spe
Hi Paul,
Please post log files, otherwise we can only guess what is limiting
the GPU utilization. Otherwise, you should be seeing considerably
higher utilization in single-GPU no-decomposition runs.
Cheers,
--
Szilárd
On Tue, Aug 20, 2019 at 7:01 PM p buscemi wrote:
>
>
> Dear Users,
> I am get
On Mon, Aug 19, 2019 at 12:00 PM tarzan p wrote:
>
> Hi all.I have a dual socket Xeon GOLD 6148 which has the capabilities for
>
> Instruction Set Extensions Intel® SSE4.2, Intel® AVX, Intel® AVX2, Intel®
> AVX-512
> but then why is gromacs giving the error for AVX_512 but takes AVX2_256???
> A
On Mon, Aug 5, 2019 at 5:00 PM Stefano Guglielmo
wrote:
>
> Dear Paul,
> thanks for suggestions. Following them I managed to run 91 ns/day for the
> system I referred to in my previous post with the configuration:
> gmx mdrun -deffnm run -nb gpu -pme gpu -ntomp 4 -ntmpi 7 -npme 1 -gputasks
> 1
Hi,
You can get significantly better performance if you use a more recent
GROMACS version (>=2018) to pick up the improvements to GPU
acceleration (see
https://onlinelibrary.wiley.com/doi/pdf/10.1002/jcc.26011 Fig 7, top
group of bars), but 300 ns/day on a single machine is unlikely with
your syst
Hi,
I recommend that you use fewer MPI ranks and offload PME too manually
(e.g. 4 ranks 3 PP one PME) -- see the manual and recent
conversations on the list related to this topic.
Depending on your system size consider launching two runs side-by-side.
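A sketch of such a launch with the rank split suggested above (the output name is illustrative):

```shell
# 4 thread-MPI ranks total: 3 PP ranks + 1 dedicated PME rank, both offloaded
gmx mdrun -ntmpi 4 -npme 1 -nb gpu -pme gpu -deffnm run
```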
Cheers
--
Szilárd
On Sat, Aug 3, 2019 at 11
19 at 11:34, Carlos Navarro > >
> > > wrote:
> > >
> > >> Hi Szilárd,
> > >> To answer your questions:
> > >> **are you trying to run multiple simulations concurrently on the same
> > >> node or are you trying to strong-scale?
>
/www.dropbox.com/s/7q249vbqqwf5r03/Archive.zip?dl=0.
> > In short, alone.log -> single run in the node (using 1 gpu).
> > multi1/2/3/4.log ->4 independent simulations ran at the same time in a
> > single node. In all cases, 20 cpus are used.
> > Best regards,
> > C
On Thu, Jul 25, 2019 at 11:31 AM amitabh jayaswal
wrote:
>
> Dear All,
> *Namaskar!*
> Can GROMACS be installed and run on a Sun Solaris system?
Hi,
As long as you have modern C++ compilers and toolchain, you should be
able to do so.
> We have a robust IBM Desktop which we intend to dedicatedly
This is an MPI / job scheduler error: you are requesting 2 nodes with
20 processes per node (=40 total), but starting 80 ranks.
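As a sketch, the launch line has to agree with the allocation; with 2 nodes and 20 processes per node the total rank count must be 40 (the deffnm name is hypothetical):

```shell
# 2 nodes x 20 processes/node = 40 MPI ranks; -np must match the allocation
mpirun -np 40 gmx_mpi mdrun -deffnm remd
```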
--
Szilárd
On Thu, Jul 18, 2019 at 8:33 AM Bratin Kumar Das
<177cy500.bra...@nitk.edu.in> wrote:
>
> Hi,
>I am running remd simulation in gromacs-2016.5. After genera
Hi,
It is not clear to me how you are trying to set up your runs, so
please provide some details:
- are you trying to run multiple simulations concurrently on the same
node or are you trying to strong-scale?
- what are you simulating?
- can you provide log files of the runs?
Cheers,
--
Szilárd
ght
concern if you want to scale across many GPUs.
I hope that helps, let me know if you have any other questions!
Cheers,
--
Szilárd
> Thanks again for the interesting information and practical advice on this
> topic.
>
> Mike
>
>
> > On Jul 18, 2019, at 2:21 AM, Szi
Hi,
Are you sure you mean iOS not OS X?
What exactly does not work? An error message / cmake output would be more useful.
cmake generally does detect your system C++ compiler if there is one.
Cheers
--
Szilárd
On Thu, Jul 18, 2019 at 4:55 PM andrew goring
wrote:
> Hi,
>
> I need to install the lates
required
dependencies.
--
Szilárd
>
> -Original Message-
> From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se <
> gromacs.org_gmx-users-boun...@maillist.sys.kth.se> On Behalf Of Szilárd
> Páll
> Sent: Tuesday, 9 July 2019 10:46 PM
> To: Discussion lis
David,
Yes, it is greatly affected. The standard interaction kernels are very
fast, but the free energy kernels are known to not be as efficient as they
could be, and the larger the fraction of atoms involved in perturbed
interactions, the more this work dominates the runtime.
If you are trying to set
Is sphinx detected by cmake though?
--
Szilárd
On Wed, Jul 17, 2019 at 8:00 PM Michael Brunsteiner
wrote:
> hi,so I say:prompt> cmake .. -DGMX_BUILD_OWN_FFTW=ON
> -DCMAKE_C_COMPILER=gcc-7 -DCMAKE_CXX_COMPILER=g++-7 -DGMX_GPU=on
> -DCMAKE_INSTALL_PREFIX=/home/michael/local/gromacs-2019-3-bin
> -
16 cores and 60 PCIe lanes.
Also note that more cores always win when CPU performance matters,
and while 8 cores are generally sufficient, in some use-cases they may not
be (like runs with free energy).
--
Szilárd
On Thu, Jul 18, 2019 at 10:08 AM Szilárd Páll
wrote:
> On Wed, Jul
Mike
>
> -Original Message-
> From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se <
> gromacs.org_gmx-users-boun...@maillist.sys.kth.se> On Behalf Of Szilárd
> Páll
> Sent: Wednesday, July 17, 2019 8:14 AM
> To: Discussion list for GROMACS users
> Subject: [*
en 3900X and two
> Quadro 2080 Ti be a good choice?
>
> Again, thanks!
>
> Alex
>
>
> On 7/16/2019 8:41 AM, Szilárd Páll wrote:
> > Hi Alex,
> >
> > On Mon, Jul 15, 2019 at 8:53 PM Alex wrote:
> >> Hi all and especially Szilard!
> >>
&g
Hi,
Lower performance, especially with GPUs, is not unexpected, but what you
report is unusually large. I suggest you post your mdp and log files;
perhaps there are some things to improve.
--
Szilárd
On Wed, Jul 17, 2019 at 3:47 PM David de Sancho
wrote:
> Hi all
> I have been doing some testing fo
On Wed, Jul 17, 2019 at 2:13 PM Stefano Guglielmo <
stefano.guglie...@unito.it> wrote:
> Hi Benson,
> thanks for your answer and sorry for my delay: in the meantime I had to
> restore the OS. I obviously re-installed NVIDIA driver (430.64) and CUDA
> 10.1, I re-compiled Gromacs 2019.2 with the fol
On Wed, Jul 10, 2019 at 2:18 AM Stefano Guglielmo <
stefano.guglie...@unito.it> wrote:
> Dear all,
> I have a centOS machine equipped with two RTX 2080 cards, with nvidia
> drivers 430.2; I installed cuda toolkit 10-1. when executing mdrun the log
> reported the following message:
>
> GROMACS vers
Hi Alex,
On Mon, Jul 15, 2019 at 8:53 PM Alex wrote:
>
> Hi all and especially Szilard!
>
> My glorious management asked me to post this here. One of our group
> members, an ex-NAMD guy, wants to use Gromacs for biophysics and the
> following basics have been spec'ed for him:
>
> CPU: Xeon Gold 6
PS: have you tried mixed mode PME (-pmefft)? That could avoid the
Apple OpenCL with clFFT issue.
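A sketch of such a mixed-mode run (PME solve stays on the GPU while the 3D-FFT part runs on the CPU, so clFFT is never invoked):

```shell
# Mixed-mode PME: offload PME except its FFT grid stage
gmx mdrun -nb gpu -pme gpu -pmefft cpu -deffnm run
```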
--
Szilárd
On Mon, Jul 15, 2019 at 4:39 PM Szilárd Páll wrote:
>
> Hi,
>
> Thanks for the detailed report. Unfortunately, it seems that there is
> indeed an Apple OpenCL compiler issu
Hi,
Thanks for the detailed report. Unfortunately, it seems that there is
indeed an Apple OpenCL compiler issue with clFFT as you observe, but I
am not convinced this is limited to AMD GPUs. I do not believe you are
getting PME offload with the Intel iGPU: PME offload support is
disabled on Intel
char** argv)
> {
> (void)argv;
> #ifndef CL_VERSION_1_0
> return ((int*)(&CL_VERSION_1_0))[argc];
> #else
> (void)argc;
> return 0;
> #endif
> }
>
>
> Guessing it is time to give up
>
> Cheers
> James
>
>
>
>
> -Original
Dear James,
Unfortunately, we have very little experience with OpenCL on Windows, so I
am afraid I can not advise you on specifics. However, note that the only
part of the former SDK that is needed is the OpenCL headers and loader
libraries (libOpenCL) which is open source software that can be obt
On Fri, May 24, 2019 at 2:37 PM Pragati Sharma
wrote:
> Thanks. Is there any way to check which release of GROMACS supports which
> architecture of NVIDIA GPUs.
>
> On Fri, May 24, 2019 at 4:48 PM Szilárd Páll
> wrote:
>
> > Note that if it is indeed a Quadro 5000 (not
Note that if it is indeed a Quadro 5000 (not P5000, M5000, or K5000) you
will need an older GROMACS release as the architecture of that GPUs has
been deprecated.
--
Szilárd
On Fri, May 24, 2019 at 8:59 AM Pragati Sharma
wrote:
> Hello users,
>
> I am trying to install gromacs-2019 on a HP work
As far as I recall at most 2 ranks were supported; use OpenMP, I suggest.
--
Szilárd
On Sat, May 11, 2019 at 3:11 PM Halima Mouhib wrote:
> Hi,
> I have a question on how to run implicit water simulations using the
> Gromacs5.x series.
> Unfortunately, there is a problem with the domain decomp
That is just a hint; if you measured and you are getting better performance
with 10 threads, use that setting. (Also note that the message suggests "4
to 6" rather than "4 or 6" threads.)
Do you also get the same note with the 2019 release?
--
Szilárd
On Fri, May 10, 2019 at 6:27 PM Téletchéa S
On Thu, May 9, 2019 at 10:01 PM Alex wrote:
> Okay, we're positively unable to run a Gromacs (2019.1) test on Power9. The
> test procedure is simple, using slurm:
> 1. Request an interactive session: > srun -N 1 -n 20 --pty
> --partition=debug --time=1:00:00 --gres=gpu:1 bash
> 2. Load CUDA libra
Share a log file please so we can see the hardware detected, command line
options, etc.
--
Szilárd
On Sun, May 5, 2019 at 3:53 AM Maryam wrote:
> Hello Reza
> Yes I complied it with GPU and the version of CUDA is 9.1. Any suggestions?
> Thanks.
>
> On Sat., May 4, 2019, 1:45 a.m. Reza Esmaeeli,
Power9 (for HPC) is 4-way SMT, so make sure to try 1, 2, and 4 threads per
core (stride 4, 2, and 1 respectively). Especially if you are offloading
all force computation to the GPU, what remains on the CPU may not be able
to benefit from more than 1-2 threads per core.
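As a sketch, the three pinning variants to compare (thread counts are illustrative, assuming a 20-core allocation):

```shell
# 1 thread/core: stride 4
gmx mdrun -ntomp 20 -pin on -pinstride 4 -deffnm run
# 2 threads/core: stride 2
gmx mdrun -ntomp 40 -pin on -pinstride 2 -deffnm run
# 4 threads/core: stride 1
gmx mdrun -ntomp 80 -pin on -pinstride 1 -deffnm run
```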
--
Szilárd
On Thu, May 2, 2
mflex-cf/logs/master/make_2019-04-23.log
>
> make check logs:
>
> https://raw.githubusercontent.com/circumflex-cf/logs/master/makecheck_2019-04-23.log
>
> Also here is one the regression tests that failed.
> https://github.com/circumflex-cf/logs/tree/master/orientation-restrain
ex_2019.2.tar.gz?dl=0
>
> If you can help us figure this out, it will be great!
>
> Thanks,
>
> Alex
>
> On 4/29/2019 4:25 AM, Szilárd Páll wrote:
> > Hi,
> >
> > I assume you used -DREGRESSIONTEST_DOWNLOAD=ON case in which the tests
> are
> > downl
kdan2417/regression_complex_2019.2.tar.gz?dl=0
>
> If you can help us figure this out, it will be great!
>
> Thanks,
>
> Alex
>
> On 4/29/2019 4:25 AM, Szilárd Páll wrote:
> > Hi,
> >
> > I assume you used -DREGRESSIONTEST_DOWNLOAD=ON case in which the tests
the local
> gromacs build directory? I mean, I could make the entire directory a
> tarball, but not sure it's all that necessary. I don't remember which tests
> failed, unfortunately...
>
> Thank you!
>
> Alex
>
> On 4/25/2019 2:54 AM, Szilárd Páll wrote:
> &g
Hi,
That unfortunately looks like Apple's OpenCL not playing well with the
clFFT OpenCL library.
Avoiding offloading PME to the GPU will allow using GPU acceleration, I
think. Can you please try to run a simulation manually and pass "-pme cpu"
on the command line?
Can you also please file a repor
le.
>
> Distributor ID: Ubuntu
>
> Description:Ubuntu 16.04.6 LTS
>
> Release:16.04
>
> Codename: xenial
>
> Ubuntu GLIBC 2.23-0ubuntu11
>
>
> On 4/24/2019 4:57 AM, Szilárd Páll wrote:
> > What OS are you using? There are some known iss
What OS are you using? There are some known issues with the Ubuntu 18.04 +
glibc 2.27 which could explain the errors.
--
Szilárd
On Wed, Apr 17, 2019 at 2:32 AM Alex wrote:
> Okay, more interesting things are happening.
> At the end of 'make' I get a bunch of things like
>
> ..
The warnings are harmless; something happened in the build infrastructure
which emits some new warnings that we had not caught before the release.
--
Szilárd
On Wed, Apr 17, 2019 at 1:43 AM Alex wrote:
> Hi all,
>
> I am building the 2019.2 version, latest CUDA libs (older 2018 version
> works fi
The benchmark systems are the ones commonly used in GROMACS performance
evaluation:
ADH is a 90k/134k-atom system (dodec/cubic) and RNAse is 19k/24k
(dodec/cubic),
both set up with standard AMBER FF settings (references can be found on
this admittedly dated page: http://www.gromacs.org/GPU_acceleration)
umf...@disroot.org> wrote:
> Hello Szilárd,
>
> Do you mean log files created in each regression test?
>
> On 23/04/19 3:43 PM, Szilárd Páll wrote:
> > What is the hardware you are running this on? Can you share a log file,
> > please?
> > --
> > Szilárd
> &
What is the hardware you are running this on? Can you share a log file,
please?
--
Szilárd
On Mon, Apr 22, 2019 at 9:24 AM Cameron Fletcher (CF) <
circumf...@disroot.org> wrote:
> Hello,
>
> I have installed gromacs 2019.1 on CentOS 7.6 .
> While running regressions tests 2019.1 certain tests ar
d CUDA 10.1 (which is
> required for CUDA support of gcc 8), which seemed to fix the problem in
> that case.
>
> If you still have problems we can look at this some more.
>
> Jon
>
> -Original Message-
> From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se <
>
Hi,
One more test that may be relevant, considering that we had a
similar report earlier this year on similar CPU hardware:
can you please compile with -DGMX_SIMD=AVX2_256 and rerun the tests?
--
Szilárd
On Tue, Apr 9, 2019 at 8:35 PM Szilárd Páll wrote:
> Dear Stefanie,
>
&g
hricht-
> Von: gromacs.org_gmx-users-boun...@maillist.sys.kth.se [mailto:
> gromacs.org_gmx-users-boun...@maillist.sys.kth.se] Im Auftrag von Szilárd
> Páll
> Gesendet: Freitag, 29. März 2019 01:24
> An: Discussion list for GROMACS users
> Betreff: Re: [gmx-users] WG: WG: Issu
On Mon, Apr 1, 2019 at 5:08 PM Jochen Hub wrote:
> Hi Åke,
>
> ah, thanks, we had indeed a CUDA 8.0 on our Debian. So we'll try to
> install CUA 10.1.
>
> But as a side question: Doesn't the supported gcc version strongly
> depend on the Linux distribution, see here:
>
> https://docs.nvidia.com/c
:
> https://it-service.zae-bayern.de/Team/index.php/s/mMyt3MPEfRrn8Ge
>
>
> Thanks again for all your support and fingers crossed!
>
> Best wishes,
> Steffi
>
>
>
>
>
> -Ursprüngliche Nachricht-
> Von: gromacs.org_gmx-users-boun...@maillist.sys.kt
lly these tests will shed some light on what the issue is.
Cheers,
--
Szilard
Again, a lot of thank for your support.
> Best wishes,
> Steffi
>
>
>
>
>
>
>
>
>
>
> -Ursprüngliche Nachricht-
> Von: gromacs.org_gmx-users-boun...@maillist.sys.kth.se
le2:1177: recipe for target
> 'CMakeFiles/check.dir/rule' failed
> make[1]: *** [CMakeFiles/check.dir/rule] Error 2
> Makefile:626: recipe for target 'check' failed
> make: *** [check] Error 2
>
> Many thanks again.
> Best wishes,
>