Re: [gmx-users] GROMACS performance issues on POWER9/V100 node

2020-04-24 Thread Szilárd Páll
ool. Please update this thread if you have further findings. Cheers, -- Szilárd On Fri, Apr 24, 2020 at 10:52 PM Szilárd Páll wrote: > > The following lines are found in md.log for the POWER9/V100 run: >> >> Overriding thread affinity set outside gmx mdrun >> Pinning th

Re: [gmx-users] GROMACS performance issues on POWER9/V100 node

2020-04-24 Thread Szilárd Páll
> The following lines are found in md.log for the POWER9/V100 run: > > Overriding thread affinity set outside gmx mdrun > Pinning threads with an auto-selected logical core stride of 128 > NOTE: Thread affinity was not set. > > The full md.log is available here: >

Re: [gmx-users] GROMACS performance issues on POWER9/V100 node

2020-04-24 Thread Szilárd Páll
On Fri, Apr 24, 2020 at 5:55 AM Alex wrote: > Hi Kevin, > > We've been having issues with Power9/V100 very similar to what Jon > described and basically settled on what I believe is sub-par > performance. We tested it on systems with ~30-50K particles and threads > simply cannot be pinned.

Re: [gmx-users] GROMACS performance issues on POWER9/V100 node

2020-04-24 Thread Szilárd Páll
Using a single thread per GPU, as the linked log files show, is not sufficient for GROMACS (and any modern machine should have more than that anyway), but I gather from your mail that this was only meant to debug performance instability? Your performance variations with Power9 may be related to the fact that you

Re: [gmx-users] Spec'ing for new machines (again!)

2020-04-21 Thread Szilárd Páll
Hi, Note that the new generation Ryzen2-based CPUs perform even better than those we benchmarked for that paper. The 3900-series Threadrippers are great for workstations; unless you need the workstation form-factor, you are better off with servers like the TYAN GA88B8021. If so, an EPYC 1P is what

Re: [gmx-users] Disabling MKL

2020-04-21 Thread Szilárd Páll
Configure with -DGMX_EXTERNAL_BLAS=OFF -DGMX_EXTERNAL_LAPACK=OFF Cheers, -- Szilárd On Fri, Apr 17, 2020 at 2:07 PM Mahmood Naderan wrote: > Hi > How can I disable MKL while building gromacs? With this configure command > > cmake .. -DGMX_BUILD_OWN_FFTW=ON -DGMX_GPU=on
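For completeness, a hedged sketch of how the quoted configure line would look with these two flags added (all other options unchanged from your existing build); with external BLAS/LAPACK off, GROMACS should fall back to its bundled routines instead of picking up MKL:

  cmake .. -DGMX_BUILD_OWN_FFTW=ON -DGMX_GPU=on -DGMX_EXTERNAL_BLAS=OFF -DGMX_EXTERNAL_LAPACK=OFF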

Re: [gmx-users] Multi-GPU optimization, "DD without halo exchange is not supported"

2020-04-06 Thread Szilárd Páll
On Fri, Mar 27, 2020 at 8:30 PM Leandro Bortot wrote: > Dear users, > > I'm trying to optimize the execution of a system composed by 10 > million atoms on a multi-GPU machine with GROMACS 2020.1. > I've followed the instructions given at > >

Re: [gmx-users] Unable to compile GROMACS 2020.1 using GNU 7.5.0

2020-04-06 Thread Szilárd Páll
On Sat, Apr 4, 2020 at 10:41 PM Wei-Tse Hsu wrote: > Dear gmx users, > Recently I've been trying to install GROMACS 2020.1. However, I encounter a > compilation error while using the make command. The error is as follows: > > > > > > */usr/bin/ld: cannot find /lib/libpthread.so.0 /usr/bin/ld:

Re: [gmx-users] replica exchange simulations performance issues.

2020-03-31 Thread Szilárd Páll
> Computing:         Ranks Threads   Count   Wall t (s)  Giga-Cycles     %
> (name cut off)        12       1  2502001    2303.327    79965.968  13.6
> PME 3D-FFT            12       1  5004002    2119.410    73580.828  12.5
> PME 3D-FFT Comm.      12       1  5004002     918.318    31881.804   5.4
> PME solve Elec        12       1  2502001     584.446    20290.548   3.5
> -

Re: [gmx-users] replica exchange simulations performance issues.

2020-03-30 Thread Szilárd Páll
On Sun, Mar 29, 2020 at 3:56 AM Miro Astore wrote: > Hi everybody. I've been experimenting with REMD for my system running > on 48 cores with 4 gpus (I will need to scale up to 73 replicas > because this is a complicated system with many DOF I'm open to being > told this is all a silly idea). >

Re: [gmx-users] [gmx-developers] The setup of gpu_id has a bug?

2020-03-10 Thread Szilárd Páll
Hi, Please use the users' mailing list for questions not related to GROMACS development. By default, the "-gpu_id" option takes a sequence of digits corresponding to the numeric identifiers of GPUs. In cases where there are >10 GPUs in a system, a comma-separated string should be used, see
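Purely as an illustration of the two syntaxes (the device IDs and -deffnm name below are hypothetical):

  gmx mdrun -deffnm md -gpu_id 02       (digit string: restrict the run to GPUs 0 and 2)
  gmx mdrun -deffnm md -gpu_id 0,2,11   (comma-separated form, needed once IDs exceed one digit)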

Re: [gmx-users] Performance issues with Gromacs 2020 on GPUs - slower than 2019.5

2020-03-09 Thread Szilárd Páll
; > On 27.02.20 17:59, Szilárd Páll wrote: > > On Thu, Feb 27, 2020 at 1:08 PM Andreas Baer wrote: > >> Hi, >> >> On 27.02.20 12:34, Szilárd Páll wrote: >> > Hi >> > >> > On Thu, Feb 27, 2020 at 11:31 AM Andreas Baer >> wrote:

Re: [gmx-users] Performance issues with Gromacs 2020 on GPUs - slower than 2019.5

2020-02-27 Thread Szilárd Páll
On Thu, Feb 27, 2020 at 1:08 PM Andreas Baer wrote: > Hi, > > On 27.02.20 12:34, Szilárd Páll wrote: > > Hi > > > > On Thu, Feb 27, 2020 at 11:31 AM Andreas Baer > wrote: > > > >> Hi, > >> > >> with the link below, additio

Re: [gmx-users] Performance issues with Gromacs 2020 on GPUs - slower than 2019.5

2020-02-27 Thread Szilárd Páll
.1/release-notes/2018/major/features.html#dual-pair-list-buffer-with-dynamic-pruning) which has additional performance benefits. Cheers, -- Szilárd > I know, about the nstcalcenergy, but > I need it for several of my simulations. Cheers, > Andreas > > On 26.02.20 16:50, Szilárd

Re: [gmx-users] Performance issues with Gromacs 2020 on GPUs - slower than 2019.5

2020-02-26 Thread Szilárd Páll
Hi, Can you please check the performance when running on a single GPU 2019 vs 2020 with your inputs? Also note that you are using some peculiar settings that will have an adverse effect on performance (like manually set rlist disallowing the dual pair-list setup, and nstcalcenergy=1). Cheers,

Re: [gmx-users] Fw: cudaFuncGetAttributes failed: out of memory

2020-02-26 Thread Szilárd Páll
Hi, Indeed, there is an issue with the GPU detection code's consistency checks that trip and abort the run if any of the detected GPUs behaves in unexpected ways (e.g. runs out of memory during checks). This should be fixed in an upcoming release, but until then as you have observed, you can

Re: [gmx-users] GPU considerations for GROMACS

2020-02-24 Thread Szilárd Páll
Hi, Whether to invest in one of the fastest GPUs or two medium/high-end GPUs depends on your workload: system size, type of run, single or multiple simulations, etc. If you have multiple simulations that you can run independently or coupled only weakly in ensemble runs (e.g. using -multidir), multiple

Re: [gmx-users] Regarding to pme on gpu

2020-02-21 Thread Szilárd Páll
Hi. On Tue, Feb 18, 2020 at 5:11 PM Jimmy Chen wrote: > > Hi, > > When set -pme gpu in mdrun, only one rank can be set for pme, -npme 1. What > is the reason about only one rank for pme if use gpu to offload. Is it the > limitation or somehow? This is a limitation of the implementation,

Re: [gmx-users] Fwd: Compiling with OpenCL for Macbook AMD Radeon Pro 560 GPU

2020-02-18 Thread Szilárd Páll
Hi Oliver, Does this affect an installation of GROMACS? In previous reports we have observed that the issue is only present when running "make check" in the build tree, but not in the case of an installed version. Cheers, -- Szilárd On Mon, Feb 17, 2020 at 7:58 PM Oliver Dutton wrote: >

Re: [gmx-users] REMD stall out

2020-02-17 Thread Szilárd Páll
Hi Dan, What you describe is not expected behavior and it is something we should look into. What GROMACS version were you using? One thing that may help diagnosing the issue is: try to disable replica exchange and run -multidir that way. Does the simulation proceed? Can you please open an

Re: [gmx-users] REMD stall out

2020-02-17 Thread Szilárd Páll
Hi, If I understand correctly your jobs stall, what is in the log output? What about the console? Does this happen without PLUMED? -- Szilárd On Tue, Feb 11, 2020 at 7:56 PM Daniel Burns wrote: > Hi, > > I continue to have trouble getting an REMD job to run. It never makes it > to the point

Re: [gmx-users] gromacs-2020 build gcc/nvcc error

2020-01-31 Thread Szilárd Páll
gcc g++ build-essentials. Then I used gcc-5 g++-5 and > specified the version in the build step, which failed. after taking that > out and running sudo apt-get install gcc-9 g++-9 it passes "CMAKE" but > fails in "make". Based on your suggestions I ran the commands at the t

Re: [gmx-users] gromacs-2020 build gcc/nvcc error

2020-01-30 Thread Szilárd Páll
PATH=/usr/bin/g++-5 -DCUDA_HOST_COMPILER=gcc-5 > -DCMAKE_CXX_COMPILER=g++-5 -DCMAKE_C_COMPILER=/usr/bin/gcc-5 > -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_GPU=ON > -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda/ -DCMAKE_BUILD_TYPE=Debug > -D_FORCE_INLINES=OFF > > Received diff

Re: [gmx-users] gromacs-2020 build gcc/nvcc error

2020-01-29 Thread Szilárd Páll
Hi Ryan, The issue you linked has been worked around in the build system, so my guess is that the issue you are seeing is not related. I would recommend that you update your software stack to the latest version (both CUDA 9.1 and gcc 5 are a few years old). On Ubuntu 18.04 you should be able to

Re: [gmx-users] Error: Cannot find AVX 512F compiler flag

2020-01-15 Thread Szilárd Páll
Hi, What hardware are you targeting? Unless you need AVX512 support, you could just manually specify the appropriate setting in GMX_SIMD, e.g. -DGMX_SIMD=AVX2_256 would be appropriate for most cases where AVX512 is not supported. Cheers, -- Szilárd On Wed, Jan 15, 2020 at 9:51 AM Shlomit Afgin
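A minimal sketch of such a configure call, keeping whatever other options the existing build already uses:

  cmake .. -DGMX_SIMD=AVX2_256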

Re: [gmx-users] Gromacs 2019 - Ryzen Architecture

2020-01-09 Thread Szilárd Páll
Good catch Kevin, that is likely an issue -- at least part of it. Note that you can also use the mdrun -multidir functionality to avoid having to manually manage mdrun process placement and pinning. Another aspect is that if you leave half of the CPU cores unused, the cores in use can boost to a
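A hedged illustration of the -multidir approach (an MPI-enabled build launched through mpirun is assumed, and the directory names are placeholders); one launch drives all runs and mdrun takes care of placement and pinning:

  mpirun -np 4 gmx_mpi mdrun -multidir sim1 sim2 sim3 sim4 -pin on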

Re: [gmx-users] is GPU peer access(RDMA) supported with inter-node and gmx2020 mpi version?

2020-01-09 Thread Szilárd Páll
On Wed, Jan 8, 2020 at 5:00 PM Jimmy Chen wrote: > Hi, > > is GPU peer access(RDMA) supported with inter-node and gmx2020 mpi version > on NVidia GPU? > No, that is currently not implemented. Cheers, -- Szilárd or just work only in single-node with threadMPI via Nvidia GPU direct? > > Thanks,

Re: [gmx-users] Gromacs 2019.4 - cudaStreamSynchronize failed issue

2019-12-05 Thread Szilárd Páll
tests did have the potential energy jump issue and they were > running on 5 different nodes. > So I tend to believe this issue happens on any of those nodes. > > On Wed, Dec 4, 2019 at 1:14 PM Szilárd Páll > wrote: > > > The fact that you are observing errors alo the

Re: [gmx-users] Gromacs 2019.4 - cudaStreamSynchronize failed issue

2019-12-04 Thread Szilárd Páll
The fact that you are observing errors, and also the energies being off by so much, and that it reproduces with multiple inputs, suggests that this may not be a code issue. Did you do all runs that failed on the same hardware? Have you excluded the option that one of those GeForce cards may be flaky? --

Re: [gmx-users] Fatal Error when launching gromacs 2019.2 on GPU.

2019-10-28 Thread Szilárd Páll
> > Artem > > > On Sat, Oct 26, 2019 at 1:50 AM Szilárd Páll > wrote: > > > Hi, > > > > This is an issue in one of the pre-detection checks that trips due to > > encountering exclusive / prohibited mode devices. > > > > You can work around this by en

Re: [gmx-users] Reg: GPU use

2019-10-28 Thread Szilárd Páll
Dear Bidhan Chandra Garain, Please share the log files of your benchmarks, that will help us better identify if there is an issue and what the issue is. Thanks, -- Szilárd On Mon, Oct 28, 2019 at 8:51 AM Bidhan Chandra Garain wrote: > Respected Sir, > In my lab we have recently installed a

Re: [gmx-users] Fatal Error when launching gromacs 2019.2 on GPU.

2019-10-25 Thread Szilárd Páll
Hi, This is an issue in one of the pre-detection checks that trips due to encountering exclusive / prohibited mode devices. You can work around this by entirely disabling the detection using the GMX_DISABLE_GPU_DETECTION environment variable. Cheers, -- Szilárd On Thu, Oct 17, 2019 at 5:01 PM
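For example (the -deffnm name is a placeholder), the variable only has to be set in the environment of the run:

  GMX_DISABLE_GPU_DETECTION=1 gmx mdrun -deffnm md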

Re: [gmx-users] Fatal Error when launching gromacs 2019.2 on GPU.

2019-10-22 Thread Szilárd Páll
Hi, Can you please file an issue on redmine.gromacs.org with the description you gave here? Thanks, -- Szilárd On Thu, Oct 17, 2019 at 5:01 PM Artem Shekhovtsov wrote: > Hello! > Problem: The launch of mdrun that does not require video cards exit with > fatal error if at least one video card

Re: [gmx-users] GROMACS showing error

2019-10-22 Thread Szilárd Páll
Hi, Please direct GROMACS usage questions to the users' list. Replying there, make sure you are subscribed and continue the conversation there. The issue is that you requested static library detection, but the hwloc library dependencies are not correctly added to the GROMACS link dependencies.

Re: [gmx-users] Question about default auto setting of mdrun -pin

2019-10-18 Thread Szilárd Páll
On Fri, Oct 18, 2019 at 4:36 PM wrote: > On Thu, Oct 17, 2019 at 10:34:39AM +, Kutzner, Carsten wrote: > > > > is it intended that the thread-MPI version of mdrun 2018 does pin to its > core > > if started with -nt 1 -pin auto? > No, I don't think that's intended. > > I think I have a

Re: [gmx-users] Help with a failing test - gromacs 2019.4 - Test 42

2019-10-16 Thread Szilárd Páll
Hi, The issue is an internal error triggered by the domain decomposition not liking 14 cores in your CPU, which leads to a prime rank count. To ensure the tests pass, I suggest trying to force only one device to be used in make check, e.g. CUDA_VISIBLE_DEVICES=0 make check; alternatively you can run

Re: [gmx-users] [Performance] poor performance with NV V100

2019-10-16 Thread Szilárd Páll
> http://manual.gromacs.org/documentation/2020-beta1/release-notes/2019/2019.4.html > > anyway, I will have a try on 2019.4 later. > > looking forward to check new feature which will be on 2/3 beta release of > 2020. > > Best regards, > Jimmy > > > Szilárd Páll on 8 October 2019

Re: [gmx-users] [Performance] poor performance with NV V100

2019-10-08 Thread Szilárd Páll
Hi, Can you please share your log files? We may be able to help with spotting performance issues or bottlenecks. However, note that NVIDIA are the best source to aid you with reproducing their benchmark numbers. Scaling across multiple GPUs requires some tuning of command line options,

Re: [gmx-users] SIMD options - detection program issue

2019-09-20 Thread Szilárd Páll
Hi, Good to know your system instability issues were resolved. (As a side-note you could have tried to use elrepo which has newer kernels for CentOS.) The SIMD detection should however not be failing; can you please file an issue on redmine.gromacs.org with cmake invocation, CMakeCache.txt and

Re: [gmx-users] Tesla GPUs: P40 or P100?

2019-09-19 Thread Szilárd Páll
Hi, I strongly recommend the Quadro RTX series, 6000 or 5000. These should not be a lot more expensive, but will be a lot faster than the Pascal generation cards. For comparisons see our recent paper: https://doi.org/10.1002/jcc.26011 Cheers, -- Szilárd On Thu, Sep 19, 2019, 09:50 Matteo

Re: [gmx-users] Fwd: SIMD options

2019-09-12 Thread Szilárd Páll

Re: [gmx-users] Fwd: SIMD options

2019-09-12 Thread Szilárd Páll

Re: [gmx-users] Fwd: SIMD options

2019-09-12 Thread Szilárd Páll
On Thu, Sep 12, 2019 at 8:58 AM Stefano Guglielmo wrote: > > I apologize for the mistake, there was a typo in the subject that could be > misleading, so I re-post with the correct subject, > sorry. > > -- Forwarded message - > From: Stefano Guglielmo > Date: Wed 11 Sep 2019 at

Re: [gmx-users] How can Build gromacs using MSVC on Win64 with AVX2?

2019-09-12 Thread Szilárd Páll
On Thu, Sep 12, 2019 at 9:29 AM Tatsuro MATSUOKA wrote: > > >Those are kernels for legacy code that never use such simd anywhere > Doy you mean that gmxSimdFlags.cmake is not used for simd detection ? gmxSimdFlags.cmake detects the _flags_ necessary for a SIMD build. It is gmxDetectSimd.cmake /

Re: [gmx-users] gromacs binaries for windows (Cygwin 64)

2019-09-10 Thread Szilárd Páll
Dear Tatsuro, Thanks for the contributions! Do the builds work out cleanly on cygwin? Are there any additional instructions we should consider including in our installation guide? Cheers, -- Szilárd On Fri, Sep 6, 2019 at 5:46 AM Tatsuro MATSUOKA wrote: > > I have prepared gromacs binaries

Re: [gmx-users] Cannot run short-ranged nonbonded interactions on a GPU

2019-09-09 Thread Szilárd Páll
Hi, What does the log file detection output contain? You might have linked against a CUDA release not compatible with your drivers (e.g. too recent). Cheers, -- Szilárd On Sun, Sep 8, 2019 at 5:17 PM Mahmood Naderan wrote: > > Hi > With the following config command > cmake .. -DGMX_GPU=on

Re: [gmx-users] simulation on 2 gpus

2019-09-06 Thread Szilárd Páll
sting: https://github.com/ComputationalRadiationPhysics/cuda_memtest and for memory stress testing: https://github.com/ComputationalRadiationPhysics/cuda_memtest Cheers, -- Szilárd > > Any opinion is appreciated, > > thanks > > Il giorno mercoledì 21 agosto 2019, Szilárd Páll &g

Re: [gmx-users] The problem of utilizing multiple GPU

2019-09-05 Thread Szilárd Páll
Hi, You have 2x Xeon Gold 6150 which is 2x 18 = 36 cores; Intel CPUs support 2 threads/core (HyperThreading), hence the 72. https://ark.intel.com/content/www/us/en/ark/products/120490/intel-xeon-gold-6150-processor-24-75m-cache-2-70-ghz.html You will not be able to scale efficiently over 8 GPUs

Re: [gmx-users] simulation on 2 gpus

2019-08-21 Thread Szilárd Páll
. If you compare the log files of the two, you should notice that the former used a pinstride of 2, resulting in the use of 28 cores, while the latter used only 14 cores; the likely reason for only a small difference is that there is not enough CPU work to scale to 28 cores and additionally, these specific

Re: [gmx-users] gpu usage

2019-08-21 Thread Szilárd Páll
Hi Paul, Please post log files, otherwise we can only guess what is limiting the GPU utilization. Otherwise, you should be seeing considerably higher utilization in single-GPU no-decomposition runs. Cheers, -- Szilárd On Tue, Aug 20, 2019 at 7:01 PM p buscemi wrote: > > > Dear Users, > I am

Re: [gmx-users] AVX_512 and GROMACS

2019-08-20 Thread Szilárd Páll
On Mon, Aug 19, 2019 at 12:00 PM tarzan p wrote: > > Hi all.I have a dual socket Xeon GOLD 6148 which has the capabilities for > > Instruction Set Extensions Intel® SSE4.2, Intel® AVX, Intel® AVX2, Intel® > AVX-512 > but hen why si gromacs giving the error for AVX_512 but takes AVX2_256??? >

Re: [gmx-users] simulation on 2 gpus

2019-08-16 Thread Szilárd Páll
On Mon, Aug 5, 2019 at 5:00 PM Stefano Guglielmo wrote: > > Dear Paul, > thanks for suggestions. Following them I managed to run 91 ns/day for the > system I referred to in my previous post with the configuration: > gmx mdrun -deffnm run -nb gpu -pme gpu -ntomp 4 -ntmpi 7 -npme 1 -gputasks >
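The quoted command is cut off at -gputasks; purely as a hypothetical illustration of the syntax (not the poster's actual string), a 7-rank layout (6 PP + 1 PME) mapped onto two GPUs could look like:

  gmx mdrun -deffnm run -nb gpu -pme gpu -ntmpi 7 -ntomp 4 -npme 1 -gputasks 0001111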

Re: [gmx-users] best performance on GPU

2019-08-12 Thread Szilárd Páll
Hi, You can get significantly better performance if you use a more recent GROMACS version (>=2018) to pick up the improvements to GPU acceleration (see https://onlinelibrary.wiley.com/doi/pdf/10.1002/jcc.26011 Fig 7, top group of bars), but 300 ns/day on a single machine is unlikely with your

Re: [gmx-users] Is it possible to control GPU utilizations when running two simulations in one workstation?

2019-08-12 Thread Szilárd Páll
Hi, I recommend that you use fewer MPI ranks and offload PME too, set up manually (e.g. 4 ranks: 3 PP + 1 PME) -- see the manual and recent conversations on the list related to this topic. Depending on your system size, consider launching two runs side-by-side. Cheers -- Szilárd On Sat, Aug 3, 2019 at
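A sketch of the suggested layout under those assumptions (the per-rank thread count is a placeholder), with one dedicated PME rank out of four:

  gmx mdrun -ntmpi 4 -npme 1 -nb gpu -pme gpu -ntomp 6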

Re: [gmx-users] performance issues running gromacs with more than 1 gpu card in slurm

2019-07-30 Thread Szilárd Páll
> > > wrote: > > > > > >> Hi Szilárd, > > >> To answer your questions: > > >> **are you trying to run multiple simulations concurrently on the same > > >> node or are you trying to strong-scale? > > >> I'm trying to run multip

Re: [gmx-users] performance issues running gromacs with more than 1 gpu card in slurm

2019-07-29 Thread Szilárd Páll
x.com/s/7q249vbqqwf5r03/Archive.zip?dl=0. > > In short, alone.log -> single run in the node (using 1 gpu). > > multi1/2/3/4.log -> 4 independent simulations ran at the same time in a > > single node. In all cases, 20 cpus are used. > > Best regards, > > Carlos >

Re: [gmx-users] Sun Solaris

2019-07-25 Thread Szilárd Páll
On Thu, Jul 25, 2019 at 11:31 AM amitabh jayaswal wrote: > > Dear All, > *Namaskar!* > Can GROMACS be installed and run on a Sun Solaris system? Hi, As long as you have modern C++ compilers and toolchain, you should be able to do so. > We have a robust IBM Desktop which we intend to

Re: [gmx-users] remd error

2019-07-25 Thread Szilárd Páll
This is an MPI / job scheduler error: you are requesting 2 nodes with 20 processes per node (=40 total), but starting 80 ranks. -- Szilárd On Thu, Jul 18, 2019 at 8:33 AM Bratin Kumar Das <177cy500.bra...@nitk.edu.in> wrote: > > Hi, >I am running remd simulation in gromacs-2016.5. After
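To make the numbers consistent, the rank count passed to the MPI launcher has to match nodes x tasks-per-node; a hedged sketch for the quoted 2 x 20 allocation (replica directories and exchange interval are illustrative, one rank per replica):

  #SBATCH --nodes=2
  #SBATCH --ntasks-per-node=20
  mpirun -np 40 gmx_mpi mdrun -multidir equil_{0..39} -replex 1000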

Re: [gmx-users] performance issues running gromacs with more than 1 gpu card in slurm

2019-07-25 Thread Szilárd Páll
Hi, It is not clear to me how are you trying to set up your runs, so please provide some details: - are you trying to run multiple simulations concurrently on the same node or are you trying to strong-scale? - what are you simulating? - can you provide log files of the runs? Cheers, -- Szilárd

[gmx-users] older server CPUs with recent GPUs for GROMACS

2019-07-25 Thread Szilárd Páll
you want to scale across many GPUs. I hope that helps, let me know if you have any other questions! Cheers, -- Szilárd > Thanks again for the interesting information and practical advice on this > topic. > > Mike > > > > On Jul 18, 2019, at 2:21 AM, Szilárd Páll wrot

Re: [gmx-users] Need to install latest Gromacs on ios

2019-07-18 Thread Szilárd Páll
Hi, Are you sure you mean iOS, not OS X? What exactly does not work? An error message / cmake output would be more useful. cmake generally does detect your system C++ compiler if there is one. Cheers -- Szilárd On Thu, Jul 18, 2019 at 4:55 PM andrew goring wrote: > Hi, > > I need to install the

Re: [gmx-users] Install on Windows 10 with AMD GPU

2019-07-18 Thread Szilárd Páll
required dependencies. -- Szilárd > > -Original Message- > From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se < > gromacs.org_gmx-users-boun...@maillist.sys.kth.se> On Behalf Of Szilárd > Páll > Sent: Tuesday, 9 July 2019 10:46 PM > To: Discussion lis

Re: [gmx-users] decreased performance with free energy

2019-07-18 Thread Szilárd Páll
David, Yes, it is greatly affected. The standard interaction kernels are very fast, but the free energy kernels are known to not be as efficient as they could be, and the larger the fraction of atoms involved in perturbed interactions, the more this work dominates the runtime. If you are trying to

Re: [gmx-users] make manual fails

2019-07-18 Thread Szilárd Páll
Is sphinx detected by cmake though? -- Szilárd On Wed, Jul 17, 2019 at 8:00 PM Michael Brunsteiner wrote: > hi,so I say:prompt> cmake .. -DGMX_BUILD_OWN_FFTW=ON > -DCMAKE_C_COMPILER=gcc-7 -DCMAKE_CXX_COMPILER=g++-7 -DGMX_GPU=on > -DCMAKE_INSTALL_PREFIX=/home/michael/local/gromacs-2019-3-bin >

Re: [gmx-users] Xeon Gold + RTX 5000

2019-07-18 Thread Szilárd Páll
16 cores and 60 PCIe lanes. Also note that more cores always win when the CPU performance matters, and while 8 cores are generally sufficient, in some use-cases they may not be (like runs with free energy). -- Szilárd On Thu, Jul 18, 2019 at 10:08 AM Szilárd Páll wrote: > On Wed, Jul

Re: [gmx-users] Xeon Gold + RTX 5000

2019-07-18 Thread Szilárd Páll
> -Original Message- > From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se < > gromacs.org_gmx-users-boun...@maillist.sys.kth.se> On Behalf Of Szilárd > Páll > Sent: Wednesday, July 17, 2019 8:14 AM > To: Discussion list for GROMACS users > Subject: [**EXTERNAL*

Re: [gmx-users] Xeon Gold + RTX 5000

2019-07-17 Thread Szilárd Páll
o > Quadro 2080 Ti be a good choice? > > Again, thanks! > > Alex > > > On 7/16/2019 8:41 AM, Szilárd Páll wrote: > > Hi Alex, > > > > On Mon, Jul 15, 2019 at 8:53 PM Alex wrote: > >> Hi all and especially Szilard! > >> > >> My

Re: [gmx-users] decreased performance with free energy

2019-07-17 Thread Szilárd Páll
Hi, Lower performance, especially with GPUs, is not unexpected, but what you report is unusually large. I suggest you post your mdp and log file; perhaps there are some things to improve. -- Szilárd On Wed, Jul 17, 2019 at 3:47 PM David de Sancho wrote: > Hi all > I have been doing some testing

Re: [gmx-users] rtx 2080 gpu

2019-07-17 Thread Szilárd Páll
On Wed, Jul 17, 2019 at 2:13 PM Stefano Guglielmo < stefano.guglie...@unito.it> wrote: > Hi Benson, > thanks for your answer and sorry for my delay: in the meantime I had to > restore the OS. I obviously re-installed NVIDIA driver (430.64) and CUDA > 10.1, I re-compiled Gromacs 2019.2 with the

Re: [gmx-users] rtx 2080 gpu

2019-07-17 Thread Szilárd Páll
On Wed, Jul 10, 2019 at 2:18 AM Stefano Guglielmo < stefano.guglie...@unito.it> wrote: > Dear all, > I have a centOS machine equipped with two RTX 2080 cards, with nvidia > drivers 430.2; I installed cuda toolkit 10-1. when executing mdrun the log > reported the following message: > > GROMACS

Re: [gmx-users] Xeon Gold + RTX 5000

2019-07-16 Thread Szilárd Páll
Hi Alex, On Mon, Jul 15, 2019 at 8:53 PM Alex wrote: > > Hi all and especially Szilard! > > My glorious management asked me to post this here. One of our group > members, an ex-NAMD guy, wants to use Gromacs for biophysics and the > following basics have been spec'ed for him: > > CPU: Xeon Gold

Re: [gmx-users] GPU support on macOS 10.14

2019-07-15 Thread Szilárd Páll
PS: have you tried mixed mode PME (-pmefft)? That could avoid the Apple OpenCL with clFFT issue. -- Szilárd On Mon, Jul 15, 2019 at 4:39 PM Szilárd Páll wrote: > > Hi, > > Thanks for the detailed report. Unfortunately, it seems that there is > indeed an Apple OpenCL compiler i
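A sketch of mixed-mode PME: the spread/gather work stays on the GPU while the 3D FFTs run on the CPU, which sidesteps clFFT:

  gmx mdrun -nb gpu -pme gpu -pmefft cpu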

Re: [gmx-users] GPU support on macOS 10.14

2019-07-15 Thread Szilárd Páll
Hi, Thanks for the detailed report. Unfortunately, it seems that there is indeed an Apple OpenCL compiler issue with clFFT as you observe, but I am not convinced this is limited to AMD GPUs. I do not believe you are getting PME offload with the Intel iGPU: PME offload support is disabled on Intel

Re: [gmx-users] Install on Windows 10 with AMD GPU

2019-07-09 Thread Szilárd Páll
argv) > { > (void)argv; > #ifndef CL_VERSION_1_0 > return ((int*)(&CL_VERSION_1_0))[argc]; > #else > (void)argc; > return 0; > #endif > } > > > Guessing it is time to give up > > Cheers > James > > > > > -Original Message- > F

Re: [gmx-users] Install on Windows 10 with AMD GPU

2019-07-05 Thread Szilárd Páll
Dear James, Unfortunately, we have very little experience with OpenCL on Windows, so I am afraid I can not advise you on specifics. However, note that the only part of the former SDK that is needed is the OpenCL headers and loader libraries (libOpenCL) which is open source software that can be

Re: [gmx-users] using GPU acceleration in gromacs

2019-05-27 Thread Szilárd Páll
On Fri, May 24, 2019 at 2:37 PM Pragati Sharma wrote: > Thanks. Is there any way to check which release of GROMACS supports which > architecture of NVIDIA GPUs. > > On Fri, May 24, 2019 at 4:48 PM Szilárd Páll > wrote: > > > Note that if it is indeed a Quadro

Re: [gmx-users] using GPU acceleration in gromacs

2019-05-24 Thread Szilárd Páll
Note that if it is indeed a Quadro 5000 (not P5000, M5000, or K5000) you will need an older GROMACS release as the architecture of that GPU has been deprecated. -- Szilárd On Fri, May 24, 2019 at 8:59 AM Pragati Sharma wrote: > Hello users, > > I am trying to install gromacs-2019 on a HP

Re: [gmx-users] implicit water simulation, Gromacs5.x, point decomposition

2019-05-21 Thread Szilárd Páll
As far as I recall at most 2 ranks were supported; use OpenMP, I suggest. -- Szilárd On Sat, May 11, 2019 at 3:11 PM Halima Mouhib wrote: > Hi, > I have a question on how to run implicit water simulations using the > Gromacs5.x series. > Unfortunately, there is a problem with the domain

Re: [gmx-users] NTMPI / NTOMP combination: 10 threads not "reasonable" for GROMACS?

2019-05-10 Thread Szilárd Páll
That is just a hint, if you measured and you are getting better performance with 10 threads, use that setting. (Also note that the message suggests "4 to 6" rather than "4 or 6" threads). Do you also get the same note with the 2019 release? -- Szilárd On Fri, May 10, 2019 at 6:27 PM Téletchéa

Re: [gmx-users] Gromacs 2019.2 on Power9 + Volta GPUs (building and running)

2019-05-09 Thread Szilárd Páll
On Thu, May 9, 2019 at 10:01 PM Alex wrote: > Okay, we're positively unable to run a Gromacs (2019.1) test on Power9. The > test procedure is simple, using slurm: > 1. Request an interactive session: > srun -N 1 -n 20 --pty > --partition=debug --time=1:00:00 --gres=gpu:1 bash > 2. Load CUDA

Re: [gmx-users] gmx mdrun with gpu

2019-05-06 Thread Szilárd Páll
Share a log file please so we can see the hardware detected, command line options, etc. -- Szilárd On Sun, May 5, 2019 at 3:53 AM Maryam wrote: > Hello Reza > Yes I complied it with GPU and the version of CUDA is 9.1. Any suggestions? > Thanks. > > On Sat., May 4, 2019, 1:45 a.m. Reza

Re: [gmx-users] Gromacs 2019.2 on Power9 + Volta GPUs (building and running)

2019-05-02 Thread Szilárd Páll
Power9 (for HPC) is 4-way SMT, so make sure to try 1, 2, and 4 threads per core (stride 4, 2, and 1 respectively). Especially if you are offloading all force computation to the GPU, what remains on the CPU may not be able to benefit from more than 1-2 threads per core. -- Szilárd On Thu, May 2,
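A hedged sketch of scanning those settings on a 4-way SMT Power9 node, assuming 20 physical cores are allocated (thread counts and -deffnm name are placeholders; -pinstride 4/2/1 corresponds to 1/2/4 hardware threads per core):

  gmx mdrun -deffnm md -pin on -pinstride 4 -ntomp 20
  gmx mdrun -deffnm md -pin on -pinstride 2 -ntomp 40
  gmx mdrun -deffnm md -pin on -pinstride 1 -ntomp 80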

Re: [gmx-users] Failed tests, need help in troubleshooting

2019-05-01 Thread Szilárd Páll
ogs/master/make_2019-04-23.log > > make check logs: > > https://raw.githubusercontent.com/circumflex-cf/logs/master/makecheck_2019-04-23.log > > Also here is one of the regression tests that failed. > https://github.com/circumflex-cf/logs/tree/master/orientation-restraints >

Re: [gmx-users] 2019.2 build warnings

2019-05-01 Thread Szilárd Páll
tar.gz?dl=0 > > If you can help us figure this out, it will be great! > > Thanks, > > Alex > > On 4/29/2019 4:25 AM, Szilárd Páll wrote: > > Hi, > > > > I assume you used -DREGRESSIONTEST_DOWNLOAD=ON case in which the tests > are > > do

Re: [gmx-users] 2019.2 build warnings

2019-04-30 Thread Szilárd Páll
egression_complex_2019.2.tar.gz?dl=0 > > If you can help us figure this out, it will be great! > > Thanks, > > Alex > > On 4/29/2019 4:25 AM, Szilárd Páll wrote: > > Hi, > > > > I assume you used -DREGRESSIONTEST_DOWNLOAD=ON case in which the tests > are &

Re: [gmx-users] 2019.2 build warnings

2019-04-29 Thread Szilárd Páll
local > gromacs build directory? I mean, I could make the entire directory a > tarball, but not sure it's all that necessary. I don't remember which tests > failed, unfortunately... > > Thank you! > > Alex > > On 4/25/2019 2:54 AM, Szilárd Páll wrote: > > Hi Alex, >

Re: [gmx-users] clFFT error on iMAC 2017, Gromacs 2019.2, Intel Core i5, GPU Radeon Pro 555 2GB

2019-04-25 Thread Szilárd Páll
Hi, That unfortunately looks like Apple's OpenCL not playing well with the clFFT OpenCL library. Avoiding offloading PME to the GPU will allow using GPU acceleration, I think. Can you please try to run a simulation manually and pass "-pme cpu" on the command line? Can you also please file a
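A sketch of the suggested manual run (the -deffnm name is a placeholder), keeping the short-ranged work on the GPU while avoiding the clFFT-based PME offload:

  gmx mdrun -deffnm md -nb gpu -pme cpu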

Re: [gmx-users] 2019.2 build warnings

2019-04-25 Thread Szilárd Páll
le. > > Distributor ID: Ubuntu > > Description:Ubuntu 16.04.6 LTS > > Release:16.04 > > Codename: xenial > > Ubuntu GLIBC 2.23-0ubuntu11 > > > On 4/24/2019 4:57 AM, Szilárd Páll wrote: > > What OS are you using? There are some known iss

Re: [gmx-users] 2019.2 build warnings

2019-04-24 Thread Szilárd Páll
What OS are you using? There are some known issues with Ubuntu 18.04 + glibc 2.27 which could explain the errors. -- Szilárd On Wed, Apr 17, 2019 at 2:32 AM Alex wrote: > Okay, more interesting things are happening. > At the end of 'make' I get a bunch of things like > > ..

Re: [gmx-users] 2019.2 build warnings

2019-04-24 Thread Szilárd Páll
The warnings are harmless; something happened in the build infrastructure which emits some new warnings that we had not caught before the release. -- Szilárd On Wed, Apr 17, 2019 at 1:43 AM Alex wrote: > Hi all, > > I am building the 2019.2 version, latest CUDA libs (older 2018 version > works

Re: [gmx-users] Gromacs Benchmarks for NVIDIA GeForce RTX 2080

2019-04-24 Thread Szilárd Páll
The benchmark systems are the ones commonly used in GROMACS performance evaluation: ADH is a 90k/134k-atom system (dodec/cubic) and RNAse is 19k/24k (dodec/cubic), both set up with AMBER FF standard settings (references can be found on this admittedly dated page: http://www.gromacs.org/GPU_acceleration)

Re: [gmx-users] Failed tests, need help in troubleshooting

2019-04-23 Thread Szilárd Páll
umf...@disroot.org> wrote: > Hello Szilárd, > > Do you mean log files created in each regression test? > > On 23/04/19 3:43 PM, Szilárd Páll wrote: > > What is the hardware you are running this on? Can you share a log file, > > please? > > -- > > Szilárd >

Re: [gmx-users] Failed tests, need help in troubleshooting

2019-04-23 Thread Szilárd Páll
What is the hardware you are running this on? Can you share a log file, please? -- Szilárd On Mon, Apr 22, 2019 at 9:24 AM Cameron Fletcher (CF) < circumf...@disroot.org> wrote: > Hello, > > I have installed gromacs 2019.1 on CentOS 7.6 . > While running regressions tests 2019.1 certain tests

Re: [gmx-users] WG: WG: Issue with CUDA and gromacs

2019-04-10 Thread Szilárd Páll
d CUDA 10.1 (which is > required for CUDA support of gcc 8), which seemed to fix the problem in > that case. > > If you still have problems we can look at this some more. > > Jon > > -Original Message- > From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se < >

Re: [gmx-users] WG: WG: Issue with CUDA and gromacs

2019-04-09 Thread Szilárd Páll
Hi, One more test I realized may be relevant, considering that we had a similar report earlier this year on similar CPU hardware: can you please compile with -DGMX_SIMD=AVX2_256 and rerun the tests? -- Szilárd On Tue, Apr 9, 2019 at 8:35 PM Szilárd Páll wrote: > Dear Stefanie, >

Re: [gmx-users] WG: WG: Issue with CUDA and gromacs

2019-04-09 Thread Szilárd Páll
-Original Message- > From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se [mailto: > gromacs.org_gmx-users-boun...@maillist.sys.kth.se] On behalf of Szilárd > Páll > Sent: Friday, 29 March 2019 01:24 > To: Discussion list for GROMACS users > Subject: Re: [gmx-users] WG: WG: Issu

Re: [gmx-users] Installation with CUDA on Debian / gcc 6+

2019-04-01 Thread Szilárd Páll
On Mon, Apr 1, 2019 at 5:08 PM Jochen Hub wrote: > Hi Åke, > > ah, thanks, we had indeed a CUDA 8.0 on our Debian. So we'll try to > install CUDA 10.1. > > But as a side question: Doesn't the supported gcc version strongly > depend on the Linux distribution, see here: > >

Re: [gmx-users] WG: WG: Issue with CUDA and gromacs

2019-03-29 Thread Szilárd Páll
it-service.zae-bayern.de/Team/index.php/s/mMyt3MPEfRrn8Ge > > Thanks again for all your support and fingers crossed! > Best wishes, > Steffi > > > > > > -----Original Message- > From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se [mailto:

Re: [gmx-users] WG: WG: Issue with CUDA and gromacs

2019-03-27 Thread Szilárd Páll
will shed some light on what the issue is. Cheers, -- Szilard Again, a lot of thanks for your support. > Best wishes, > Steffi > > > > > > > > > -Original Message- > From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se [mailto:

Re: [gmx-users] WG: WG: Issue with CUDA and gromacs

2019-03-26 Thread Szilárd Páll
t; 'CMakeFiles/check.dir/rule' failed > make[1]: *** [CMakeFiles/check.dir/rule] Error 2 > Makefile:626: recipe for target 'check' failed > make: *** [check] Error 2 > > Many thanks again. > Best wishes, > Steffi > > > > > > -Ursprüng
