Re: [gmx-users] Problem with CUDA

2018-04-06 Thread Szilárd Páll
Hi, What is the reason for using the custom CMake options? What's the -rdc=true for -- I don't think it's needed and it can very well be causing the issue. Have you tried to actually do an as-vanilla-as-possible build? -- Szilárd On Thu, Apr 5, 2018 at 6:52 PM, Borchert, Christopher B ERDC-RDE-

Re: [gmx-users] Number of Xeon cores per GTX 1080Ti

2018-04-06 Thread Szilárd Páll
ble/xeon-scalable-spec-update.html [5] https://www.microway.com/knowledge-center-articles/detailed-specifications-of-the-skylake-sp-intel-xeon-processor-scalable-family-cpus/ -- Szilárd On Thu, Apr 5, 2018 at 10:22 AM, Jochen Hub wrote: > > > Am 03.04.18 um 19:03 schrieb Szilárd Páll:

Re: [gmx-users] Problem with CUDA

2018-04-06 Thread Szilárd Páll
61_cpp1_ii_70828085' > ../../lib/libgromacs.so.3.1.0: undefined reference to > `__cudaRegisterLinkedBinary_57_tmpxft_a2d0__21_pme_gather_compute_61_cpp1_ii_a7a2f9c7' > ../../lib/libgromacs.so.3.1.0: undefined reference to > `__cudaRegisterLinkedBinary_67_tmpxft_9f4e__21_nbnxn_cu

Re: [gmx-users] Problem with CUDA

2018-04-06 Thread Szilárd Páll
which are needed? -- Szilárd On Fri, Apr 6, 2018 at 6:57 PM, Szilárd Páll wrote: > I think the fpic errors can't be caused by missing rdc=true because > the latter refers to the GPU _device_ code, but GROMACS does not need > relocatable device code, so that should not be necessary.

Re: [gmx-users] Problem with CUDA

2018-04-06 Thread Szilárd Páll
Message- > From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se > [mailto:gromacs.org_gmx-users-boun...@maillist.sys.kth.se] On Behalf Of > Szilárd Páll > Sent: Friday, April 06, 2018 12:05 PM > To: Discussion list for GROMACS users > Cc: gromacs.org_gmx-users@maillist.sys.kth.se > Subj

Re: [gmx-users] GROMACS 2018 MDRun: Multiple Ranks/GPU Issue

2018-04-07 Thread Szilárd Páll
Your GPUs are in process-exclusive mode, which makes it impossible for multiple ranks to use a GPU; see the nvidia-smi --compute-mode option. -- Szilárd On Fri, Apr 6, 2018 at 10:14 PM, Hollingsworth, Bobby wrote: > Hello all, > > I'm tuning MDrun on a node with 24 intel skylake cores (2X12) and 2 V
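
A minimal sketch (not part of the original reply) of checking and resetting the compute mode with nvidia-smi; the GPU index 0 is only an example, and changing the mode requires root:

  # query the current compute mode of every GPU
  nvidia-smi --query-gpu=index,compute_mode --format=csv
  # put GPU 0 back into the default (shared) mode so several ranks can use it
  sudo nvidia-smi -i 0 -c DEFAULT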

Re: [gmx-users] Problem with CUDA

2018-04-07 Thread Szilárd Páll
From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se > [mailto:gromacs.org_gmx-users-boun...@maillist.sys.kth.se] On Behalf Of > Szilárd Páll > Sent: Friday, April 06, 2018 2:40 PM > To: Discussion list for GROMACS users > Cc: gromacs.org_gmx-users@maillist.sys.kth.se > Subject: Re: [gmx-

Re: [gmx-users] strange GPU load distribution

2018-04-27 Thread Szilárd Páll
The second column is PIDs so there is a whole lot more going on there than just a single simulation, single rank using two GPUs. That would be one PID and two entries for the two GPUs. Are you sure you're not running other processes? -- Szilárd On Thu, Apr 26, 2018 at 5:52 AM, Alex wrote: > Hi

Re: [gmx-users] strange GPU load distribution

2018-05-07 Thread Szilárd Páll
Hi, You have at least one option more elegant than using a separate binary for EM. Set the GMX_DISABLE_GPU_DETECTION=1 environment variable, which is the internal GROMACS override that forces detection off for cases similar to yours. That should solve the detection latency. If for some reason it d
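
A minimal sketch (assuming a thread-MPI gmx binary and an em.tpr produced earlier) of using the override just for the energy minimization step:

  # skip GPU detection only for this run; later production runs are unaffected
  GMX_DISABLE_GPU_DETECTION=1 gmx mdrun -deffnm em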

Re: [gmx-users] Install Gromacs Cuda MacBook pro with eGPU OS X 10.13.4

2018-05-31 Thread Szilárd Páll
Hi, The issue is a limitation imposed by the CUDA toolkit; you could try using the native clang support by passing GMX_CLANG_CUDA=ON to cmake. Also note that: - I might not remember the reviews correctly, but isn't NVIDIA officially unsupported in eGPUs (it might still work, but beware)? - you only
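
A sketch of such a clang-based CUDA build (the compiler names and build steps are assumptions about a typical setup, not taken from the original message):

  # configure GROMACS to compile the GPU kernels with clang instead of nvcc
  cmake .. -DGMX_GPU=ON -DGMX_CLANG_CUDA=ON \
        -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++
  make -j 4 && make install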

Re: [gmx-users] how to improve computing?

2018-06-14 Thread Szilárd Páll
Vlad, Please share your log file(s); it's easier to be concrete about a concrete case. mdrun will by default attempt to use all resources available, unless there are some hard limitations that prevent this (e.g. a very small number of atoms can't be decomposed) or there is a reason to believe that

Re: [gmx-users] GTX 960 vs Tesla K40

2018-06-15 Thread Szilárd Páll
Hi, Regarding the K40 vs GTX 960 question, the K40 will likely be a bit faster (though it'll consume more power if that matters). The difference will be at most 20% in total performance, I think -- and with small systems likely negligible (as a smaller card with higher clocks is more efficient at s

Re: [gmx-users] GTX 960 vs Tesla K40

2018-06-18 Thread Szilárd Páll
to be ideal, but without seeing a log file, it's hard to tell.. The result is a solid increase in performance on a small-ish system (20K > atoms): 90 ns/day instead of 65-70. I don't use this box for anything > except prototyping, but still the swap + tweaks were pretty useful. &

Re: [gmx-users] GTX 960 vs Tesla K40

2018-06-21 Thread Szilárd Páll
--- > Total 48578.997 544107.322 100.0 > > - > (*) Note that with separate PME ranks, the walltime column actually sums

Re: [gmx-users] GROMACS- suggestion for GPU buying

2018-07-12 Thread Szilárd Páll
If price does not matter, get V100s; if it matters somewhat, get TITAN-V's. The same applies if you want the best performance per simulation. If you want the best perf/buck, the 1080 Ti is still the better investment (or you could wait and see if the next-gen consumer cards come out soon). -- Szilárd On Tue, Jul

Re: [gmx-users] mpirun and gmx_mpi

2018-07-12 Thread Szilárd Páll
If you've written programs using MPI before you must know that you generally use a launcher like mpirun/mpiexec or similar to start a run (e.g. see https://www.open-mpi.org/doc/v1.10/man1/mpirun.1.php). What is the confusion? -- Szilárd On Wed, Jul 11, 2018 at 9:16 AM Mahmood Naderan wrote: >

Re: [gmx-users] cpu threads in a gpu run

2018-07-12 Thread Szilárd Páll
On Mon, Jul 9, 2018 at 12:43 PM Mahmood Naderan wrote: > Hi, > When I run "-nt 16 -nb cpu", I see nearly 1600% cpu utilization. You request the CPU to do the work, so all cores are fully utilized. > However, when I run "-nt 16 -nb gpu", I see about 600% cpu utilization. The default behavior

Re: [gmx-users] cpu threads in a gpu run

2018-07-12 Thread Szilárd Páll
On Tue, Jul 10, 2018 at 5:12 AM Mahmood Naderan wrote: > No idea? It seems to be odd. At the beginning of run, I see > > NOTE: GROMACS was configured without NVML support hence it can not exploit > application clocks of the detected Quadro M2000 GPU to improve > performance. > Recompi

Re: [gmx-users] pme grid with gpu

2018-07-12 Thread Szilárd Páll
That's the PP-PME load balancing output (see -tunepme option / http://manual.gromacs.org/documentation/2018/user-guide/mdrun-performance.html ). -- Szilárd On Tue, Jul 10, 2018 at 7:35 PM Mahmood Naderan wrote: > Hi, > When I run mdrun with "-nb gpu", I see the following output > > > starting

Re: [gmx-users] rerun from random seeds

2018-07-12 Thread Szilárd Páll
Yes, this will generate velocities with a random seed. -- Szilárd On Thu, Jul 5, 2018 at 5:15 PM MD wrote: > Hi Gromacs folks, > > I am trying to re-run a 100 ns simulation with different velocities and > random seed. Would the setup from the mdp file as the following a good one? > > ; Velocity
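
For reference, a minimal sketch of the .mdp lines in question (values are placeholders; gen_seed = -1 is what produces a new random seed each time):

  ; regenerate velocities at the start of the run
  gen_vel  = yes
  gen_temp = 300    ; K
  gen_seed = -1     ; -1 = pick a pseudo-random seed
  continuation = no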

Re: [gmx-users] Problems during installation

2018-07-19 Thread Szilárd Páll
On Mon, Jul 16, 2018 at 7:43 PM Rajat Desikan wrote: > Hi Mark, > > Thank you for the quick answer. My group is experimenting with a GPU-heavy > processor-light configuration similar to the Amber machines available from > Exxact (https://www.exxactcorp.com/AMBER-Certified-MD-Systems). In our > un

Re: [gmx-users] mpirun and gmx_mpi

2018-07-24 Thread Szilárd Páll
On Tue, Jul 24, 2018 at 3:13 PM Mahmood Naderan wrote: > No idea? Those who use GPU, which command do they use? gmx or gmx_mpi? > > That choice depends on whether you want to run across multiple compute nodes; the former cannot, while the latter, as its name (by default) indicates, is using an
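
A sketch of the two invocation styles (file names, rank and thread counts are only examples):

  # thread-MPI binary: single node, no launcher needed
  gmx mdrun -deffnm md -ntmpi 4 -ntomp 4
  # MPI binary: started through the MPI launcher, e.g. spanning two nodes
  mpirun -np 8 gmx_mpi mdrun -deffnm md -ntomp 4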

Re: [gmx-users] mpirun and gmx_mpi

2018-07-25 Thread Szilárd Páll
ngle GPU device which is shared > between ranks. > If you use MPS on the compute nodes, it will use MPS. If you don't, processes will share GPUs, but execution will be somewhat less efficient. -- Szilárd > > > Regards, > Mahmood > > > On Wednesday, July 25, 2018, 1:
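
A minimal sketch of enabling MPS before launching the ranks (rank counts and file names are assumptions; site-specific setup details vary):

  # start the MPS control daemon once per node, then run as usual
  nvidia-cuda-mps-control -d
  mpirun -np 4 gmx_mpi mdrun -deffnm md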

Re: [gmx-users] OpenMP multithreading mdrun

2018-07-25 Thread Szilárd Páll
On Wed, Jul 25, 2018 at 4:27 PM Smith, Iris wrote: > Good Morning Gromacs Users, > > Gromacs 2018.2 was recently installed on our intel-based HPC cluster using > OpenMP (without MPI). > > Can I still utilize multiple threads within one node when utilizing mdrun? > If so, do I need to call openmp

Re: [gmx-users] Too few cells to run on multiple cores

2018-08-07 Thread Szilárd Páll
Hi, The domain decomposition has certain algorithmic limits that you can relax, but as you noticed, that comes at the cost of deteriorating load balance -- and at a certain point it might come at the cost of simulations aborting mid-run (if you make -rdd too large). More load imbalance does not nece
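
A sketch of relaxing the bonded-interaction distance limit on the command line (the 1.4 nm value is purely illustrative; as noted above, too large a value risks mid-run aborts):

  gmx mdrun -deffnm md -rdd 1.4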

Re: [gmx-users] Issue with regression test.

2018-08-07 Thread Szilárd Páll
Hi, Can you share the directory of the failed test, i.e. regressiontests/complex/nbnxn_vsite? Can you check running the regressiontests manually using 1/2/4 ranks, e.g. perl gmxtest.pl complex -nt 1 -- Szilárd On Wed, Aug 1, 2018 at 12:49 PM Raymond Arter wrote: > Dear All, > > I'm building G

Re: [gmx-users] gromacs with mps

2018-08-07 Thread Szilárd Páll
Hi, It does sound like a CUDA/MPS setup issue; GROMACS uses a relatively small amount of GPU memory, so unless you are using a very skinny GPU or a very large input, it's most likely not a GROMACS issue. BTW, have you made sure that your GPUs are not in process-exclusive mode? Cheers, -- Szilárd

Re: [gmx-users] GMXRC removes trailing colon from existing MANPATH

2018-08-07 Thread Szilárd Páll
Hi, Can you please submit your change to gerrit.gromacs.org -- and perhaps it's best if you also file an issue on redmine.gromacs.org with the brief description you posted here? Thanks, -- Szilárd On Fri, Jul 27, 2018 at 2:28 PM Peter Kroon wrote: > Hi all, > > I noticed that sourcing GMXRC

Re: [gmx-users] NVIDIA CUDA Alanine Scanning

2018-08-07 Thread Szilárd Páll
Hi, Yes, you can use CUDA acceleration, and FEP does work; we try to keep feature parity between the GPU-accelerated and non-accelerated modes of GROMACS. I can't comment in depth about GMXPBSA; they may not have full support for newer releases, from what a brief look at their mailing list shows. Regard

Re: [gmx-users] Errors while trying to install GROMACS-2018

2018-08-09 Thread Szilárd Páll
Hi, This is likely an issue with the combination of gcc and CUDA versions you are using. What are these versions? Can you install the latest CUDA (or at least recent) and see if it solves the issue? Cheers, -- Szilárd On Wed, Aug 8, 2018 at 8:00 PM Lovuit CHEN wrote: > Hi everyone, > > > I go

Re: [gmx-users] high load imbalance in GMX5.1 using MARTINI FF

2018-08-09 Thread Szilárd Páll
Linda, This should indeed normally not happen, but before diving deep into the issue I'd suggest testing more recent releases of GROMACS, preferably 2018.2 so we know if there is an issue in the currently actively supported release. Secondly, load imbalance is not necessarily a bad thing if it do

Re: [gmx-users] [Install error] What can I do this situation?

2018-08-20 Thread Szilárd Páll
On Mon, Aug 20, 2018 at 2:54 PM 김나연 wrote: > There are error messages. TT > 1) What can I do?? > Install manually. The installation warnings suggest that you are risking strange failures (due to mixing C++ libraries). > 2) If this process succeeds, can I do GPU calculations in mdrun? > Yes, if

Re: [gmx-users] Feedback wanted - mdp option for preparation vs production

2018-08-24 Thread Szilárd Páll
The thermostat choice is an easy example where there is a clear case to be made for the proposed mdp option, but what other uses are there for such an option? Unless there are at least a few, I'd say it's better to improve the UI messages, option documentation, manual, etc. than introduce a ~ singl

Re: [gmx-users] Feedback wanted - mdp option for preparation vs production

2018-08-24 Thread Szilárd Páll
work (particularly in scripted > workflows). > > Mark > > On Fri, Aug 24, 2018 at 4:11 PM Szilárd Páll > wrote: > > > The thermostat choice is an easy example where there is a clear case to > be > > made for the proposed mdp option, but what other uses are

Re: [gmx-users] GMXRC removes trailing colon from existing MANPATH

2018-08-27 Thread Szilárd Páll
than > simply appending a colon in my bashrc. > The MANPATH is generated at build-time from the scripts/GMXRC.*.cmakein input files, the results of which in turn get installed. These files need the one-liner fix ;) Cheers, -- Szilard > Peter > > > On 07-08-18 14:45, Szilárd Páll

Re: [gmx-users] Heterogeneous GPU cluster question?

2018-08-29 Thread Szilárd Páll
Hi, You can use multiple types of GPUs in a single run, but it won't be ideal. Also, with Volta GPUs you'll probably be better off also offloading PME, which won't scale to more than 2-3 GPUs, so you'll probably not want to use more than 2 GPUs in a run with Volta. -- Szilárd On Tue, Aug 28, 2018
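
A sketch of a PME-offload run on a 2-GPU node (the rank/thread counts and the GPU-task mapping are illustrative assumptions, not settings from the original thread):

  # one PP rank on GPU 0, one PME rank on GPU 1
  gmx mdrun -deffnm md -ntmpi 2 -ntomp 12 -nb gpu -pme gpu -npme 1 -gputasks 01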

Re: [gmx-users] Workstation choice

2018-09-07 Thread Szilárd Páll
Hi, Are you intending to use it mostly/only for running simulations or also as a desktop computer? Starting with the 2018 release we offload more work to the GPU to account for the increase in the gap between the performance of CPUs and GPUs and the prevalence of machines (especially workstations) with >1

Re: [gmx-users] Workstation choice

2018-09-07 Thread Szilárd Páll
On Fri, Sep 7, 2018 at 4:15 PM Benson Muite wrote: > Check if the routines you will use have been ported to use GPUs. Time > and profile a typical run you will perform on your current hardware to > determine the bottlenecks, and then choose hardware that will perform > best on these bottlenecks a

Re: [gmx-users] Workstation choice

2018-09-11 Thread Szilárd Páll
tion (DD) incurs a "one-time performance hit" due to the additional work involved in decomposing the system. 2018-09-07 23:25 GMT+07:00 Szilárd Páll : > > > > > Are you intending to use it mostly/only for running simulations or also > as > > a desktop computer? >

Re: [gmx-users] Workstation choice

2018-09-11 Thread Szilárd Páll
Sadly, I can't recommend packaged versions of GROMACS for anything other than pre- or post-processing or non-performance-critical work; these are generally not compiled with the proper SIMD support, which is wasteful. Also, I can't (yet) recommend AMD GPUs as a buying option for consumer-grade stuff as we

Re: [gmx-users] Workstation choice

2018-09-11 Thread Szilárd Páll
> It’s a bit confusing that in synthetic tests/games performance of i7 8700 > > is higher than Ryzen 7 2700. > > ... > > Sorry for jumping into the thread at this point, but depending on > the problem size and type, it might happen that: > >- a single R2-2700X possibly

Re: [gmx-users] Workstation choice

2018-09-11 Thread Szilárd Páll
BTW, I'd recommend caution when using the dated d.dppc benchmark for drawing performance conclusions both because it may not be too representative of other workloads (small size, peculiar settings) and because it uses all bonds constrained with a 2 fs time step, which is not recommended these days (unl

Re: [gmx-users] Make check failed 2018 Gromacs on GPU workstation

2018-09-13 Thread Szilárd Páll
Test timeouts are strange. Is the machine you're running on busy with other jobs? Regarding the regressiontest failure, can you share tests/regressiontests*/complex/octahedron/mdrun.out please? -- Szilárd On Thu, Sep 13, 2018 at 8:49 PM Phuong Tran wrote: > Hi all, > > I have been trying to i

Re: [gmx-users] Make check failed 2018 Gromacs on GPU workstation

2018-09-18 Thread Szilárd Páll
assignment, number of ranks, or your use of the > -nb, > -pme, and -npme options, perhaps after measuring the performance you can > get. > > On Thu, Sep 13, 2018 at 4:11 PM Szilárd Páll > wrote: > > > Test timeouts are strange, Is the machine you're running

Re: [gmx-users] minor issues of gmxtest.pl on mdrun-only builds

2018-09-26 Thread Szilárd Páll
Thanks for the report. Could you please file a redmine issue on redmine.gromacs.org? Would you consider uploading a fix to our code review site too? -- Szilárd On Wed, Sep 26, 2018 at 9:02 PM LAM, Tsz Nok wrote: > Dear all, > > > When I tested my mdrun-only build with MPI support (after also bu

Re: [gmx-users] AMD vs Intel, nvidia 20xx

2018-09-26 Thread Szilárd Páll
On Wed, Sep 19, 2018 at 11:58 AM Tamas Hegedus wrote: > Hi, > > I am planning to buy 1 or 2 GPU workstations/servers for stand alone > gromacs and gromacs+plumed (w plumed I can efficiently use only 1 GPU). > I would like to get some suggestions for the best performance/price > ratio. More specif

Re: [gmx-users] Computational load of Constraints/COM pull force

2018-09-28 Thread Szilárd Páll
Hi, The issue you are running into seems to be caused by the significant load imbalance in the simulation that sometimes throws the load balancing off -- it's something I've seen before (and I thought that we solved it). The system is "tall" and most likely has a significant inhomogeneity along Z. mdru

Re: [gmx-users] FW: v2018.3; GPU not recognised

2018-10-04 Thread Szilárd Páll
On Thu, Oct 4, 2018 at 5:36 PM Tresadern, Gary [RNDBE] wrote: > Hi, > We are trying to build a simple workstation installation of v2018.3 that > will run with GPU support. > The build and test seems to go without errors, but when we test run new > jobs we see the GPU is not being recognized, NOTE

Re: [gmx-users] Computational load of Constraints/COM pull force

2018-10-09 Thread Szilárd Páll
/ufile.io/7bjcq > > dlb auto: https://ufile.io/4qlc6 > > dlb on: https://ufile.io/n9rme > > > I hope this makes some sense to someone! > > > Kind regards, > > > Kenneth > > > Van: gromacs.org_gmx-users-boun...@maillist.sys.k

Re: [gmx-users] And Ryzen 8 core/16 thread use

2018-10-29 Thread Szilárd Páll
Sure, if you use the recent releases with PME offload, you might even be able to drive two GPUs with it. I also do not know of any cooling issues; is there anything specific you are referring to? -- Szilárd On Thu, Oct 25, 2018 at 5:27 PM paul buscemi wrote: > > Dear Users, > > Does anyone have con

Re: [gmx-users] gmx kill computer?

2018-10-29 Thread Szilárd Páll
On Tue, Oct 23, 2018 at 4:04 PM Wahab Mirco < mirco.wa...@chemie.tu-freiberg.de> wrote: > On 23.10.2018 15:12, Michael Brunsteiner wrote: > > the computers are NOT overclocked, cooling works, cpu temperatures are > well below max. > > as stated above something like this happened three times, each

Re: [gmx-users] Gromacs 2018.3 with CUDA - segmentation fault (core dumped)

2018-11-06 Thread Szilárd Páll
Did it really crash after exactly the same number of steps the second time too? -- Szilárd On Tue, Nov 6, 2018 at 10:55 AM Krzysztof Kolman wrote: > Dear Gromacs Users, > > I just wanted to add an additional information. After doing restart, the > simulation crashed (again segmentation fault) a

Re: [gmx-users] Gromacs 2018.3 with CUDA - segmentation fault (core dumped)

2018-11-15 Thread Szilárd Páll
That suggests there is an issue related to the CUDA FFT library -- or something else indirectly related. Can you use a newer CUDA and try to see if with -pmefft gpu you are still getting a crash? -- Szilárd On Mon, Nov 12, 2018, 11:58 AM Krzysztof Kolman > > > Dear Benson and Szilard, > > > > Tha
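
A sketch of isolating the FFT back-end while keeping the rest of PME on the GPU (file names are placeholders):

  # diagnostic/workaround: run the PME 3D FFTs on the CPU instead of cuFFT
  gmx mdrun -deffnm md -pme gpu -pmefft cpu
  # the combination to re-test with a newer CUDA: FFTs on the GPU
  gmx mdrun -deffnm md -pme gpu -pmefft gpu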

Re: [gmx-users] generic hardware assembling for gromacs simulation

2018-11-20 Thread Szilárd Páll
On Tue, Nov 20, 2018 at 7:12 AM Seketoulie Keretsu wrote: > Dear Benson, > > Thank you for answering . > I am using Centos 6. My current simulation time for protein-ligand > systems is about 1.6 ns/day. I am wondering if installing the GTX 1050 > or GTX 970 can boost the output significantly (may

Re: [gmx-users] mpirun problem

2018-11-20 Thread Szilárd Páll
Hi, the most likely issue is that your GROMACS installation is not configured for the correct architecture; e.g. if the compute or login node that the cmake configure/compilation was done on is newer than the one the job is running on, mdrun will try to issue instructions that are not supported by the

Re: [gmx-users] Install GROMACS 2018 with CUDA features in dynamic linking way

2018-12-07 Thread Szilárd Páll
Hi Jaime, Have you tried passing that variable to nvcc? Does it not work? Note that GROMACS makes up to a dozen CUDA runtime calls (kernels and transfers) per iteration with iteration times in the range of milliseconds at longest and peak in the hundreds of nanoseconds and the CPU needs to syn

Re: [gmx-users] mdrun-adjusted cutoffs?!

2018-12-07 Thread Szilárd Páll
BTW if you have doubts and still want to make sure that the mdrun PME tuning does not affect your observables, you can always do a few runs with a fixed rcoulomb > rvdw set in the mdp file (with -notunepme passed on the command line for consistency) and compare what you get with the rcoulomb = rvdw
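
A sketch of such a fixed-cutoff comparison run (the values below are placeholders, not recommendations):

  ; in the .mdp: pin the cutoffs instead of letting mdrun tune them
  rvdw     = 1.0
  rcoulomb = 1.2   ; deliberately larger than rvdw for the comparison
  fourierspacing = 0.125

  # and on the command line, disable the tuning for consistency
  gmx mdrun -deffnm md -notunepme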

Re: [gmx-users] mdrun-adjusted cutoffs?!

2018-12-10 Thread Szilárd Páll
r iso-accurate comparisons, one must also scale the > > > Fourier grid by the same factor (per the manual section on PME > > autotuning). > > > Of course, if you start from the smallest rcoulomb and use a fixed grid, > > > then the comparisons will be of increasing ac

Re: [gmx-users] Install GROMACS 2018 with CUDA features in dynamic linking way

2018-12-10 Thread Szilárd Páll
=ON -DCUDA_TOOLKIT_ROOT_DIR=$LIBS/CUDA/8.0/ > >> $ make -j $(nproc) > >> $ make install > >> $ chrpath -r '$ORIGIN/../lib64' $APPS/GROMACS/5.1.4/GPU/bin/gmx > >> > >> that works until GROMACS 2016, I couldn't make it work for GROMACS 20

Re: [gmx-users] using dual CPU's

2018-12-11 Thread Szilárd Páll
Without having read all details (partly due to the hard-to-read log files), what I can certainly recommend is: unless you really need to, avoid running single simulations with only a few 10s of thousands of atoms across multiple GPUs. You'll be _much_ better off using your limited resources by runn
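
A sketch of running several independent simulations side by side instead of one run spread over all GPUs (directory names are assumptions; -multidir needs an MPI build):

  # four independent runs, one rank each; mdrun maps each rank to its own GPU by default
  mpirun -np 4 gmx_mpi mdrun -multidir run1 run2 run3 run4 -ntomp 8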

Re: [gmx-users] Install GROMACS 2018 with CUDA features in dynamic linking way

2018-12-11 Thread Szilárd Páll
ces between the 2016 & 2018 version. > > I'm using Cmake 3.13.1. > > ~/cmake-3.13.1-Linux-x86_64/bin/cmake .. -DGMX_BUILD_OWN_FFTW=ON > -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_GPU=ON > -DCUDA_TOOLKIT_ROOT_DIR=/nfs2/LIBS/x86_64/LIBS/CUDA/8.0 > -DCMAKE_INSTALL_PREFIX

Re: [gmx-users] gromacs 2018 with OpenMPI + OpenMP

2018-12-12 Thread Szilárd Páll
On Wed, Dec 12, 2018 at 12:14 PM Deepak Porwal wrote: > > Hi > I build gromacs with OpenMPI + OpenMP. > When I am trying to run adh/adh_dodec workload with binding the MPI threads > to core/l3cache, I am seeing some warnings. > Command I used to run: mpirun --map-by ppr:1:l3cache:pe=2 -x > OMP_N

Re: [gmx-users] Install GROMACS 2018 with CUDA features in dynamic linking way

2018-12-13 Thread Szilárd Páll
BS/x86_64/LIBS/mkl/l_mkl_11.1.0.080/composer_xe_2013_sp1.0.080/mkl/lib/intel64/libmkl_gf_lp64.so > (0x7fb57b2b4000) > /lib64/ld-linux-x86-64.so.2 (0x7fb58bf2d000 > > > > IDK what i'm doing wrong. You asked for dynamic linking against the CUDA runtime and you

Re: [gmx-users] gromacs 2018 with OpenMPI + OpenMP

2018-12-17 Thread Szilárd Páll
ly I'm curious, can you elaborate on what you mean? mdrun binds its threads quite "properly" with "-pin on" > we will see much better performance than thread > MPI. > thread pinning is unrelated to thread-MPI; mdrun can pin its threads with regular MPI too. --

Re: [gmx-users] using dual CPU's

2018-12-17 Thread Szilárd Páll
or 90k atoms ( with 2x GTX 970 ) What bothered me in my > initial attempts was that my simulations became slower by adding the second > GPU - it was frustrating to say the least > > I’ll give your suggestions a good workout, and report on the results when > I hack it out.. >

Re: [gmx-users] using dual CPU's

2018-12-17 Thread Szilárd Páll
On Thu, Dec 13, 2018 at 8:39 PM p buscemi wrote: > Carsten > > thanks for the suggestion. > Is it necessary to use the MPI version for gromacs when using multdir? - > now have the single node version loaded. > Yes. > I'm hammering out the first 2080ti with the 32 core AMD. results are not > st

Re: [gmx-users] Install GROMACS 2018 with CUDA features in dynamic linking way

2018-12-18 Thread Szilárd Páll
and my environment variables are unset· > > Regards, > Jaime. > > > El jue., 13 dic. 2018 a las 18:27, Szilárd Páll () > escribió: > > > On Thu, Dec 13, 2018 at 6:07 PM Jaime Sierra wrote: > > > > > > My cmake config: > > > > >

Re: [gmx-users] different nvidia-smi/gmx GPU_IDs

2019-01-18 Thread Szilárd Páll
Hi, The CUDA runtime tries (and AFAIK has always tried) to be smart about device order, which is what GROMACS will see in its detection. The nvidia-smi monitoring tool, however, uses a different mechanism for enumeration that will always respect the PCI identifier of the devices (~ the order of card
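
One way to make the two numberings agree (this uses a CUDA runtime environment variable and is not necessarily what the original reply suggested) is to force PCI-bus ordering before launching mdrun:

  # enumerate CUDA devices in the same order nvidia-smi reports them
  export CUDA_DEVICE_ORDER=PCI_BUS_ID
  gmx mdrun -deffnm md -gpu_id 0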

Re: [gmx-users] gmx 2019 performance issues

2019-01-18 Thread Szilárd Páll
On Tue, Jan 15, 2019 at 1:30 PM Tamas Hegedus wrote: > Hi, > > I do not really see an increased performance with gmx 2019 using -bonded > gpu. I do not see what I miss or misunderstand. > The only thing I see that all cpu run at ~100% with gmx2018, while some > of the cpus run only at ~60% with g

Re: [gmx-users] delay at start

2019-01-23 Thread Szilárd Páll
Hi, That doesn't sound normal. Please share some log files and inputs if you can. Cheers, -- Szilárd On Tue, Jan 22, 2019 at 5:29 PM Michael Brunsteiner wrote: > hi, > i notice that gromacs, when i start an MD simulation usually spends up to a > few minutes using only one (out of several possi

Re: [gmx-users] Gromacs 2018.5 with CUDA

2019-01-31 Thread Szilárd Páll
On Wed, Jan 30, 2019 at 7:37 AM Владимир Богданов < bogdanov-vladi...@yandex.ru> wrote: > Hey everyone! > > I need help, please. When I try to run MD with GPU I get the next error: > > Command line: > > gmx_mpi mdrun -deffnm md -nb auto > > > > Back Off! I just backed up md.log to ./#md >

Re: [gmx-users] Gromacs 2018.5 with CUDA

2019-01-31 Thread Szilárd Páll
On Wed, Jan 30, 2019 at 4:56 PM Владимир Богданов wrote: > > HI, > > Yes, I think, because it seems to be working with nam-cuda right now: Of course, because in the meantime you upgraded your driver. NAMD, or in fact any program that uses CUDA 9.2 will _not_ run with drivers incompatible with tha

Re: [gmx-users] Gromacs 2018.5 with CUDA

2019-01-31 Thread Szilárd Páll
On Wed, Jan 30, 2019 at 5:14 PM wrote: > > Vlad, > > 390 is an 'old' driver now. Try something simple like installing CUDA 410.x > see if that resolves the issue. if you need to update the compiler, g++ -7 > may not work, but g++ -6 does. It is worth checking compatibility first. The GROMACS

Re: [gmx-users] WG: Issue with CUDA and gromacs

2019-01-31 Thread Szilárd Páll
On Wed, Jan 30, 2019 at 5:15 PM Tafelmeier, Stefanie wrote: > > Dear all, > > We are facing an issue with the CUDA toolkit. > We tried several combinations of gromacs versions and CUDA Toolkits. No > Toolkit older than 9.2 was possible to try as there are no driver for nvidia > available for a Q

Re: [gmx-users] About fprintf and debugging

2019-01-31 Thread Szilárd Páll
gmx mdrun -debug N, where N is the debug level (1 or 2). -- Szilárd On Mon, Jan 28, 2019 at 3:49 PM Mahmood Naderan wrote: > > Hi > Where should I set the flag in order to see the fprintf statements like > if (debug) > { > fprintf(debug, "PME: number of ranks = %d,

[gmx-users] methods of installing NVIDIA display drivers [forked from Re: Gromacs 2018.5 with CUDA]

2019-01-31 Thread Szilárd Páll
On Thu, Jan 31, 2019 at 3:18 PM wrote: > > > > -Original Message- > From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se > On Behalf Of Szilárd Páll > Sent: Thursday, January 31, 2019 7:06 AM > To: Discussion list for GROMACS users > Subject: Re: [gmx-user

Re: [gmx-users] WG: Issue with CUDA and gromacs

2019-01-31 Thread Szilárd Páll
On Thu, Jan 31, 2019 at 2:14 PM Szilárd Páll wrote: > > On Wed, Jan 30, 2019 at 5:15 PM Tafelmeier, Stefanie > wrote: > > > > Dear all, > > > > We are facing an issue with the CUDA toolkit. > > We tried several combinations of gromacs versions and CUDA T

Re: [gmx-users] info about gpus

2019-02-01 Thread Szilárd Páll
Hi, It greatly depends on what your use case is, i.e. the simulation system and type of study (but if you want to scale, I assume you want a few longer trajectories). Cheers, -- Szilárd On Thu, Jan 31, 2019 at 6:17 PM Stefano Guglielmo wrote: > > Dear all, > I am tryin to set a new workstation and I wou

Re: [gmx-users] info about gpus

2019-02-01 Thread Szilárd Páll
On Fri, Feb 1, 2019 at 3:30 AM Moir, Michael (MMoir) wrote: > > Stephano, > > With a motherboard that doesn't split the PCI-E bandwidth, I get a 20% > improvement in computation speed with 2 GPUs. That's on the low side; you can get 40-60% scaling across two GPUs in such desktop/workstation use-c

Re: [gmx-users] info about gpus

2019-02-04 Thread Szilárd Páll
eally strapped for time using two 1080’s may work. going from 150 ns/day to > 200 is a good time saver for smaller systems. > > If you Google gromac benchmarks you will find several articles on the issue. > > PB > > On Feb 1, 2019, at 5:58 AM, Szilárd Páll wrote: > &g

Re: [gmx-users] Make check not passing tests on 2018.5

2019-02-05 Thread Szilárd Páll
On Fri, Feb 1, 2019 at 5:01 AM David Lister wrote: > > Hello, > > I've compiled gromacs 2018.5 in double precision a couple times now and it > keeps on failing the same tests every time. This is on Ubuntu 18.04 with an > i9 7900X. > > The cmake I used was: > cmake .. -DREGRESSIONTEST_DOWNLOAD=ON -

Re: [gmx-users] offloading PME to GPUs

2019-02-07 Thread Szilárd Páll
Please see the user guide: http://manual.gromacs.org/documentation/2018.5/user-guide/mdrun-performance.html#gpu-accelerated-calculation-of-pme -- Szilárd On Thu, Feb 7, 2019 at 3:18 PM jing liang wrote: > > Hi, > > thanks for this information. I wonder if PME offload has been implemented > for mo

Re: [gmx-users] random initial failure

2019-02-11 Thread Szilárd Páll
Harry, That does not seem normal. Have you tried running CPU-only to see if that reproduces the issue (e.g. run mdrun -nsteps 0 -nb cpu -pme cpu a few times)? -- Szilárd On Mon, Feb 11, 2019 at 11:25 AM Harry Mark Greenblatt wrote: > > BS”D > > Dear All, > > Trying to run a system with abo

Re: [gmx-users] Make check not passing tests on 2018.5

2019-02-11 Thread Szilárd Páll
id not test different compilers as I don't have any others on my system > at the moment. > > Cheers, > David > > On Tue, Feb 5, 2019 at 7:15 AM Szilárd Páll wrote: > > > On Fri, Feb 1, 2019 at 5:01 AM David Lister wrote: > > > > > > Hello, >

Re: [gmx-users] Make check not passing tests on 2018.5

2019-02-15 Thread Szilárd Páll
gmx-us...@gromacs.org > > Subject: Re: [gmx-users] Make check not passing tests on 2018.5 > > > > I have yet to have a chance to test a different compiler. The AVX_512 is > > reproducible and showed the same error in 3 tests. > > > > I hope to check gcc 8 lat

Re: [gmx-users] Compile Gromacs with OpenCL for MacBook Pro with AMD Radeon Pro 560 GPU

2019-02-19 Thread Szilárd Páll
We had some issues with the OS X OpenCL compilers being "special" and not accepting standard ways of passing include arguments. Can you try editing the src/gromacs/gpu_utils/gpu_utils_ocl.cpp file and replace "#ifdef __APPLE__" with "#if 0", compile the code and see if that works? -- Szilárd On

Re: [gmx-users] Compile Gromacs with OpenCL for MacBook Pro with AMD Radeon Pro 560 GPU

2019-02-21 Thread Szilárd Páll
.5 > Source file: src/gromacs/gpu_utils/ocl_compiler.cpp (line 507) > Function:cl_program gmx::ocl::compileProgram(FILE *, const std::string > &, const std::string &, cl_context, cl_device_id, ocl_vendor_id_t) > > Internal error (bug): > Failed to compile NBNXN kernels fo

Re: [gmx-users] Compile Gromacs with OpenCL for MacBook Pro with AMD Radeon Pro 560 GPU

2019-02-25 Thread Szilárd Páll
_OPENCL=ON > > My $HOME/.local directory is the prefix I used to install appropriate > versions of hwloc (1.11.12), libomp (7.0.1), and fftw3 (3.3.8) that I > compiled with the system’s default clang (in OSX 10.14.3, Mojave). Thanks > again, > > > Mike > > > >

Re: [gmx-users] Error detecting AMD GPU in GROMACS 2019.1

2019-02-25 Thread Szilárd Páll
Michael, Can you please post the full standard and log outputs (preferably through an external service)? Having looked at the code, there must be an additional output that tells what type of error occurred during the sanity checks that produce the result you show. In general, this likely means tha

Re: [gmx-users] how to increase GMX_OPENMP_MAX_THREADS

2019-02-25 Thread Szilárd Páll
What are you trying to do? Using 72 threads per rank will in most cases (except some extreme and unusual ones) not be efficient at all. If you are sure that you still want to do that, you can override this at compile time using cmake -DGMX_OPENMP_MAX_THREADS= -- Szilárd On Thu, Feb 21, 2019 at 5:58
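
A sketch of the compile-time override (128 is only an example value; rebuild and reinstall afterwards):

  cmake .. -DGMX_OPENMP_MAX_THREADS=128
  make -j 8 && make install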

Re: [gmx-users] how to increase GMX_OPENMP_MAX_THREADS

2019-02-26 Thread Szilárd Páll
etup, hardware)? > Roland > > > -Original Message- > > From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se > > [mailto:gromacs.org_gmx-users-boun...@maillist.sys.kth.se] On Behalf Of > > Szilárd Páll > > Sent: Monday, February 25, 2019 7:31 AM > &g

Re: [gmx-users] how to increase GMX_OPENMP_MAX_THREADS

2019-02-27 Thread Szilárd Páll
The Quadro K2200 is a low-end, several-generations-old GPU and I strongly doubt you will see any benefit from using it. I suggest you try running mdrun -nb gpu -ntmpi 1 -ntomp 36 -pin on, which will most likely give you the best performance you can get when using both high-end Intel CPUs and the G

Re: [gmx-users] Error detecting AMD GPU in GROMACS 2019.1

2019-02-27 Thread Szilárd Páll
were identified as > #0 while the Intel integrated chip is called GPU #1. Thanks again, and > please let me know if these were not the output files you were looking for. > > Mike > > > > > On Feb 26, 2019, at 1:56 AM, Szilárd Páll > wrote: > > > > Michael, &g

Re: [gmx-users] how to increase GMX_OPENMP_MAX_THREADS

2019-02-28 Thread Szilárd Páll
Great! Feel free to revive the thread if you have further questions. -- Szilárd On Wed, Feb 27, 2019 at 7:53 PM Lalehan Ozalp wrote: > Dear Szilárd, > > They most certainly are clear! I originally thought the GPU of the terminal > would be very useful. Your suggestions are of great help. > Tha

Re: [gmx-users] Videocard selection

2019-03-12 Thread Szilárd Páll
TL;DR if cost is no issue, Turing is better. The "core" count is not the only thing that matters; #cores x frequency (~ instruction rate) is the metric that does have some predictive power within a single GPU generation (but even then, the clocks a consumer GPU runs at can be wildly different from the clo

Re: [gmx-users] Gomacs 2019 build on sles12 and centos

2019-03-13 Thread Szilárd Páll
Hi, I assume the timeout does not happen with non-MPI builds, right? Have you tried a different MPI flavor? Cheers, -- Szilárd On Mon, Mar 11, 2019 at 10:54 AM Nelson Chris AWE wrote: > Hi All, > I've built Gromacs 2019 on both a CentOS 7 and SLES12 machine. > Built using gcc@7.2.0 > Dependen

Re: [gmx-users] gromacs performance

2019-03-13 Thread Szilárd Páll
Hi, First off, please post full log files; these contain much more than just the excerpts you pasted in. Secondly, for parallel, multi-node runs this hardware is just too GPU-dense to achieve a good CPU-GPU load balance, and scaling will be really hard too in most cases, but details will depend on

Re: [gmx-users] WG: WG: Issue with CUDA and gromacs

2019-03-15 Thread Szilárd Páll
76] [ 33 77] > [ 34 78] [ 35 79] [ 36 80] [ 37 81] [ 38 82] [ 39 83] [ 40 > 84] [ 41 85] [ 42 86] [ 43 87] > GPU info: > Number of GPUs detected: 1 > #0: NVIDIA Quadro P6000, compute cap.: 6.1, ECC: no, stat: compatible > > - > > > &

Re: [gmx-users] WG: WG: Issue with CUDA and gromacs

2019-03-15 Thread Szilárd Páll
known error message. > > Again, thanks a lot for your information and help. > > Best wishes, > Steffi > > > > -Original Message- > From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se [mailto: > gromacs.org_gmx-users-boun...@maillist.sys.kth.se] On

Re: [gmx-users] WG: WG: Issue with CUDA and gromacs

2019-03-15 Thread Szilárd Páll
Original Message- > From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se [mailto: > gromacs.org_gmx-users-boun...@maillist.sys.kth.se] On Behalf Of Szilárd > Páll > Sent: Friday, 15 March 2019 16:27 > To: Discussion list for GROMACS users > Subject: Re: [gmx-users] WG: WG: Issue with CU
