Re: [gmx-users] Gromacs 2018.3 with CUDA - segmentation fault (core dumped)

2018-11-15 Thread Szilárd Páll
That suggests there is an issue related to the CUDA FFT library -- or something else indirectly related. Can you use a newer CUDA and see whether you are still getting a crash with -pmefft gpu? -- Szilárd On Mon, Nov 12, 2018, 11:58 AM Krzysztof Kolman > > > Dear Benson and Szilard, > > > >
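
A minimal way to isolate the cuFFT path (a sketch; the topol.tpr run name is an assumption) is to compare the two FFT offload modes:

  gmx mdrun -deffnm topol -nb gpu -pme gpu -pmefft gpu   # 3D FFTs on the GPU (cuFFT)
  gmx mdrun -deffnm topol -nb gpu -pme gpu -pmefft cpu   # mixed mode: FFTs on the CPU

If only the first invocation crashes, that points at the CUDA FFT library.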

Re: [gmx-users] Gromacs 2018.3 with CUDA - segmentation fault (core dumped)

2018-11-06 Thread Szilárd Páll
Did it really crash after exactly the same number of steps the second time too? -- Szilárd On Tue, Nov 6, 2018 at 10:55 AM Krzysztof Kolman wrote: > Dear Gromacs Users, > > I just wanted to add some additional information. After doing a restart, the > simulation crashed (again segmentation fault)

Re: [gmx-users] gmx kill computer?

2018-10-29 Thread Szilárd Páll
On Tue, Oct 23, 2018 at 4:04 PM Wahab Mirco < mirco.wa...@chemie.tu-freiberg.de> wrote: > On 23.10.2018 15:12, Michael Brunsteiner wrote: > > the computers are NOT overclocked, cooling works, cpu temperatures are > well below max. > > as stated above something like this happened three times, each

Re: [gmx-users] And Ryzen 8 core/16 thread use

2018-10-29 Thread Szilárd Páll
Sure, if you use the recent releases with PME offload, you might even be able to drive two GPUs with it. I also do not know of any cooling issues; is there anything specific you are referring to? -- Szilárd On Thu, Oct 25, 2018 at 5:27 PM paul buscemi wrote: > > Dear Users, > > Does anyone have
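
A hedged sketch of driving two GPUs from one 8-core/16-thread Ryzen with two independent runs (run names, pin offsets and GPU ids are assumptions):

  gmx mdrun -deffnm run1 -ntomp 8 -pin on -pinoffset 0 -pinstride 1 -nb gpu -pme gpu -gpu_id 0 &
  gmx mdrun -deffnm run2 -ntomp 8 -pin on -pinoffset 8 -pinstride 1 -nb gpu -pme gpu -gpu_id 1 &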

Re: [gmx-users] Computational load of Constraints/COM pull force

2018-10-09 Thread Szilárd Páll
auto: https://ufile.io/4qlc6 > > dlb on: https://ufile.io/n9rme > > > I hope this makes some sense to someone! > > > Kind regards, > > > Kenneth > > > Van: gromacs.org_gmx-users-boun...@maillist.sys.kth.se < > gromacs.org_gm

Re: [gmx-users] FW: v2018.3; GPU not recognised

2018-10-04 Thread Szilárd Páll
On Thu, Oct 4, 2018 at 5:36 PM Tresadern, Gary [RNDBE] wrote: > Hi, > We are trying to build a simple workstation installation of v2018.3 that > will run with GPU support. > The build and test seems to go without errors, but when we test run new > jobs we see the GPU is not being recognized,

Re: [gmx-users] Computational load of Constraints/COM pull force

2018-09-28 Thread Szilárd Páll
Hi, The issue you are running into seems to be caused by the significant load imbalance in the simulation that sometimes throws the load balancing off -- it's something I've seen before (and I thought we had solved it). The system is "tall" and most likely has a significant inhomogeneity along Z.

Re: [gmx-users] AMD vs Intel, nvidia 20xx

2018-09-26 Thread Szilárd Páll
On Wed, Sep 19, 2018 at 11:58 AM Tamas Hegedus wrote: > Hi, > > I am planning to buy 1 or 2 GPU workstations/servers for stand alone > gromacs and gromacs+plumed (w plumed I can efficiently use only 1 GPU). > I would like to get some suggestions for the best performance/price > ratio. More

Re: [gmx-users] minor issues of gmxtest.pl on mdrun-only builds

2018-09-26 Thread Szilárd Páll
Thanks for the report. Could you please file a redmine issue on redmine.gromacs.org? Would you consider uploading a fix to our code review site too? -- Szilárd On Wed, Sep 26, 2018 at 9:02 PM LAM, Tsz Nok wrote: > Dear all, > > > When I tested my mdrun-only build with MPI support (after also

Re: [gmx-users] Make check failed 2018 Gromacs on GPU workstation

2018-09-18 Thread Szilárd Páll
assignment, number of ranks, or your use of the > -nb, > -pme, and -npme options, perhaps after measuring the performance you can > get. > > On Thu, Sep 13, 2018 at 4:11 PM Szilárd Páll > wrote: > > > Test timeouts are strange, Is the machine you're running on bu

Re: [gmx-users] Make check failed 2018 Gromacs on GPU workstation

2018-09-13 Thread Szilárd Páll
Test timeouts are strange, Is the machine you're running on busy with other jobs? Regarding the regressiontest failure, can you share tests/regressiontests*/complex/octahedron/mdrun.out please? -- Szilárd On Thu, Sep 13, 2018 at 8:49 PM Phuong Tran wrote: > Hi all, > > I have been trying to

Re: [gmx-users] Workstation choice

2018-09-11 Thread Szilárd Páll
BTW, I'd recommend caution when using the dated d.dppc benchmark for drawing performance conclusions both because it may not be too representative of other workloads (small size, peculiar settings) and because it uses all-bonds constrained with 2fs time step which is not recommended these days

Re: [gmx-users] Workstation choice

2018-09-11 Thread Szilárd Páll
> It’s a bit confusing that in synthetic tests/games performance of i7 8700 > > is higher than Ryzen 7 2700. > > ... > > Sorry for jumping into the thread at this point, but depending on > the problem size and type, it might happen that: > >- a single R2-2700X possibly

Re: [gmx-users] Workstation choice

2018-09-11 Thread Szilárd Páll
Sadly, I can't recommend packaged versions of GROMACS for anything other than pre- or post-processing or non-performance-critical work; these are generally not compiled with proper SIMD support, which is wasteful. Also, I can't (yet) recommend AMD GPUs as a buying option for consumer-grade stuff as we

Re: [gmx-users] Workstation choice

2018-09-11 Thread Szilárd Páll
(DD) incurs a "one-time performance hit" due to the additional work involved in decomposing the system. 2018-09-07 23:25 GMT+07:00 Szilárd Páll : > > > > > Are you intending to use it mostly/only for running simulations or also > as > > a desktop computer? > >

Re: [gmx-users] Workstation choice

2018-09-07 Thread Szilárd Páll
On Fri, Sep 7, 2018 at 4:15 PM Benson Muite wrote: > Check if the routines you will use have been ported to use GPUs. Time > and profile a typical run you will perform on your current hardware to > determine the bottlenecks, and then choose hardware that will perform > best on these bottlenecks

Re: [gmx-users] Workstation choice

2018-09-07 Thread Szilárd Páll
Hi, Are you intending to use it mostly/only for running simulations or also as a desktop computer? Starting with the 2018 release we offload more work to the GPU to account for the increase in the gap between the performance of CPUs and GPUs and the prevalence of (especially workstations) with

Re: [gmx-users] Heterogeneous GPU cluster question?

2018-08-29 Thread Szilárd Páll
Hi, You can use multiple types of GPUs in a single run, but it won't be ideal. Also, with Volta GPUs you'll probably be better off also offloading PME, which won't scale to more than 2-3 GPUs, so you'll likely not want to use more than 2 GPUs in a run with Volta. -- Szilárd On Tue, Aug 28, 2018

Re: [gmx-users] GMXRC removes trailing colon from existing MANPATH

2018-08-27 Thread Szilárd Páll
an > simply appending a colon in my bashrc. > The MANPATH is generated at build time from the scripts/GMXRC.*.cmakein input files, the results of which in turn get installed. These files need the one-liner fix ;) Cheers, -- Szilard > Peter > > > On 07-08-18 14:45, Szilárd Páll wrote

Re: [gmx-users] Feedback wanted - mdp option for preparation vs production

2018-08-24 Thread Szilárd Páll
icularly in scripted > workflows). > > Mark > > On Fri, Aug 24, 2018 at 4:11 PM Szilárd Páll > wrote: > > > The thermostat choice is an easy example where there is a clear case to > be > > made for the proposed mdp option, but what other uses are these for

Re: [gmx-users] Feedback wanted - mdp option for preparation vs production

2018-08-24 Thread Szilárd Páll
The thermostat choice is an easy example where there is a clear case to be made for the proposed mdp option, but what other uses are there for such an option? Unless there are at least a few, I'd say it's better to improve the UI messages, option documentation, manual, etc. than introduce a ~

Re: [gmx-users] [Install error] What can I do this situation?

2018-08-20 Thread Szilárd Páll
On Mon, Aug 20, 2018 at 2:54 PM 김나연 wrote: > There is error messages. TT > 1) What can I do?? > Install manually. The installation warnings suggest that you are risking strange failures (due to mixing C++ libraries). > 2) If this process succeeds, can I do GPU calculations in mdrun? > Yes,

Re: [gmx-users] high load imbalance in GMX5.1 using MARTINI FF

2018-08-09 Thread Szilárd Páll
Linda, This should indeed normally not happen, but before diving deep into the issue I'd suggest testing more recent releases of GROMACS, preferably 2018.2 so we know if there is an issue in the currently actively supported release. Secondly, load imbalance is not necessarily a bad thing if it

Re: [gmx-users] Errors while trying to install GROMACS-2018

2018-08-09 Thread Szilárd Páll
Hi, This is likely an issue with the combination of gcc and CUDA versions you are using. What are these versions? Can you install the latest CUDA (or at least recent) and see if it solves the issue? Cheers, -- Szilárd On Wed, Aug 8, 2018 at 8:00 PM Lovuit CHEN wrote: > Hi everyone, > > > I

Re: [gmx-users] NVIDIA CUDA Alanine Scanning

2018-08-07 Thread Szilárd Páll
Hi, Yes, you can use CUDA acceleration, and FEP does work; we try to keep feature-parity between the GPU-accelerated and non-accelerated modes of GROMACS. I can't comment in depth on GMXPBSA; they may not have full support for newer releases, from what a brief look at their mailing list shows.

Re: [gmx-users] GMXRC removes trailing colon from existing MANPATH

2018-08-07 Thread Szilárd Páll
Hi, Can you please submit your change to gerrit.gromacs.org -- and perhaps it's best if you also file an issue on redmine.gromacs.org with the brief description you posted here? Thanks, -- Szilárd On Fri, Jul 27, 2018 at 2:28 PM Peter Kroon wrote: > Hi all, > > I noticed that sourcing GMXRC

Re: [gmx-users] gromacs with mps

2018-08-07 Thread Szilárd Páll
Hi, It does sound like a CUDA/MPS setup issue; GROMACS uses a relatively small amount of GPU memory, so unless you are using a very skinny GPU or a very large input, it's most likely not a GROMACS issue. BTW, have you made sure that your GPUs are not in process-exclusive mode? Cheers, -- Szilárd

Re: [gmx-users] Issue with regression test.

2018-08-07 Thread Szilárd Páll
Hi, Can you share the directory of the failed test, i.e. regressiontests/complex/nbnxn_vsite? Can you check running the regressiontests manually using 1/2/4 ranks, e.g. perl gmxtest.pl complex -nt 1 -- Szilárd On Wed, Aug 1, 2018 at 12:49 PM Raymond Arter wrote: > Dear All, > > I'm building
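
For reference, the manual invocations could look like this (run from inside the extracted regression test directory, whose exact name is an assumption):

  cd regressiontests-2018*/
  perl gmxtest.pl complex -nt 1
  perl gmxtest.pl complex -nt 2
  perl gmxtest.pl complex -nt 4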

Re: [gmx-users] Too few cells to run on multiple cores

2018-08-07 Thread Szilárd Páll
Hi, The domain decomposition has certain algorithmic limits that you can relax, but as you notice that comes at the cost of deteriorating load balance -- and at a certain point it might come at the cost of simulations aborting mid-run (if you make -rdd too large). More load imbalance does not
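
For illustration only, the knob mentioned above is an mdrun option; the value here is a placeholder, not a recommendation:

  gmx mdrun -deffnm topol -rdd 1.4   # relax the DD bonded-interaction distance limit (use with care)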

Re: [gmx-users] OpenMP multithreading mdrun

2018-07-25 Thread Szilárd Páll
On Wed, Jul 25, 2018 at 4:27 PM Smith, Iris wrote: > Good Morning Gromacs Users, > > Gromacs 2018.2 was recently installed on our intel-based HPC cluster using > OpenMP (without MPI). > > Can I still utilize multiple threads within one node when utilizing mdrun? > If so, do I need to call openmp

Re: [gmx-users] mpirun and gmx_mpi

2018-07-25 Thread Szilárd Páll
GPU device which is shared > between ranks. > If you use MPS on the compute nodes, it will use MPS. If you don't, processes will share GPUs, but execution will be somewhat less efficient. -- Szilárd > > > Regards, > Mahmood > > > On Wednesday, July 25, 2018, 1:05:10 AM G

Re: [gmx-users] mpirun and gmx_mpi

2018-07-24 Thread Szilárd Páll
On Tue, Jul 24, 2018 at 3:13 PM Mahmood Naderan wrote: > No idea? Those who use GPU, which command do they use? gmx or gmx_mpi? > > That choice depends on whether you want to run across multiple compute nodes; the former can not while the latter, as it is (by default) indicates that it's using

Re: [gmx-users] Problems during installation

2018-07-19 Thread Szilárd Páll
On Mon, Jul 16, 2018 at 7:43 PM Rajat Desikan wrote: > Hi Mark, > > Thank you for the quick answer. My group is experimenting with a GPU-heavy > processor-light configuration similar to the Amber machines available from > Exxact (https://www.exxactcorp.com/AMBER-Certified-MD-Systems). In our >

Re: [gmx-users] rerun from random seeds

2018-07-12 Thread Szilárd Páll
Yes, this will generate velocities with a random seed. -- Szilárd On Thu, Jul 5, 2018 at 5:15 PM MD wrote: > Hi Gromacs folks, > > I am trying to re-run a 100 ns simulation with different velocities and > random seed. Would the setup from the mdp file as the following a good one? > > ;
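
For reference, a minimal sketch of the relevant .mdp lines (the temperature is a placeholder):

  gen_vel  = yes   ; generate new velocities from a Maxwell distribution
  gen_temp = 300   ; temperature for velocity generation (placeholder value)
  gen_seed = -1    ; -1 lets grompp pick a pseudo-random seed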

Re: [gmx-users] pme grid with gpu

2018-07-12 Thread Szilárd Páll
That's the PP-PME load balancing output (see -tunepme option / http://manual.gromacs.org/documentation/2018/user-guide/mdrun-performance.html ). -- Szilárd On Tue, Jul 10, 2018 at 7:35 PM Mahmood Naderan wrote: > Hi, > When I run mdrun with "-nb gpu", I see the following output > > >
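
If those messages are unwanted (e.g. for reproducible timings), the load balancing can be switched off; a sketch with an assumed run name:

  gmx mdrun -deffnm topol -nb gpu -notunepme   # disable PP-PME load balancing (may cost performance)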

Re: [gmx-users] cpu threads in a gpu run

2018-07-12 Thread Szilárd Páll
On Tue, Jul 10, 2018 at 5:12 AM Mahmood Naderan wrote: > No idea? It seems to be odd. At the beginning of run, I see > > NOTE: GROMACS was configured without NVML support hence it can not exploit > application clocks of the detected Quadro M2000 GPU to improve > performance. >

Re: [gmx-users] cpu threads in a gpu run

2018-07-12 Thread Szilárd Páll
On Mon, Jul 9, 2018 at 12:43 PM Mahmood Naderan wrote: > Hi, > When I run "-nt 16 -nb cpu", I see nearly 1600% cpu utilization. You request the CPU to do the work, so all cores are fully utilized. > However, when I run "-nt 16 -nb gpu", I see about 600% cpu utilization. The default

Re: [gmx-users] GROMACS- suggestion for GPU buying

2018-07-12 Thread Szilárd Páll
If price does not matter, get V100s; if it matters somewhat, get TITAN-Vs. The same applies if you want the best performance per simulation. If you want the best perf/buck, the 1080 Ti is still a better investment (or you could wait and see if the next-gen consumer cards come out soon). -- Szilárd On Tue, Jul

Re: [gmx-users] GTX 960 vs Tesla K40

2018-06-21 Thread Szilárd Páll
[quoted mdrun timing table ending with: Total 48578.997 544107.322 100.0] > (*) Note that with separate PME ranks, the walltime column actually sums to > twice the total rep

Re: [gmx-users] GTX 960 vs Tesla K40

2018-06-18 Thread Szilárd Páll
but without seeing a log file, it's hard to tell... The result is a solid increase in performance on a small-ish system (20K > atoms): 90 ns/day instead of 65-70. I don't use this box for anything > except prototyping, but still the swap + tweaks were pretty useful. > > Alex > > &

Re: [gmx-users] GTX 960 vs Tesla K40

2018-06-15 Thread Szilárd Páll
Hi, Regarding the K40 vs GTX 960 question, the K40 will likely be a bit faster (though it'll consume more power, if that matters). The difference will be at most 20% in total performance, I think -- and with small systems likely negligible (as a smaller card with higher clocks is more efficient at

Re: [gmx-users] how to improve computing?

2018-06-14 Thread Szilárd Páll
Vlad, Please share your log file(s); it's easier to be concrete about a concrete case. mdrun will by default attempt to use all available resources, unless there are some hard limitations that prevent this (e.g. a very small number of atoms can't be decomposed) or there is a reason to believe that

Re: [gmx-users] Install Gromacs Cuda MacBook pro with eGPU OS X 10.13.4

2018-05-31 Thread Szilárd Páll
Hi, The issue is a limitation imposed by the CUDA toolkit; you could try using the native clang support by passing GMX_CLANG_CUDA=ON to cmake. Also note that: - I might not remember the reviews correctly, but isn't NVIDIA officially unsupported in eGPUs (it might still work, but beware)? - you only
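
A hedged sketch of such a configuration (paths are assumptions; clang has to be new enough for your CUDA version):

  cmake .. -DGMX_GPU=ON -DGMX_CLANG_CUDA=ON \
           -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++ \
           -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda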

Re: [gmx-users] strange GPU load distribution

2018-05-07 Thread Szilárd Páll
Hi, You have at least one option more elegant than using a separate binary for EM. Set the GMX_DISABLE_GPU_DETECTION=1 environment variable, which is the internal GROMACS override that forces detection off for cases similar to yours. That should solve the detection latency. If for some reason it
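
For example (bash syntax; the em run name is an assumption):

  GMX_DISABLE_GPU_DETECTION=1 gmx mdrun -deffnm em -nb cpu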

Re: [gmx-users] strange GPU load distribution

2018-04-27 Thread Szilárd Páll
The second column is PIDs, so there is a whole lot more going on there than just a single simulation with a single rank using two GPUs. That would be one PID and two entries for the two GPUs. Are you sure you're not running other processes? -- Szilárd On Thu, Apr 26, 2018 at 5:52 AM, Alex

Re: [gmx-users] Problem with CUDA

2018-04-07 Thread Szilárd Páll
iginal Message- > From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se > [mailto:gromacs.org_gmx-users-boun...@maillist.sys.kth.se] On Behalf Of > Szilárd Páll > Sent: Friday, April 06, 2018 2:40 PM > To: Discussion list for GROMACS users <gmx-us...@gromacs.org> > Cc

Re: [gmx-users] GROMACS 2018 MDRun: Multiple Ranks/GPU Issue

2018-04-07 Thread Szilárd Páll
Your GPUs are in process-exclusive mode, which makes it impossible for multiple ranks to use a GPU; see the nvidia-smi --compute-mode option. -- Szilárd On Fri, Apr 6, 2018 at 10:14 PM, Hollingsworth, Bobby wrote: > Hello all, > > I'm tuning MDrun on a node with 24
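
To check and reset the compute mode (the second command needs root):

  nvidia-smi -q -d COMPUTE   # show the current compute mode of each GPU
  sudo nvidia-smi -c 0       # 0 = DEFAULT, which allows multiple processes/ranks per GPU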

Re: [gmx-users] Problem with CUDA

2018-04-06 Thread Szilárd Páll
> Chris > > -Original Message- > From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se > [mailto:gromacs.org_gmx-users-boun...@maillist.sys.kth.se] On Behalf Of > Szilárd Páll > Sent: Friday, April 06, 2018 12:05 PM > To: Discussion list for GROMACS users <gmx-us...@gromacs.org&g

Re: [gmx-users] Problem with CUDA

2018-04-06 Thread Szilárd Páll
needed? -- Szilárd On Fri, Apr 6, 2018 at 6:57 PM, Szilárd Páll <pall.szil...@gmail.com> wrote: > I think the fpic errors can't be caused by missing rdc=true because > the latter refers to the GPU _device_ code, but GROMACS does not need > relocatable device code, so that shou

Re: [gmx-users] Problem with CUDA

2018-04-06 Thread Szilárd Páll
pp1_ii_a1eafeba' > > Chris > > -Original Message- > From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se > [mailto:gromacs.org_gmx-users-boun...@maillist.sys.kth.se] On Behalf Of > Szilárd Páll > Sent: Friday, April 06, 2018 10:17 AM > To: Discussion list for GROM

Re: [gmx-users] Number of Xeon cores per GTX 1080Ti

2018-04-06 Thread Szilárd Páll
date.html [5] https://www.microway.com/knowledge-center-articles/detailed-specifications-of-the-skylake-sp-intel-xeon-processor-scalable-family-cpus/ -- Szilárd On Thu, Apr 5, 2018 at 10:22 AM, Jochen Hub <j...@gwdg.de> wrote: > > > Am 03.04.18 um 19:03 schrieb Szilárd Páll: > >>

Re: [gmx-users] Problem with CUDA

2018-04-06 Thread Szilárd Páll
Hi, What is the reason for using the custom CMake options? What is the -rdc=true for? I don't think it's needed, and it can very well be causing the issue. Have you tried to actually do an as-vanilla-as-possible build? -- Szilárd On Thu, Apr 5, 2018 at 6:52 PM, Borchert, Christopher B
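
An as-vanilla-as-possible GPU build for comparison could look like this (the install prefix is an assumption):

  cmake .. -DGMX_GPU=ON -DCMAKE_INSTALL_PREFIX=$HOME/gromacs-2018
  make -j 8 && make check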

Re: [gmx-users] How to search answers for previous posts?

2018-04-06 Thread Szilárd Páll
Use Google; the "site:" keyword is ideal for that. -- Szilárd On Fri, Apr 6, 2018 at 3:51 PM, ZHANG Cheng <272699...@qq.com> wrote: > Dear Gromacs, > I know I can see all the post from > https://mailman-1.sys.kth.se/pipermail/gromacs.org_gmx-users/ > > > but can I search from this link? I do not
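
For example, a search query restricted to the list archives could look like:

  site:mailman-1.sys.kth.se/pipermail/gromacs.org_gmx-users "segmentation fault" mdrun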

[gmx-users] Xeon Phi's Re: Number of Xeon cores per GTX 1080Ti

2018-04-04 Thread Szilárd Páll
On Wed, Apr 4, 2018 at 4:01 PM, Raman Preet Singh wrote: > Szilard: > > Thanks for sharing this valuable info. Very helpful. > > We are looking forward to procure a Xeon Phi processor based system. Specs > are yet to be finalized. FYI: the follow-up to the current

Re: [gmx-users] Number of Xeon cores per GTX 1080Ti

2018-04-04 Thread Szilárd Páll
Please don't shift the topic of an existing thread, rather create a new one. Will reply to topics relevant to this discussion here, the rest separately. On Wed, Apr 4, 2018 at 4:01 PM, Raman Preet Singh wrote: > Szilard: > > Thanks for sharing this valuable info.

Re: [gmx-users] Performance

2018-04-03 Thread Szilárd Páll
> > Simulation doesn't crush and doesn't generate error message. > It take forever without updating report in log file or other output files. > > Is this a bug? > > > > On Thu, Mar 29, 2018 at 7:58 AM, Szilárd Páll <pall.szil...@gmail.com> > wrote: >

Re: [gmx-users] Number of Xeon cores per GTX 1080Ti

2018-04-03 Thread Szilárd Páll
On Tue, Apr 3, 2018 at 5:10 PM, Jochen Hub <j...@gwdg.de> wrote: > > > Am 03.04.18 um 16:26 schrieb Szilárd Páll: > >> On Tue, Apr 3, 2018 at 3:41 PM, Jochen Hub <j...@gwdg.de> wrote: >>> >>>benchmar >>> >>

Re: [gmx-users] Number of Xeon cores per GTX 1080Ti

2018-04-03 Thread Szilárd Páll
On Tue, Apr 3, 2018 at 3:41 PM, Jochen Hub <j...@gwdg.de> wrote: > > > Am 29.03.18 um 20:57 schrieb Szilárd Páll: > >> Hi Jochen, >> >> For that particular benchmark I only measured performance with >> 1,2,4,8,16 cores with a few different kinds of GPUs.

Re: [gmx-users] Number of Xeon cores per GTX 1080Ti

2018-03-29 Thread Szilárd Páll
Hi Jochen, For that particular benchmark I only measured performance with 1,2,4,8,16 cores with a few different kinds of GPUs. It would be easy to do the runs on all possible core counts with increments of 1, but that won't tell a whole lot more than what the performance is of a run using a

Re: [gmx-users] Performance

2018-03-29 Thread Szilárd Páll
[quoted reply] > I am attaching the file. > Thank you. > Myunggi Yi > On Wed, Mar 28, 2018 at 11:40 AM, Szilárd Páll <pall.szil...@gmail.com> wrote:

Re: [gmx-users] Performance

2018-03-28 Thread Szilárd Páll
Again, please share the exact log files / description of inputs. What does "bad performance" mean? -- Szilárd On Wed, Mar 28, 2018 at 5:31 PM, Myunggi Yi wrote: > Dear users, > > I have two questions. > > > 1. I used to run typical simulations with the following command.

Re: [gmx-users] mdrun on single node with GPU

2018-03-28 Thread Szilárd Páll
Hi, I can't reproduce your issue; can you please share a full log file? Cheers, -- Szilárd On Wed, Mar 28, 2018 at 5:26 AM, Myunggi Yi wrote: > Dear users, > > I am running simulation with gromacs 2018.1 version > on a computer with quad core and 1 gpu. > > I

Re: [gmx-users] Excessive and gradually increasing memory usage with OpenCL

2018-03-27 Thread Szilárd Páll
Hi, This is an issue I noticed recently, but I thought it was only affecting some use-cases (or some runtimes). However, it seems it's a broader problem. It is under investigation, but for now it seems you can eliminate it (or strongly diminish its effects) by turning off GPU-side task timing. You
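
The truncated sentence presumably continues with the GMX_DISABLE_GPU_TIMING environment variable; a hedged sketch of the workaround (check the environment-variable section of the manual for your version):

  GMX_DISABLE_GPU_TIMING=1 gmx mdrun -deffnm topol   # assumed workaround: turn off GPU-side task timing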

Re: [gmx-users] cudaMallocHost filed: unknown error

2018-03-26 Thread Szilárd Páll
As a side note, your mdrun invocation does not seem suitable for GPU-accelerated runs; you'd most likely be better off running fewer ranks. -- Szilárd On Fri, Mar 23, 2018 at 9:26 PM, Christopher Neale wrote: > Hello, > > I am running gromacs 5.1.2 on single nodes

Re: [gmx-users] Confusions about rlist>=rcoulomb and rlist>=rvdw in the mdp options in Gromacs 2016 (or 2018) user guide??

2018-03-26 Thread Szilárd Páll
rlist >= rcoulomb / rvdw is the correct one; the list cut-off has to be at least as long as the longest of the interaction cut-offs. -- Szilárd On Mon, Mar 26, 2018 at 1:05 PM, huolei peng wrote: > In the user guide of Gromacs 2016 (or 2018), it shows " rlist>=rcoulomb " >
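
For reference, a consistent set of .mdp cut-off lines (the values are illustrative only):

  rlist    = 1.2   ; neighbour-list cut-off; must be >= max(rcoulomb, rvdw)
  rcoulomb = 1.2
  rvdw     = 1.2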

Re: [gmx-users] 2018 installation make check errors, probably CUDA related

2018-03-23 Thread Szilárd Páll
essage- > From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se > [mailto:gromacs.org_gmx-users-boun...@maillist.sys.kth.se] On Behalf Of > Szilárd Páll > Sent: Friday, 23 March 2018 11:45 > To: Discussion list for GROMACS users <gmx-us...@gromacs.org> > Cc: gromacs.org_gmx-users@maill

Re: [gmx-users] 2018 installation make check errors, probably CUDA related

2018-03-23 Thread Szilárd Páll
Hi, Please provide the output of unit tests with 2018.1, it has improved the error reporting targeting exactly these types of errors. Cheers, -- Szilárd On Thu, Mar 22, 2018 at 5:44 PM, Tresadern, Gary [RNDBE] wrote: > Hi Mark, > Thanks, I tried 2018-1 and was hopeful it

Re: [gmx-users] Problem when installing gromacs 5.1.4 on linux

2018-03-15 Thread Szilárd Páll
Your fftw download is not valid (corrupted, interrupted); as the cmake output states, it's downloading: http://www.fftw.org/fftw-3.3.4.tar.gz which has the former MD5 checksum, but yours produces the latter. Try cleaning your build tree and re-running cmake (to re-download fftw). -- Szilárd On
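
A sketch of the clean rebuild (the directory layout is an assumption):

  rm -rf build && mkdir build && cd build
  cmake .. -DGMX_BUILD_OWN_FFTW=ON   # re-downloads and builds fftw-3.3.4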

Re: [gmx-users] rvdw and rcoulomb

2018-03-12 Thread Szilárd Páll
Note that rcoulomb, unlike rvdw, is tunable (together with the PME grid spacing) when using PME long-range electrostatics. -- Szilárd On Mon, Mar 12, 2018 at 3:43 PM, Justin Lemkul wrote: > > > On 3/11/18 7:33 PM, Ahmed Mashaly wrote: >> >> Dear users, >> Can I reduce the

Re: [gmx-users] Minimal PCI Bandwidth for Gromacs and Infiniband?

2018-03-12 Thread Szilárd Páll
om/pdf/GROMACS_Analysis_Intel_E5_2697v3_K40_K80_GPUs.pdf [2] http://mvapich.cse.ohio-state.edu/performance/pt_to_pt -- Szilárd On Mon, Mar 12, 2018 at 4:06 PM, Szilárd Páll <pall.szil...@gmail.com> wrote: > Hi, > > Note that it matters a lot how far you want to parallelize and what > kind of r

Re: [gmx-users] Minimal PCI Bandwidth for Gromacs and Infiniband?

2018-03-12 Thread Szilárd Páll
Hi, Note that it matters a lot how far you want to parallelize and what kind of runs you would do. 10 GbE with RoCE may well be enough to scale across a couple of such nodes, especially if you can squeeze PME into a single node and avoid the MPI collectives across the network. You may not even

Re: [gmx-users] 2018: large performance variations

2018-03-05 Thread Szilárd Páll
ecommend the cuda-memtest tool (instead of the AFAIK outdated/unmaintained memtestG80). Cheers, -- Szilárd > > > > === Why be happy when you could be normal? > > > -- > *From:* Szilárd Páll <pall.szil...@gmail.com> > *To

Re: [gmx-users] 2018: large performance variations

2018-03-02 Thread Szilárd Páll
BTW, we have considered adding a warmup delay to the tuner, would you be willing to help testing (or even contributing such a feature)? -- Szilárd On Fri, Mar 2, 2018 at 7:28 PM, Szilárd Páll <pall.szil...@gmail.com> wrote: > Hi Michael, > > Can you post full logs, please?

Re: [gmx-users] 2018: large performance variations

2018-03-02 Thread Szilárd Páll
Hi Michael, Can you post full logs, please? This is likely related to a known issue where CPU cores (and in some cases GPUs too) may take longer to clock up and get a stable performance than the time the auto-tuner takes to do a few cycles of measurements. Unfortunately we do not have a good

Re: [gmx-users] stride

2018-03-02 Thread Szilárd Páll
Indeed, if the two jobs do not know of each other, both will pin to the same set of threads -- the default _should_ be 0,1,2,3,4,5 because it assumes that you want to maximize performance with 6 threads only, and to do so it pins one thread/core (i.e. uses stride 2). When sharing a node among two
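
A hedged sketch of explicit pinning for two 6-thread jobs sharing one node (the offsets depend on the node's hardware-thread numbering and are assumptions):

  gmx mdrun -deffnm job1 -nt 6 -pin on -pinoffset 0 &
  gmx mdrun -deffnm job2 -nt 6 -pin on -pinoffset 6 &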

Re: [gmx-users] cpu/gpu utilization

2018-03-02 Thread Szilárd Páll
iew, please read some of the GROMACS papers ( http://www.gromacs.org/Gromacs_papers) or tldr see https://goo.gl/AGv6hy (around slides 12-15). Cheers, -- Szilárd > > > > Regards, > Mahmood > > > > > > > On Friday, March 2, 2018, 3:24:41 PM GMT+3:30, Szilár

Re: [gmx-users] cpu/gpu utilization

2018-03-02 Thread Szilárd Páll
Once again: full log files please, not partial cut-and-paste. Also, you misread something, because your previous logs show: -nb cpu -pme gpu: 56.4 ns/day; -nb cpu -pme gpu -pmefft cpu: 64.6 ns/day; -nb cpu -pme cpu: 67.5 ns/day. So both mixed-mode PME and PME on the CPU are faster, the latter

Re: [gmx-users] cpu/gpu utilization

2018-03-01 Thread Szilárd Páll
Have you read the "Types of GPU tasks" section of the user guide? -- Szilárd On Thu, Mar 1, 2018 at 3:34 PM, Mahmood Naderan wrote: > >Again, first and foremost, try running PME on the CPU, your 8-core Ryzen > will be plenty fast for that. > > > Since I am a computer guy

Re: [gmx-users] cpu/gpu utilization

2018-03-01 Thread Szilárd Páll
No, that does not seem to help much because the GPU is rather slow at getting the PME Spread done (there's still 12.6% wait for the GPU to finish that), and there are slight overheads that end up hurting performance. Again, first and foremost, try running PME on the CPU, your 8-core Ryzen will be

Re: [gmx-users] cpu/gpu utilization

2018-03-01 Thread Szilárd Páll
On Thu, Mar 1, 2018 at 8:25 AM, Mahmood Naderan wrote: > >(try the other parallel modes) > > Do you mean OpenMP and MPI? > No, I meant different offload modes. > > >- as noted above try offloading only the nonbondeds (or possibly the > hybrid PME mode -pmefft cpu) > >

Re: [gmx-users] cpu/gpu utilization

2018-02-28 Thread Szilárd Páll
slightly dated and low-end similar board as the GTX 950 -- Szilárd On Wed, Feb 28, 2018 at 10:26 PM, Szilárd Páll <pall.szil...@gmail.com> wrote: > Thanks! > > Looking at the log file, as I guessed earlier, you can see the following: > > - Given that you have a rather low

Re: [gmx-users] cpu/gpu utilization

2018-02-28 Thread Szilárd Páll
Thanks! Looking at the log file, as I guessed earlier, you can see the following: - Given that you have a rather low-end GPU and a fairly fast workstation CPU the run is *very* GPU-bound: the CPU spends 16.4 + 54.2 = 70.6% waiting for the GPU (see lines 628 and 630) - this means that the

Re: [gmx-users] cpu/gpu utilization

2018-02-28 Thread Szilárd Páll
The list does not accept attachments, so please use a file-sharing or content-sharing website so everyone can see your data and have the context. -- Szilárd On Wed, Feb 28, 2018 at 7:51 PM, Mahmood Naderan wrote: > >Additionally, you still have not provided the *mdrun log

Re: [gmx-users] cpu/gpu utilization

2018-02-28 Thread Szilárd Páll
[quoted nvidia-smi process table: GPU 0 -- PID 1180 (G) /usr/lib/xorg/Xorg 141MiB; PID 1651 (G) compiz 46MiB; PID 3604

Re: [gmx-users] cpu/gpu utilization

2018-02-26 Thread Szilárd Páll
Hi, Please provide details, e.g. the full log so we know what version, on what hardware, settings etc. you're running. -- Szilárd On Mon, Feb 26, 2018 at 8:02 PM, Mahmood Naderan wrote: > Hi, > > While the cut-off is set to Verlet and I run "gmx mdrun -nb gpu -deffnm >

Re: [gmx-users] 2018 performance question

2018-02-20 Thread Szilárd Páll
Hi Michael, What you observe is most likely due to v2018 by default shifting the PME work to the GPU which will often mean fewer CPU cores are needed and runs become more GPU-bound leaving the CPU without work for part of the runtime. This should be easily seen by comparing the log files.

Re: [gmx-users] Worse GROMACS performance with better specs?

2018-02-20 Thread Szilárd Páll
On Fri, Jan 12, 2018 at 2:35 AM, Jason Loo Siau Ee wrote: > Dear Carsten, > > Look's like we're seeing the same thing here, but only when using gcc 4.5.3: > > Original performance (gcc 5.3.1, AVX512, no hwloc support): 49 ns/day > > With hwloc support: > gcc 4.5.3,

[gmx-users] multi-replica runs with GPUs [fork of Re: Gromacs 2018 and GPU PME ]

2018-02-20 Thread Szilárd Páll
formance reasons is that you have at least 1 core per GPU (cores, not hardware threads). Cheers, -- Szilárd > > Thanks so much, > Dan > > > > On Fri, Feb 9, 2018 at 10:27 AM, Szilárd Páll <pall.szil...@gmail.com> > wrote: > > > On Fri, Feb 9, 2018 at 4:25 PM

Re: [gmx-users] GPU problem with running gromacs.2018

2018-02-16 Thread Szilárd Páll
[quoted log: nvcc flags -gencode;arch=compute_70,code=compute_70;-use_fast_math;-D_FORCE_INLINES; host flags -msse4.1;-std=c++11;-O3;-DNDEBUG;-funroll-all-loops;-fexcess-precision=fast; CUDA driver: 9.0; CUDA runtime: 9.0]

Re: [gmx-users] GPU problem with running gromacs.2018

2018-02-15 Thread Szilárd Páll
PS: Also, what you pasted in here states "2016.4", but your subject claims version 2018 -- Szilárd On Thu, Feb 15, 2018 at 6:23 PM, Szilárd Páll <pall.szil...@gmail.com> wrote: > Please provide a full log file output. > -- > Szilárd > > > On Thu, Feb 15, 20

Re: [gmx-users] GPU problem with running gromacs.2018

2018-02-15 Thread Szilárd Páll
Please provide a full log file output. -- Szilárd On Thu, Feb 15, 2018 at 6:11 PM, Osmany Guirola Cruz wrote: > Hi > > I am having problems running mdrun command compiled with GPU > support(cuda 9.0). > here is the output of the mdrun command > >

Re: [gmx-users] compiling 2018 with GPU doesn't work on my debian stretch

2018-02-15 Thread Szilárd Páll
Option A) Get gcc 5.x (e.g. compile it from source). Option B) Install CUDA 9.1 (and the required driver), which is compatible with gcc 6.3 (http://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html). -- Szilárd On Thu, Feb 15, 2018 at 2:34 PM, Michael Brunsteiner wrote: >
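
Option A could be wired up roughly like this (the gcc install path is an assumption):

  cmake .. -DGMX_GPU=ON \
           -DCMAKE_C_COMPILER=/opt/gcc-5.4/bin/gcc \
           -DCMAKE_CXX_COMPILER=/opt/gcc-5.4/bin/g++ \
           -DCUDA_HOST_COMPILER=/opt/gcc-5.4/bin/g++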

Re: [gmx-users] REMD DLB bug

2018-02-12 Thread Szilárd Páll
Hi, The fix will be released in an upcoming 2016.5 patch release (which you can see in the redmine issue page's "Target version" field, BTW). Cheers, -- Szilárd On Mon, Feb 12, 2018 at 2:49 PM, Akshay wrote: > Hello All, > > I was running REMD simulations on Gromacs

Re: [gmx-users] Tests with Threadripper and dual gpu setup

2018-02-09 Thread Szilárd Páll
Hi, Thanks for the report! Did you build with or without hwloc? There is a known issue with the automatic pin stride when not using hwloc which will lead to a "compact" pinning (using half of the cores with 2 threads/core) when <=half of the threads are launched (instead of using all cores 1

Re: [gmx-users] GPU load from nvidia-smi

2018-02-09 Thread Szilárd Páll
On Thu, Feb 8, 2018 at 10:20 PM, Mark Abraham wrote: > Hi, > > On Thu, Feb 8, 2018 at 8:50 PM Alex wrote: > > > Got it, thanks. Even with the old style input I now have a 42% speed up > > with PME on GPU. How, how can I express my enormous

Re: [gmx-users] Gromacs 2018 and GPU PME

2018-02-09 Thread Szilárd Páll
On Fri, Feb 9, 2018 at 4:25 PM, Szilárd Páll <pall.szil...@gmail.com> wrote: > Hi, > > First of all,have you read the docs (admittedly somewhat brief): > http://manual.gromacs.org/documentation/2018/user-guide/ > mdrun-performance.html#types-of-gpu-tasks > > The cu

Re: [gmx-users] Gromacs 2018 and GPU PME

2018-02-09 Thread Szilárd Páll
Hi, First of all, have you read the docs (admittedly somewhat brief): http://manual.gromacs.org/documentation/2018/user-guide/mdrun-performance.html#types-of-gpu-tasks The current PME GPU code was optimized for single-GPU runs. Using multiple GPUs with PME offloaded works, but this mode hasn't been an
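
For context, offloading PME with more than one GPU in 2018 requires a single separate PME rank; a hedged two-GPU sketch (the rank/GPU task mapping is an assumption):

  gmx mdrun -deffnm topol -ntmpi 2 -npme 1 -nb gpu -pme gpu -gputasks 01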

Re: [gmx-users] GMX 2018 regression tests: cufftPlanMany R2C plan failure (error code 5)

2018-02-09 Thread Szilárd Páll
> On Thu, Feb 8, 2018 at 12:27 PM, Szilárd Páll <pall.szil...@gmail.com> > wrote: > > > Note that the actual mdrun performance need not be affected both of it's > > it's a driver persistence issue (you'll just see a few seconds lag at > mdrun > > startup) or some oth

Re: [gmx-users] GMX 2018 regression tests: cufftPlanMany R2C plan failure (error code 5)

2018-02-08 Thread Szilárd Páll
Note that the actual mdrun performance need not be affected, whether it's a driver persistence issue (you'll just see a few seconds of lag at mdrun startup) or some other CUDA application startup-related lag (an mdrun run does mostly very different kinds of things than this particular set of unit

Re: [gmx-users] GMX 2018 regression tests: cufftPlanMany R2C plan failure (error code 5)

2018-02-08 Thread Szilárd Páll
for our rather simple unit tests that should take milliseconds rather than seconds when PM is on (or X is running). -- Szilárd On Thu, Feb 8, 2018 at 6:50 PM, Szilárd Páll <pall.szil...@gmail.com> wrote: > On Thu, Feb 8, 2018 at 6:46 PM, Alex <nedoma...@gmail.com> wrote: > >> A
