Re: [gmx-users] cpu/gpu utilization

2018-03-02 Thread Szilárd Páll
On Fri, Mar 2, 2018 at 1:57 PM, Mahmood Naderan wrote:
> Sorry for the confusion. My fault...
> I saw my previous post and found that I missed something. In fact, I
> couldn't run "-pme gpu".
>
> So, once again, I ran all the commands and uploaded the log files
>
> gmx …

Re: [gmx-users] cpu/gpu utilization

2018-03-02 Thread Mahmood Naderan
Sorry for the confusion. My fault... I saw my previous post and found that I missed something. In fact, I couldn't run "-pme gpu". So, once again, I ran all the commands and uploaded the log files:

gmx mdrun -nobackup -nb cpu -pme cpu -deffnm md_0_1   https://pastebin.com/RNT4XJy8
gmx mdrun …

Re: [gmx-users] cpu/gpu utilization

2018-03-02 Thread Szilárd Páll
Once again, full log files please, not partial cut-and-paste. Also, you misread something, because your previous logs show:

-nb cpu -pme gpu                56.4 ns/day
-nb cpu -pme gpu -pmefft cpu    64.6 ns/day
-nb cpu -pme cpu                67.5 ns/day

So both the mixed-mode PME and PME on the CPU are faster, the latter …
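
Spelled out as full command lines (a sketch only; the -nobackup flag and the md_0_1 name are carried over from Mahmood's earlier messages), those three configurations correspond to:

gmx mdrun -nobackup -nb cpu -pme gpu -deffnm md_0_1
gmx mdrun -nobackup -nb cpu -pme gpu -pmefft cpu -deffnm md_0_1
gmx mdrun -nobackup -nb cpu -pme cpu -deffnm md_0_1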

Re: [gmx-users] cpu/gpu utilization

2018-03-02 Thread Mahmood Naderan
Command is "gmx mdrun -nobackup -pme cpu -nb gpu -deffnm md_0_1" and the log says:

R E A L   C Y C L E   A N D   T I M E   A C C O U N T I N G

On 1 MPI rank, each using 16 OpenMP threads

Computing:   Num   Num   Call   Wall time   Giga-Cycles
             Ranks …

Re: [gmx-users] cpu/gpu utilization

2018-03-02 Thread Magnus Lundborg
Have you tried the mdrun options:
-pme cpu -nb gpu
-pme cpu -nb cpu

Cheers,
Magnus

On 2018-03-02 07:55, Mahmood Naderan wrote:
> If you mean [1], then yes I read that and that recommends to use Verlet
> for the new algorithm depicted in the figures. At least that is my understanding about …
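
Against the run used earlier in the thread, those two suggestions would look something like this (a sketch; -nobackup and md_0_1 are taken from Mahmood's previous commands):

gmx mdrun -nobackup -nb gpu -pme cpu -deffnm md_0_1
gmx mdrun -nobackup -nb cpu -pme cpu -deffnm md_0_1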

Re: [gmx-users] cpu/gpu utilization

2018-03-01 Thread Mahmood Naderan
If you mean [1], then yes, I read that, and it recommends using Verlet for the new algorithm depicted in the figures. At least that is my understanding of offloading. If I read the wrong document, or you mean there are also some other options, please let me know.

[1] …

Re: [gmx-users] cpu/gpu utilization

2018-03-01 Thread Szilárd Páll
Have you read the "Types of GPU tasks" section of the user guide?
--
Szilárd

On Thu, Mar 1, 2018 at 3:34 PM, Mahmood Naderan wrote:
> >Again, first and foremost, try running PME on the CPU, your 8-core Ryzen
> >will be plenty fast for that.
>
> Since I am a computer guy …

Re: [gmx-users] cpu/gpu utilization

2018-03-01 Thread Mahmood Naderan
>Again, first and foremost, try running PME on the CPU; your 8-core Ryzen will
>be plenty fast for that.

Since I am a computer guy and not a chemist, the question may be a noob one! What do you mean exactly by running PME on the CPU? Do you mean "-nb cpu"? Or do you mean setting the cut-off scheme to Group instead of …

Re: [gmx-users] cpu/gpu utilization

2018-03-01 Thread Szilárd Páll
No, that does not seem to help much, because the GPU is rather slow at getting the PME spread done (there's still a 12.6% wait for the GPU to finish that), and there are slight overheads that end up hurting performance. Again, first and foremost, try running PME on the CPU; your 8-core Ryzen will be …
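
Concretely, running PME on the CPU while keeping the nonbondeds on the GPU would look like this (a sketch reusing the file names from earlier in the thread):

gmx mdrun -nobackup -nb gpu -pme cpu -deffnm md_0_1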

Re: [gmx-users] cpu/gpu utilization

2018-03-01 Thread Mahmood Naderan
>- as noted above try offloading only the nonbondeds (or possibly the hybrid
>PME mode -pmefft cpu)

So, with "-pmefft cpu", I don't see any real improvement. See the log at https://pastebin.com/RTYaKSne
I will try the other options to see their effect.

Regards,
Mahmood

Re: [gmx-users] cpu/gpu utilization

2018-03-01 Thread Szilárd Páll
On Thu, Mar 1, 2018 at 8:25 AM, Mahmood Naderan wrote:
> >(try the other parallel modes)
>
> Do you mean OpenMP and MPI?

No, I meant different offload modes.

> >- as noted above try offloading only the nonbondeds (or possibly the
> hybrid PME mode -pmefft cpu)
> …
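
The offload modes are selected per task with mdrun's -nb, -pme and -pmefft flags, so the combinations under discussion look roughly like this (a sketch; file names follow the earlier messages):

gmx mdrun -nb gpu -pme gpu -deffnm md_0_1               # both nonbondeds and PME on the GPU (the default auto choice here)
gmx mdrun -nb gpu -pme gpu -pmefft cpu -deffnm md_0_1   # hybrid PME: the FFTs run on the CPU
gmx mdrun -nb gpu -pme cpu -deffnm md_0_1               # only the nonbondeds offloaded
gmx mdrun -nb cpu -pme cpu -deffnm md_0_1               # everything on the CPU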

Re: [gmx-users] cpu/gpu utilization

2018-02-28 Thread Mahmood Naderan
>(try the other parallel modes)

Do you mean OpenMP and MPI?

>- as noted above try offloading only the nonbondeds (or possibly the hybrid
>PME mode -pmefft cpu)

May I know how? Which part of the documentation talks about that?

Regards,
Mahmood

Re: [gmx-users] cpu/gpu utilization

2018-02-28 Thread Szilárd Páll
PS: Care to know what you can do?
- as noted above, try offloading only the nonbondeds (or possibly the hybrid PME mode, -pmefft cpu)
- check if your GPU has application clocks that can be bumped
- if you have the means, consider getting a bit faster GPU; the Quadro M2000 in your machine is both …
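
The application-clocks item can be checked with the NVIDIA driver tools; a sketch (the clock values to pass are GPU-specific, so only placeholders are shown):

nvidia-smi -q -d CLOCK                            # show current, default and supported application clocks
sudo nvidia-smi -ac <memClockMHz>,<gfxClockMHz>   # raise the application clocks, if the GPU supports it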

Re: [gmx-users] cpu/gpu utilization

2018-02-28 Thread Szilárd Páll
Thanks! Looking at the log file, as I guessed earlier, you can see the following:
- Given that you have a rather low-end GPU and a fairly fast workstation CPU, the run is *very* GPU-bound: the CPU spends 16.4 + 54.2 = 70.6% of its time waiting for the GPU (see lines 628 and 630)
- this means that the …
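
Those percentages come from the cycle and time accounting table near the end of the mdrun log; a quick way to pull out the wait rows (a sketch, assuming the log is named md_0_1.log):

grep -n -i "wait" md_0_1.log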

Re: [gmx-users] cpu/gpu utilization

2018-02-28 Thread Mahmood Naderan
>The list does not accept attachments, so please use a file sharing or content
>sharing website so everyone can see your data and has the context.

I uploaded it here: https://pastebin.com/RCkkFXPx

Regards,
Mahmood

Re: [gmx-users] cpu/gpu utilization

2018-02-28 Thread Szilárd Páll
The list does not accept attachments, so please use a file-sharing or content-sharing website so everyone can see your data and has the context.
--
Szilárd

On Wed, Feb 28, 2018 at 7:51 PM, Mahmood Naderan wrote:
> >Additionally, you still have not provided the *mdrun log …

Re: [gmx-users] cpu/gpu utilization

2018-02-28 Thread Mahmood Naderan
>Additionally, you still have not provided the *mdrun log file* I requested.
>top output is not what I asked for.

See the attached file.

Regards,
Mahmood

Re: [gmx-users] cpu/gpu utilization

2018-02-28 Thread Szilárd Páll
Your run is probably GPU-bound because you have a rather slow GPU, and as per the new mdrun defaults both PME and the nonbondeds are offloaded, which may not be ideal for your case. Try the different offload modes to see which one is best on your hardware. Additionally, you still have not provided the …
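
A simple way to try the offload modes systematically is a short benchmarking loop along these lines (a sketch, not from the thread; it reuses the md_0_1 run, and -nsteps 10000 is an arbitrary length assumed to be enough for a stable ns/day figure):

#!/bin/bash
# Run each offload combination for a short stretch and print its ns/day line.
i=0
for opts in "-nb gpu -pme gpu" "-nb gpu -pme gpu -pmefft cpu" "-nb gpu -pme cpu" "-nb cpu -pme cpu"; do
    i=$((i+1))
    gmx mdrun -nobackup $opts -nsteps 10000 -deffnm md_0_1 -g bench_$i.log
    echo "== $opts =="
    grep "Performance:" bench_$i.log
done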

Re: [gmx-users] cpu/gpu utilization

2018-02-28 Thread Mahmood Naderan
I forgot to say that gromacs reports:

No option -multi
Using 1 MPI thread
Using 16 OpenMP threads

1 GPU auto-selected for this run.
Mapping of GPU IDs to the 2 GPU tasks in the 1 rank on this node:
  PP:0,PME:0

NOTE: GROMACS was configured without NVML support hence it can not exploit …

Re: [gmx-users] cpu/gpu utilization

2018-02-28 Thread Mahmood Naderan
By running "gmx mdrun -nb gpu -deffnm md_0_1" I see the following output:

$ top -b | head -n 10
top - 19:14:10 up 7 min,  1 user,  load average: 4.54, 1.40, 0.54
Tasks: 344 total,   1 running, 343 sleeping,   0 stopped,   0 zombie
%Cpu(s):  7.1 us,  0.5 sy,  0.0 ni, 91.9 id,  0.4 wa,  0.0 hi,  0.0 …
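
top only shows the CPU side; to see how busy the GPU itself is during the run, something like the following can be left running in a second terminal (a sketch, assuming the standard NVIDIA driver tools are installed):

nvidia-smi --query-gpu=utilization.gpu,utilization.memory --format=csv -l 1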

Re: [gmx-users] cpu/gpu utilization

2018-02-26 Thread Szilárd Páll
Hi,

Please provide details, e.g. the full log, so we know what version, on what hardware, and with what settings you're running.
--
Szilárd

On Mon, Feb 26, 2018 at 8:02 PM, Mahmood Naderan wrote:
> Hi,
>
> While the cut-off is set to Verlet and I run "gmx mdrun -nb gpu -deffnm …

[gmx-users] cpu/gpu utilization

2018-02-26 Thread Mahmood Naderan
Hi,

While the cut-off is set to Verlet and I run "gmx mdrun -nb gpu -deffnm input_md", I see that 9 threads out of the 16 logical threads are running on the CPU while the GPU is utilized. gmx also says:

No option -multi
Using 1 MPI thread
Using 16 OpenMP threads

I want to know why 9 …
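
The thread count and placement can also be set explicitly rather than left to mdrun's auto-detection, for example (a sketch, not from the thread; -ntomp and -pin are standard mdrun options):

gmx mdrun -nb gpu -ntomp 16 -pin on -deffnm input_md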