Hi Carsten,

Thank you for the advice. Using that as a starting point, I think I now understand the thread-MPI / OpenMP system a little better. Since my server has 4 CPU sockets, 8 cores per socket, and 2 hardware threads per core (64 threads in total), -ntmpi 4 designates 1 thread-MPI rank per socket, with 16 OpenMP threads per rank. Does that sound right? Doing this doubles my simulation speed, and I no longer get the warning about the GPU being so drastically underused. However, as I monitor the run, I see that GROMACS is still only keeping about 8-12 CPU threads fully busy. Any idea why that is, and whether I can increase that?
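For reference, the command I am running now looks roughly like the sketch below. The -ntomp and -pin flags are additions I made after reading the mdrun help (so correct me if they are wrong here), and topol.tpr is just a placeholder name, not my actual run input:

# One thread-MPI rank per socket, 16 OpenMP threads per rank (4 x 16 = 64),
# with -gpu_id 0000 mapping all four ranks onto GPU 0.
# topol.tpr is a placeholder for the real .tpr file.
mdrun -ntmpi 4 -ntomp 16 -gpu_id 0000 -pin on -s topol.tpr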
Best,
Jason

> Date: Mon, 15 Dec 2014 12:57:19 +0100
> From: Carsten Kutzner <ckut...@gwdg.de>
> To: gmx-us...@gromacs.org
> Subject: Re: [gmx-users] Trouble balancing GPU/CPU force calculation load, ratio = 0.09
> Message-ID: <f1908832-1240-4bcf-9423-216c6c530...@gwdg.de>
> Content-Type: text/plain; charset=windows-1252
>
> Hi,
>
> from the log file it seems that you were actually using 64 OpenMP threads.
> This is not very efficient; you could try to start mdrun with 4 thread-MPI
> ranks (instead of 1), e.g.
>
> mdrun -ntmpi 4 -gpu_id 0000 -s ?
>
> Could it be that another process was running on your node while you
> ran the simulation?
>
> Carsten
>
> On 15 Dec 2014, at 12:45, Jason Hill <jason.h...@zoologi.su.se> wrote:
>
>> Hello list,
>>
>> I am simulating a protein in water and am concerned that I am not using my
>> hardware to the best of its abilities. Here
>> (https://drive.google.com/file/d/0BwAaTxAET7c5VkZERkFsa1cyRlk/view?usp=sharing)
>> is the log file from a 1 nanosecond simulation. The only piece of
>> information missing from it that may be of use is that I am using the
>> OPLS/AA force field. Additionally, GROMACS only seems to be using 8-12 cores
>> of the 64 available, despite its complaint that the GPU is being
>> underutilized. Please take a look and, if you can, give me some advice about
>> improving my simulation efficiency.
>>
>> Best regards,
>> Jason
>>
>> Jason Hill, Ph.D.
>> Wheat Lab
>> Zoologiska Institutionen
>> Stockholms Universitet
>> D-419 Svante Arrhenius v 18B
>> S-10691 Stockholm Sweden
>
> --
> Dr. Carsten Kutzner
> Max Planck Institute for Biophysical Chemistry
> Theoretical and Computational Biophysics
> Am Fassberg 11, 37077 Goettingen, Germany
> Tel. +49-551-2012313, Fax: +49-551-2012302
> http://www.mpibpc.mpg.de/grubmueller/kutzner
> http://www.mpibpc.mpg.de/grubmueller/sppexa