On Thu, Feb 8, 2018 at 10:20 PM, Mark Abraham <mark.j.abra...@gmail.com>
wrote:

> Hi,
>
> On Thu, Feb 8, 2018 at 8:50 PM Alex <nedoma...@gmail.com> wrote:
>
> > Got it, thanks. Even with the old-style input I now have a 42% speed-up
> > with PME on the GPU. How can I express my enormous gratitude?!
> >
>
> Do the science, cite the papers, spread the word, help others, make quality
> bug reports :-) Glad you like it!
>

A few more things to add: participate in the community! E.g.

- help us with early testing: when we release a beta or release candidate,
we generally get extremely limited testing interest, even though early
testing would help iron out issues before the actual release

- give us feedback on what works and what does not work so well; it's easy
for developers to be biased by their own personal preferences, or those of
their most vocally complaining close friends (regarding features, user
interface, command-line behavior, etc.)

- share your knowledge on the mailing list

Cheers,
--
Szilárd


>
> Mark
>
> > On Thu, Feb 8, 2018 at 12:44 PM, Mark Abraham <mark.j.abra...@gmail.com>
> > wrote:
> >
> > > Hi,
> > >
> > > Yes. Note the new use of -gputasks. And perhaps check out
> > > http://manual.gromacs.org/documentation/2018-latest/user-guide/mdrun-performance.html#types-of-gpu-tasks
> > > because things are now different.
> > >
> > > gmx mdrun -ntmpi 3 -npme 1 -nb gpu -pme gpu is more like what you want.
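For a node with three GPUs, the -gputasks flag mentioned above makes the
task-to-device mapping explicit. A sketch only; the device IDs 0, 1 and 2 are
an assumption (check yours with nvidia-smi):

```shell
# Sketch for GROMACS 2018 on a 3-GPU node (device IDs 0-2 assumed;
# verify with nvidia-smi). Three thread-MPI ranks: two PP ranks with
# their nonbonded tasks offloaded to GPUs, plus one dedicated PME rank.
# -gputasks lists one device ID per GPU task, in rank order, so "012"
# places the two nonbonded tasks on GPUs 0 and 1 and PME on GPU 2.
gmx mdrun -ntmpi 3 -npme 1 -nb gpu -pme gpu -gputasks 012
```

Without -gputasks, mdrun chooses the mapping itself, which is usually fine;
the explicit form mainly helps when you want to pin tasks to particular
devices.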
> > >
> > > Mark
> > >
> > > On Thu, Feb 8, 2018 at 8:36 PM Alex <nedoma...@gmail.com> wrote:
> > >
> > > > I think this should be a separate question, given all the recent mess
> > > > with the utils tests...
> > > >
> > > > I am testing mdrun (v 2018) on a system that's trivial and close to a
> > > > 5 x 5 x 5 box filled with water and some ions. We have three GPUs and
> > > > the run is with -nt 18 -gpu_id 012 -pme gpu.
> > > >
> > > > nvidia-smi reports 65% load on GPU 0 and nothing on 1 and 2. Is this
> > > > normal?
> > > >
> > > > Thanks,
> > > >
> > > > Alex
> > > > --
> > > > Gromacs Users mailing list
> > > >
> > > > * Please search the archive at
> > > > http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> > > > posting!
> > > >
> > > > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> > > >
> > > > * For (un)subscribe requests visit
> > > > https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users
> or
> > > > send a mail to gmx-users-requ...@gromacs.org.
> > > >