Re: [gmx-users] restoring pullf.xvg file

2018-02-09 Thread Mark Abraham
Unfortunately not

Mark

On Fri, Feb 9, 2018, 17:56 Nick Johans  wrote:

>  Hi,
>
> I deleted the pullf.xvg file unintentionally. Is there anyway to restore
> and reproduce it from other outputs?
>
> Best regards
> -Nick
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>


Re: [gmx-users] Domain decomposition for parallel simulations

2018-02-09 Thread Mark Abraham
Hi,

On Fri, Feb 9, 2018, 17:15 Kevin C Chan  wrote:

> Dear Users,
>
> I have encountered the problem of "There is no domain decomposition for n
> nodes that is compatible with the given box and a minimum cell size of x
> nm" and by reading through the gromacs website and some threads I
> understand that the problem might be caused by breaking the system into too
> small boxes by too many ranks. However, I have no idea how to get the
> correct estimation of suitable paralleling parameters. Hope someone could
> share his experience.
>
> Here are information stated in the log file:
> *Initializing Domain Decomposition on 4000 ranks*
> *Dynamic load balancing: on*
> *Will sort the charge groups at every domain (re)decomposition*
> *Initial maximum inter charge-group distances:*
> *two-body bonded interactions: 0.665 nm, Dis. Rest., atoms 23558 23590*
> *  multi-body bonded interactions: 0.425 nm, Proper Dih., atoms 12991
> 12999*
> *Minimum cell size due to bonded interactions: 0.468 nm*
> *Maximum distance for 5 constraints, at 120 deg. angles, all-trans: 0.819
> nm*
> *Estimated maximum distance required for P-LINCS: 0.819 nm*
>

Here we see mdrun report how large it needs to make the domains to ensure
they can do their job - in this case P-LINCS is the most demanding.

*This distance will limit the DD cell size, you can override this with
> -rcon*
> *Guess for relative PME load: 0.11*
> *Will use 3500 particle-particle and 500 PME only ranks*
> *This is a guess, check the performance at the end of the log file*
> *Using 500 separate PME ranks, as guessed by mdrun*
>

Mdrun guessed poorly, as we will see.

*Scaling the initial minimum size with 1/0.8 (option -dds) = 1.25*
> *Optimizing the DD grid for 3500 cells with a minimum initial size of 1.024
> nm*
>

That's 1.25 × 0.819 nm ≈ 1.024 nm, so that domains can cope as particles move around.

*The maximum allowed number of cells is: X 17 Y 17 Z 17*
>

Thus the grid that produces 3500 ranks can have no dimension greater than
17.

And I got this afterwards:
> *Fatal error:*
> *There is no domain decomposition for 3500 ranks that is compatible with
> the given box and a minimum cell size of 1.02425 nm*
>
> Here are some questions:
> 1. the maximum allowed number of cells is 17x17x17 which is 4913 and seems
> to be larger than the requested 3500 particle-particle ranks, so the
> minimum cell size is not causing the problem?
>

It is. The prime factors of 3500 are not very forgiving. The closest
factorization, 25 × 14 × 10, still has one dimension above 17, so no grid of
3500 cells satisfies the limit. mdrun painted itself into a corner when
choosing 3500 PP ranks. The choice of decomposition is not trivial (see one
of my published works, hint hint), and it is certainly possible that using
less hardware gives better performance, by allowing the PP and PME grids to
have mutually agreeable decompositions and thus better message-passing
performance. 4000 is very awkward given the constraint of 17; maybe
16 x 16 x 15 = 3840 ranks overall would work well.
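As a quick sanity check, a brute-force shell loop over all candidate grids
shows that no 3-D grid of 3500 cells fits within 17 cells per dimension; the
sketch below (plain shell, using seq) prints nothing:

for x in $(seq 1 17); do
  for y in $(seq 1 17); do
    for z in $(seq 1 17); do
      # report any grid whose cell count is exactly 3500
      [ $((x*y*z)) -eq 3500 ] && echo "$x x $y x $z"
    done
  done
done

By contrast, 16 x 16 x 15 = 3840 does fit within the limit, which is why
choosing the rank count with its factorization in mind helps.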

2. Where does this 1.024 nm comes from? We can see the inter charge-group
> distances are listed as 0.665 and 0.425 nm
> 3. The distance restraint between atoms 23558 23590 was set explicitly (or
> added manually) in the topology file and should be around 0.32 nm by using
> [intermolecular_interactions]. How could I know my manual setting is
> working or not? As it has shown a different value.
>

Well one of you is right, but I can't tell which :-) Try measuring it in a
different way.

Mark


> Thanks in advance,
> Kevin
> OSU
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>

Re: [gmx-users] Gromacs 2018 and GPU PME

2018-02-09 Thread Mark Abraham
Hi,

On Fri, Feb 9, 2018, 18:05 Alex  wrote:

> Just to quickly jump in, because Mark suggested taken a look at the
> latest doc and unfortunately I must admit that I didn't understand what
> I read. I appear to be especially struggling with the idea of gputasks.
>

Szilard's link specifically targets that issue. Is there something unclear
there? Naturally, a GPU task is work that runs on a GPU.

Can you please explain what is happening in this line?
>
> > -pme gpu -nb gpu -ntmpi 8 -ntomp 6 -npme 1 -gputasks 0001
>
> I am seriously confused here. Also, the number of ranks is 8, while the
> number of threads is 6? Is -ntomp now specifying the _per-rank_ number of
> threads, i.e. the actual number of threads for this job would be 48?
>

Yes. The -ntomp, -nt, and -ntmpi options have always been different from
each other, and still work as they always did. See
http://manual.gromacs.org/documentation/2018-latest/user-guide/mdrun-performance.html.
There are 8 ranks. You specified that one of them does PME work, that all
PP work and all PME work goes to GPUs, and that 7 of the GPU tasks go on
GPU 0 and one on GPU 1. Please check out the docs for each option rather
than guessing :-)

Mark

Thank you,
>
> Alex
>
>
> On 2/9/2018 8:25 AM, Szilárd Páll wrote:
> > Hi,
> >
> > First of all,have you read the docs (admittedly somewhat brief):
> >
> http://manual.gromacs.org/documentation/2018/user-guide/mdrun-performance.html#types-of-gpu-tasks
> >
> > The current PME GPU was optimized for single-GPU runs. Using multiple
> GPUs
> > with PME offloaded works, but this mode hasn't been an optimization
> target
> > and it will often not give very good performance. Using multiple GPUs
> > requires a separate PME rank (as you have realized), only one can be used
> > (as we don't support PME decomposition on GPUs) and it comes some
> inherent
> > scaling drawbacks. For this reason, unless you _need_ your single run to
> be
> > as fast as possible, you'll be better off running multiple simulations
> > side-by side.
> >
> > A few tips for tuning the performance of a multi-GPU run with PME
> offload:
> > * expect to get at best 1.5 scaling to 2 GPUs (rarely 3 if the tasks
> allow)
> > * generally it's best to use about the same decomposition that you'd use
> > with nonbonded-only offload, e.g. in your case 6-8 ranks
> > * map the GPU task alone or at most together with 1 PP rank to a GPU,
> i.e.
> > use the new -gputasks option
> > e.g. for your case I'd expect the following to work ~best:
> > gmx mdrun -v -deffnm md -pme gpu -nb gpu -ntmpi 8 -ntomp 6 -npme 1
> > -gputasks 0001
> > or
> > gmx mdrun -v -deffnm md -pme gpu -nb gpu -ntmpi 8 -ntomp 6 -npme 1
> > -gputasks 0011
> >
> >
> > Let me know if that gave some improvement.
> >
> > Cheers,
> >
> > --
> > Szilárd
> >
> > On Fri, Feb 9, 2018 at 8:51 AM, Gmx QA  wrote:
> >
> >> Hi list,
> >>
> >> I am trying out the new gromacs 2018 (really nice so far), but have a
> few
> >> questions about what command line options I should specify, specifically
> >> with the new gnu pme implementation.
> >>
> >> My computer has two CPUs (with 12 cores each, 24 with hyper threading)
> and
> >> two GPUs, and I currently (with 2018) start simulations like this:
> >>
> >> $ gmx mdrun -v -deffnm md -pme gpu -nb gpu -ntmpi 2 -npme 1 -ntomp 24
> >> -gpu_id 01
> >>
> >> this works, but gromacs prints the message that 24 omp threads per mpi
> rank
> >> is likely inefficient. However, trying to reduce the number of omp
> threads
> >> I see a reduction in performance. Is this message no longer relevant
> with
> >> gpu pme or am I overlooking something?
> >>
> >> Thanks
> >> /PK
> >> --
> >> Gromacs Users mailing list
> >>
> >> * Please search the archive at http://www.gromacs.org/
> >> Support/Mailing_Lists/GMX-Users_List before posting!
> >>
> >> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> >>
> >> * For (un)subscribe requests visit
> >> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> >> send a mail to gmx-users-requ...@gromacs.org.
> >>
>
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] Gromacs 2018 and GPU PME

2018-02-09 Thread Mark Abraham
On Fri, Feb 9, 2018, 16:57 Daniel Kozuch  wrote:

> Szilárd,
>
> If I may jump in on this conversation, I am having the reverse problem
> (which I assume others may encounter also) where I am attempting a large
> REMD run (84 replicas) and I have access to say 12 GPUs and 84 CPUs.
>
> Basically I have less GPUs than simulations. Is there a logical approach to
> using gputasks and other new options in GROMACS 2018 for this setup? I read
> through the available documentation,but as you mentioned it seems to be
> targeted for a single-GPU runs.
>


Try the example at
http://manual.gromacs.org/documentation/2018-latest/user-guide/mdrun-features.html#examples-running-multi-simulations

Mark
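A rough sketch of what that can look like for this case (assuming an MPI
build providing gmx_mpi, a bash shell for the brace expansion, one directory
per replica named equil00 ... equil83 each containing its own tpr, one rank
and one core per replica, and an arbitrary exchange interval):

mpirun -np 84 gmx_mpi mdrun -multidir equil{00..83} -replex 1000 -ntomp 1 -pin on

With fewer GPUs than replicas, the ranks placed on a node simply share that
node's GPUs by default; -gpu_id or -gputasks is only needed if that automatic
assignment is not what you want.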

Thanks so much,
> Dan
>
>
>
> On Fri, Feb 9, 2018 at 10:27 AM, Szilárd Páll 
> wrote:
>
> > On Fri, Feb 9, 2018 at 4:25 PM, Szilárd Páll 
> > wrote:
> >
> > > Hi,
> > >
> > > First of all,have you read the docs (admittedly somewhat brief):
> > > http://manual.gromacs.org/documentation/2018/user-guide/
> > > mdrun-performance.html#types-of-gpu-tasks
> > >
> > > The current PME GPU was optimized for single-GPU runs. Using multiple
> > GPUs
> > > with PME offloaded works, but this mode hasn't been an optimization
> > target
> > > and it will often not give very good performance. Using multiple GPUs
> > > requires a separate PME rank (as you have realized), only one can be
> used
> > > (as we don't support PME decomposition on GPUs) and it comes some
> > > inherent scaling drawbacks. For this reason, unless you _need_ your
> > single
> > > run to be as fast as possible, you'll be better off running multiple
> > > simulations side-by side.
> > >
> >
> > PS: You can of course also run on two GPUs and run two simulations
> > side-by-side (on half of the cores for each) to improve the overall
> > aggregate throughput you get out of the hardware.
> >
> >
> > >
> > > A few tips for tuning the performance of a multi-GPU run with PME
> > offload:
> > > * expect to get at best 1.5 scaling to 2 GPUs (rarely 3 if the tasks
> > allow)
> > > * generally it's best to use about the same decomposition that you'd
> use
> > > with nonbonded-only offload, e.g. in your case 6-8 ranks
> > > * map the GPU task alone or at most together with 1 PP rank to a GPU,
> > i.e.
> > > use the new -gputasks option
> > > e.g. for your case I'd expect the following to work ~best:
> > > gmx mdrun -v -deffnm md -pme gpu -nb gpu -ntmpi 8 -ntomp 6 -npme 1
> > > -gputasks 0001
> > > or
> > > gmx mdrun -v -deffnm md -pme gpu -nb gpu -ntmpi 8 -ntomp 6 -npme 1
> > > -gputasks 0011
> > >
> > >
> > > Let me know if that gave some improvement.
> > >
> > > Cheers,
> > >
> > > --
> > > Szilárd
> > >
> > > On Fri, Feb 9, 2018 at 8:51 AM, Gmx QA  wrote:
> > >
> > >> Hi list,
> > >>
> > >> I am trying out the new gromacs 2018 (really nice so far), but have a
> > few
> > >> questions about what command line options I should specify,
> specifically
> > >> with the new gnu pme implementation.
> > >>
> > >> My computer has two CPUs (with 12 cores each, 24 with hyper threading)
> > and
> > >> two GPUs, and I currently (with 2018) start simulations like this:
> > >>
> > >> $ gmx mdrun -v -deffnm md -pme gpu -nb gpu -ntmpi 2 -npme 1 -ntomp 24
> > >> -gpu_id 01
> > >>
> > >> this works, but gromacs prints the message that 24 omp threads per mpi
> > >> rank
> > >> is likely inefficient. However, trying to reduce the number of omp
> > threads
> > >> I see a reduction in performance. Is this message no longer relevant
> > with
> > >> gpu pme or am I overlooking something?
> > >>
> > >> Thanks
> > >> /PK
> > >> --
> > >> Gromacs Users mailing list
> > >>
> > >> * Please search the archive at http://www.gromacs.org/Support
> > >> /Mailing_Lists/GMX-Users_List before posting!
> > >>
> > >> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> > >>
> > >> * For (un)subscribe requests visit
> > >> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> > >> send a mail to gmx-users-requ...@gromacs.org.
> > >>
> > >
> > >
> > --
> > Gromacs Users mailing list
> >
> > * Please search the archive at http://www.gromacs.org/
> > Support/Mailing_Lists/GMX-Users_List before posting!
> >
> > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> >
> > * For (un)subscribe requests visit
> > https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> > send a mail to gmx-users-requ...@gromacs.org.
> >
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] Do i need to put POSITION RESTRAINT DURING EQUILIBRATION STAGE ( NVT ) if i am preparing an amorphous sample?

2018-02-09 Thread Krzysztof Makuch
Hi,
That's always your call. You perform equilibration to transition between
energy minimization (MM) and MD (kinetic energy). In other words, you slowly
start the MD and prevent unwanted, unrealistic rearrangements in your system.
If the initial positions are important, you restrain the molecules of
interest. If you are equilibrating, for example, a lipid bilayer, there is no
reason to do that. Often you can also skip equilibration entirely and just
cut off the first few ns. Sometimes you run NVT and NPT and still cut off the
beginning, and sometimes you perform MD to find the optimal system, in which
case the whole simulation is in fact equilibration.
You know your system; you have to consider whether restraining movement is
what you need.
Best,
KM
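
For reference, applying such restraints during NVT/NPT typically looks like
the sketch below (assuming pdb2gmx-style #ifdef POSRES position restraints in
the topology, define = -DPOSRES in the .mdp, and placeholder file names):

gmx grompp -f nvt.mdp -c em.gro -r em.gro -p topol.top -o nvt.tpr   # -r supplies the restraint reference coordinates
gmx mdrun -deffnm nvt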

2018-02-09 13:28 GMT+01:00 sanjeet kumar singh ch16d012 <
ch16d...@smail.iitm.ac.in>:

> Hi list,
>
> I am preparing an amorphous sample using GROMACS but i am in doubt that
> during the equilibration stage ( NVT & NPT ) do i need to put position
> restraint on my polymer as there are no solvent in my system and if i have
> to use position restraint then why i should do that?
>
> THANKS,
> SK
> --
> Gromacs Users mailing list
>
> * Please search the archive at http://www.gromacs.org/
> Support/Mailing_Lists/GMX-Users_List before posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>



-- 
Jagiellonian University
Department of Computational Biophysics and Bioinformatics
tel.1: (12) 664 61 49
tel.2: (48) 664 086 049


Re: [gmx-users] Gromacs 2018 installation failed

2018-02-09 Thread Qinghua Liao

Hello Elton,

Thanks a lot for your help! I just tried to load a binutils library (it was
installed on the cluster) and install Gromacs 2018 again, and it works now!


All the best,
Qinghua


On 02/09/2018 11:33 PM, Elton Carvalho wrote:

If you are in a hurry, you can download the binutils package from here
https://www.gnu.org/software/binutils/ and compile it on your own, setting
the PREFIX to a directory in your home, then use $PATH to make your binary
the highest priority.

Cheers,
Elton

On Fri, Feb 9, 2018 at 8:17 PM, Qinghua Liao  wrote:


Hello Elton,

Thanks a lot for your information, I already sent an e-mail to the
administrator,
hopefully they will fix it.


All the best,
Qinghua


On 02/09/2018 08:03 PM, Elton Carvalho wrote:


Hello, Qinghua,

The error message refers to the standard library. I believe the package
that provides this in ubuntu is glibc. Check that it's a current enough
version.

Another thing is that the liker (ld) needs to support C++11. That's the
binutils package. I've had success with version 2.29. Not sure which is
the
lowest version required.

Good luck,
Elton

On Fri, Feb 9, 2018 at 12:25 PM, Qinghua Liao 
wrote:

Dear GMX developers,

I am trying to install Gromacs2018 with cuda on clusters, the
installation
was successful on one cluster,
but failed on the other cluster. I guess there might be some library
missing on the other cluster.

For the succeeded one, the operating system is openSUSE 42.2 (GNU/Linux
4.4.27-2-default), the compilers are gcc and c++ 4.8.5,
the CUDA version is 9.0.176, the MPI is openMPI 1.10.3

For the failed one, the operating system is Ubuntu 16.04.3 (GNU/Linux
4.4.0-109-generic x86_64), I tried CUDA 9.1.85  and 9.0.176,
together with gcc/c++ version 6.4, icc/icpc 2017.4, all were failed. The
error is the same:


-- Performing Test CXX11_STDLIB_PRESENT
-- Performing Test CXX11_STDLIB_PRESENT - Failed
CMake Error at cmake/gmxTestCXX11.cmake:210 (message):
This version of GROMACS requires C++11-compatible standard library.
Please
use a newer compiler, and/or a newer standard library, or use the
GROMACS
5.1.x release.  Consult the installation guide for details before
upgrading
components.
Call Stack (most recent call first):
CMakeLists.txt:168 (gmx_test_cxx11)


Here is my command:
CC=gcc CXX=c++ .. -DGMX_BUILD_OWN_FFTW=ON -DGMX_GPU=ON
-DCMAKE_INSTALL_PREFIX=/--PATH--/Programs/Gromacs2018

I am confused here that the old compilers worked but the new ones did
not,
while the error message suggests to use newer compilers.
Could some one help me with fixing it? Thanks a lot!


All the best,
Qinghua


Re: [gmx-users] Gromacs 2018 installation failed

2018-02-09 Thread Elton Carvalho
If you are in a hurry, you can download the binutils package from here
https://www.gnu.org/software/binutils/ and compile it on your own, setting
the PREFIX to a directory in your home, then use $PATH to make your binary
the highest priority.
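
Roughly, that amounts to something like the following (a sketch only,
assuming binutils 2.29 and an install prefix under $HOME; adjust versions and
paths to taste):

tar xf binutils-2.29.tar.gz
cd binutils-2.29
./configure --prefix=$HOME/opt/binutils
make -j8 && make install
export PATH=$HOME/opt/binutils/bin:$PATH   # so the new ld is found first

Afterwards, re-run cmake in a fresh build directory so the failed C++11
checks are evaluated again.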

Cheers,
Elton

On Fri, Feb 9, 2018 at 8:17 PM, Qinghua Liao  wrote:

> Hello Elton,
>
> Thanks a lot for your information, I already sent an e-mail to the
> administrator,
> hopefully they will fix it.
>
>
> All the best,
> Qinghua
>
>
> On 02/09/2018 08:03 PM, Elton Carvalho wrote:
>
>> Hello, Qinghua,
>>
>> The error message refers to the standard library. I believe the package
>> that provides this in ubuntu is glibc. Check that it's a current enough
>> version.
>>
>> Another thing is that the liker (ld) needs to support C++11. That's the
>> binutils package. I've had success with version 2.29. Not sure which is
>> the
>> lowest version required.
>>
>> Good luck,
>> Elton
>>
>> On Fri, Feb 9, 2018 at 12:25 PM, Qinghua Liao 
>> wrote:
>>
>> Dear GMX developers,
>>>
>>> I am trying to install Gromacs2018 with cuda on clusters, the
>>> installation
>>> was successful on one cluster,
>>> but failed on the other cluster. I guess there might be some library
>>> missing on the other cluster.
>>>
>>> For the succeeded one, the operating system is openSUSE 42.2 (GNU/Linux
>>> 4.4.27-2-default), the compilers are gcc and c++ 4.8.5,
>>> the CUDA version is 9.0.176, the MPI is openMPI 1.10.3
>>>
>>> For the failed one, the operating system is Ubuntu 16.04.3 (GNU/Linux
>>> 4.4.0-109-generic x86_64), I tried CUDA 9.1.85  and 9.0.176,
>>> together with gcc/c++ version 6.4, icc/icpc 2017.4, all were failed. The
>>> error is the same:
>>>
>>>
>>> -- Performing Test CXX11_STDLIB_PRESENT
>>> -- Performing Test CXX11_STDLIB_PRESENT - Failed
>>> CMake Error at cmake/gmxTestCXX11.cmake:210 (message):
>>>This version of GROMACS requires C++11-compatible standard library.
>>> Please
>>>use a newer compiler, and/or a newer standard library, or use the
>>> GROMACS
>>>5.1.x release.  Consult the installation guide for details before
>>> upgrading
>>>components.
>>> Call Stack (most recent call first):
>>>CMakeLists.txt:168 (gmx_test_cxx11)
>>>
>>>
>>> Here is my command:
>>> CC=gcc CXX=c++ .. -DGMX_BUILD_OWN_FFTW=ON -DGMX_GPU=ON
>>> -DCMAKE_INSTALL_PREFIX=/--PATH--/Programs/Gromacs2018
>>>
>>> I am confused here that the old compilers worked but the new ones did
>>> not,
>>> while the error message suggests to use newer compilers.
>>> Could some one help me with fixing it? Thanks a lot!
>>>
>>>
>>> All the best,
>>> Qinghua
>>> --
>>> Gromacs Users mailing list
>>>
>>> * Please search the archive at http://www.gromacs.org/Support
>>> /Mailing_Lists/GMX-Users_List before posting!
>>>
>>> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>>>
>>> * For (un)subscribe requests visit
>>> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
>>> send a mail to gmx-users-requ...@gromacs.org.
>>>
>>
>>
>>
>>
> --
> Gromacs Users mailing list
>
> * Please search the archive at http://www.gromacs.org/Support
> /Mailing_Lists/GMX-Users_List before posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>



-- 
Elton Carvalho


Re: [gmx-users] Gromacs 2018 installation failed

2018-02-09 Thread Qinghua Liao

Hello Elton,

Thanks a lot for your information, I already sent an e-mail to the
administrator; hopefully they will fix it.


All the best,
Qinghua

On 02/09/2018 08:03 PM, Elton Carvalho wrote:

Hello, Qinghua,

The error message refers to the standard library. I believe the package
that provides this in ubuntu is glibc. Check that it's a current enough
version.

Another thing is that the liker (ld) needs to support C++11. That's the
binutils package. I've had success with version 2.29. Not sure which is the
lowest version required.

Good luck,
Elton

On Fri, Feb 9, 2018 at 12:25 PM, Qinghua Liao 
wrote:


Dear GMX developers,

I am trying to install Gromacs2018 with cuda on clusters, the installation
was successful on one cluster,
but failed on the other cluster. I guess there might be some library
missing on the other cluster.

For the succeeded one, the operating system is openSUSE 42.2 (GNU/Linux
4.4.27-2-default), the compilers are gcc and c++ 4.8.5,
the CUDA version is 9.0.176, the MPI is openMPI 1.10.3

For the failed one, the operating system is Ubuntu 16.04.3 (GNU/Linux
4.4.0-109-generic x86_64), I tried CUDA 9.1.85  and 9.0.176,
together with gcc/c++ version 6.4, icc/icpc 2017.4, all were failed. The
error is the same:


-- Performing Test CXX11_STDLIB_PRESENT
-- Performing Test CXX11_STDLIB_PRESENT - Failed
CMake Error at cmake/gmxTestCXX11.cmake:210 (message):
   This version of GROMACS requires C++11-compatible standard library.
Please
   use a newer compiler, and/or a newer standard library, or use the GROMACS
   5.1.x release.  Consult the installation guide for details before
upgrading
   components.
Call Stack (most recent call first):
   CMakeLists.txt:168 (gmx_test_cxx11)


Here is my command:
CC=gcc CXX=c++ .. -DGMX_BUILD_OWN_FFTW=ON -DGMX_GPU=ON
-DCMAKE_INSTALL_PREFIX=/--PATH--/Programs/Gromacs2018

I am confused here that the old compilers worked but the new ones did not,
while the error message suggests to use newer compilers.
Could some one help me with fixing it? Thanks a lot!


All the best,
Qinghua


Re: [gmx-users] Gromacs 2018 installation failed

2018-02-09 Thread Elton Carvalho
Hello, Qinghua,

The error message refers to the standard library. I believe the package
that provides this in ubuntu is glibc. Check that it's a current enough
version.

Another thing is that the linker (ld) needs to support C++11. That's the
binutils package. I've had success with version 2.29. Not sure which is the
lowest version required.
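
Quick ways to check what the machine currently provides (just a sketch of
standard version queries, nothing GROMACS-specific):

ld --version    # binutils/linker version; 2.29 is reported to work above
ldd --version   # glibc version
g++ --version   # the C++ compiler found first in PATH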

Good luck,
Elton

On Fri, Feb 9, 2018 at 12:25 PM, Qinghua Liao 
wrote:

> Dear GMX developers,
>
> I am trying to install Gromacs2018 with cuda on clusters, the installation
> was successful on one cluster,
> but failed on the other cluster. I guess there might be some library
> missing on the other cluster.
>
> For the succeeded one, the operating system is openSUSE 42.2 (GNU/Linux
> 4.4.27-2-default), the compilers are gcc and c++ 4.8.5,
> the CUDA version is 9.0.176, the MPI is openMPI 1.10.3
>
> For the failed one, the operating system is Ubuntu 16.04.3 (GNU/Linux
> 4.4.0-109-generic x86_64), I tried CUDA 9.1.85  and 9.0.176,
> together with gcc/c++ version 6.4, icc/icpc 2017.4, all were failed. The
> error is the same:
>
>
> -- Performing Test CXX11_STDLIB_PRESENT
> -- Performing Test CXX11_STDLIB_PRESENT - Failed
> CMake Error at cmake/gmxTestCXX11.cmake:210 (message):
>   This version of GROMACS requires C++11-compatible standard library.
> Please
>   use a newer compiler, and/or a newer standard library, or use the GROMACS
>   5.1.x release.  Consult the installation guide for details before
> upgrading
>   components.
> Call Stack (most recent call first):
>   CMakeLists.txt:168 (gmx_test_cxx11)
>
>
> Here is my command:
> CC=gcc CXX=c++ .. -DGMX_BUILD_OWN_FFTW=ON -DGMX_GPU=ON
> -DCMAKE_INSTALL_PREFIX=/--PATH--/Programs/Gromacs2018
>
> I am confused here that the old compilers worked but the new ones did not,
> while the error message suggests to use newer compilers.
> Could some one help me with fixing it? Thanks a lot!
>
>
> All the best,
> Qinghua
> --
> Gromacs Users mailing list
>
> * Please search the archive at http://www.gromacs.org/Support
> /Mailing_Lists/GMX-Users_List before posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.




-- 
Elton Carvalho


Re: [gmx-users] Tests with Threadripper and dual gpu setup

2018-02-09 Thread Szilárd Páll
Hi,

Thanks for the report!

Did you build with or without hwloc? There is a known issue with the
automatic pin stride when not using hwloc, which leads to a "compact"
pinning (using half of the cores with 2 threads/core) when <= half of the
hardware threads are launched (instead of using all cores with 1 thread/core,
which is the default on Intel).

When it comes to running "wide" ranks (i.e. many OpenMP threads per rank)
on Zen/Ryzen, things are not straightforward, so the default 16/32 threads
on 16 cores + 1 GPU is not great. If already running domain-decomposition,
4-8 threads/rank is generally best, but unfortunately this will often not
be better than just using no DD and taking the hit of threading
inefficiency.

A few more comments in-line.

On Wed, Jan 24, 2018 at 10:14 AM, Harry Mark Greenblatt <
harry.greenbl...@weizmann.ac.il> wrote:

> BS”D
>
> In case anybody is interested we have tested Gromacs on a Threadripper
> machine with two GPU’s.
>
> Hardware:
>
> Ryzen Threadripper 1950X 16 core CPU (multithreading on), with Corsair
> H100i V2 Liquid cooling
> Asus Prime X399-A M/B
> 2 X Geforce GTX 1080 GPU’s
> 32 GB of 3200MHz memory
> Samsung 850 Pro 512GB SSD
>
> OS, software:
>
> Centos 7.4, with 4.14 Kernel from ElRepo
> gcc 4.8.5 and gcc 5.5.0
> fftw 3.3.7 (AVX2 enabled)
> Cuda 8
> Gromacs 2016.4
> Gromacs 2018-rc1 and final 2018.
> Using thread-MPI
>
>
> I managed to compile gcc 5.5.0, but when I went to use it to compile
> Gromacs, the compiler could not recognise the hardware, although the native
> gcc 4.8.5 had no problem.
> In 2016.4, I was able to specify which SIMD set to use, so this was not an
> issue.   In any case there was very little difference between gcc 5.5.0 and
> 4.8.5.  So I used 4.8.5 for 2018.
> Any ideas how to overcome this problem with 5.5.0?
>
> 
> Gromacs 2016.4
> 
>
> System: Protein/DNA complex, with 438,397 atoms (including waters/ions),
> 100 ps npt equilibration.
>
> Allowing Gromacs to choose how it wanted to allocate the hardware gave
>
> 8 tMPI ranks, 4 thread per rank, both GPU’s
>
> 12.4 ns/day
>
> When I told it to use 4 tMPI ranks, 8 threads per rank, both GPU’s
>
> 12.2 ns/day
>
>
> Running on “real” cores only
>
> 4 tMPI ranks, 4 threads per rank, 2 GPU’s
>
> 10.2 ns/day
>
> 1 tMPI rank, 16 threads per rank, *one* GPU (“half” the machine; pin on,
> but pinstride and pinoffset automatic)
>
> 10.6 ns/day
>
> 1 tMP rank, 16 threads per rank, one gpu, and manually set all pinning
> options:
>
> gmx mdrun -v -deffnm test.npt -s test.npt.tpr -pin on -ntomp 16 -ntmpi 1
> -gpu_id 0 -pinoffset 0 -pinstride 2
>
> 12.3 ns/day
>
> Presumably, the gain here is because “pintstride 2” caused the job to run
> on the “real” (1,2,3…15) cores, and not on virtual cores.  The automatic
> pinstride above used cores [0,16], [1,17], [2,18]…[7,23], half of which are
> virtual and so gave only 10.6ns/day.
>
> ** So there very little gain from the second GPU, and very little gain
> from multithreading. **
>
> Using AVX_256 and not AVX2_256 with above command gave a small speed up
> (although using AVX instead of AVX2 for FFTW made things worse).
>
> 12.5 ns/day
>
>
> To compare with an Intel Xeon Silver system:
> 2 x Xeon Silver 4116 (2.1GHz base clock, 12 cores each, no
> Hyperthreading), 64GB memory
> 2 x Geforce 1080’s (as used in the above tests)
>
> gcc 4.8.5
> Gromacs 2016.4, with MPI, AVX_256 (compiled on an older GPU machine, and
> not by me).
>

AVX2_256 should give some benefit, but not a lot. (BTW, on Silver do not
use AVX_512; even on the Gold / 2-FMA Skylake-X, AVX2 tends to be better
when running with GPUs.)
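
If you want to choose the SIMD flavour yourself rather than rely on
auto-detection, it is a CMake-time setting (a sketch; GMX_SIMD also accepts
AVX2_128, AVX_256, etc.):

cmake .. -DGMX_SIMD=AVX2_256 -DGMX_GPU=ON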


> 2 MPI ranks, 12 threads each rank, 2 GPU’s
>
> 11.7 ns/day
>
> 4 MPI ranks, 6 threads each rank, 2 GPU’s
>
> 13.0 ns/day
>
> 6 MPI ranks, 4 threads each rank, 2 GPU’s
>
> 14.0 ns/day
>

Similar effect as noted wrt Ryzen.


>
> To compare with the AMD machine, same number of cores
>
> 1 MPI rank, 16 threads, 1 GPU
>
> 11.2 ns/day
>

(Side note: a bit of an apples-and-oranges comparison, isn't it?)


>
> —
> Gromacs 2018 rc1 (using gcc 4.8.5)
> —
>
> Using AVX_256
>

You should be using AVX2_128 or AVX2_256 on Zen! The former will be fastest
in CPU-only runs; the latter can often be (a bit) faster in GPU-accelerated
runs.


>
> In ‘classic’ mode, not using gpu for PME
>
> 8 tMPI ranks, 4 threads per rank, 2 GPU’s
>
> 12.7 ns/day (modest speed up from 12.4 ns/day with 2016.4)
>
> Now use a gpu for PME
>
> gmx mdrun -v -deffnm test.npt -s test.npt.tpr -pme gpu -pin on
>
> used 1 tMPI rank, 32 OpenMP threads, 1 GPU
>
> 14.9 ns/day
>
> Forcing the program to use both GPU’s
>
> gmx mdrun -v -deffnm test.npt -s test.npt.tpr -pme gpu -pin on -ntmpi 4
> -npme 1 -gputasks 0011 -nb gpu
>
> 18.5 ns/day
>
> Now with AVX2_128
>
> 19.0 ns/day
>
> Now force Dynamic Load Balancing
>
> gmx mdrun -v -deffnm test.npt -s test.npt.tpr -pme gpu -pin on -ntmpi 4
> -npme 1 -gputasks 0011 -nb gpu -dlb yes
>

I would recommend *against* doing 

Re: [gmx-users] Gromacs 2018 and GPU PME

2018-02-09 Thread Alex
Just to quickly jump in, because Mark suggested taking a look at the
latest doc and unfortunately I must admit that I didn't understand what
I read. I appear to be especially struggling with the idea of gputasks.


Can you please explain what is happening in this line?


-pme gpu -nb gpu -ntmpi 8 -ntomp 6 -npme 1 -gputasks 0001


I am seriously confused here. Also, the number of ranks is 8, while the number 
of threads is 6? Is -ntomp now specifying the _per-rank_ number of threads, 
i.e. the actual number of threads for this job would be 48?

Thank you,

Alex


On 2/9/2018 8:25 AM, Szilárd Páll wrote:

Hi,

First of all,have you read the docs (admittedly somewhat brief):
http://manual.gromacs.org/documentation/2018/user-guide/mdrun-performance.html#types-of-gpu-tasks

The current PME GPU was optimized for single-GPU runs. Using multiple GPUs
with PME offloaded works, but this mode hasn't been an optimization target
and it will often not give very good performance. Using multiple GPUs
requires a separate PME rank (as you have realized), only one can be used
(as we don't support PME decomposition on GPUs) and it comes some inherent
scaling drawbacks. For this reason, unless you _need_ your single run to be
as fast as possible, you'll be better off running multiple simulations
side-by side.

A few tips for tuning the performance of a multi-GPU run with PME offload:
* expect to get at best 1.5 scaling to 2 GPUs (rarely 3 if the tasks allow)
* generally it's best to use about the same decomposition that you'd use
with nonbonded-only offload, e.g. in your case 6-8 ranks
* map the GPU task alone or at most together with 1 PP rank to a GPU, i.e.
use the new -gputasks option
e.g. for your case I'd expect the following to work ~best:
gmx mdrun -v -deffnm md -pme gpu -nb gpu -ntmpi 8 -ntomp 6 -npme 1
-gputasks 0001
or
gmx mdrun -v -deffnm md -pme gpu -nb gpu -ntmpi 8 -ntomp 6 -npme 1
-gputasks 0011


Let me know if that gave some improvement.

Cheers,

--
Szilárd

On Fri, Feb 9, 2018 at 8:51 AM, Gmx QA  wrote:


Hi list,

I am trying out the new gromacs 2018 (really nice so far), but have a few
questions about what command line options I should specify, specifically
with the new gpu pme implementation.

My computer has two CPUs (with 12 cores each, 24 with hyper threading) and
two GPUs, and I currently (with 2018) start simulations like this:

$ gmx mdrun -v -deffnm md -pme gpu -nb gpu -ntmpi 2 -npme 1 -ntomp 24
-gpu_id 01

this works, but gromacs prints the message that 24 omp threads per mpi rank
is likely inefficient. However, trying to reduce the number of omp threads
I see a reduction in performance. Is this message no longer relevant with
gpu pme or am I overlooking something?

Thanks
/PK

[gmx-users] restoring pullf.xvg file

2018-02-09 Thread Nick Johans
 Hi,

I deleted the pullf.xvg file unintentionally. Is there any way to restore
and reproduce it from other outputs?

Best regards
-Nick


[gmx-users] Domain decomposition for parallel simulations

2018-02-09 Thread Kevin C Chan
Dear Users,

I have encountered the problem of "There is no domain decomposition for n
nodes that is compatible with the given box and a minimum cell size of x
nm", and from reading through the gromacs website and some threads I
understand that the problem might be caused by breaking the system into
boxes that are too small when using too many ranks. However, I have no idea
how to correctly estimate suitable parallelization parameters. I hope
someone could share their experience.

Here is the information stated in the log file:
*Initializing Domain Decomposition on 4000 ranks*
*Dynamic load balancing: on*
*Will sort the charge groups at every domain (re)decomposition*
*Initial maximum inter charge-group distances:*
*two-body bonded interactions: 0.665 nm, Dis. Rest., atoms 23558 23590*
*  multi-body bonded interactions: 0.425 nm, Proper Dih., atoms 12991 12999*
*Minimum cell size due to bonded interactions: 0.468 nm*
*Maximum distance for 5 constraints, at 120 deg. angles, all-trans: 0.819
nm*
*Estimated maximum distance required for P-LINCS: 0.819 nm*
*This distance will limit the DD cell size, you can override this with
-rcon*
*Guess for relative PME load: 0.11*
*Will use 3500 particle-particle and 500 PME only ranks*
*This is a guess, check the performance at the end of the log file*
*Using 500 separate PME ranks, as guessed by mdrun*
*Scaling the initial minimum size with 1/0.8 (option -dds) = 1.25*
*Optimizing the DD grid for 3500 cells with a minimum initial size of 1.024
nm*
*The maximum allowed number of cells is: X 17 Y 17 Z 17*

And I got this afterwards:
*Fatal error:*
*There is no domain decomposition for 3500 ranks that is compatible with
the given box and a minimum cell size of 1.02425 nm*

Here are some questions:
1. The maximum allowed number of cells is 17x17x17, which is 4913 and seems
to be larger than the requested 3500 particle-particle ranks, so is the
minimum cell size really causing the problem?
2. Where does this 1.024 nm come from? We can see the inter charge-group
distances are listed as 0.665 and 0.425 nm.
3. The distance restraint between atoms 23558 and 23590 was set explicitly
(added manually) in the topology file using [ intermolecular_interactions ]
and should be around 0.32 nm. How can I know whether my manual setting is
working, since a different value is shown?


Thanks in advance,
Kevin
OSU


Re: [gmx-users] Gromacs 2018 and GPU PME

2018-02-09 Thread Daniel Kozuch
Szilárd,

If I may jump in on this conversation, I am having the reverse problem
(which I assume others may encounter also) where I am attempting a large
REMD run (84 replicas) and I have access to say 12 GPUs and 84 CPUs.

Basically I have fewer GPUs than simulations. Is there a logical approach to
using gputasks and other new options in GROMACS 2018 for this setup? I read
through the available documentation, but as you mentioned it seems to be
targeted at single-GPU runs.

Thanks so much,
Dan



On Fri, Feb 9, 2018 at 10:27 AM, Szilárd Páll 
wrote:

> On Fri, Feb 9, 2018 at 4:25 PM, Szilárd Páll 
> wrote:
>
> > Hi,
> >
> > First of all,have you read the docs (admittedly somewhat brief):
> > http://manual.gromacs.org/documentation/2018/user-guide/
> > mdrun-performance.html#types-of-gpu-tasks
> >
> > The current PME GPU was optimized for single-GPU runs. Using multiple
> GPUs
> > with PME offloaded works, but this mode hasn't been an optimization
> target
> > and it will often not give very good performance. Using multiple GPUs
> > requires a separate PME rank (as you have realized), only one can be used
> > (as we don't support PME decomposition on GPUs) and it comes some
> > inherent scaling drawbacks. For this reason, unless you _need_ your
> single
> > run to be as fast as possible, you'll be better off running multiple
> > simulations side-by side.
> >
>
> PS: You can of course also run on two GPUs and run two simulations
> side-by-side (on half of the cores for each) to improve the overall
> aggregate throughput you get out of the hardware.
>
>
> >
> > A few tips for tuning the performance of a multi-GPU run with PME
> offload:
> > * expect to get at best 1.5 scaling to 2 GPUs (rarely 3 if the tasks
> allow)
> > * generally it's best to use about the same decomposition that you'd use
> > with nonbonded-only offload, e.g. in your case 6-8 ranks
> > * map the GPU task alone or at most together with 1 PP rank to a GPU,
> i.e.
> > use the new -gputasks option
> > e.g. for your case I'd expect the following to work ~best:
> > gmx mdrun -v -deffnm md -pme gpu -nb gpu -ntmpi 8 -ntomp 6 -npme 1
> > -gputasks 0001
> > or
> > gmx mdrun -v -deffnm md -pme gpu -nb gpu -ntmpi 8 -ntomp 6 -npme 1
> > -gputasks 0011
> >
> >
> > Let me know if that gave some improvement.
> >
> > Cheers,
> >
> > --
> > Szilárd
> >
> > On Fri, Feb 9, 2018 at 8:51 AM, Gmx QA  wrote:
> >
> >> Hi list,
> >>
> >> I am trying out the new gromacs 2018 (really nice so far), but have a
> few
> >> questions about what command line options I should specify, specifically
> >> with the new gnu pme implementation.
> >>
> >> My computer has two CPUs (with 12 cores each, 24 with hyper threading)
> and
> >> two GPUs, and I currently (with 2018) start simulations like this:
> >>
> >> $ gmx mdrun -v -deffnm md -pme gpu -nb gpu -ntmpi 2 -npme 1 -ntomp 24
> >> -gpu_id 01
> >>
> >> this works, but gromacs prints the message that 24 omp threads per mpi
> >> rank
> >> is likely inefficient. However, trying to reduce the number of omp
> threads
> >> I see a reduction in performance. Is this message no longer relevant
> with
> >> gpu pme or am I overlooking something?
> >>
> >> Thanks
> >> /PK
> >> --
> >> Gromacs Users mailing list
> >>
> >> * Please search the archive at http://www.gromacs.org/Support
> >> /Mailing_Lists/GMX-Users_List before posting!
> >>
> >> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> >>
> >> * For (un)subscribe requests visit
> >> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> >> send a mail to gmx-users-requ...@gromacs.org.
> >>
> >
> >
> --
> Gromacs Users mailing list
>
> * Please search the archive at http://www.gromacs.org/
> Support/Mailing_Lists/GMX-Users_List before posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>

Re: [gmx-users] GPU load from nvidia-smi

2018-02-09 Thread Szilárd Páll
On Thu, Feb 8, 2018 at 10:20 PM, Mark Abraham 
wrote:

> Hi,
>
> On Thu, Feb 8, 2018 at 8:50 PM Alex  wrote:
>
> > Got it, thanks. Even with the old style input I now have a 42% speed up
> > with PME on GPU. How, how can I express my enormous gratitude?!
> >
>
> Do the science, cite the papers, spread the word, help others, make quality
> bug reports :-) Glad you like it!
>

A few more things to add: participate in the community! E.g.

- help us with early testing (e.g. when we release a beta or release
candidate, we generally get extremely limited interest in testing, which
would help in ironing out issues before the actual release)

- give us feedback on what works and what does not work so well; it's easy
for developers to be biased by their own preferences or those of their most
vocally complaining close friends (for features, user interface, command-line
functionality, etc.)

- share your knowledge on the mailing list

Cheers,
--
Szilárd


>
> Mark
>
> On Thu, Feb 8, 2018 at 12:44 PM, Mark Abraham 
> > wrote:
> >
> > > Hi,
> > >
> > > Yes. Note the new use of -gputasks. And perhaps check out
> > > http://manual.gromacs.org/documentation/2018-latest/
> > > user-guide/mdrun-performance.html#types-of-gpu-tasks
> > > because
> > > things are now different.
> > >
> > > gmx mdrun -ntmpi 3 -npme 1 -nb gpu -pme gpu is more like what you want.
> > >
> > > Mark
> > >
> > > On Thu, Feb 8, 2018 at 8:36 PM Alex  wrote:
> > >
> > > > I think this should be a separate question, given all the recent mess
> > > with
> > > > the utils tests...
> > > >
> > > > I am testing mdrun (v 2018) on a system that's trivial and close to
> a 5
> > > x 5
> > > > x 5 box filled with water and some ions. We have three GPUs and the
> run
> > > is
> > > > with -nt 18 -gpu_id 012 -pme -gpu.
> > > >
> > > > nvidia-smi reports 65% load on 0 and nothing on 1 and 2. Is this
> > normal?
> > > >
> > > > Thanks,
> > > >
> > > > Alex
> > > > --
> > > > Gromacs Users mailing list
> > > >
> > > > * Please search the archive at
> > > > http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> > > > posting!
> > > >
> > > > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> > > >
> > > > * For (un)subscribe requests visit
> > > > https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users
> or
> > > > send a mail to gmx-users-requ...@gromacs.org.
> > > >
> > > --
> > > Gromacs Users mailing list
> > >
> > > * Please search the archive at http://www.gromacs.org/
> > > Support/Mailing_Lists/GMX-Users_List before posting!
> > >
> > > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> > >
> > > * For (un)subscribe requests visit
> > > https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> > > send a mail to gmx-users-requ...@gromacs.org.
> > >
> > --
> > Gromacs Users mailing list
> >
> > * Please search the archive at
> > http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> > posting!
> >
> > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> >
> > * For (un)subscribe requests visit
> > https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> > send a mail to gmx-users-requ...@gromacs.org.
> >
> --
> Gromacs Users mailing list
>
> * Please search the archive at http://www.gromacs.org/
> Support/Mailing_Lists/GMX-Users_List before posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>

Re: [gmx-users] Gromacs 2018 and GPU PME

2018-02-09 Thread Szilárd Páll
On Fri, Feb 9, 2018 at 4:25 PM, Szilárd Páll  wrote:

> Hi,
>
> First of all,have you read the docs (admittedly somewhat brief):
> http://manual.gromacs.org/documentation/2018/user-guide/
> mdrun-performance.html#types-of-gpu-tasks
>
> The current PME GPU was optimized for single-GPU runs. Using multiple GPUs
> with PME offloaded works, but this mode hasn't been an optimization target
> and it will often not give very good performance. Using multiple GPUs
> requires a separate PME rank (as you have realized), only one can be used
> (as we don't support PME decomposition on GPUs) and it comes some
> inherent scaling drawbacks. For this reason, unless you _need_ your single
> run to be as fast as possible, you'll be better off running multiple
> simulations side-by side.
>

PS: You can of course also run on two GPUs and run two simulations
side-by-side (on half of the cores for each) to improve the overall
aggregate throughput you get out of the hardware.
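
On the 24-core, two-GPU box discussed in this thread that could look roughly
like the sketch below (file names are placeholders, and the -pinoffset and
-pinstride values assume hardware threads 0-23 map to the physical cores;
adjust to your topology):

gmx mdrun -deffnm sim0 -nb gpu -pme gpu -ntmpi 1 -ntomp 12 -pin on -pinstride 1 -pinoffset 0  -gpu_id 0 &
gmx mdrun -deffnm sim1 -nb gpu -pme gpu -ntmpi 1 -ntomp 12 -pin on -pinstride 1 -pinoffset 12 -gpu_id 1 &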


>
> A few tips for tuning the performance of a multi-GPU run with PME offload:
> * expect to get at best 1.5 scaling to 2 GPUs (rarely 3 if the tasks allow)
> * generally it's best to use about the same decomposition that you'd use
> with nonbonded-only offload, e.g. in your case 6-8 ranks
> * map the GPU task alone or at most together with 1 PP rank to a GPU, i.e.
> use the new -gputasks option
> e.g. for your case I'd expect the following to work ~best:
> gmx mdrun -v -deffnm md -pme gpu -nb gpu -ntmpi 8 -ntomp 6 -npme 1
> -gputasks 0001
> or
> gmx mdrun -v -deffnm md -pme gpu -nb gpu -ntmpi 8 -ntomp 6 -npme 1
> -gputasks 0011
>
>
> Let me know if that gave some improvement.
>
> Cheers,
>
> --
> Szilárd
>
> On Fri, Feb 9, 2018 at 8:51 AM, Gmx QA  wrote:
>
>> Hi list,
>>
>> I am trying out the new gromacs 2018 (really nice so far), but have a few
>> questions about what command line options I should specify, specifically
>> with the new gnu pme implementation.
>>
>> My computer has two CPUs (with 12 cores each, 24 with hyper threading) and
>> two GPUs, and I currently (with 2018) start simulations like this:
>>
>> $ gmx mdrun -v -deffnm md -pme gpu -nb gpu -ntmpi 2 -npme 1 -ntomp 24
>> -gpu_id 01
>>
>> this works, but gromacs prints the message that 24 omp threads per mpi
>> rank
>> is likely inefficient. However, trying to reduce the number of omp threads
>> I see a reduction in performance. Is this message no longer relevant with
>> gpu pme or am I overlooking something?
>>
>> Thanks
>> /PK
>> --
>> Gromacs Users mailing list
>>
>> * Please search the archive at http://www.gromacs.org/Support
>> /Mailing_Lists/GMX-Users_List before posting!
>>
>> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>>
>> * For (un)subscribe requests visit
>> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
>> send a mail to gmx-users-requ...@gromacs.org.
>>
>
>

Re: [gmx-users] Gromacs 2018 and GPU PME

2018-02-09 Thread Szilárd Páll
Hi,

First of all, have you read the docs (admittedly somewhat brief):
http://manual.gromacs.org/documentation/2018/user-guide/mdrun-performance.html#types-of-gpu-tasks

The current PME GPU was optimized for single-GPU runs. Using multiple GPUs
with PME offloaded works, but this mode hasn't been an optimization target
and it will often not give very good performance. Using multiple GPUs
requires a separate PME rank (as you have realized), only one can be used
(as we don't support PME decomposition on GPUs) and it comes with some
inherent scaling drawbacks. For this reason, unless you _need_ your single
run to be as fast as possible, you'll be better off running multiple
simulations side by side.

A few tips for tuning the performance of a multi-GPU run with PME offload:
* expect at best ~1.5x scaling to 2 GPUs (rarely 3 if the tasks allow)
* generally it's best to use about the same decomposition that you'd use
with nonbonded-only offload, e.g. in your case 6-8 ranks
* map the GPU task alone or at most together with 1 PP rank to a GPU, i.e.
use the new -gputasks option
e.g. for your case I'd expect the following to work ~best:
gmx mdrun -v -deffnm md -pme gpu -nb gpu -ntmpi 8 -ntomp 6 -npme 1
-gputasks 0001
or
gmx mdrun -v -deffnm md -pme gpu -nb gpu -ntmpi 8 -ntomp 6 -npme 1
-gputasks 0011


Let me know if that gave some improvement.

Cheers,

--
Szilárd

On Fri, Feb 9, 2018 at 8:51 AM, Gmx QA  wrote:

> Hi list,
>
> I am trying out the new gromacs 2018 (really nice so far), but have a few
> questions about what command line options I should specify, specifically
> with the new gnu pme implementation.
>
> My computer has two CPUs (with 12 cores each, 24 with hyper threading) and
> two GPUs, and I currently (with 2018) start simulations like this:
>
> $ gmx mdrun -v -deffnm md -pme gpu -nb gpu -ntmpi 2 -npme 1 -ntomp 24
> -gpu_id 01
>
> this works, but gromacs prints the message that 24 omp threads per mpi rank
> is likely inefficient. However, trying to reduce the number of omp threads
> I see a reduction in performance. Is this message no longer relevant with
> gpu pme or am I overlooking something?
>
> Thanks
> /PK

Re: [gmx-users] GMX 2018 regression tests: cufftPlanMany R2C plan failure (error code 5)

2018-02-09 Thread Szilárd Páll
Great to hear!

(Also note that one thing we have explicitly focused on is not only peak
performance, but also getting as close to peak as possible with just a few
CPU cores! You should be able to get >75% of the performance with just 3-5
Xeon or 2-3 desktop cores, rather than needing a full fast CPU.)
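(A hedged illustration only, not from this thread: "just a few cores" on the
command line could look like the following, with the core count and file name
as placeholders.)

# Hedged sketch: a single-GPU run restricted to 4 CPU cores.
gmx mdrun -deffnm md -nb gpu -pme gpu -ntmpi 1 -ntomp 4 -pin on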

--
Szilárd

On Thu, Feb 8, 2018 at 8:44 PM, Alex  wrote:

> With -pme gpu, I am reporting 383.032 ns/day vs 270 ns/day with the 2016.4
> version. I _did not_ mistype. The system is close to a cubic box of water
> with some ions.
>
> Incredible.
>
> Alex
>
> On Thu, Feb 8, 2018 at 12:27 PM, Szilárd Páll 
> wrote:
>
> > Note that the actual mdrun performance need not be affected, whether it's
> > a driver persistence issue (you'll just see a few seconds' lag at mdrun
> > startup) or some other CUDA application startup-related lag (an mdrun run
> > does mostly very different kinds of things than this set of particular
> > unit tests).
> >
> > --
> > Szilárd
> >
> >
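(A hedged aside, not from the original exchange: if the startup lag really is
the driver persistence issue mentioned above, enabling persistence mode
normally removes it; this assumes root access and a driver recent enough to
support it.)

# Hedged sketch: enable driver persistence mode for all GPUs (needs root).
sudo nvidia-smi -pm 1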

[gmx-users] Tool for molecule drawing

2018-02-09 Thread Momin Ahmad

Hello,

does anybody know of a piece of software (with a GUI!) that allows one to
draw a molecule and assign residue names atom by atom? For example,
displaying the molecule in the GUI and clicking on the atoms to define each
residue name.


Thanks in advance,
Momin


--
Momin Ahmad

Karlsruhe Institute of Technology (KIT)
Steinbuch Centre for Computing (SCC)
Hermann-von-Helmholtz-Platz 1
76344 Eggenstein-Leopoldshafen
Phone: +49 721 608-24286
E-Mail: momin.ah...@kit.edu



[gmx-users] Gromacs 2018 installation failed

2018-02-09 Thread Qinghua Liao

Dear GMX developers,

I am trying to install GROMACS 2018 with CUDA on two clusters. The
installation was successful on one cluster but failed on the other; I guess
there might be some library missing on the failing cluster.

For the cluster where it succeeded, the operating system is openSUSE 42.2
(GNU/Linux 4.4.27-2-default), the compilers are gcc and c++ 4.8.5, the CUDA
version is 9.0.176, and the MPI is OpenMPI 1.10.3.

For the cluster where it failed, the operating system is Ubuntu 16.04.3
(GNU/Linux 4.4.0-109-generic x86_64). I tried CUDA 9.1.85 and 9.0.176,
together with gcc/c++ 6.4 and icc/icpc 2017.4; all failed with the same
error:



-- Performing Test CXX11_STDLIB_PRESENT
-- Performing Test CXX11_STDLIB_PRESENT - Failed
CMake Error at cmake/gmxTestCXX11.cmake:210 (message):
  This version of GROMACS requires C++11-compatible standard library.  Please
  use a newer compiler, and/or a newer standard library, or use the GROMACS
  5.1.x release.  Consult the installation guide for details before upgrading
  components.
Call Stack (most recent call first):
  CMakeLists.txt:168 (gmx_test_cxx11)


Here is my command:
CC=gcc CXX=c++ cmake .. -DGMX_BUILD_OWN_FFTW=ON -DGMX_GPU=ON \
    -DCMAKE_INSTALL_PREFIX=/--PATH--/Programs/Gromacs2018


I am confused that the old compilers worked but the new ones did not, while
the error message suggests using newer compilers.
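
(A hedged aside, not from the original mail: on the failing machine it may be
worth ruling out that CMake is picking up an old standard library or the
wrong compiler, e.g. by starting from a clean build directory and pointing
CMake explicitly at the intended toolchain. The compiler paths below are
placeholders for wherever gcc/c++ 6.4 actually live on that cluster.)

# Hedged sketch only; the compiler paths are placeholders.
rm -rf CMakeCache.txt CMakeFiles/
CC=/path/to/gcc-6.4/bin/gcc CXX=/path/to/gcc-6.4/bin/g++ cmake .. \
    -DGMX_BUILD_OWN_FFTW=ON -DGMX_GPU=ON \
    -DCUDA_HOST_COMPILER=/path/to/gcc-6.4/bin/g++ \
    -DCMAKE_INSTALL_PREFIX=/--PATH--/Programs/Gromacs2018
# If the C++11 test still fails, CMakeFiles/CMakeError.log records the actual
# compile error from the failed check.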

Could someone help me fix this? Thanks a lot!


All the best,
Qinghua

[gmx-users] Fwd: MMPBSA

2018-02-09 Thread RAHUL SURESH
Dear all

I have carried out a protein-ligand simulation for 50 ns and performed an
MM-PBSA calculation on the 10-20 ns part of the trajectory. I get a positive
binding energy. How can I tackle this?

Thank you



-- 
*Regards,*
*Rahul Suresh*
*Research Scholar*
*Bharathiar University*
*Coimbatore*


[gmx-users] Do i need to put POSITION RESTRAINT DURING EQUILIBRATION STAGE ( NVT ) if i am preparing an amorphous sample?

2018-02-09 Thread sanjeet kumar singh ch16d012
Hi list,

I am preparing an amorphous sample using GROMACS, but I am in doubt: during
the equilibration stage (NVT & NPT), do I need to put position restraints on
my polymer, given that there is no solvent in my system? And if I do have to
use position restraints, why should I do that?

THANKS,
SK


Re: [gmx-users] Regarding Beta-alanine structure

2018-02-09 Thread Dilip H N
Hello,
I got the zwitterionic structure of beta-alanine from the ChEBI website; the
ChEBI ID is CHEBI:57966.
I downloaded it in mol2 format and tried the CGenFF and SwissParam servers to
get the CHARMM FF parameters, but I got two different sets of charges, one
from each server.

The charges I got from CGenFF by uploading the zwitterionic
beta-alanine.mol2:

RESI *          0.000 ! param penalty=   4.000 ; charge penalty=  12.737
GROUP            ! CHARGE   CH_PENALTY
ATOM N   NG3P3  -0.299 !    2.500
ATOM C1  CG324   0.127 !   10.269
ATOM C2  CG321  -0.245 !   12.737
ATOM O1  OG2D2  -0.760 !    0.850
ATOM C3  CG2O3   0.587 !   12.403
ATOM O2  OG2D2  -0.760 !    0.850
ATOM H1  HGP2    0.330 !    0.000
ATOM H2  HGP2    0.330 !    0.000
ATOM H3  HGP2    0.330 !    0.000
ATOM H4  HGA2    0.090 !    2.500
ATOM H5  HGA2    0.090 !    2.500
ATOM H6  HGA2    0.090 !    0.000
ATOM H7  HGA2    0.090 !    0.000

The charges I got from SwissParam by uploading the zwitterionic
beta-alanine.mol2:
[ atoms ]
; nr  type  resnr  resid  atom  cgnr   charge     mass
   1  NRP       1    LIG    N1      1  -0.8530  14.0067
   2  CR        1    LIG    C1      2   0.5030  12.0110
   3  CR        1    LIG    C2      3  -0.1060  12.0110
   4  O2CM      1    LIG    O1      4  -0.9000  15.9994
   5  CO2M      1    LIG    C3      5   0.9060  12.0110
   6  O2CM      1    LIG    O2      6  -0.9000  15.9994
   7  HNRP      1    LIG    H1      7   0.4500   1.0079
   8  HNRP      1    LIG    H2      8   0.4500   1.0079
   9  HNRP      1    LIG    H3      9   0.4500   1.0079
  10  HCMM      1    LIG    H4     10   0.0000   1.0079
  11  HCMM      1    LIG    H5     11   0.0000   1.0079
  12  HCMM      1    LIG    H6     12   0.0000   1.0079
  13  HCMM      1    LIG    H7     13  -0.0000   1.0079

So, how can I validate which charges for the molecule are correct?
Which set is the right one for running the simulation in GROMACS (using the
charmm36 FF)?

Any suggestions are appreciated.
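
(A hedged aside, not part of the original question: one quick first sanity
check is that each charge set sums to the net charge of the zwitterion, i.e.
zero. For a topology saved in GROMACS .itp format, and for the CGenFF stream
file, something like the following sums the charge columns; the file names
ligand.itp and ligand.str are placeholders.)

# Hedged sketch: sum the charge column of a GROMACS [ atoms ] section.
awk '/\[ atoms \]/{f=1; next} /\[/{f=0} f && $1+0==$1 {s+=$7} END{printf "net charge = %.4f\n", s}' ligand.itp
# Hedged sketch: sum the charges on the ATOM lines of a CGenFF stream file.
awk '$1=="ATOM"{s+=$4} END{printf "net charge = %.4f\n", s}' ligand.str

(This only catches gross problems; it does not say which parameter set is
more appropriate.)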


On Thu, Feb 8, 2018 at 1:37 PM, Mark Abraham 
wrote:

> Hi,
>
> You should start with the original literature and/or CHARMM forcefield
> distribution for its documentation. That wasn't ported to the force field
> files one can use with GROMACS.
>
> Mark
>
> On Thu, Feb 8, 2018 at 7:19 AM Dilip H N 
> wrote:
>
> > Hello,
> > I want to simulate the beta-alanine amino acid. But in the charmm36 FF
> > there are four different names (three/four-letter codes) for ALA, i.e.
> > ALA, DALA, ALAI, ALAO.
> > Out of these, I wanted to know which one corresponds to the beta-alanine
> > structure?
> >
> > I tried the Avogadro software and could only build the alanine structure,
> > not beta-alanine. How can I get a pdb file of beta-alanine? Any other
> > ways?
> > So, can anybody help me regarding this?
> >
> > Thank you.
> >
> > --
> > With Best Regards,
> >
> > DILIP.H.N
> > Ph.D. Student



-- 
With Best Regards,

DILIP.H.N
Ph.D Student