Re: [gmx-users] gromacs build options

2016-10-08 Thread Mahmood Naderan
Sorry, but I got confused. I tried to follow the correct options to specify
the location of the libraries, but failed...


mahmood@cluster:build$ cmake ..
-DCMAKE_C_COMPILER=/share/apps/computer/openmpi-2.0.1/bin/mpicc
-DCMAKE_CXX_COMPILER=/share/apps/computer/openmpi-2.0.1/bin/mpic++
-DCMAKE_PREFIX_PATH=/share/apps/chemistry/gromacs-5.1:/share/apps/computer/fftw-3.3.5
-DFFTWF_LIBRARY=/share/apps/computer/fftw-3.3.5/lib
-DFFTWF_INCLUDE_DIR=/share/apps/computer/fftw-3.3.5/include
-DBUILD_SHARED_LIBS=off -DGMX_BUILD_MDRUN_ONLY=on
...
-- checking for module 'fftw3f'
--   package 'fftw3f' not found
-- pkg-config could not detect fftw3f, trying generic detection
-- Looking for fftwf_plan_r2r_1d in /share/apps/computer/fftw-3.3.5/lib
WARNING: Target "cmTryCompileExec1705831031" requests linking to directory
"/share/apps/computer/fftw-3.3.5/lib".  Targets may link only to
libraries.  CMake is dropping the item.
-- Looking for fftwf_plan_r2r_1d in /share/apps/computer/fftw-3.3.5/lib -
not found
CMake Error at cmake/FindFFTW.cmake:100 (message):
  Could not find fftwf_plan_r2r_1d in /share/apps/computer/fftw-3.3.5/lib,
  take a look at the error message in
  /share/apps/chemistry/gromacs-5.1/build/CMakeFiles/CMakeError.log to find
  out what went wrong.  If you are using a static lib (.a) make sure you
have
  specified all dependencies of fftw3f in FFTWF_LIBRARY by hand (e.g.
  -DFFTWF_LIBRARY='/path/to/libfftw3f.so;/path/to/libm.so') !
Call Stack (most recent call first):
  cmake/gmxManageFFTLibraries.cmake:78 (find_package)
  CMakeLists.txt:666 (include)


-- Configuring incomplete, errors occurred!
See also
"/share/apps/chemistry/gromacs-5.1/build/CMakeFiles/CMakeOutput.log".
See also
"/share/apps/chemistry/gromacs-5.1/build/CMakeFiles/CMakeError.log".
You have changed variables that require your cache to be deleted.
Configure will be re-run and you may have to reset some variables.
The following variables have changed:
CMAKE_CXX_COMPILER= /share/apps/computer/openmpi-2.0.1/bin/mpic++

-- Generating done
-- Build files have been written to: /share/apps/chemistry/gromacs-5.1/build





mahmood@cluster:build$ ls /share/apps/computer/fftw-3.3.5/lib
libfftw3.a  libfftw3.la  libfftw3_mpi.a  libfftw3_mpi.la  pkgconfig






Unfortunately, there are no CMakeOutput.log and CMakeError.log in that
folder!
Can you please explain what is going wrong? Why is it saying that
CMAKE_CXX_COMPILER has been changed?
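
In case it helps to narrow this down: the configure step is looking for the
single-precision FFTW library (libfftw3f), while the lib directory listed
above only contains the double-precision builds, and FFTWF_LIBRARY has to
point at a library file rather than at a directory. A rough sketch of one way
out (the paths and configure flags here are only an illustration, not
something tested on this cluster):

# rebuild FFTW in single precision (GROMACS' default mdrun precision needs fftw3f)
cd fftw-3.3.5
./configure --prefix=/share/apps/computer/fftw-3.3.5 --enable-float --enable-shared
make && make install

# then point cmake at the actual library file, not at the lib directory
cmake .. \
  -DFFTWF_LIBRARY=/share/apps/computer/fftw-3.3.5/lib/libfftw3f.so \
  -DFFTWF_INCLUDE_DIR=/share/apps/computer/fftw-3.3.5/include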


Regards,
Mahmood
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] gromacs build options

2016-10-08 Thread Mahmood Naderan
​OK. I ran

/share/apps/computer/cmake-3.2.3-Linux-x86_64/bin/cmake ..
-DCMAKE_C_COMPILER=/share/apps/computer/openmpi-2.0.1/bin/mpicc
-DCMAKE_CXX_COMPILER=/share/apps/computer/openmpi-2.0.1/bin/mpic++
-DCMAKE_PREFIX_PATH=/share/apps/chemistry/gromacs-5.1
-DBUILD_SHARED_LIBS=off

Please note that I didn't use -DGMX_MPI=on.
I just want to be sure that everything is fine. Do you agree with that?
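
One caveat, as far as I understand the build options (so treat this as an
assumption to verify): passing the MPI compiler wrappers alone does not enable
MPI support; GROMACS builds with its internal thread-MPI unless -DGMX_MPI=on
is given. A sketch of the same configure line with library MPI switched on:

/share/apps/computer/cmake-3.2.3-Linux-x86_64/bin/cmake .. \
  -DCMAKE_C_COMPILER=/share/apps/computer/openmpi-2.0.1/bin/mpicc \
  -DCMAKE_CXX_COMPILER=/share/apps/computer/openmpi-2.0.1/bin/mpic++ \
  -DCMAKE_PREFIX_PATH=/share/apps/chemistry/gromacs-5.1 \
  -DBUILD_SHARED_LIBS=off \
  -DGMX_MPI=on     # enables the MPI build (binaries get the _mpi suffix)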

Regards,
Mahmood

Re: [gmx-users] gromacs build options

2016-10-08 Thread Mahmood Naderan
I got an error regarding fftw3. It may not be related to GMX itself, but I
would appreciate any comments on it.


root@cluster:build# /share/apps/computer/cmake-3.2.3-Linux-x86_64/bin/cmake
.. -DCMAKE_C_COMPILER=/share/apps/computer/openmpi-2.0.1/bin/mpicc
-DCMAKE_CXX_COMPILER=/share/apps/computer/openmpi-2.0.1/bin/mpic++
-DCMAKE_PREFIX_PATH=/share/apps/chemistry/gromacs-5.1
-DBUILD_SHARED_LIBS=off
-DFFTWF_LIBRARY=/share/apps/computer/fftw-3.3.5/lib/libfftw3_mpi.a
-DFFTWF_INCLUDE_DIR=/share/apps/computer/fftw-3.3.5/include
-- No compatible CUDA toolkit found (v4.0+), disabling native GPU
acceleration
-- Looking for fftwf_plan_r2r_1d in
/share/apps/computer/fftw-3.3.5/lib/libfftw3_mpi.a
-- Looking for fftwf_plan_r2r_1d in
/share/apps/computer/fftw-3.3.5/lib/libfftw3_mpi.a - not found
CMake Error at cmake/FindFFTW.cmake:100 (message):
  Could not find fftwf_plan_r2r_1d in
  /share/apps/computer/fftw-3.3.5/lib/libfftw3_mpi.a, take a look at the
  error message in
  /share/apps/chemistry/gromacs-5.1/build/CMakeFiles/CMakeError.log to find
  out what went wrong.  If you are using a static lib (.a) make sure you
have
  specified all dependencies of fftw3f in FFTWF_LIBRARY by hand (e.g.
  -DFFTWF_LIBRARY='/path/to/libfftw3f.so;/path/to/libm.so') !
Call Stack (most recent call first):





I don't understand the last message. Should I use shared libs only?
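
My reading of that message (a guess, so please correct me): FFTWF_LIBRARY has
to name the single-precision serial library (libfftw3f), and libfftw3_mpi.a is
the double-precision MPI library, so the symbol fftwf_plan_r2r_1d simply is
not in it; static versus shared is not the real problem. If building a
single-precision FFTW separately is a hassle, a sketch of letting GROMACS
download and build its own FFTW instead:

cmake .. \
  -DCMAKE_C_COMPILER=/share/apps/computer/openmpi-2.0.1/bin/mpicc \
  -DCMAKE_CXX_COMPILER=/share/apps/computer/openmpi-2.0.1/bin/mpic++ \
  -DBUILD_SHARED_LIBS=off \
  -DGMX_BUILD_OWN_FFTW=ON   # GROMACS fetches and builds a suitable FFTW itself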



Regards,
Mahmood


Re: [gmx-users] gromacs build options

2016-10-08 Thread Mahmood Naderan
​Excuse me, what I understood from the manual is that
-DCMAKE_INSTALL_PREFIX is the same as --prefix in the ./configure script. Do
you mean that I can give multiple locations with that option? One for
GROMACS itself and the other for MPI?

I mean
-DCMAKE_INSTALL_PREFIX=/share/apps/gromacs-5.1
-DCMAKE_INSTALL_PREFIX=/share/apps/openmpi-2.0.1
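
My understanding (worth double-checking against the CMake docs):
-DCMAKE_INSTALL_PREFIX takes a single directory and only controls where
'make install' puts GROMACS, so giving it twice just overrides the first
value; the place to list several search locations (for MPI, FFTW, ...) is
CMAKE_PREFIX_PATH, which accepts a semicolon-separated list. A sketch with
illustrative paths:

cmake .. \
  -DCMAKE_INSTALL_PREFIX=/share/apps/gromacs-5.1 \
  -DCMAKE_PREFIX_PATH="/share/apps/openmpi-2.0.1;/share/apps/computer/fftw-3.3.5"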



Regards,
Mahmood

[gmx-users] gromacs build options

2016-10-08 Thread Mahmood Naderan
Hi,
I am trying to install gromacs-5.1 from the source. What are the proper
cmake options for the following things:

1- Installing to a custom location and not /usr/local
2- Using a customized installation of MPI and not /usr/local
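
For the archive, a minimal sketch combining the pieces that come up later in
this thread (all paths are examples): -DCMAKE_INSTALL_PREFIX chooses the
install location, and pointing the compiler variables at the MPI wrappers
selects the custom MPI installation.

cmake .. \
  -DCMAKE_INSTALL_PREFIX=/share/apps/chemistry/gromacs-5.1 \
  -DCMAKE_C_COMPILER=/share/apps/computer/openmpi-2.0.1/bin/mpicc \
  -DCMAKE_CXX_COMPILER=/share/apps/computer/openmpi-2.0.1/bin/mpic++ \
  -DGMX_MPI=on -DGMX_BUILD_OWN_FFTW=ON
make
sudo make install   # installs under CMAKE_INSTALL_PREFIX, not /usr/local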


Regards,
Mahmood


Re: [gmx-users] gromacs build options

2016-10-08 Thread Mahmood Naderan
​OMPI-2.0.1 is installed on the system. I want to tell gromacs that mpifort
(or other wrappers) are in /share/apps/openmpi-2.0.1/bin and libraries are
in /share/apps/openmpi-2.0.1/lib

How can I tell that to cmake?

Regards,
Mahmood

[gmx-users] Multithread run issues

2016-10-09 Thread Mahmood Naderan
Hi,
Users issue the command "mdrun -v" and that will automatically read the input
files in the working directory. There are two issues with that for which I am
not aware of the solution.

1- How can the number of cores be changed?
2- Viewing the output of the "top" command, it says that mdrun uses 400%
cpu. That means 4 cores are occupied. The problem is that we prefer to see four
processes, each consuming 100% cpu. Why? Because in the first situation, PBS
wrongly sees that one core is used (while 4 cores are used), but in the
latter, it correctly sees 4 cores.

As I read the documentation, I think the replacement of "mdrun -v" should be

mpirun -np N mdrun -v

Am I correct?
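
For reference, a sketch of how I understand the two launch styles (please
correct me if this is off):

# thread-MPI build (plain 'mdrun'): request the thread count directly
mdrun -v -nt 4

# real MPI build ('mdrun_mpi'): the rank count comes from mpirun
mpirun -np 4 mdrun_mpi -v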

Regards,
Mahmood


Re: [gmx-users] gromacs build options

2016-10-09 Thread Mahmood Naderan
Thanks. Got it.

Regards,
Mahmood



On Sun, Oct 9, 2016 at 2:36 PM, Mark Abraham <mark.j.abra...@gmail.com>
wrote:

> Hi,
>
> Great. But CMAKE_INSTALL_PREFIX and CMAKE_PREFIX_PATH are different things
> that do different jobs (specify where to install vs where to search for
> dependencies).
>
> Mark
>
> On Sun, Oct 9, 2016 at 12:20 PM Mahmood Naderan <mahmood...@gmail.com>
> wrote:
>
> > Hi mark,
> > Thank you very much. In fact the following commands did the job
> >
> >
> > $ cmake ..  -DCMAKE_C_COMPILER=/share/apps/computer/openmpi-2.0.1/
> bin/mpicc
> > -DCMAKE_CXX_COMPILER=/share/apps/computer/openmpi-2.0.1/bin/mpic++
> > -DCMAKE_PREFIX_PATH=/share/apps/chemistry/gromacs-5.1
> > -DGMX_BUILD_OWN_FFTW=ON -DBUILD_SHARED_LIBS=off -DGMX_BUILD_MDRUN_ONLY=on
> > $ make
> > $ sudo make install
> >
> >
> > However, after "sudo make install", the bin/mdrun goes in to
> > /usr/local/gromacs and not what I specified in the PREFIX_PATH.
> >
> >
> > I am not a user of gromacs. But as a cluster admin, I installed gromacs
> for
> > the users.
> >
> > Thanks a lot.
> >
> >
> > Regards,
> > Mahmood
> > --
> > Gromacs Users mailing list
> >
> > * Please search the archive at
> > http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> > posting!
> >
> > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> >
> > * For (un)subscribe requests visit
> > https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> > send a mail to gmx-users-requ...@gromacs.org.
> >
> --
> Gromacs Users mailing list
>
> * Please search the archive at http://www.gromacs.org/
> Support/Mailing_Lists/GMX-Users_List before posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>


[gmx-users] nt/ntmpi/ntomp

2016-10-22 Thread Mahmood Naderan
Hi,
What is the clear difference among nt, ntmpi and ntomp? I have built
gromacs with MPI support. Then I run

mpirun gmx_mpi mdrun

I simply want to know: is it a good idea to use 'nt' with such a command? Are
there conflicts among them which degrade the performance? For example,
using 'nt' with mpirun degrades the performance. Is that correct?

How about ntomp?
I read the descriptions for them, but I expect to see a table that shows
which switch is valid/invalid for a platform.
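
My rough summary after reading the docs (a sketch, not authoritative): -nt
sets the total number of threads and -ntmpi the number of thread-MPI ranks,
and both only apply to the thread-MPI build; when a real-MPI build is started
through mpirun, the rank count comes from -np and only -ntomp (OpenMP threads
per rank) is still meaningful, which is where mixing 'nt' with mpirun causes
trouble. For example:

# thread-MPI build: 4 thread-MPI ranks x 2 OpenMP threads = 8 threads total
gmx mdrun -ntmpi 4 -ntomp 2

# real-MPI build: ranks come from mpirun, only -ntomp applies
mpirun -np 4 gmx_mpi mdrun -ntomp 2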



Regards,
Mahmood


Re: [gmx-users] Switching Group to Verlet

2016-10-22 Thread Mahmood Naderan
Well, with such message flooding written to the log file at every step,
the network will become a bottleneck, affecting the performance of the other
jobs.

Regards,
Mahmood



On Sat, Oct 22, 2016 at 3:42 PM, Mark Abraham 
wrote:

> Hi,
>
> This is an energy minimization. Its behaviour is to print every step, and
> if that's verbose, it doesn't matter for the number of steps that are
> typically required. A dynamical simulation has different needs and works
> differently.
>
> Mark
>


Re: [gmx-users] Switching Group to Verlet

2016-10-23 Thread Mahmood Naderan
Hi Mark,
So I changed the code (gromacs-5.1/src/gromacs/mdlib/minimize.cpp) like
this:


if (MASTER(cr))
{
    if (bVerbose && ((++myCounter)%1==0))
    {
        fprintf(stderr, "Step=%5d, Dmax= %6.1e nm, Epot= %12.5e Fmax= %11.5e, atom= %d%c",
                count, ustep, s_try->epot, s_try->fmax, s_try->a_fmax+1,
                ((count == 0) || (s_try->epot < s_min->epot)) ? '\n' : '\r');
        ...


myCounter mirrors the variable count: wherever count is initialized or
incremented, I did the same for myCounter.

But this doesn't work! After a minute (when the step count is already much
greater than 1) the program terminates



Steepest Descents:
   Tolerance (Fmax)   =  1.0e+01
   Number of steps=2

Energy minimization has stopped, but the forces have not converged to the
requested precision Fmax < 10 (which may not be possible for your system).
It
stopped because the algorithm tried to make a new step whose size was too
small, or there was no change in the energy since last step. Either way, we
regard the minimization as converged to within the available machine
precision, given your starting configuration and EM parameters.




Regards,
Mahmood


Re: [gmx-users] Switching Group to Verlet

2016-10-23 Thread Mahmood Naderan
OK thanks for the clarification.


Regards,
Mahmood



On Sun, Oct 23, 2016 at 1:29 PM, Mark Abraham <mark.j.abra...@gmail.com>
wrote:

> Hi,
>
> I've said before that this is not a problem. Imagine you suppressed all
> 20,000 step outputs that wrote 80 characters of output each. That's 1.6 MB,
> which you could also suppress by piping terminal output to /dev/null. The
> minimization ran in around 100 seconds, so the load on the infrastructure
> was under 20 kb/sec. Can you name any workload on the cluster that produces
> less traffic?
>
> Mark
>
> On Sun, 23 Oct 2016 11:12 Mahmood Naderan <mahmood...@gmail.com> wrote:
>
> > Hi Mark,
> > So I changed the code (gromacs-5.1/src/gromacs/mdlib/minimize.cpp) like
> > this:
> >
> >
> > if (MASTER(cr))
> > {
> > if (bVerbose && ((++myCounter)%1==0))
> > {
> > fprintf(stderr, "Step=%5d, Dmax= %6.1e nm, Epot= %12.5e
> > Fmax= %11.5e, atom= %d%c",
> > count, ustep, s_try->epot, s_try->fmax,
> > s_try->a_fmax+1,
> > ( (count == 0) || (s_try->epot < s_min->epot) ) ?
> > '\n' : '\r');
> >   ...
> >
> >
> > myCounter is similar to the variable count. Where count has been
> > initialized or incremented, I also did that for myCounter.
> >
> > But this doesn't work! After a minute (where is the step is much more
> than
> > 1) the program terminates
> >
> >
> >
> > Steepest Descents:
> >Tolerance (Fmax)   =  1.0e+01
> >Number of steps=2
> >
> > Energy minimization has stopped, but the forces have not converged to the
> > requested precision Fmax < 10 (which may not be possible for your
> system).
> > It
> > stopped because the algorithm tried to make a new step whose size was too
> > small, or there was no change in the energy since last step. Either way,
> we
> > regard the minimization as converged to within the available machine
> > precision, given your starting configuration and EM parameters.
> >
> >
> >
> >
> > Regards,
> > Mahmood
> > --
> > Gromacs Users mailing list
> >
> > * Please search the archive at
> > http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> > posting!
> >
> > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> >
> > * For (un)subscribe requests visit
> > https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> > send a mail to gmx-users-requ...@gromacs.org.
> >
> --
> Gromacs Users mailing list
>
> * Please search the archive at http://www.gromacs.org/
> Support/Mailing_Lists/GMX-Users_List before posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>


Re: [gmx-users] Low cpu utilization

2016-10-17 Thread Mahmood Naderan
It is interesting to me that I specified Verlet, but the log warns about
Group.

mahmood@cluster:LPN$ grep -r cut-off .
./mdout.mdp:; cut-off scheme (group: using charge groups, Verlet: particle
based cut-offs)
./mdout.mdp:; nblist cut-off
./mdout.mdp:; long-range cut-off for switched potentials
./mdout.mdp:; cut-off lengths
./mdout.mdp:; Extension of the potential lookup tables beyond the cut-off
mahmood@cluster:LPN$ grep -r Verlet .
./mdout.mdp:; cut-off scheme (group: using charge groups, Verlet: particle
based cut-offs)
./mdout.mdp:cutoff-scheme= Verlet
./mdout.mdp:; Allowed energy drift due to the Verlet buffer in kJ/mol/ps
per atom,
./mdout.mdp:coulomb-modifier = Potential-shift-Verlet
./mdout.mdp:vdw-modifier = Potential-shift-Verlet




Regards,
Mahmood


Re: [gmx-users] Low cpu utilization

2016-10-17 Thread Mahmood Naderan
​Here is what I did...
I changed the cutoff-method to Verlet as suggested by
http://www.gromacs.org/Documentation/Cut-off_schemes#How_to_use_the_Verlet_scheme


Then I followed two scenarios:

1) On the frontend, where gromacs and openmpi have been installed, I ran

​mahmood@cluster:LPN$ date
Mon Oct 17 11:06:40 2016
mahmood@cluster:LPN$ /share/apps/computer/openmpi-2.0.1/bin/mpirun -np 2
/share/apps/chemistry/gromacs-5.1/bin/mdrun_mpi -v
...
...
starting mdrun 'Protein in water'
5000 steps,  5.0 ps.
step 0
[cluster.scu.ac.ir:28044] 1 more process has sent help message
help-mpi-btl-base.txt / btl:no-nics
[cluster.scu.ac.ir:28044] Set MCA parameter "orte_base_help_aggregate" to 0
to see all help / error messages
imb F  0% step 100, will finish Tue Dec  6 11:41:44 2016
imb F  0% step 200, will finish Sun Dec  4 23:06:02 2016
^Cmahmood@cluster:LPN$ date
Mon Oct 17 11:07:01 2016


​So, roughly 21 seconds for about 200 steps. As I checked 'top' command,
two cpus were 100%. Full log is available at http://pastebin.com/CzViEmRb



2) I specified two nodes instead of the frontend. Each of the two nodes has at
least one free core, so one process on each of them is comparable to the
previous scenario.

mahmood@cluster:LPN$ cat hosts.txt
compute-0-2
compute-0-1
mahmood@cluster:LPN$ date
Mon Oct 17 11:12:34 2016
mahmood@cluster:LPN$ /share/apps/computer/openmpi-2.0.1/bin/mpirun -np 2
--hostfile hosts.txt /share/apps/chemistry/gromacs-5.1/bin/mdrun_mpi -v
...
...
starting mdrun 'Protein in water'
5000 steps,  5.0 ps.
step 0
^CKilled by signal 2.
Killed by signal 2.
mahmood@cluster:LPN$ date
Mon Oct 17 11:15:47 2016

So, roughly 3 minutes without any progress!! As I ssh'ed to compute-0-2,
the 'top' command shows

23153 mahmood   39  19  190m  15m 6080 R  1.3  0.0   0:00.39 mdrun_mpi
23154 mahmood   39  19  190m  16m 5700 R  1.3  0.0   0:00.39 mdrun_mpi

And that is very very low cpu utilization. Please see the log at
http://pastebin.com/MZbjK4vD
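
A diagnostic sketch I would try next (these are Open MPI MCA options, not
GROMACS ones, and they may not be the right knobs for this fabric): the
btl:no-nics help message in the first run suggests Open MPI did not find a
usable high-speed interconnect, so forcing plain TCP and raising the transport
verbosity might show where the inter-node run gets stuck.

/share/apps/computer/openmpi-2.0.1/bin/mpirun -np 2 --hostfile hosts.txt \
    --mca btl self,tcp --mca btl_base_verbose 30 \
    /share/apps/chemistry/gromacs-5.1/bin/mdrun_mpi -v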



Any idea is welcome.




Regards,
Mahmood

Re: [gmx-users] Switching Group to Verlet

2016-10-21 Thread Mahmood Naderan
OK. I verified that the cutoff parameter inside the tpr file is Group

mahmood@cluster:gromacs-5.1$ ./bin/gmx_mpi dump -s ~/LPN/topol.tpr | grep
cutoff
...
Note: file tpx version 83, software tpx version 103
   cutoff-scheme  = Group





Now, according to this reply (by you, Mark)
https://www.mail-archive.com/gmx-users@gromacs.org/msg63038.html, it
*should be* possible to override the cutoff parameter of the binary file
when I run mdrun_mpi (similar to nsteps).

Could you please tell me, what is the correct option for that? I didn't
find any relevant option for mdrun_mpi.
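
In case someone searches for this later: as far as I can tell there is no
mdrun_mpi option to override cutoff-scheme; what eventually worked (see the
later messages in this thread) was editing the .mdp and regenerating the .tpr.
A sketch, with placeholder input file names:

# set 'cutoff-scheme = Verlet' in the .mdp, then rebuild the tpr
gmx_mpi grompp -f grompp.mdp -c conf.gro -p topol.top -o topol.tpr
# confirm the scheme stored in the new binary file
gmx_mpi dump -s topol.tpr | grep cutoff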





Regards,
Mahmood


Re: [gmx-users] Switching Group to Verlet

2016-10-21 Thread Mahmood Naderan
Meanwhile, I have been confused by one thing!
If I build Gromacs with -DGMX_BUILD_MDRUN_ONLY=on, then I cannot see
gmx_mpi.
On the other hand, if I remove that switch, then I see gmx_mpi but there is
no mdrun_mpi.

Can you please clarify that or give me the right document for that. The
documentation is really rich and that is confusing for a starter :(


Regards,
Mahmood



On Fri, Oct 21, 2016 at 4:33 PM, Mark Abraham <mark.j.abra...@gmail.com>
wrote:

> Hi,
>
> No, that's not general. Your approach is right - edit your .mdp file and
> re-run grompp. Just pay closer attention ;-)
>
> Mark
>
> On Fri, Oct 21, 2016 at 2:49 PM Mahmood Naderan <mahmood...@gmail.com>
> wrote:
>
> > OK. I verified that the cutoff parameter inside the trp file is Group
> >
> > mahmood@cluster:gromacs-5.1$ ./bin/gmx_mpi dump -s ~/LPN/topol.tpr |
> grep
> > cutoff
> > ...
> > Note: file tpx version 83, software tpx version 103
> >cutoff-scheme  = Group
> >
> >
> >
> >
> >
> > Now, according to this reply (by you Mark)
> > https://www.mail-archive.com/gmx-users@gromacs.org/msg63038.html, it
> > *should be* possible to overwrite the cutoff parameter of the binary file
> > as I run the mdrun_mpi (similar to nsteps).
> >
> > Could you please tell me, what is the correct option for that? I didn't
> > find any relevant option for mdrun_mpi.
> >
> >
> >
> >
> >
> > Regards,
> > Mahmood
> > --
> > Gromacs Users mailing list
> >
> > * Please search the archive at
> > http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> > posting!
> >
> > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> >
> > * For (un)subscribe requests visit
> > https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> > send a mail to gmx-users-requ...@gromacs.org.
> >
> --
> Gromacs Users mailing list
>
> * Please search the archive at http://www.gromacs.org/
> Support/Mailing_Lists/GMX-Users_List before posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>


Re: [gmx-users] Switching Group to Verlet

2016-10-21 Thread Mahmood Naderan
OK thank you very much. I got it.

I ran

bin/gmx_mpi grompp

and it rebuilds the tpr file. Then I checked the binary tpr and saw
Verlet. Then I ran

mpirun -np 4  bin/gmx_mpi mdrun -v -ntomp 2

without any error.

Thanks.


Regards,
Mahmood



On Fri, Oct 21, 2016 at 5:02 PM, Mark Abraham <mark.j.abra...@gmail.com>
wrote:

> Hi,
>
> What's the question? The function of that switch is to do exactly what you
> observe :-) See
> http://manual.gromacs.org/documentation/2016/install-
> guide/index.html#building-only-mdrun
> There's
> very little value in gmx_mpi that isn't served by mdrun_mpi.
>
> Mark
>
> On Fri, Oct 21, 2016 at 3:14 PM Mahmood Naderan <mahmood...@gmail.com>
> wrote:
>
> > Meanwhile, I have been confused with one thing!
> > If I build Gromacs with -DGMX_BUILD_MDRUN_ONLY=on, then I cannot see
> > gmx_mpi.
> > On the other hand, if I remove that switch, then I see gmx_mpi but there
> no
> > mdrun_mpi.
> >
> > Can you please clarify that or give me the right document for that. The
> > documentation is really rich and that is confusing for a starter :(
> >
> >
> > Regards,
> > Mahmood
> >
> >
> >
> > On Fri, Oct 21, 2016 at 4:33 PM, Mark Abraham <mark.j.abra...@gmail.com>
> > wrote:
> >
> > > Hi,
> > >
> > > No, that's not general. Your approach is right - edit your .mdp file
> and
> > > re-run grompp. Just pay closer attention ;-)
> > >
> > > Mark
> > >
> > > On Fri, Oct 21, 2016 at 2:49 PM Mahmood Naderan <mahmood...@gmail.com>
> > > wrote:
> > >
> > > > OK. I verified that the cutoff parameter inside the trp file is Group
> > > >
> > > > mahmood@cluster:gromacs-5.1$ ./bin/gmx_mpi dump -s ~/LPN/topol.tpr |
> > > grep
> > > > cutoff
> > > > ...
> > > > Note: file tpx version 83, software tpx version 103
> > > >cutoff-scheme  = Group
> > > >
> > > >
> > > >
> > > >
> > > >
> > > > Now, according to this reply (by you Mark)
> > > > https://www.mail-archive.com/gmx-users@gromacs.org/msg63038.html, it
> > > > *should be* possible to overwrite the cutoff parameter of the binary
> > file
> > > > as I run the mdrun_mpi (similar to nsteps).
> > > >
> > > > Could you please tell me, what is the correct option for that? I
> didn't
> > > > find any relevant option for mdrun_mpi.
> > > >
> > > >
> > > >
> > > >
> > > >
> > > > Regards,
> > > > Mahmood
> > > > --
> > > > Gromacs Users mailing list
> > > >
> > > > * Please search the archive at
> > > > http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> > > > posting!
> > > >
> > > > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> > > >
> > > > * For (un)subscribe requests visit
> > > > https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users
> or
> > > > send a mail to gmx-users-requ...@gromacs.org.
> > > >
> > > --
> > > Gromacs Users mailing list
> > >
> > > * Please search the archive at http://www.gromacs.org/
> > > Support/Mailing_Lists/GMX-Users_List before posting!
> > >
> > > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> > >
> > > * For (un)subscribe requests visit
> > > https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> > > send a mail to gmx-users-requ...@gromacs.org.
> > >
> > --
> > Gromacs Users mailing list
> >
> > * Please search the archive at
> > http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> > posting!
> >
> > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> >
> > * For (un)subscribe requests visit
> > https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> > send a mail to gmx-users-requ...@gromacs.org.
> >
> --
> Gromacs Users mailing list
>
> * Please search the archive at http://www.gromacs.org/
> Support/Mailing_Lists/GMX-Users_List before posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>


Re: [gmx-users] Switching Group to Verlet

2016-10-21 Thread Mahmood Naderan
One more question.
Currently, gromacs prints the output on every step!

Step=  159, Dmax= 3.8e-03 nm, Epot= -8.39592e+05 Fmax= 2.45536e+03, atom=
2111
Step=  160, Dmax= 4.6e-03 nm, Epot= -8.39734e+05 Fmax= 4.30685e+03, atom=
2111
Step=  161, Dmax= 5.5e-03 nm, Epot= -8.39913e+05 Fmax= 3.78613e+03, atom=
2111

Can you please tell me how to increase the print step? What is the
appropriate switch for that?
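
For the archive, based on the minimize.cpp snippet quoted elsewhere in this
thread: those per-step lines are only printed when mdrun runs with -v, and
they go to stderr, so the simple workarounds seem to be dropping -v or
discarding that stream. A sketch:

# run without the verbose flag
mpirun -np 4 gmx_mpi mdrun -ntomp 2

# or keep -v but throw the per-step lines away
mpirun -np 4 gmx_mpi mdrun -v -ntomp 2 2> /dev/null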


Regards,
Mahmood


[gmx-users] Switching Group to Verlet

2016-10-21 Thread Mahmood Naderan
Hi,
I have specified Verlet in the mdp files according to the manual. However,
when I run mdrun_mpi with the -ntomp switch, it says that the cut-off scheme
is Group.



mahmood@cluster:LPN$ ls *.mdp
grompp.mdp  md100.mdp  mdout.mdp  rest.mdp
mahmood@cluster:LPN$ grep -r Verlet .
./grompp.mdp:cutoff-scheme   = Verlet
./rest.mdp:cutoff-scheme   = Verlet
./md100.mdp:cutoff-scheme   = Verlet
./mdout.mdp:; cut-off scheme (group: using charge groups, Verlet: particle
based cut-offs)
./mdout.mdp:cutoff-scheme= Verlet
./mdout.mdp:; Allowed energy drift due to the Verlet buffer in kJ/mol/ps
per atom,
./mdout.mdp:coulomb-modifier = Potential-shift-Verlet
./mdout.mdp:vdw-modifier = Potential-shift-Verlet
mahmood@cluster:LPN$ /share/apps/computer/openmpi-2.0.1/bin/mpirun -np 4
/share/apps/chemistry/gromacs-5.1/bin/mdrun_mpi -v -ntomp 2
...
Program mdrun_mpi, VERSION 5.1
Source code file:
/share/apps/chemistry/gromacs-5.1/src/programs/mdrun/resource-division.cpp,
line: 746

Fatal error:
OpenMP threads have been requested with cut-off scheme Group, but these are
only supported with cut-off scheme Verlet




Where did I fail to set that option? Any idea about that?

Regards,
Mahmood


Re: [gmx-users] Switching Group to Verlet

2016-10-21 Thread Mahmood Naderan
Excuse me... this is a better output that shows the inconsistency

mahmood@cluster:LPN$ grep -r cutoff .
./grompp.mdp:cutoff-scheme   = Verlet
./rest.mdp:cutoff-scheme   = Verlet
./md100.mdp:cutoff-scheme   = Verlet
./md.log:   cutoff-scheme  = Group
./mdout.mdp:cutoff-scheme= Verlet


Where in the code has the scheme been changed to Group?

Regards,
Mahmood


Re: [gmx-users] Multithread run issues

2016-10-10 Thread Mahmood Naderan
Sorry for the previous incomplete email.

Program mdrun, VERSION 5.1
Source code file:
/share/apps/chemistry/gromacs-5.1/src/programs/mdrun/resource-division.cpp,
line: 746

Fatal error:
OpenMP threads have been requested with cut-off scheme Group, but these are
only supported with cut-off scheme Verlet
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors



I read that document from that web site but didn't understand what the
issue is!
Thanks
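
For the archive, my later understanding of this error (a sketch; double-check
before relying on it): the .tpr had been generated with cutoff-scheme = Group,
and mdrun only supports OpenMP threads with the Verlet scheme, so the choices
are to run pure MPI ranks without requesting OpenMP, or to switch the .mdp to
Verlet and re-run grompp. For example:

# with the Group scheme: plain MPI ranks, no OpenMP request
mpirun -np 2 mdrun_mpi -v

# after switching the .mdp to Verlet and re-running grompp:
mpirun -np 2 mdrun_mpi -v -ntomp 2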

Regards,
Mahmood



On Mon, Oct 10, 2016 at 2:29 PM, Mahmood Naderan <mahmood...@gmail.com>
wrote:

> >mpirun -np 1 mdrun_mpi -v -ntomp 2
>
> Agree with that.
>
> >This is not a problem to solve by running applications differently. You
> >will have users running jobs with single ranks/processes that use
> threading
> >of various kinds to fill the cores. That's a feature, not a bug. Either
> >configure PBS to cope with decades-old technology, or don't worry about
> it.
>
> I found this document
> ​(
> https://wiki.anl.gov/cnm/HPC/Submitting_and_Managing_Jobs/
> Advanced_node_selection#Multithreading_using_OpenMP)
> ​That is what I want to be sure that number of threads and cores used for
> gromacs fits to the PBS stats.
>
> Instead of the variable, I wrote
>
>
>
> #PBS -l nodes=1:ppn=2​
> export OMP_NUM_THREADS=2
> mpirun  mdrun -v
>
> So that will use two cores with 4 threads totally and PBS should report 4
> processors are occupied.
> However, gromacs failed with the following error
>
>
>
>
> Regards,
> Mahmood
>
>
>
> On Mon, Oct 10, 2016 at 1:41 PM, Mark Abraham <mark.j.abra...@gmail.com>
> wrote:
>
>> Hi,
>>
>> Yeah, but that run is very likely
>>
>> a) useless because you're just running two copies of the same simulation
>> because you're not running MPI-enabled mdrun
>> b) and even if not, less efficient than the thread-MPI version
>>
>> mdrun -v -nt 2
>>
>> c) and even if not, likely slightly less efficient than the real-MPI
>> version
>>
>> mpirun -np 1 mdrun_mpi -v -ntomp 2
>>
>> top isn't necessarily reporting anything relevant. A CPU can be nominally
>> idle while waiting for communication, but what does top think about that?
>>
>> Mark
>>
>> On Mon, Oct 10, 2016 at 11:47 AM Mahmood Naderan <mahmood...@gmail.com>
>> wrote:
>>
>> > OK. I understood the  documents.
>> > Thing that I want is to see two processes (for example) each consumes
>> 100%
>> > cpu. The command for that is
>> >
>> > mpirun -np 2 mdrun -v -nt 1
>> >
>> > ​Thanks Mark.​
>> >
>> > Regards,
>> > Mahmood
>> > --
>> > Gromacs Users mailing list
>> >
>> > * Please search the archive at
>> > http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
>> > posting!
>> >
>> > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>> >
>> > * For (un)subscribe requests visit
>> > https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
>> > send a mail to gmx-users-requ...@gromacs.org.
>> --
>> Gromacs Users mailing list
>>
>> * Please search the archive at http://www.gromacs.org/Support
>> /Mailing_Lists/GMX-Users_List before posting!
>>
>> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>>
>> * For (un)subscribe requests visit
>> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
>> send a mail to gmx-users-requ...@gromacs.org.
>>
>
>

Re: [gmx-users] Multithread run issues

2016-10-10 Thread Mahmood Naderan
OK. I understood the  documents.
What I want is to see two processes (for example), each consuming 100%
cpu. The command for that is

mpirun -np 2 mdrun -v -nt 1

​Thanks Mark.​

Regards,
Mahmood

Re: [gmx-users] Multithread run issues

2016-10-10 Thread Mahmood Naderan
>mpirun -np 1 mdrun_mpi -v -ntomp 2

Agree with that.

>This is not a problem to solve by running applications differently. You
>will have users running jobs with single ranks/processes that use threading
>of various kinds to fill the cores. That's a feature, not a bug. Either
>configure PBS to cope with decades-old technology, or don't worry about it.

I found this document
​(
https://wiki.anl.gov/cnm/HPC/Submitting_and_Managing_Jobs/Advanced_node_selection#Multithreading_using_OpenMP
)
​That is what I want to be sure that number of threads and cores used for
gromacs fits to the PBS stats.

Instead of the variable, I wrote



#PBS -l nodes=1:ppn=2​
export OMP_NUM_THREADS=2
mpirun  mdrun -v

So that will use two cores with 4 threads in total, and PBS should report 4
processors as occupied.
However, gromacs failed with the following error




Regards,
Mahmood



On Mon, Oct 10, 2016 at 1:41 PM, Mark Abraham <mark.j.abra...@gmail.com>
wrote:

> Hi,
>
> Yeah, but that run is very likely
>
> a) useless because you're just running two copies of the same simulation
> because you're not running MPI-enabled mdrun
> b) and even if not, less efficient than the thread-MPI version
>
> mdrun -v -nt 2
>
> c) and even if not, likely slightly less efficient than the real-MPI
> version
>
> mpirun -np 1 mdrun_mpi -v -ntomp 2
>
> top isn't necessarily reporting anything relevant. A CPU can be nominally
> idle while waiting for communication, but what does top think about that?
>
> Mark
>
> On Mon, Oct 10, 2016 at 11:47 AM Mahmood Naderan <mahmood...@gmail.com>
> wrote:
>
> > OK. I understood the  documents.
> > Thing that I want is to see two processes (for example) each consumes
> 100%
> > cpu. The command for that is
> >
> > mpirun -np 2 mdrun -v -nt 1
> >
> > ​Thanks Mark.​
> >
> > Regards,
> > Mahmood
> > --
> > Gromacs Users mailing list
> >
> > * Please search the archive at
> > http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> > posting!
> >
> > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> >
> > * For (un)subscribe requests visit
> > https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> > send a mail to gmx-users-requ...@gromacs.org.
> --
> Gromacs Users mailing list
>
> * Please search the archive at http://www.gromacs.org/
> Support/Mailing_Lists/GMX-Users_List before posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>

Re: [gmx-users] gromacs build options

2016-10-09 Thread Mahmood Naderan
Hi Mark,
Thank you very much. In fact, the following commands did the job


$ cmake ..  -DCMAKE_C_COMPILER=/share/apps/computer/openmpi-2.0.1/bin/mpicc
-DCMAKE_CXX_COMPILER=/share/apps/computer/openmpi-2.0.1/bin/mpic++
-DCMAKE_PREFIX_PATH=/share/apps/chemistry/gromacs-5.1
-DGMX_BUILD_OWN_FFTW=ON -DBUILD_SHARED_LIBS=off -DGMX_BUILD_MDRUN_ONLY=on
$ make
$ sudo make install


However, after "sudo make install", the bin/mdrun goes in to
/usr/local/gromacs and not what I specified in the PREFIX_PATH.
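
For anyone copying these commands: the install location is controlled by
-DCMAKE_INSTALL_PREFIX, while CMAKE_PREFIX_PATH only tells cmake where to
search for dependencies (as Mark notes elsewhere in this thread), so a sketch
of the corrected configure step would be:

cmake .. \
  -DCMAKE_C_COMPILER=/share/apps/computer/openmpi-2.0.1/bin/mpicc \
  -DCMAKE_CXX_COMPILER=/share/apps/computer/openmpi-2.0.1/bin/mpic++ \
  -DCMAKE_INSTALL_PREFIX=/share/apps/chemistry/gromacs-5.1 \
  -DGMX_BUILD_OWN_FFTW=ON -DBUILD_SHARED_LIBS=off -DGMX_BUILD_MDRUN_ONLY=on
make && sudo make install   # now lands under /share/apps/chemistry/gromacs-5.1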


I am not a user of gromacs. But as a cluster admin, I installed gromacs for
the users.

Thanks a lot.


Regards,
Mahmood


[gmx-users] Low cpu utilization

2016-10-16 Thread Mahmood Naderan
Hi,
A PBS script for a gromacs job has been submitted with the following
content:

#!/bin/bash
#PBS -V
#PBS -q default
#PBS -j oe
#PBS -l nodes=2:ppn=10
#PBS -N LPN
#PBS -o /home/dayer/LPN/mdout.out
cd $PBS_O_WORKDIR
mpirun gromacs-5.1/bin/mdrun_mpi -v


As I ssh'ed to the nodes and looked at the mdrun_mpi processes, I noticed that
the CPU utilization is not good enough!


[root@compute-0-1 ~]# ps aux | grep mdrun_mpi
dayer 7552 64.1  0.0 199224 21300 ?RNl  Oct15 1213:39
gromacs-5.1/bin/mdrun_mpi -v
dayer 7553 56.8  0.0 201524 23044 ?RNl  Oct15 1074:47
gromacs-5.1/bin/mdrun_mpi -v
dayer 7554 64.1  0.0 201112 22364 ?RNl  Oct15 1213:25
gromacs-5.1/bin/mdrun_mpi -v
dayer 7555 56.5  0.0 198336 20408 ?RNl  Oct15 1070:17
gromacs-5.1/bin/mdrun_mpi -v
dayer 7556 64.3  0.0 225796 48436 ?RNl  Oct15 1217:35
gromacs-5.1/bin/mdrun_mpi -v
dayer 7557 56.1  0.0 198444 20404 ?RNl  Oct15 1062:26
gromacs-5.1/bin/mdrun_mpi -v
dayer 7558 63.4  0.0 198996 20848 ?RNl  Oct15 1199:05
gromacs-5.1/bin/mdrun_mpi -v
dayer 7562 56.2  0.0 197912 19736 ?RNl  Oct15 1062:57
gromacs-5.1/bin/mdrun_mpi -v
dayer 7565 63.1  0.0 197008 19208 ?RNl  Oct15 1194:51
gromacs-5.1/bin/mdrun_mpi -v
dayer 7569 56.7  0.0 227904 50584 ?RNl  Oct15 1072:33
gromacs-5.1/bin/mdrun_mpi -v



[root@compute-0-3 ~]# ps aux | grep mdrun_mpi
dayer 1735  0.0  0.0 299192  4692 ?Sl   Oct15   0:03 mpirun
gromacs-5.1/bin/mdrun_mpi -v
dayer 1740  9.5  0.0 209692 29224 ?RNl  Oct15 180:09
gromacs-5.1/bin/mdrun_mpi -v
dayer 1741  9.6  0.0 200948 22784 ?RNl  Oct15 183:21
gromacs-5.1/bin/mdrun_mpi -v
dayer 1742  9.3  0.0 200256 21980 ?RNl  Oct15 177:28
gromacs-5.1/bin/mdrun_mpi -v
dayer 1743  9.5  0.0 197672 19100 ?RNl  Oct15 180:01
gromacs-5.1/bin/mdrun_mpi -v
dayer 1744  9.6  0.0 228208 50920 ?RNl  Oct15 183:07
gromacs-5.1/bin/mdrun_mpi -v
dayer 1746  9.3  0.0 199144 20588 ?RNl  Oct15 176:24
gromacs-5.1/bin/mdrun_mpi -v
dayer 1749  9.5  0.0 201496 23156 ?RNl  Oct15 180:25
gromacs-5.1/bin/mdrun_mpi -v
dayer 1751  9.1  0.0 200916 22884 ?RNl  Oct15 173:13
gromacs-5.1/bin/mdrun_mpi -v
dayer 1755  9.3  0.0 198744 20616 ?RNl  Oct15 176:49
gromacs-5.1/bin/mdrun_mpi -v
dayer 1758  9.2  0.0 226792 49460 ?RNl  Oct15 174:12
gromacs-5.1/bin/mdrun_mpi -v



Please note that the third column is the CPU utilization.
Gromacs is a compute-intensive application, so there is little I/O or
anything else to account for this.


Please also note that in compute-0-3 the first process is "mpirun
gromacs-5.1" while the others are only "gromacs-5.1"


Any idea is welcomed.

Regards,
Mahmood


Re: [gmx-users] Low cpu utilization

2016-10-16 Thread Mahmood Naderan
Well that is provided by nodes=2:ppn=10 in the PBS script.

Regards,
Mahmood



On Sun, Oct 16, 2016 at 9:26 PM, Parvez Mh <parvezm...@gmail.com> wrote:

> Hi,
>
> Where is -np option in mpirun ?
>
> --Masrul
>
> On Sun, Oct 16, 2016 at 12:45 PM, Mahmood Naderan <mahmood...@gmail.com>
> wrote:
>
> > Hi,
> > A PBS script for a gromacs job has been submitted with the following
> > content:
> >
> > #!/bin/bash
> > #PBS -V
> > #PBS -q default
> > #PBS -j oe
> > #PBS -l nodes=2:ppn=10
> > #PBS -N LPN
> > #PBS -o /home/dayer/LPN/mdout.out
> > cd $PBS_O_WORKDIR
> > mpirun gromacs-5.1/bin/mdrun_mpi -v
> >
> >
> > As I ssh'ed to the nodes and saw mdrun_mpi process, I noticed that the
> cpu
> > utilization is not good enough!
> >
> >
> > [root@compute-0-1 ~]# ps aux | grep mdrun_mpi
> > dayer 7552 64.1  0.0 199224 21300 ?RNl  Oct15 1213:39
> > gromacs-5.1/bin/mdrun_mpi -v
> > dayer 7553 56.8  0.0 201524 23044 ?RNl  Oct15 1074:47
> > gromacs-5.1/bin/mdrun_mpi -v
> > dayer 7554 64.1  0.0 201112 22364 ?RNl  Oct15 1213:25
> > gromacs-5.1/bin/mdrun_mpi -v
> > dayer 7555 56.5  0.0 198336 20408 ?RNl  Oct15 1070:17
> > gromacs-5.1/bin/mdrun_mpi -v
> > dayer 7556 64.3  0.0 225796 48436 ?RNl  Oct15 1217:35
> > gromacs-5.1/bin/mdrun_mpi -v
> > dayer 7557 56.1  0.0 198444 20404 ?RNl  Oct15 1062:26
> > gromacs-5.1/bin/mdrun_mpi -v
> > dayer 7558 63.4  0.0 198996 20848 ?RNl  Oct15 1199:05
> > gromacs-5.1/bin/mdrun_mpi -v
> > dayer 7562 56.2  0.0 197912 19736 ?RNl  Oct15 1062:57
> > gromacs-5.1/bin/mdrun_mpi -v
> > dayer 7565 63.1  0.0 197008 19208 ?RNl  Oct15 1194:51
> > gromacs-5.1/bin/mdrun_mpi -v
> > dayer 7569 56.7  0.0 227904 50584 ?RNl  Oct15 1072:33
> > gromacs-5.1/bin/mdrun_mpi -v
> >
> >
> >
> > [root@compute-0-3 ~]# ps aux | grep mdrun_mpi
> > dayer 1735  0.0  0.0 299192  4692 ?Sl   Oct15   0:03 mpirun
> > gromacs-5.1/bin/mdrun_mpi -v
> > dayer 1740  9.5  0.0 209692 29224 ?RNl  Oct15 180:09
> > gromacs-5.1/bin/mdrun_mpi -v
> > dayer 1741  9.6  0.0 200948 22784 ?RNl  Oct15 183:21
> > gromacs-5.1/bin/mdrun_mpi -v
> > dayer 1742  9.3  0.0 200256 21980 ?RNl  Oct15 177:28
> > gromacs-5.1/bin/mdrun_mpi -v
> > dayer 1743  9.5  0.0 197672 19100 ?RNl  Oct15 180:01
> > gromacs-5.1/bin/mdrun_mpi -v
> > dayer 1744  9.6  0.0 228208 50920 ?RNl  Oct15 183:07
> > gromacs-5.1/bin/mdrun_mpi -v
> > dayer 1746  9.3  0.0 199144 20588 ?RNl  Oct15 176:24
> > gromacs-5.1/bin/mdrun_mpi -v
> > dayer 1749  9.5  0.0 201496 23156 ?RNl  Oct15 180:25
> > gromacs-5.1/bin/mdrun_mpi -v
> > dayer 1751  9.1  0.0 200916 22884 ?RNl  Oct15 173:13
> > gromacs-5.1/bin/mdrun_mpi -v
> > dayer 1755  9.3  0.0 198744 20616 ?RNl  Oct15 176:49
> > gromacs-5.1/bin/mdrun_mpi -v
> > dayer 1758  9.2  0.0 226792 49460 ?RNl  Oct15 174:12
> > gromacs-5.1/bin/mdrun_mpi -v
> >
> >
> >
> > Please note that the third column is the cpu utilization.
> > Gromacs is a compute intensive application, so there is little IO or
> > something else for that.
> >
> >
> > Please also note that in compute-0-3 the first process is "mpirun
> > gromacs-5.1" while the others are only "gromacs-5.1"
> >
> >
> > Any idea is welcomed.
> >
> > Regards,
> > Mahmood
> > --
> > Gromacs Users mailing list
> >
> > * Please search the archive at http://www.gromacs.org/
> > Support/Mailing_Lists/GMX-Users_List before posting!
> >
> > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> >
> > * For (un)subscribe requests visit
> > https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> > send a mail to gmx-users-requ...@gromacs.org.
> >
> --
> Gromacs Users mailing list
>
> * Please search the archive at http://www.gromacs.org/
> Support/Mailing_Lists/GMX-Users_List before posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>


Re: [gmx-users] Low cpu utilization

2016-10-16 Thread Mahmood Naderan
​>Where is -np option in mpirun ?

Please see this

https://mail-archive.com/users@lists.open-mpi.org/msg30043.html



Regards,
Mahmood

Re: [gmx-users] Low cpu utilization

2016-10-16 Thread Mahmood Naderan
Hi mark,
There is a question here... What is the difference between

mpirun gmx_mpi mdrun
And
mpirun mdrun_mpi

?


Re: [gmx-users] Low cpu utilization

2016-10-17 Thread Mahmood Naderan
The problem is that I cannot find out whether gromacs (or MPI) is using the
resources correctly. Is there any way to see whether there is a bottleneck
causing such low utilization?
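
One place to look (a guess at a diagnostic, not a definite answer): mdrun
writes a cycle and time accounting table at the end of md.log that breaks the
wall time down into force computation, PME, communication and waiting, so
comparing that section between a fast single-node run and a slow multi-node
run should show where the time goes. A sketch:

# the accounting table is written when the run finishes (or is stopped)
tail -n 60 md.log

# or jump straight to the timing section (header spelling as I remember it)
grep -A 30 "R E A L   C Y C L E" md.log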

Regards,
Mahmood



On Mon, Oct 17, 2016 at 11:30 AM, Mahmood Naderan <mahmood...@gmail.com>
wrote:

> it is interesting for me that I specified Verlet, but the log warns about
> group.
>
> mahmood@cluster:LPN$ grep -r cut-off .
> ./mdout.mdp:; cut-off scheme (group: using charge groups, Verlet: particle
> based cut-offs)
> ./mdout.mdp:; nblist cut-off
> ./mdout.mdp:; long-range cut-off for switched potentials
> ./mdout.mdp:; cut-off lengths
> ./mdout.mdp:; Extension of the potential lookup tables beyond the cut-off
> mahmood@cluster:LPN$ grep -r Verlet .
> ./mdout.mdp:; cut-off scheme (group: using charge groups, Verlet: particle
> based cut-offs)
> ./mdout.mdp:cutoff-scheme= Verlet
> ./mdout.mdp:; Allowed energy drift due to the Verlet buffer in kJ/mol/ps
> per atom,
> ./mdout.mdp:coulomb-modifier = Potential-shift-Verlet
> ./mdout.mdp:vdw-modifier = Potential-shift-Verlet
>
>
>
>
> Regards,
> Mahmood
>


[gmx-users] tpr generation aborted

2018-02-24 Thread Mahmood Naderan
Hi,
Following the Lysozyme tutorial, I face an error at the step of generating
ions.tpr which says there are too many warnings.

$ gmx grompp -f ions.mdp -c 1AKI_solv.gro -p topol.top -o ions.tpr
NOTE 1 [file ions.mdp]:
  With Verlet lists the optimal nstlist is >= 10, with GPUs >= 20. Note
  that with the Verlet scheme, nstlist has no effect on the accuracy of
  your simulation.

Setting the LD random seed to -723452053
Generated 330891 of the 330891 non-bonded parameter combinations
Generating 1-4 interactions: fudge = 0.5
Generated 330891 of the 330891 1-4 parameter combinations
Excluding 3 bonded neighbours molecule type 'Protein_chain_A'
Excluding 2 bonded neighbours molecule type 'SOL'

NOTE 2 [file topol.top, line 18409]:
  System has non-zero total charge: 8.00
  Total charge should normally be an integer. See
  http://www.gromacs.org/Documentation/Floating_Point_Arithmetic
  for discussion on how close it should be to an integer.
  



WARNING 1 [file topol.top, line 18409]:
  You are using Ewald electrostatics in a system with net charge. This can
  lead to severe artifacts, such as ions moving into regions with low
  dielectric, due to the uniform background charge. We suggest to
  neutralize your system with counter ions, possibly in combination with a
  physiological salt concentration.


 PLEASE READ AND CITE THE FOLLOWING REFERENCE 
J. S. Hub, B. L. de Groot, H. Grubmueller, G. Groenhof
Quantifying Artifacts in Ewald Simulations of Inhomogeneous Systems with a Net
Charge
J. Chem. Theory Comput. 10 (2014) pp. 381-393
  --- Thank You ---  

Removing all charge groups because cutoff-scheme=Verlet
Analysing residue names:
There are:   129    Protein residues
There are: 10644  Water residues
Analysing Protein...
Number of degrees of freedom in T-Coupling group rest is 69741.00
Calculating fourier grid dimensions for X Y Z
Using a fourier grid of 60x60x60, spacing 0.117 0.117 0.117
Estimate for the relative computational load of the PME mesh part: 0.22
This run will generate roughly 3 Mb of data

There were 2 notes

There was 1 warning

---
Program: gmx grompp, version 2018
Source file: src/gromacs/gmxpreprocess/grompp.cpp (line 2406)

Fatal error:
Too many warnings (1).
If you are sure all warnings are harmless, use the -maxwarn option.






Is it safe to use the -maxwarn option here?
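
For context, my understanding of this step in the tutorial (a sketch of the
flow, not an official answer): the warning is about the +8 net charge, and the
very next step feeds this ions.tpr to genion to add counter-ions that
neutralize the system, so passing -maxwarn 1 just for this grompp call is
generally considered safe here. The file names below follow the tutorial's
pattern:

gmx grompp -f ions.mdp -c 1AKI_solv.gro -p topol.top -o ions.tpr -maxwarn 1
gmx genion -s ions.tpr -o 1AKI_solv_ions.gro -p topol.top -pname NA -nname CL -neutral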


Regards,
Mahmood

Re: [gmx-users] Minimum compute compatibility

2018-02-23 Thread Mahmood Naderan
I wrongly downloaded v5.0. It seems that the 2018 version is better! That error
is now solved.


Regards,
Mahmood 

On Friday, February 23, 2018, 4:10:15 PM GMT+3:30, Mahmood Naderan 
<nt_mahm...@yahoo.com> wrote:  
 
 Hi,While I set -DGMX_GPU=on for a M2000 card, the make returned an error which 
says compute_20 is not supported. So, where in the options I can drop the 
compute_20 capability?

Regards,
Mahmood  


[gmx-users] Minimum compute compatibility

2018-02-23 Thread Mahmood Naderan
Hi,
While I set -DGMX_GPU=on for an M2000 card, the make step returned an error
which says compute_20 is not supported. So, where in the options can I drop
the compute_20 capability?
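
In case someone hits the same thing: my understanding (worth verifying against
the install guide of your version) is that the CUDA target architectures can
be restricted with the GMX_CUDA_TARGET_SM / GMX_CUDA_TARGET_COMPUTE cache
variables in recent GROMACS releases, so for a Maxwell-class M2000 (sm_52, if
I am not mistaken) something like this should avoid compute_20 entirely:

cmake .. -DGMX_GPU=on -DGMX_CUDA_TARGET_SM=52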

Regards,
Mahmood


[gmx-users] cpu/gpu utilization

2018-02-26 Thread Mahmood Naderan
Hi,

While the cut-off is set to Verlet and I run "gmx mdrun -nb gpu -deffnm
input_md", I see that 9 threads out of the 16 logical threads are running on
the CPU while the GPU is utilized. The gmx output also says


No option -multi
Using 1 MPI thread
Using 16 OpenMP threads 


I want to know, why are 9 threads running?
Is that normal? The Ryzen 1800X has 8 physical cores.
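
A sketch of what I would try in order to pin this down (option names are from
the mdrun help; the thread counts are only my guess for this box): set the
OpenMP thread count explicitly and enable pinning, then compare the
performance report at the end of the log.

# one rank using all 16 logical cores, with thread pinning
gmx mdrun -nb gpu -deffnm input_md -ntomp 16 -pin on

# or restrict to the 8 physical cores
gmx mdrun -nb gpu -deffnm input_md -ntomp 8 -pin on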


Regards,
Mahmood


Re: [gmx-users] mpirun and gmx_mpi

2018-07-27 Thread Mahmood Naderan
Szilárd,
So, the following commands have the same meaning (8 MPI ranks, each with 2
OpenMP threads, as gromacs reports) on an 8-core (16-thread) Ryzen CPU with one M2000.



mpirun -np 8 gmx_mpi mdrun -v -deffnm nvt
mpirun -np 8 gmx_mpi mdrun -v -ntomp 2 -deffnm nvt



Both have 8 GPU tasks. However, the former takes 83 seconds while the latter 
takes 99 seconds. I tried multiple times.

Any thought?


Regards,
Mahmood 




On Wednesday, July 25, 2018, 5:51:20 PM GMT+4:30, Szilárd Páll 
 wrote:  
 
 
Though not wrong, that's a typo; the default binary suffix of MPI builds is
"_mpi", but it can be changed:
http://manual.gromacs.org/documentation/2018/install-guide/index.html#changing-the-names-of-gromacs-binaries-and-libraries
 
--Szilárd
  
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

[gmx-users] gromacs with mps

2018-07-27 Thread Mahmood Naderan
Hi
Has anyone run gmx_mpi with MPS? Even with small input files (which work 
fine when MPS is turned off), I get an out-of-memory error from the GPU device.
I don't know whether there is a bug in CUDA or in GROMACS; I have seen similar 
reports for other programs, so it sounds like a CUDA problem.
If you have worked with MPS, please let me know.
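For reference, this is how I start and stop MPS (what I believe is the standard way, with the default pipe/log directories):

# start the MPS control daemon (it launches the MPS server on demand)
nvidia-cuda-mps-control -d
# run GROMACS as usual; the MPI ranks then share the GPU through MPS
mpirun -np 4 gmx_mpi mdrun -v -deffnm nvt
# shut the daemon down afterwards
echo quit | nvidia-cuda-mps-control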

Regards,
Mahmood
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] GROMACS 2018.2 mdrun GPU Assertion failed: Condition: cudaSuccess == cudaPeekAtLastError()

2018-08-10 Thread Mahmood Naderan
>Assertion failed:
>Condition: cudaSuccess == cudaPeekAtLastError()
>We promise to return with clean CUDA state!


Hi,
I had some runtime problems with CUDA 9.1 that were solved by 9.2. So I 
suggest you first update from 9.0 to 9.2, and only then spend time debugging if the 
error persists.

Regards,
Mahmood
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] mpirun and gmx_mpi

2018-07-25 Thread Mahmood Naderan
It is stated that 


mpirun -np 4 gmx mdrun -ntomp 6 -nb gpu -gputasks 00

Starts gmx mdrun on a machine with two nodes, using four total ranks, each rank 
with six OpenMP threads, and both ranks on a node sharing the GPU with ID 0.



Questions are:
1- Why is gmx_mpi not used?
2- How were the two nodes specified in the command line?
3- Four ranks in total, but only two ranks on the GPU?!
4- Is that using NVIDIA MPS, given that the single GPU device is shared 
between ranks?


Regards,
Mahmood 

On Wednesday, July 25, 2018, 1:05:10 AM GMT+4:30, Szilárd Páll 
 wrote:  

That choice depends on whether you want to run across multiple compute nodes: 
the former cannot, while the latter, whose suffix (by default) indicates that it is 
using an MPI library, can run across nodes. Both can be used on GPUs as long as 
the programs were built with GPU support. 

I recommend that you check the documentation:
http://manual.gromacs.org/documentation/current/user-guide/mdrun-performance.html#examples-for-mdrun-on-one-node
--
Szilárd
  
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] mpirun and gmx_mpi

2018-07-24 Thread Mahmood Naderan
No ideas? Those of you who use a GPU, which command do you use: gmx or gmx_mpi?

Regards,
Mahmood 




On Wednesday, July 11, 2018, 11:46:06 AM GMT+4:30, Mahmood Naderan 
 wrote:  
 
 Hi, Although I have read the manual and I have written programs with MPI, the 
GROMACS use of MPI is confusing.
Is it mandatory to use mpirun before gmx_mpi or not?
Can someone shed some light on that?

Regards,
Mahmood  
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


[gmx-users] Optimal pme grid

2018-08-31 Thread Mahmood Naderan
Hi
It seems that changing ntmpi and ntomp affects the number of 
steps it takes to find the optimal PME grid. Is that correct?

Please see the following output

gmx mdrun -nb gpu -ntmpi 1 -ntomp 16 -v -deffnm nvt
Using 1 MPI thread
Using 16 OpenMP threads 
step 2400: timed with pme grid 60 80 60, coulomb cutoff 1.037: 5708.7 M-cycles
step 2600: timed with pme grid 64 80 60, coulomb cutoff 1.000: 5382.6 M-cycles
  optimal pme grid 64 80 60, coulomb cutoff 1.000
step 3900, remaining wall clock time: 1 s  



gmx mdrun -nb gpu -ntmpi 16 -ntomp 1 -v -deffnm nvt
Using 16 MPI threads
Using 1 OpenMP thread per tMPI thread
step 3800: timed with pme grid 56 72 56, coulomb cutoff 1.111: 21060.1 M-cycles
step 4000: timed with pme grid 60 72 56, coulomb cutoff 1.075: 21132.8 M-cycles

Writing final coordinates.



I have intentionally limited the number of steps to 4000. As you can see, in the 
second run the optimal value has not been reached.
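(Side note: if the tuning itself ever becomes a problem, I assume it can be switched off with the boolean mdrun option, e.g.:)

gmx mdrun -nb gpu -ntmpi 16 -ntomp 1 -notunepme -v -deffnm nvt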






Regards,
Mahmood
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] cpu threads in a gpu run

2018-07-09 Thread Mahmood Naderan
No ideas? It seems odd. At the beginning of the run, I see

NOTE: GROMACS was configured without NVML support hence it can not exploit
  application clocks of the detected Quadro M2000 GPU to improve 
performance.
  Recompile with the NVML library (compatible with the driver used) or set 
application clocks manually.


Is the behavior I see related to this note? I doubt it, but if someone has a 
comment, I'd appreciate it.


Regards,
Mahmood 

On Monday, July 9, 2018, 3:13:20 PM GMT+4:30, Mahmood Naderan 
 wrote:  
 
 Hi,
When I run "-nt 16 -nb cpu", I see nearly 1600% cpu utilization. However, when 
I run "-nt 16 -nb gpu", I see about 600% cpu utilization. Is there any reason 
about that? I want to know with the cpu threads in a gpu run is controllable.


Regards,
Mahmood  
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

[gmx-users] Any configuration for enabling thread-MPI?

2018-07-10 Thread Mahmood Naderan
Hi,
The manual says:
GROMACS can run in parallel on multiple cores of a single workstation using its 
built-in thread-MPI. No user action is required in order to enable this.


However, that may not be correct, because I get this error:
Command line:
  gmx_mpi mdrun -v -ntmpi 2 -ntomp 4 -nb gpu -deffnm nvt4


Back Off! I just backed up nvt4.log to ./#nvt4.log.17#
Reading file nvt4.tpr, VERSION 2018 (single precision)

---
Program: gmx mdrun, version 2018
Source file: src/gromacs/taskassignment/resourcedivision.cpp (line 680)

Fatal error:
Setting the number of thread-MPI ranks is only supported with thread-MPI and
GROMACS was compiled without thread-MPI




The configuration command I used is
cmake .. -DGMX_GPU=ON -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda -DGMX_MPI=on
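Do I understand correctly that, to get the built-in thread-MPI, I would simply rebuild without -DGMX_MPI=on, i.e. something like:

cmake .. -DGMX_GPU=ON -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda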

Any thought?



Regards,
Mahmood
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

[gmx-users] mpirun and gmx_mpi

2018-07-11 Thread Mahmood Naderan
Hi, Although I have read the manual and I have written programs with MPI, the 
GROMACS use of MPI is confusing.
Is it mandatory to use mpirun before gmx_mpi or not?
Can someone shed some light on that?
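To make the question concrete, I mean the difference between these two forms (4 ranks in both cases; the option values are just examples):

# thread-MPI build (binary "gmx"): no mpirun, the ranks are started by mdrun itself
gmx mdrun -ntmpi 4 -ntomp 4 -deffnm nvt
# real-MPI build (binary "gmx_mpi"): the ranks are started by the MPI launcher
mpirun -np 4 gmx_mpi mdrun -ntomp 4 -deffnm nvt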

Regards,
Mahmood
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


[gmx-users] cpu threads in a gpu run

2018-07-09 Thread Mahmood Naderan
Hi,
When I run "-nt 16 -nb cpu", I see nearly 1600% cpu utilization. However, when 
I run "-nt 16 -nb gpu", I see about 600% cpu utilization. Is there any reason 
about that? I want to know with the cpu threads in a gpu run is controllable.


Regards,
Mahmood
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


[gmx-users] computation/memory modifications

2018-03-09 Thread Mahmood Naderan
Hi,
I want to do some tests on the lysozyme tutorial. Assume that the tutorial with 
the default parameters which is run for 10ps, takes X seconds wall clock time. 
If I want to increase the wall clock time, I can simply run for 100ps. However, 
that is not what I want.
I want to increase the amount of computation for every step, e.g. 1ps. 
Therefore I want another run for 10ps which takes Y seconds wall clock time 
where Y>X. I also want to increase the memory usage for each step compared to 
the default values in the tutorial.
May I know which parameters are chosen for those purposes?

Regards,
Mahmood
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] cpu/gpu utilization

2018-02-28 Thread Mahmood Naderan
>Additionally, you still have not provided the *mdrun log file* I requested. 
>top output is not what I asked for.
See the attached file.


Regards,
Mahmood




-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] cpu/gpu utilization

2018-02-28 Thread Mahmood Naderan
>The list does not accept attachments, so please use a file sharing or content 
>sharing website so >everyone can see your data and has the context.

I uploaded here

https://pastebin.com/RCkkFXPx



Regards,
Mahmood

-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Backup files

2018-02-28 Thread Mahmood Naderan
Well I searched for "gromacs backoff" first and then while googling I saw [1] 
and an environment variable GMX_MAXBACKUP [2].

[1] http://cupnet.net/clean-gromacs-backups
[2] Environment Variables — GROMACS 5.1 documentation


Therefore I asked for a simpler method ;) 
 Anyway...
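
For the record, the two ways I ended up noting down (the -1 value is what I understood from the environment-variable docs, so treat it as unverified):

# per run: skip the backups for this command only
gmx mdrun -nobackup -deffnm md_0_1
# globally: disable backing up via the environment
export GMX_MAXBACKUP=-1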

Regards,
Mahmood 

On Wednesday, February 28, 2018, 8:15:34 PM GMT+3:30, Mark Abraham 
<mark.j.abra...@gmail.com> wrote:  
 
 There's an even better solution in the GROMACS documentation, which e.g. 
googling "disable gromacs backups" will find ;-)
Mark
On Wed, Feb 28, 2018 at 4:50 PM András Ferenc WACHA <wacha.and...@ttk.mta.hu> 
wrote:

Dear Mahmood,

as far as I know, each command supports the "-nobackup" command line
switch...

Best regards,

Andras


On 02/28/2018 04:46 PM, Mahmood Naderan wrote:
> Hi,How can I disable the backup feature? I mean backed up files which start 
> and end with # character.
>
> Regards,
> Mahmood

--
András Ferenc Wacha, PhD
research fellow, CREDO instrument responsible

Biological Nanochemistry Research Group

Institute of Materials and Environmental Chemistry
Research Centre for Natural Sciences
Hungarian Academy of Sciences (RCNS HAS)
Magyar tudósok körútja 2.
H-1117 Budapest, Hungary
Phone: +36-1-382-6427
Web: http://bionano.ttk.mta.hu,
CREDO SAXS instrument: http://credo.ttk.mta.hu


--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.
  
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] cpu/gpu utilization

2018-02-28 Thread Mahmood Naderan
>(try the other parallel modes)

Do you mean OpenMP and MPI?


>- as noted above try offloading only the nonbondeds (or possibly the hybrid 
>PME mode -pmefft cpu)

May I know how? Which part of the documentation describes that?


Regards,
Mahmood

-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] cpu/gpu utilization

2018-03-01 Thread Mahmood Naderan
>- as noted above try offloading only the nonbondeds (or possibly the hybrid 
>PME mode -pmefft cpu)

So, with "-pmefft cpu", I don't see any good impact!See the log at 
https://pastebin.com/RTYaKSne

I will use other options to see the effect.


Regards,
Mahmood 

-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] cpu/gpu utilization

2018-03-01 Thread Mahmood Naderan
>Again, first and foremost, try running PME on the CPU, your 8-core Ryzen will 
>be plenty fast for that.


Since I am a computer guy and not a chemist, this may be a noob question!
What exactly do you mean by running PME on the CPU?

Do you mean "-nb cpu"? Or do you mean setting the cut-off scheme to Group instead of 
Verlet (the latter being the one used for GPUs)?
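Or is it something like this (just guessing from the mdrun help)?

gmx mdrun -nb gpu -pme cpu -deffnm md_0_1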


Regards,
Mahmood
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] computation/memory modifications

2018-03-13 Thread Mahmood Naderan
No idea? Any feedback is appreciated.


Regards,
Mahmood 

On Friday, March 9, 2018, 9:47:33 PM GMT+3:30, Mahmood Naderan 
<nt_mahm...@yahoo.com> wrote:  
 
 Hi,
I want to do some tests on the lysozyme tutorial. Assume that the tutorial with 
the default parameters which is run for 10ps, takes X seconds wall clock time. 
If I want to increase the wall clock time, I can simply run for 100ps. However, 
that is not what I want.
I want to increase the amount of computation for every step, e.g. 1ps. 
Therefore I want another run for 10ps which takes Y seconds wall clock time 
where Y>X. I also want to increase the memory usage for each step compared to 
the default values in the tutorial.
May I know which parameters are chosen for those purposes?

Regards,
Mahmood  
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] computation/memory modifications

2018-03-13 Thread Mahmood Naderan
Sorry about that. I didn't see it in my inbox. Excuse me...
What I am thinking about are some notes in the tutorial. For example:


>The above command centers the protein in the box (-c), and places it at least 
>1.0 nm from the box >edge (-d 1.0). The box type is defined as a cube (-bt 
>cubic)

So, can I make a bigger problem by setting that to 4.0 nm? What about other box 
types?
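I mean changing the editconf step of the tutorial to something like this (same flags, only a larger distance; file names as in the tutorial, if I remember them right):

gmx editconf -f 1AKI_processed.gro -o 1AKI_newbox.gro -c -d 4.0 -bt cubic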


>[ molecules ]
>; Compound  #mols
>Protein_A 1
>SOL   10824
>CL    8

I see 8 green points in the figure on that page. Can I increase that number to put 
more molecules in the cube?
What are the blue dots in the figure? Are they atoms? If I want to increase the 
number of atoms, should I use another PDB file and run pdb2gmx again?


>; Name   nrexcl
>Protein_A    3

Can I blindly increase that 3?

Can I use another pdb file (other than 1AKI) and follow the same procedure? 

Generally, the type of my questions are like those.




Regards,
Mahmood







On Tuesday, March 13, 2018, 2:48:35 PM GMT+3:30, Justin Lemkul 
<jalem...@vt.edu> wrote: 





On 3/13/18 2:49 AM, Mahmood Naderan wrote:
> No idea? Any feedback is appreciated.

You likely missed my reply:

https://mailman-1.sys.kth.se/pipermail/gromacs.org_gmx-users/2018-March/119068.html

-Justin


-- 
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Virginia Tech Department of Biochemistry

303 Engel Hall
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.biochem.vt.edu/people/faculty/JustinLemkul.html


==

-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

[gmx-users] Backup files

2018-02-28 Thread Mahmood Naderan
Hi, How can I disable the backup feature? I mean the backed-up files that start and 
end with the # character.

Regards,
Mahmood
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Backup files

2018-02-28 Thread Mahmood Naderan
Yes you are right. Thank you very much.


Regards,
Mahmood 

On Wednesday, February 28, 2018, 7:19:55 PM GMT+3:30, András Ferenc WACHA 
 wrote:  
 
 Dear Mahmood,

as far as I know, each command supports the "-nobackup" command line
switch...

Best regards,

Andras


  
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] cpu/gpu utilization

2018-02-28 Thread Mahmood Naderan
I forgot to say that gromacs reports
No option -multi
Using 1 MPI thread
Using 16 OpenMP threads 

1 GPU auto-selected for this run.
Mapping of GPU IDs to the 2 GPU tasks in the 1 rank on this node:
  PP:0,PME:0

NOTE: GROMACS was configured without NVML support hence it can not exploit
  application clocks of the detected Quadro M2000 GPU to improve 
performance.






Regards,
Mahmood 

On Wednesday, February 28, 2018, 7:15:13 PM GMT+3:30, Mahmood Naderan 
<nt_mahm...@yahoo.com> wrote:  
 
 By running
gmx mdrun -nb gpu -deffnm md_0_1

I see the following outputs

$ top -b  | head -n 10
top - 19:14:10 up 7 min,  1 user,  load average: 4.54, 1.40, 0.54
Tasks: 344 total,   1 running, 343 sleeping,   0 stopped,   0 zombie
%Cpu(s):  7.1 us,  0.5 sy,  0.0 ni, 91.9 id,  0.4 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem : 16438496 total, 13462876 free,  1968196 used,  1007424 buff/cache
KiB Swap: 31250428 total, 31250428 free,    0 used. 14054796 avail Mem 

  PID USER  PR  NI    VIRT    RES    SHR S  %CPU %MEM TIME+ COMMAND
 3604 mahmood   20   0 30.519g 525812 128788 S 918.8  3.2   6:58.38 gmx
 1180 root  20   0  324688  69384  49712 S   6.2  0.4   0:14.41 Xorg
 1450 mahmood   20   0  210228   7856   7192 S   6.2  0.0   0:00.17 ibus-engin+



$ nvidia-smi 
Wed Feb 28 19:14:35 2018   
+-+
| NVIDIA-SMI 384.81 Driver Version: 384.81    |
|---+--+--+
| GPU  Name    Persistence-M| Bus-Id    Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap| Memory-Usage | GPU-Util  Compute M. |
|===+==+==|
|   0  Quadro M2000    Off  | :23:00.0  On |  N/A |
| 65%   64C    P0    58W /  75W |    292MiB /  4035MiB | 93%  Default |
+---+--+--+
   
+-+
| Processes:   GPU Memory |
|  GPU   PID   Type   Process name Usage  |
|=|
|    0  1180  G   /usr/lib/xorg/Xorg   141MiB |
|    0  1651  G   compiz    46MiB |
|    0  3604  C   gmx   90MiB |
+-+





Any idea?


Regards,
Mahmood 

On Monday, February 26, 2018, 10:52:40 PM GMT+3:30, Szilárd Páll 
<pall.szil...@gmail.com> wrote:  
 
 Hi,
Please provide details, e.g. the full log so we know what version, on what 
hardware, settings etc. you're running.


--
Szilárd

-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] cpu/gpu utilization

2018-02-28 Thread Mahmood Naderan
By running
gmx mdrun -nb gpu -deffnm md_0_1

I see the following outputs

$ top -b  | head -n 10
top - 19:14:10 up 7 min,  1 user,  load average: 4.54, 1.40, 0.54
Tasks: 344 total,   1 running, 343 sleeping,   0 stopped,   0 zombie
%Cpu(s):  7.1 us,  0.5 sy,  0.0 ni, 91.9 id,  0.4 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem : 16438496 total, 13462876 free,  1968196 used,  1007424 buff/cache
KiB Swap: 31250428 total, 31250428 free,    0 used. 14054796 avail Mem 

  PID USER  PR  NI    VIRT    RES    SHR S  %CPU %MEM TIME+ COMMAND
 3604 mahmood   20   0 30.519g 525812 128788 S 918.8  3.2   6:58.38 gmx
 1180 root  20   0  324688  69384  49712 S   6.2  0.4   0:14.41 Xorg
 1450 mahmood   20   0  210228   7856   7192 S   6.2  0.0   0:00.17 ibus-engin+



$ nvidia-smi 
Wed Feb 28 19:14:35 2018   
+-+
| NVIDIA-SMI 384.81 Driver Version: 384.81    |
|---+--+--+
| GPU  Name    Persistence-M| Bus-Id    Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap| Memory-Usage | GPU-Util  Compute M. |
|===+==+==|
|   0  Quadro M2000    Off  | :23:00.0  On |  N/A |
| 65%   64C    P0    58W /  75W |    292MiB /  4035MiB | 93%  Default |
+---+--+--+
   
+-+
| Processes:   GPU Memory |
|  GPU   PID   Type   Process name Usage  |
|=|
|    0  1180  G   /usr/lib/xorg/Xorg   141MiB |
|    0  1651  G   compiz    46MiB |
|    0  3604  C   gmx   90MiB |
+-+





Any idea?


Regards,
Mahmood 

On Monday, February 26, 2018, 10:52:40 PM GMT+3:30, Szilárd Páll 
 wrote:  
 
 Hi,
Please provide details, e.g. the full log so we know what version, on what 
hardware, settings etc. you're running.


--
Szilárd
  
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] cpu/gpu utilization

2018-03-02 Thread Mahmood Naderan
Command is "gmx mdrun -nobackup -pme cpu -nb gpu -deffnm md_0_1" and the log 
says

 R E A L   C Y C L E   A N D   T I M E   A C C O U N T I N G

On 1 MPI rank, each using 16 OpenMP threads

 Computing:  Num   Num  Call    Wall time Giga-Cycles
 Ranks Threads  Count  (s) total sum    %
-
 Neighbor search    1   16    501   0.972 55.965   0.8
 Launch GPU ops.    1   16  50001   2.141    123.301   1.7
 Force  1   16  50001   4.019    231.486   3.1
 PME mesh   1   16  50001  40.695   2344.171  31.8
 Wait GPU NB local  1   16  50001  60.155   3465.079  47.0
 NB X/F buffer ops. 1   16  99501   7.342    422.902   5.7
 Write traj.    1   16 11   0.246 14.184   0.2
 Update 1   16  50001   3.480    200.461   2.7
 Constraints    1   16  50001   5.831    335.878   4.6
 Rest   3.159    181.963   2.5
-
 Total    128.039   7375.390 100.0
-
 Breakdown of PME mesh computation
-
 PME spread 1   16  50001  17.086    984.209  13.3
 PME gather 1   16  50001  12.534    722.007   9.8
 PME 3D-FFT 1   16 12   9.956    573.512   7.8
 PME solve Elec 1   16  50001   0.779 44.859   0.6
-

   Core t (s)   Wall t (s)    (%)
   Time: 2048.617  128.039 1600.0
 (ns/day)    (hour/ns)
Performance:   67.481    0.356






While the command is "", I see that the GPU is utilized at only about 10% and the log 
file says:

 R E A L   C Y C L E   A N D   T I M E   A C C O U N T I N G

On 1 MPI rank, each using 16 OpenMP threads

 Computing:  Num   Num  Call    Wall time Giga-Cycles
 Ranks Threads  Count  (s) total sum    %
-
 Neighbor search    1   16   1251   6.912    398.128   2.3
 Force  1   16  50001 210.689  12135.653  70.4
 PME mesh   1   16  50001  46.869   2699.656  15.7
 NB X/F buffer ops. 1   16  98751  22.315   1285.360   7.5
 Write traj.    1   16 11   0.216 12.447   0.1
 Update 1   16  50001   4.382    252.386   1.5
 Constraints    1   16  50001   6.035    347.601   2.0
 Rest   1.666 95.933   0.6
-
 Total    299.083  17227.165 100.0
-
 Breakdown of PME mesh computation
-
 PME spread 1   16  50001  21.505   1238.693   7.2
 PME gather 1   16  50001  12.089    696.333   4.0
 PME 3D-FFT 1   16 12  11.627    669.705   3.9
 PME solve Elec 1   16  50001   0.965 55.598   0.3
-

   Core t (s)   Wall t (s)    (%)
   Time: 4785.326  299.083 1600.0
 (ns/day)    (hour/ns)
Performance:   28.889    0.831




Using the GPU is still better than using the CPU alone. However, I see that while the GPU 
is utilized, the CPU is also busy. So I was wondering whether the source code uses 
cudaDeviceSynchronize(), which would keep the CPU in a busy loop.

Regards,
Mahmood 

On Friday, March 2, 2018, 11:37:11 AM GMT+3:30, Magnus Lundborg 
 wrote:  
 
 Have you tried the mdrun options:

-pme cpu -nb gpu
-pme cpu -nb cpu

Cheers,

Magnus

  
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] cpu/gpu utilization

2018-03-01 Thread Mahmood Naderan
If you mean [1], then yes, I read that, and it recommends using Verlet for the 
new algorithm depicted in the figures. At least that is my understanding of 
offloading. If I read the wrong document, or you mean there are also some other 
options, please let me know.

[1] http://www.gromacs.org/GPU_acceleration





Regards,
Mahmood 

On Thursday, March 1, 2018, 6:35:46 PM GMT+3:30, Szilárd Páll 
 wrote:  
 
 Have you read the "Types of GPU tasks" section of the user guide?

--
Szilárd
  
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] cpu/gpu utilization

2018-03-02 Thread Mahmood Naderan
Sorry for the confusion. My fault...
I saw my previous post and found that I missed something. In fact, I couldn't 
run "-pme gpu".

So, once again, I ran all the commands and uploaded the log files


gmx mdrun -nobackup -nb cpu -pme cpu -deffnm md_0_1
https://pastebin.com/RNT4XJy8


gmx mdrun -nobackup -nb cpu -pme gpu -deffnm md_0_1
https://pastebin.com/7BQn8R7g
This run shows an error on the screen which is not shown in the log file. So 
please also see https://pastebin.com/KHg6FkBz



gmx mdrun -nobackup -nb gpu -pme cpu -deffnm md_0_1
https://pastebin.com/YXYj23tB



gmx mdrun -nobackup -nb gpu -pme gpu -deffnm md_0_1
https://pastebin.com/P3X4mE5y





From the results, it seems that running PME on the CPU is better than on the GPU. 
The fastest command here is -nb gpu -pme cpu.


I still have the question of why, while the GPU is utilized, the CPU is also busy. So 
I was wondering whether the source code uses cudaDeviceSynchronize(), which would keep 
the CPU in a busy loop.



Regards,
Mahmood






On Friday, March 2, 2018, 3:24:41 PM GMT+3:30, Szilárd Páll 
 wrote: 





Once again, full log files, please, not partial cut-and-paste, please.

Also, you misread something because your previous logs show:
-nb cpu -pme gpu: 56.4 ns/day
-nb cpu -pme gpu -pmefft cpu 64.6 ns/day
-nb cpu -pme cpu 67.5 ns/day

So both mixed mode PME and PME on CPU are faster, the latter slightly faster 
than the former.

This is about as much as you can do, I think. Your GPU is just too slow to get 
more performance out of it and the runs are GPU-bound. You might be able to get 
a bit more performance with some tweaks (compile mdrun with AVX2_256, use a 
newer fftw, use a newer gcc), but expect marginal gains.
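For the SIMD tweak, the configure-time setting would presumably look like this (a sketch; keep your other options as before):

cmake .. -DGMX_GPU=ON -DGMX_SIMD=AVX2_256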

Cheers,

--
Szilárd


-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

[gmx-users] About GMX_PRINT_DEBUG_LINES

2019-01-17 Thread Mahmood Naderan
Hi,
I set GMX_PRINT_DEBUG_LINES before the mdrun command; however, I don't see any 
debug messages.

$ GMX_PRINT_DEBUG_LINES=1
$ gmx mdrun -nb gpu -ntmpi 8 -ntomp 1 -v -deffnm nvt
...
NOTE: DLB can now turn on, when beneficial
step 1100, will finish Fri Jan 18 19:24:07 2019 imb F  8% 
step 1200 Turning on dynamic load balancing, because the performance loss due 
to load imbalance is 5.0 %.
step 7900, will finish Fri Jan 18 17:42:40 2019 vol 0.74! imb F  4% 




Without setting that variable, those messages are still printed!
Any thought?
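
One thing I am not sure about myself: a bare assignment on its own line is not exported by the shell, so perhaps the variable never reaches gmx at all. I suppose it would need one of these forms:

export GMX_PRINT_DEBUG_LINES=1
gmx mdrun -nb gpu -ntmpi 8 -ntomp 1 -v -deffnm nvt
# or, equivalently, set it only for this command:
GMX_PRINT_DEBUG_LINES=1 gmx mdrun -nb gpu -ntmpi 8 -ntomp 1 -v -deffnm nvt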


Regards,
Mahmood
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

[gmx-users] About fprintf and debugging

2019-01-28 Thread Mahmood Naderan
Hi
Where should I set the flag in order to see the fprintf statements like
    if (debug)
    {
    fprintf(debug, "PME: number of ranks = %d, rank = %d\n",
    cr->nnodes, cr->nodeid);

Any idea?
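
My current (unverified) guess is that this output only goes to a debug file, which is produced when the tool is started with the hidden -debug option, e.g.:

gmx mdrun -debug 1 -deffnm nvt    # should write a *.debug file containing the fprintf(debug, ...) lines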

Regards,
Mahmood
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

[gmx-users] Cannot run short-ranged nonbonded interactions on a GPU

2019-09-08 Thread Mahmood Naderan
Hi
With the following config command
cmake .. -DGMX_GPU=on -DCMAKE_INSTALL_PREFIX=`pwd`/../single 
-DGMX_BUILD_OWN_FFTW=ON
I get the following error for "gmx mdrun -nb gpu -v -deffnm inp_nvp"
Fatal error:
Cannot run short-ranged nonbonded interactions on a GPU because there is none
detected.



The deviceQuery command shows that the GPU is detected.
$ ./deviceQuery
./deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "GeForce GTX 1080 Ti"
...


With the same input, I haven't seen that error before. Did I miss something? 

Regards,
Mahmood
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

[gmx-users] cmake fails with custom gcc version

2019-11-02 Thread Mahmood Naderan
Hi,
Although I have specified a custom CC and CXX path, the cmake command fails 
with an error.

$ cmake .. -DGMX_GPU=on -DCMAKE_INSTALL_PREFIX=../single61 -DGMX_BUILD_OWN_FFTW=ON -DGMX_CUDA_TARGET_SM=61 -DCMAKE_C_COMPILER=/home/mahmood/tools/gcc-6.1.0/bin/gcc -DCMAKE_CXX_COMPILER=/home/mahmood/tools/gcc-6.1.0/bin/g++
-- The C compiler identification is unknown
-- The CXX compiler identification is unknown
-- Check for working C compiler: /home/mahmood/tools/gcc-6.1.0/bin/gcc
-- Check for working C compiler: /home/mahmood/tools/gcc-6.1.0/bin/gcc -- broken
CMake Error at /home/mahmood/tools/cmake-3.15.4/share/cmake-3.15/Modules/CMakeTestCCompiler.cmake:60 (message):
  The C compiler

    "/home/mahmood/tools/gcc-6.1.0/bin/gcc"

  is not able to compile a simple test program.

  It fails with the following output:

    Change Dir: /home/mahmood/cactus/gromacs/gromacs-2019.4/build/CMakeFiles/CMakeTmp
    Run Build Command(s): /usr/bin/gmake cmTC_68dcc/fast && /usr/bin/gmake -f CMakeFiles/cmTC_68dcc.dir/build.make CMakeFiles/cmTC_68dcc.dir/build
    gmake[1]: Entering directory `/home/mahmood/cactus/gromacs/gromacs-2019.4/build/CMakeFiles/CMakeTmp'
    Building C object CMakeFiles/cmTC_68dcc.dir/testCCompiler.c.o
    /home/mahmood/tools/gcc-6.1.0/bin/gcc -o CMakeFiles/cmTC_68dcc.dir/testCCompiler.c.o -c /home/mahmood/cactus/gromacs/gromacs-2019.4/build/CMakeFiles/CMakeTmp/testCCompiler.c
    /home/mahmood/tools/gcc-6.1.0/libexec/gcc/x86_64-pc-linux-gnu/6.1.0/cc1: error while loading shared libraries: libmpc.so.3: cannot open shared object file: No such file or directory
    gmake[1]: *** [CMakeFiles/cmTC_68dcc.dir/testCCompiler.c.o] Error 1
    gmake[1]: Leaving directory `/home/mahmood/cactus/gromacs/gromacs-2019.4/build/CMakeFiles/CMakeTmp'
    gmake: *** [cmTC_68dcc/fast] Error 2



  CMake will not be able to correctly generate this project.
Call Stack (most recent call first):
  CMakeLists.txt:41 (project)


-- Configuring incomplete, errors occurred!
See also "/home/mahmood/cactus/gromacs/gromacs-2019.4/build/CMakeFiles/CMakeOutput.log".
See also "/home/mahmood/cactus/gromacs/gromacs-2019.4/build/CMakeFiles/CMakeError.log".

However, the path is correct:


$ /home/mahmood/tools/gcc-6.1.0/bin/gcc -v
Using built-in specs.
COLLECT_GCC=/home/mahmood/tools/gcc-6.1.0/bin/gcc
COLLECT_LTO_WRAPPER=/home/mahmood/tools/gcc-6.1.0/libexec/gcc/x86_64-pc-linux-gnu/6.1.0/lto-wrapper
Target: x86_64-pc-linux-gnu
Configured with: ./configure --prefix=/home/mahmood/tools/gcc-6.1.0 --disable-multilib --enable-languages=c,c++
Thread model: posix
gcc version 6.1.0 (GCC)



Any idea about that?
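
The only lead I can see is the libmpc.so.3 line: cc1 apparently cannot find the MPC library that gcc was built against. Maybe exporting its directory before re-running cmake would help (the path below is only a guess at where libmpc.so.3 might live):

export LD_LIBRARY_PATH=/home/mahmood/tools/mpc-1.0.3/lib:$LD_LIBRARY_PATH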

Regards,
Mahmood
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

[gmx-users] Changing the default cuda path

2019-10-19 Thread Mahmood Naderan
Hi,
I see this line in the cmake output
-- Found CUDA: /usr/local/cuda (found suitable version "10.0", minimum required 
is "7.0")
and I would like to change that default path to somewhere else. May I know how 
to do that?
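
I suspect it is the CUDA_TOOLKIT_ROOT_DIR cache variable (the same one used in other build lines on this list), so presumably something like the following, with a made-up path:

cmake .. -DGMX_GPU=on -DCUDA_TOOLKIT_ROOT_DIR=/opt/cuda-10.0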

Regards,
Mahmood
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


[gmx-users] c2075 is not detected by gmx

2019-11-24 Thread Mahmood Naderan
Hi
I have built 2018.3 in order to test it with a C2075 GPU.
I used this command to build it
$ cmake .. -DGMX_GPU=on -DCMAKE_INSTALL_PREFIX=../single 
-DGMX_BUILD_OWN_FFTW=ON 

$ make
$ make install

I have to say that the device is detected according to deviceQuery. However, 
when I run 


$ gmx mdrun -nb gpu -v -deffnm nvt_5k


I get this error

Fatal error:
Cannot run short-ranged nonbonded interactions on a GPU because there is none
detected.


That is weird, because I also see this message

WARNING: An error occurred while sanity checking device #0; 
cudaErrorMemoryAllocation: out of memory



The device has 6 GB of memory, and I am sure that my input file doesn't need that much, 
because I have run it on a GPU with 4 GB of memory.

Any idea?

Regards,
Mahmood
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


[gmx-users] C2075 not detected by gmx 2018.3

2019-11-24 Thread Mahmood Naderan
Hi
I have built 2018.3 with the following command in order to test it with a C2075:
$ cmake .. -DGMX_GPU=on -DCMAKE_INSTALL_PREFIX=../single 
-DGMX_BUILD_OWN_FFTW=ON 

While deviceQuery shows the device properly, when I run

$ gmx mdrun -nb gpu -v -deffnm nvt

I get this error

Fatal error:
Cannot run short-ranged nonbonded interactions on a GPU because there is none
detected.


That is weird because I also see

WARNING: An error occurred while sanity checking device #0; 
cudaErrorMemoryAllocation: out of memory



The device has 6 GB of memory, and I am sure that my input doesn't need all of that; 
I have tested it on a GPU with 4 GB of memory.

Any idea about the error?


Regards,
Mahmood
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] c2075 is not detected by gmx

2019-11-24 Thread Mahmood Naderan
>Did you install the CUDA toolbox and drivers ?  
 >What is the output from "nvidia-smi" ?

Yes it is working. Please see the full output below


$ nvidia-smi 
Mon Nov 25 08:53:22 2019   
+--+   
| NVIDIA-SMI 352.99 Driver Version: 352.99 |   
|---+--+--+
| GPU  Name    Persistence-M| Bus-Id    Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap| Memory-Usage | GPU-Util  Compute M. |
|===+==+==|
|   0  Tesla C2075 Off  | :26:00.0  On |  143 |
| 30%   41C    P0    77W / 225W |    221MiB /  5372MiB |  0%  Default |
+---+--+--+
   
+-+
| Processes:   GPU Memory |
|  GPU   PID  Type  Process name   Usage  |
|=|
|  No running processes found |
+-+
mahmood@c2075:~$ 
$ ~/gromacs/gromacs-2018.3/single/bin/gmx mdrun -nb gpu -v -deffnm nvt_5k
  :-) GROMACS - gmx mdrun, 2018.3 (-:

    GROMACS is written by:
 Emile Apol  Rossen Apostolov  Paul Bauer Herman J.C. Berendsen
    Par Bjelkmar    Aldert van Buuren   Rudi van Drunen Anton Feenstra  
  Gerrit Groenhof    Aleksei Iupinov   Christoph Junghans   Anca Hamuraru   
 Vincent Hindriksen Dimitrios Karkoulis    Peter Kasson    Jiri Kraus    
  Carsten Kutzner  Per Larsson  Justin A. Lemkul    Viveca Lindahl  
  Magnus Lundborg   Pieter Meulenhoff    Erik Marklund  Teemu Murtola   
    Szilard Pall   Sander Pronk  Roland Schulz Alexey Shvetsov  
   Michael Shirts Alfons Sijbers Peter Tieleman    Teemu Virolainen 
 Christian Wennberg    Maarten Wolf   
   and the project leaders:
    Mark Abraham, Berk Hess, Erik Lindahl, and David van der Spoel

Copyright (c) 1991-2000, University of Groningen, The Netherlands.
Copyright (c) 2001-2017, The GROMACS development team at
Uppsala University, Stockholm University and
the Royal Institute of Technology, Sweden.
check out http://www.gromacs.org for more information.

GROMACS is free software; you can redistribute it and/or modify it
under the terms of the GNU Lesser General Public License
as published by the Free Software Foundation; either version 2.1
of the License, or (at your option) any later version.

GROMACS:  gmx mdrun, version 2018.3
Executable:   /home/mahmood/gromacs/gromacs-2018.3/single/bin/gmx
Data prefix:  /home/mahmood/gromacs/gromacs-2018.3/single
Working dir:  /home/mahmood/gromacs
Command line:
  gmx mdrun -nb gpu -v -deffnm nvt_5k


Back Off! I just backed up nvt_5k.log to ./#nvt_5k.log.4#

WARNING: An error occurred while sanity checking device #0; 
cudaErrorMemoryAllocation: out of memory

Compiled SIMD: None, but for this host/run AVX2_128 might be better (see log).
The current CPU can measure timings more accurately than the code in
gmx mdrun was configured to use. This might affect your simulation
speed as accurate timings are needed for load-balancing.
Please consider rebuilding gmx mdrun with the GMX_USE_RDTSCP=ON CMake option.
Reading file nvt_5k.tpr, VERSION 2018.3 (single precision)
Changing nstlist from 20 to 100, rlist from 1.023 to 1.147


Using 1 MPI thread
Using 16 OpenMP threads 


---
Program: gmx mdrun, version 2018.3
Source file: src/programs/mdrun/runner.cpp (line 1001)

Fatal error:
Cannot run short-ranged nonbonded interactions on a GPU because there is none
detected.

For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---









Regards,
Mahmood
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

[gmx-users] Compile for sm_20

2019-11-24 Thread Mahmood Naderan
Hi,
I would like to know which is the last GROMACS version that supports sm_20. I could 
find it by trial and error, but maybe it is pointed out somewhere.


Regards,
Mahmood
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] cmake fails with custom gcc version

2019-11-04 Thread Mahmood Naderan
Yes, the compiler is correct. 
However, the system is CentOS 6.9, which is pretty old, and I am sure I will face 
further problems. So I have to change the system. Anyway, thanks.


Regards,
Mahmood 

On Sunday, November 3, 2019, 10:43:44 AM GMT+3:30, David van der Spoel 
 wrote:  
 
 Den 2019-11-02 kl. 14:25, skrev Mahmood Naderan:
> Hi,
> Although I have specified a custom CC and CXX path, the cmake command fails 
> with an error.
> 
> $ cmake .. -DGMX_GPU=on -DCMAKE_INSTALL_PREFIX=../single61 
> -DGMX_BUILD_OWN_FFTW=ON -DGMX_CUDA_TARGET_SM=61 
> -DCMAKE_C_COMPILER=/home/mahmood/tools/gcc-6.1.0/bin/gcc 
> -DCMAKE_CXX_COMPILER=/home/mahmood/tools/gcc-6.1.0/bin/g++-- The C compiler 
> identification is unknown
> -- The CXX compiler identification is unknown
> -- Check for working C compiler: /home/mahmood/tools/gcc-6.1.0/bin/gcc-- 
> Check for working C compiler: /home/mahmood/tools/gcc-6.1.0/bin/gcc -- 
> brokenCMake Error at 
> /home/mahmood/tools/cmake-3.15.4/share/cmake-3.15/Modules/CMakeTestCCompiler.cmake:60
>  (message):  The C compiler
> 
>      "/home/mahmood/tools/gcc-6.1.0/bin/gcc"
>    is not able to compile a simple test program.

Have you verified that the compiler is installed correctly, e.g. by 
compiling a small test program?
Is /home/mahmood/tools/gcc-6.1.0/bin/ in your PATH?

> 
>    It fails with the following output:
> 
>      Change Dir: 
>/home/mahmood/cactus/gromacs/gromacs-2019.4/build/CMakeFiles/CMakeTmp
>      Run Build Command(s):/usr/bin/gmake cmTC_68dcc/fast && /usr/bin/gmake -f 
>CMakeFiles/cmTC_68dcc.dir/build.make CMakeFiles/cmTC_68dcc.dir/build
>      gmake[1]: Entering directory 
>`/home/mahmood/cactus/gromacs/gromacs-2019.4/build/CMakeFiles/CMakeTmp'    
>Building C object CMakeFiles/cmTC_68dcc.dir/testCCompiler.c.o
>      /home/mahmood/tools/gcc-6.1.0/bin/gcc    -o 
>CMakeFiles/cmTC_68dcc.dir/testCCompiler.c.o   -c 
>/home/mahmood/cactus/gromacs/gromacs-2019.4/build/CMakeFiles/CMakeTmp/testCCompiler.c
>    /home/mahmood/tools/gcc-6.1.0/libexec/gcc/x86_64-pc-linux-gnu/6.1.0/cc1: 
>error while loading shared libraries: libmpc.so.3: cannot open shared object 
>file: No such file or directory    gmake[1]: *** 
>[CMakeFiles/cmTC_68dcc.dir/testCCompiler.c.o] Error 1
>      gmake[1]: Leaving directory 
>`/home/mahmood/cactus/gromacs/gromacs-2019.4/build/CMakeFiles/CMakeTmp'    
>gmake: *** [cmTC_68dcc/fast] Error 2
> 
> 
> 
>    CMake will not be able to correctly generate this project.
> Call Stack (most recent call first):
>    CMakeLists.txt:41 (project)
> 
> 
> -- Configuring incomplete, errors occurred!
> See also 
> "/home/mahmood/cactus/gromacs/gromacs-2019.4/build/CMakeFiles/CMakeOutput.log".See
>  also 
> "/home/mahmood/cactus/gromacs/gromacs-2019.4/build/CMakeFiles/CMakeError.log".
> However, the path is correct
> 
> 
> $ /home/mahmood/tools/gcc-6.1.0/bin/gcc -vUsing built-in specs.
> COLLECT_GCC=/home/mahmood/tools/gcc-6.1.0/bin/gccCOLLECT_LTO_WRAPPER=/home/mahmood/tools/gcc-6.1.0/libexec/gcc/x86_64-pc-linux-gnu/6.1.0/lto-wrapperTarget:
>  x86_64-pc-linux-gnu
> Configured with: ./configure --prefix=/home/mahmood/tools/gcc-6.1.0 
> --disable-multilib --enable-languages=c,c++Thread model: posix
> gcc version 6.1.0 (GCC)
> 
> 
> 
> Any idea about that?
> 
> Regards,
> Mahmood
> 


-- 
David van der Spoel, Ph.D., Professor of Biology
Head of Department, Cell & Molecular Biology, Uppsala University.
Box 596, SE-75124 Uppsala, Sweden. Phone: +46184714205.
http://www.icm.uu.se
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.  
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

[gmx-users] Cannot run short-ranged nonbonded interactions on a GPU because there is none detected.

2020-04-11 Thread Mahmood Naderan
Hi
Although I have built GROMACS for the 1080 Ti and the device is working properly, I 
get this error when running the gmx command:

$ ./gromacs-2019.4-1080ti/single/bin/gmx mdrun -nb gpu -v -deffnm nvt_5k
 .
GROMACS:  gmx mdrun, version 2019.4
Executable:   
/storage/users/mnaderan/gromacs/./gromacs-2019.4-1080ti/single/bin/gmx
Data prefix:  /storage/users/mnaderan/gromacs/./gromacs-2019.4-1080ti/single
Working dir:  /storage/users/mnaderan/gromacs
Command line:
  gmx mdrun -nb gpu -v -deffnm nvt_5k


Back Off! I just backed up nvt_5k.log to ./#nvt_5k.log.1#
Reading file nvt_5k.tpr, VERSION 2019.3 (single precision)
Changing nstlist from 20 to 100, rlist from 1.023 to 1.147

Using 32 MPI threads
Using 1 OpenMP thread per tMPI thread

---
Program: gmx mdrun, version 2019.4
Source file: src/gromacs/mdrun/runner.cpp (line 1041)
MPI rank:    20 (out of 32)

Fatal error:
Cannot run short-ranged nonbonded interactions on a GPU because there is none
detected.

For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---

---
Program: gmx mdrun, version 2019.4





As I said, the device is working properly



$ echo $CUDA_VISIBLE_DEVICES
0

$ ~/NVIDIA_CUDA-10.1_Samples/1_Utilities/deviceQuery/deviceQuery
/storage/users/mnaderan/NVIDIA_CUDA-10.1_Samples/1_Utilities/deviceQuery/deviceQuery
 Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "GeForce GTX 1080 Ti"
  CUDA Driver Version / Runtime Version  10.0 / 10.0
  CUDA Capability Major/Minor version number:    6.1
  Total amount of global memory: 11178 MBytes (11721506816 
bytes)
  (28) Multiprocessors, (128) CUDA Cores/MP: 3584 CUDA Cores
  GPU Max Clock rate:    1683 MHz (1.68 GHz)
  Memory Clock rate: 5505 Mhz





The configure command was


cmake .. -DGMX_BUILD_OWN_FFTW=ON 
-DCMAKE_INSTALL_PREFIX=/storage/users/mnaderan/gromacs/gromacs-2019.4-1080ti/single
 -DGMX_GPU=on -DGMX_CUDA_TARGET_SM=61


Any idea for fixing that?



Regards,
Mahmood
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

[gmx-users] Disabling MKL

2020-04-17 Thread Mahmood Naderan
Hi
How can I disable MKL while building gromacs? With this configure command

cmake .. -DGMX_BUILD_OWN_FFTW=ON -DGMX_GPU=on  -DGMX_FFT_LIBRARY=fftw3



I see

-- The GROMACS-managed build of FFTW 3 will configure with the following 
optimizations: --enable-sse2;--enable-avx;--enable-avx2
-- Using external FFT library - FFTW3 build managed by GROMACS
-- Looking for sgemm_
-- Looking for sgemm_ - not found
-- Looking for sgemm_
-- Looking for sgemm_ - found
-- Found BLAS: 
/share/binary/intel/composer_xe_2015.0.090/mkl/lib/intel64/libmkl_intel_lp64.so;/share/binary/intel/composer_xe_2015.0.090/mkl/lib/intel64/libmkl_intel_thread.so;/share/binary/intel/composer_xe_2015.0.090/mkl/lib/intel64/libmkl_core.so;/opt/intel/lib/intel64/libguide.so;-lpthread;-lm;-ldl





Then I get these errors

[100%] Linking CXX executable ../../bin/gmx
[100%] Linking CXX executable ../../bin/template
/bin/ld: warning: libmkl_intel_lp64.so, needed by 
../../lib/libgromacs.so.4.0.0, not found (try using -rpath or -rpath-link)
/bin/ld: warning: libmkl_intel_thread.so, needed by 
../../lib/libgromacs.so.4.0.0, not found (try using -rpath or -rpath-link)
/bin/ld: warning: libmkl_core.so, needed by ../../lib/libgromacs.so.4.0.0, not 
found (try using -rpath or -rpath-link)
/bin/ld: warning: libguide.so, needed by ../../lib/libgromacs.so.4.0.0, not 
found (try using -rpath or -rpath-link)
../../lib/libgromacs.so.4.0.0: undefined reference to `ssteqr_'
../../lib/libgromacs.so.4.0.0: undefined reference to `dsteqr_'
../../lib/libgromacs.so.4.0.0: undefined reference to `sger_'
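What I plan to try next, although I am not sure it is the right switch, is forcing GROMACS to use its internal BLAS/LAPACK instead of picking up MKL:

cmake .. -DGMX_BUILD_OWN_FFTW=ON -DGMX_GPU=on -DGMX_FFT_LIBRARY=fftw3 -DGMX_EXTERNAL_BLAS=OFF -DGMX_EXTERNAL_LAPACK=OFF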






Regards,
Mahmood
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.