Re: [gmx-users] problem: gromacs run on gpu

2017-07-13 Thread Mark Abraham
Hi,

You probably have some strange invisible character after "on", e.g. if you
edited the file on Windows or pasted the line from elsewhere.
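
A quick way to check for and strip such characters (a minimal sketch, assuming
a bash shell with GNU coreutils/sed, and a hypothetical script name job.sh):

# show non-printing characters: DOS line endings appear as ^M before the
# end-of-line marker $, and pasted non-breaking spaces show up as M- sequences
cat -A job.sh
# remove trailing carriage returns in place (dos2unix job.sh does the same)
sed -i 's/\r$//' job.sh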

Mark

On Thu, 13 Jul 2017 07:22 Alex  wrote:



Re: [gmx-users] problem: gromacs run on gpu

2017-07-12 Thread Alex
Can you try to open the script in vi, delete the mdrun line and then 
manually retype it?
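
Along the same lines, it can help to make the stray character visible before
retyping the line; a minimal sketch with vi/vim (the script name job.sh is a
hypothetical placeholder):

# open the script with invisible characters displayed (-c runs an ex command on startup)
vi -c 'set list' job.sh
# in vim, :e ++ff=unix re-reads the file so DOS line endings show up as ^M;
# then delete the mdrun line (dd), retype it by hand, and save with :wq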



On 7/12/2017 11:03 PM, leila karami wrote:





[gmx-users] problem: gromacs run on gpu

2017-07-12 Thread leila karami
Dear Gromacs users,

I am running an MD simulation with GROMACS 5.1.3 on a GPU in a Rocks cluster
system, using the command:

gmx_mpi mdrun -nb gpu -v -deffnm gpu -ntomp 16 -gpu_id 0 -pin on

Everything works fine.

But when I use this command in a script to run the simulation through the queuing system:

-
#!/bin/bash
#$ -S /bin/bash
#$ -q gpu.q
#$ -cwd
#$ -N cell_1
#$ -e error_1.dat
#$ -o output_1.dat
echo "Job started at date"
gmx_mpi mdrun -nb gpu -v -deffnm gpu -ntomp 16 -gpu_id 0 -pin on
echo "Job Ended at date"
-

I encountered the following error:

Program: gmx mdrun, VERSION 5.1.3
Source file: src/gromacs/commandline/cmdlineparser.cpp (line 234)
Function:void gmx::CommandLineParser::parse(int*, char**)

Error in user input:
Invalid command-line options
  In command-line option -pin
Invalid value: on

For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---
Halting program gmx mdrun
--
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.

---

How can I resolve this error?
Any help will be highly appreciated.


Re: [gmx-users] problem: gromacs run on gpu

2017-07-11 Thread Mark Abraham
Hi,

Making your run stay on the cores it is assigned is always a good idea, and
using -pin on is a good way to do it. If there is more than that one job on the
node, then it is more complicated than that. More information is available at
http://manual.gromacs.org/documentation/2016.3/user-guide/mdrun-performance.html
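
For a single job that owns the node, adding -pin on to the existing command is
enough; when sharing the node, mdrun's -pinoffset/-pinstride options let each
job claim a distinct block of cores. A rough sketch only; the thread counts,
offsets, and run names are illustrative assumptions, not recommendations:

# whole node for one job: just let mdrun pin its own threads
gmx_mpi mdrun -nb gpu -v -deffnm gpu_md -ntomp 32 -gpu_id 0 -pin on

# two jobs sharing the node: pin with stride 1 and non-overlapping offsets
gmx_mpi mdrun -deffnm run1 -ntomp 16 -gpu_id 0 -pin on -pinstride 1 -pinoffset 0 &
gmx_mpi mdrun -deffnm run2 -ntomp 16 -gpu_id 1 -pin on -pinstride 1 -pinoffset 16 &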

Mark

On Sat, Jul 8, 2017 at 10:25 PM leila karami wrote:


[gmx-users] problem: gromacs run on gpu

2017-07-08 Thread leila karami
Dear Szilárd,
Thanks for your answer.

For the following command, should I use -pin on?

gmx_mpi mdrun -nb gpu -v -deffnm gpu_md -ntomp 32 -gpu_id 0

Best wishes

Re: [gmx-users] problem: gromacs run on gpu

2017-07-08 Thread Szilárd Páll
Leila,

If you want to use only one GPU, pass that GPU's ID to mdrun, e.g. -gpu_id 0
for the first one. You'll also want to pick the right number of cores for the
run; it will surely not make sense to use all 96. Also make sure to pin the
threads (-pin on).

However, I strongly recommend that you read the documentation and wiki and
check the examples to understand how to correctly launch mdrun and how to
assess and tune its performance. You should be especially careful if you
intend to share the node with others or want to do multiple runs
side-by-side.
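
Applied to the command in the question, that advice would look roughly like the
sketch below (the 16 OpenMP threads are only an assumption; the right count
depends on how much of the CPU the job should own):

# single rank (plain gmx_mpi launch), one GPU, pinned threads
gmx_mpi mdrun -nb gpu -v -deffnm gpu_md -gpu_id 0 -ntomp 16 -pin on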

--
Szilárd

On Fri, Jul 7, 2017 at 10:55 PM, leila karami wrote:


[gmx-users] problem: gromacs run on gpu

2017-07-07 Thread leila karami
Dear Nikhil and Szilárd,
Thanks for your answers.

I want to use only one of the GPUs (for example, ID = 1).
Should I use the option -gpu_id 1?

The hardware information for my system is as follows:

Running on 1 node with total 96 cores, 192 logical cores, 3 compatible GPUs

Hardware detected on host cschpc.ut.ac.ir (the node of MPI rank 0).

Since there is 1 node in my system, should I use both -ntmpi and -ntomp, or
just -ntomp?

Should I obtain optimal values for -ntmpi and -ntomp by trial and error?
Is there no other way?

Best wishes.

Re: [gmx-users] problem: gromacs run on gpu

2017-07-07 Thread Szilárd Páll
You've got a pretty strange beast there: 4 CPU sockets with 24 cores each,
one very fast GPU, and two rather slow ones (about 3x slower than the first).

If you want to do a single run on this machine, I suggest trying to
partition the ranks across the GPUs so that you get a decent balance, e.g.
you can try:
- 12 ranks with 8 threads each, 6 of them using GPU 0 and 3 each for GPUs 1
  and 2 (roughly as in the sketch below), or
- 16 ranks with 6-12 threads each, 10/3/3 ranks on GPUs 0/1/2, respectively.
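
With the MPI build (gmx_mpi) the rank count comes from the MPI launcher, and in
GROMACS 5.1 -gpu_id takes one digit per PP rank on the node; a rough sketch of
the first option (the exact thread count and mapping are assumptions that will
need tuning):

# 12 ranks x 8 OpenMP threads = 96 cores; ranks 0-5 on GPU 0, 6-8 on GPU 1, 9-11 on GPU 2
mpirun -np 12 gmx_mpi mdrun -deffnm new_gpu -ntomp 8 -gpu_id 000000111222 -pin on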





--
Szilárd

On Fri, Jul 7, 2017 at 1:13 PM, leila karami wrote:


Re: [gmx-users] problem: gromacs run on gpu

2017-07-07 Thread Nikhil Maroli
Hi,

You can try using -ntmpi XX and -ntomp XXX and experiment with the
combinations; for more details see
http://manual.gromacs.org/documentation/5.1/user-guide/mdrun-performance.html
Further, I think it's better to use the two Tesla K40s instead of using all
three: you may see a performance reduction due to load imbalance, and the
error message already suggests that the optimum is with more MPI ranks using
2 to 6 OpenMP threads each.

You will have to try the different combinations to get the best performance.
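
Restricting the run to the two K40s could look roughly like the sketch below
(note that -ntmpi only applies to the thread-MPI build; with gmx_mpi the rank
count comes from mpirun, and the 2 ranks / 8 threads here are only an example):

# 2 PP ranks mapped to GPUs 1 and 2 (the two Tesla K40c cards), skipping GPU 0
mpirun -np 2 gmx_mpi mdrun -deffnm new_gpu -ntomp 8 -gpu_id 12 -pin on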


[gmx-users] problem: gromacs run on gpu

2017-07-07 Thread leila karami
Dear Gromacs users,

I installed GROMACS 5.1.3 with GPU support on a Rocks cluster system.

After running the command:

gmx_mpi mdrun -nb gpu -v -deffnm old_gpu

I encountered the following:
=
GROMACS:  gmx mdrun, VERSION 5.1.3
Executable:   /home/karami_leila1/513/gromacs/bin/gmx_mpi
Data prefix:  /home/karami_leila1/513/gromacs
Command line:
  gmx_mpi mdrun -nb gpu -v -deffnm new_gpu


Running on 1 node with total 96 cores, 192 logical cores, 3 compatible GPUs
Hardware detected on host cschpc.ut.ac.ir (the node of MPI rank 0):
  CPU info:
Vendor: GenuineIntel
Brand:  Intel(R) Xeon(R) CPU E7-8890 v4 @ 2.20GHz
SIMD instructions most likely to fit this hardware: AVX2_256
SIMD instructions selected at GROMACS compile time: AVX2_256
  GPU info:
Number of GPUs detected: 3
#0: NVIDIA TITAN X (Pascal), compute cap.: 6.1, ECC:  no, stat:
compatible
#1: NVIDIA Tesla K40c, compute cap.: 3.5, ECC:  no, stat: compatible
#2: NVIDIA Tesla K40c, compute cap.: 3.5, ECC:  no, stat: compatible

Reading file new_gpu.tpr, VERSION 5.1.3 (single precision)
Using 1 MPI process
Using 192 OpenMP threads

3 compatible GPUs are present, with IDs 0,1,2
1 GPU auto-selected for this run.
Mapping of GPU ID to the 1 PP rank in this node: 0


NOTE: potentially sub-optimal launch configuration, gmx_mpi started with
less
  PP MPI process per node than GPUs available.
  Each PP MPI process can use only one GPU, 1 GPU per node will be used.


---
Program gmx mdrun, VERSION 5.1.3
Source code file:
/root/gromacs_source/gromacs-5.1.3/src/programs/mdrun/resource-division.cpp,
line: 571

Fatal error:
Your choice of 1 MPI rank and the use of 192 total threads leads to the use
of 192 OpenMP threads, whereas we expect the optimum to be with more MPI
ranks with 2 to 6 OpenMP threads. If you want to run with this many OpenMP
threads, specify the -ntomp option. But we suggest to increase the number
of MPI ranks.
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---

Halting program gmx mdrun
--
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
=

How can I resolve this problem?

Any help will be highly appreciated.