Re: [gmx-users] Domain decomposition error while running coarse grained simulations on cluster

2019-09-04 Thread Justin Lemkul




On 9/3/19 12:21 PM, Avijeet Kulshrestha wrote:

Hi Justin,
Thanks for replying to my query. Please see the error message from the log
file below.
I have user-defined bond and angle potentials, which I provide as tabulated
data. I also have position restraints on the backbone atoms of the protein,
but only during the equilibration steps.

This is the error message from the log file:
Initializing Domain Decomposition on 8 ranks
Dynamic load balancing: off
Minimum cell size due to atom displacement: 0.546 nm
Initial maximum inter charge-group distances:
 two-body bonded interactions: 12.145 nm, LJ-14, atoms 11 568


Here's your problem. You have a pair defined that spans more than 12 nm,
but it is assigned as a 1-4 interaction, i.e. between atoms that should be
separated by three bonds. The user-defined potential shouldn't matter
here unless you've added [pairs] to the topology.
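One way to track down the offending entry (file names below are placeholders
for your own inputs) is to write out the fully pre-processed topology with
grompp and search its [ pairs ] section for the atoms named in the log:

gmx grompp -f md.mdp -c conf.gro -p topol.top -pp processed.top
grep -nE "^[[:space:]]*11[[:space:]]+568" processed.top

Note that the log reports global atom numbers; for the first molecule in the
system these usually coincide with the topology numbering. If that pair was
not added intentionally, the fix belongs in the topology rather than in the
domain decomposition options.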


-Justin


   multi-body bonded interactions: 1.124 nm, G96Angle, atoms 3767 3770
Minimum cell size due to bonded interactions: 13.359 nm
Maximum distance for 5 constraints, at 120 deg. angles, all-trans: 1.360 nm
Estimated maximum distance required for P-LINCS: 1.360 nm
Scaling the initial minimum size with 1/0.8 (option -dds) = 1.25
Optimizing the DD grid for 8 cells with a minimum initial size of 16.699 nm
The maximum allowed number of cells is: X 0 Y 0 Z 1

---
Program: gmx mdrun, version 2018.6
Source file: src/gromacs/domdec/domdec.cpp (line 6594)
MPI rank:0 (out of 8)

Fatal error:
There is no domain decomposition for 8 ranks that is compatible with the
given
box and a minimum cell size of 16.6989 nm
Change the number of ranks or mdrun option -rdd or -dds
Look in the log file for details on the domain decomposition

Please let me know what I can do to rectify it.

On Mon, 2 Sep 2019 at 12:25, <
gromacs.org_gmx-users-requ...@maillist.sys.kth.se> wrote:





Message: 1
Date: Sun, 1 Sep 2019 12:10:04 -0500
From: Prabir Khatua 
To: gmx-us...@gromacs.org
Subject: Re: [gmx-users] simulation termination problem

Thanks Justin. The problem has been fixed.

On Fri, Aug 30, 2019 at 7:31 AM Justin Lemkul  wrote:



On 8/29/19 12:31 PM, Prabir Khatua wrote:

Hello Gromacs users,

I am trying to simulate a system of 358,973 atoms in GROMACS 5.1.5.
However, my simulation is being terminated partway through with the
following error:

File input/output error:
Cannot rename checkpoint file; maybe you are out of disk space?
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors

I did not find any solution for this error on the mentioned website. What I
found was related to a memory issue. I do not know whether this is the same
issue.

The issue is not related to memory, it is (potentially) related to disk
space. Do you have enough space on the filesystem to write output files?
This can also happen sometimes when the filesystem blips. There's not
much you can do about that except complain to your sysadmin about the
integrity of the filesystem.
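A quick way to check from the machine where the job runs (the path is a
placeholder for your own working directory):

df -h /path/to/your/run/directory   # free space on the filesystem holding the run
quota -s                            # per-user quota, if your site enforces one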

-Justin


Please note that I was able to run another simulation of a system with a
relatively smaller number of atoms using the same script. The run command
that I used for the simulation was

mpirun -np 48 gmx_mpi mdrun -ntomp 1 -deffnm npt

I ran both simulations on two nodes with 24 CPU cores in each node.
I am also not able to figure out one issue. The log file of the system
where the simulation completed successfully showed

Running on 2 nodes with total 48 cores, 48 logical cores, 0 compatible GPUs
Cores per node:   24
Logical cores per node:   24
Compatible GPUs per node:  0

However, in the unsuccessful case, the log file showed

Running on 1 node with total 24 cores, 24 

Re: [gmx-users] The problem of utilizing multiple GPU

2019-09-04 Thread 孙业平
Hello Mark Abraham,

Thank you very much for your reply. I will definitely check the webinar and the
GROMACS documentation. But for now I am confused and hoping for a direct solution.
The workstation should have 18 cores, each with 4 hyperthreads. The output of
"lscpu" reads:
Architecture:  x86_64
CPU op-mode(s):32-bit, 64-bit
Byte Order:Little Endian
CPU(s):72
On-line CPU(s) list:   0-71
Thread(s) per core:2
Core(s) per socket:18
Socket(s): 2
NUMA node(s):  2
Vendor ID: GenuineIntel
CPU family:6
Model: 85
Model name:Intel(R) Xeon(R) Gold 6150 CPU @ 2.70GHz
Stepping:  4
CPU MHz:   2701.000
CPU max MHz:   2701.
CPU min MHz:   1200.
BogoMIPS:  5400.00
Virtualization:VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache:  1024K
L3 cache:  25344K
NUMA node0 CPU(s): 0-17,36-53
NUMA node1 CPU(s): 18-35,54-71

Now I don't want to run multiple simulations; I just want to run a single
simulation. When assigning the simulation to only one GPU (gmx mdrun -v -gpu_id
0 -deffnm md), the simulation performance is 90 ns/day. However, when I don't
assign a GPU but let all the GPUs work via:
   gmx mdrun -v -deffnm md
the simulation performance is only 2 ns/day.

So what is the correct command to make full use of all the GPUs and achieve the
best performance (which I expect should be much higher than the 90 ns/day with
only one GPU)? Could you give me further suggestions and help?

Best regards,
Yeping
 
--
From: Mark Abraham
Sent At: 2019 Sep. 4 (Wed.) 19:10
To: gromacs ; 孙业平
Cc: gromacs.org_gmx-users
Subject: Re: [gmx-users] The problem of utilizing multiple GPU

Hi,


On Wed, 4 Sep 2019 at 12:54, sunyeping  wrote:
Dear everyone,

 I am trying to run a simulation on a workstation with 72 cores and 8 GeForce
1080 GPUs.

72 cores, or just 36 cores each with two hyperthreads? (it matters because you 
might not want to share cores between simulations, which is what you'd get if 
you just assigned 9 hyperthreads per GPU and 1 GPU per simulation).

 When I do not assign a certain GPU with the command:
   gmx mdrun -v -deffnm md
 all GPUs are used, but the utilization of each GPU is extremely low (only
1-2 %), and the simulation would take several months to finish.

Yep. Too many workers for not enough work means everyone spends more time
coordinating than working. This is likely to improve in GROMACS 2020 (beta out 
shortly).

 In contrast, when I assign the simulation task to only one GPU:
 gmx mdrun -v -gpu_id 0 -deffnm md
 the GPU utilization can reach 60-70%, and the simulation can be finished 
within a week. Even when I use only two GPUs:

Utilization is only a proxy - what you actually want to measure is the rate of 
simulation, i.e. ns/day.

  gmx mdrun -v -gpu_id 0,2 -deffnm md

 the GPU utilizations are very low and the simulation is very slow.

That could be for a variety of reasons, which you could diagnose by looking at 
the performance report at the end of the log file, and comparing different runs.
 I think I may be misusing the GPUs for GROMACS simulations. Could you tell me what
is the correct way to use multiple GPUs?

If you're happy running multiple simulations, then the easiest thing to do is 
to use the existing multi-simulation support to do

mpirun -np 8 gmx_mpi mdrun -multidir dir0 dir1 dir2 ... dir7

and let mdrun handle the details. Otherwise you have to get involved in 
assigning a subset of the CPU cores and GPUs to each job so that each runs fast
and does not conflict with the others. See the documentation for GROMACS for the version you're
running e.g. 
http://manual.gromacs.org/documentation/current/user-guide/mdrun-performance.html#running-mdrun-within-a-single-node.
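
For the manual route, here is a sketch of the kind of per-job split that page
describes, assuming two independent runs on one node with 36 physical cores and
at least two GPUs (all counts and IDs are illustrative and need tuning to your
hardware):

gmx mdrun -deffnm md0 -ntmpi 1 -ntomp 18 -gpu_id 0 -pin on -pinoffset 0 &
gmx mdrun -deffnm md1 -ntmpi 1 -ntomp 18 -gpu_id 1 -pin on -pinoffset 18 &
wait

Each run then gets its own GPU and its own block of cores; this improves
aggregate throughput rather than the speed of any single trajectory.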

You probably want to check out this webinar tomorrow 
https://bioexcel.eu/webinar-more-bang-for-your-buck-improved-use-of-gpu-nodes-for-gromacs-2018-2019-09-05/.

Mark
 Best regards

[gmx-users] Question about the pulling code

2019-09-04 Thread Tingguang.S
Dear All,


I want to pull a ligand out of the binding pocket using GROMACS 2019. The
pull section of my .mdp file was:
pull= yes
pull_ncoords= 1 

pull_coord1_type= umbrella
pull_coord1_geometry= direction
pull_ngroups= 1
pull_group1_name= Ligand
pull_coord1_dim = Y N N
pull-coord1-vec = 1 0 0

pull_coord1_rate= 0.0005 
pull_coord1_k   = 830



I think the reaction coordinate is already defined by the COM of the pull group
(i.e. the ligand) and the pull direction vector (i.e. 1 0 0), so why do I still
need to provide two groups for pull_coord1_groups? If they are provided, is the
reaction coordinate then also defined by the COMs of the two groups? Is this
right?


How should I set up my pull code with "pull_coord1_geometry = direction"?
Any suggestion would be appreciated!
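
For reference, here is a minimal sketch of a direction-geometry pull section
that uses two groups (the reference group name, Protein, is an assumption; the
other values are copied from above):

pull                    = yes
pull_ngroups            = 2
pull_ncoords            = 1
pull_group1_name        = Protein     ; assumed reference group
pull_group2_name        = Ligand
pull_coord1_type        = umbrella
pull_coord1_geometry    = direction
pull_coord1_groups      = 1 2         ; coordinate runs from COM of group 1 to COM of group 2
pull_coord1_vec         = 1 0 0
pull_coord1_dim         = Y N N
pull_coord1_rate        = 0.0005      ; nm/ps
pull_coord1_k           = 830         ; kJ mol^-1 nm^-2

With geometry = direction the reaction coordinate is the COM-COM vector of the
two groups projected onto pull_coord1_vec, so a second (reference) group is
still required.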


Best regards
Sting










[gmx-users] how to define a pair

2019-09-04 Thread nahren manuel
Hi,
I performed an all-atom simulation of a membrane-protein system (the starting
structure was obtained from the CHARMM-GUI server, using the CHARMM36 FF).

#include "toppar/charmm36.itp"
[ defaults ]
; nbfunc  comb-rule  gen-pairs  fudgeLJ  fudgeQQ
  1       2          yes        1.0      1.0

I want to add a pair between two atoms, say 10 and 500 (CA atoms). Let's say
the distance between them is 0.800 nm. So I define my pair in the following
way:

10     500    1     (4*2.5*0.800**6)    (4*2.5*0.800**12)    ; where 2.5 is my epsilon

I get an error when I run the simulation (infinite force, exploding
simulation). I suspect my definition of C6 and C12 is wrong. Any suggestions
would be useful.
-manuel
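
One possibility worth checking, offered as a guess rather than a confirmed
diagnosis: with comb-rule 2 (the [ defaults ] line above), explicit pair
parameters are read as sigma and epsilon rather than C6/C12, and if 0.800 nm is
meant to be the minimum-energy distance then sigma = 0.800/2^(1/6) ≈ 0.713 nm.
A sketch of the two forms:

[ pairs ]
; comb-rule 2 convention:  ai    aj   funct   sigma (nm)   epsilon (kJ/mol)
     10    500    1    0.713    2.5
; C6/C12 convention (comb-rule 1), with C6 = 4*eps*sigma^6 and C12 = 4*eps*sigma^12:
;    10    500    1    1.31     0.172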


Re: [gmx-users] The problem of utilizing multiple GPU

2019-09-04 Thread Mark Abraham
Hi,


On Wed, 4 Sep 2019 at 12:54, sunyeping  wrote:

> Dear everyone,
>
> I am trying to run a simulation on a workstation with 72 cores and 8 GeForce
> 1080 GPUs.
>

72 cores, or just 36 cores each with two hyperthreads? (it matters because
you might not want to share cores between simulations, which is what you'd
get if you just assigned 9 hyperthreads per GPU and 1 GPU per simulation).


> When I do not assign a certain GPU with the command:
>   gmx mdrun -v -deffnm md
> all GPUs are used, but the utilization of each GPU is extremely low
> (only 1-2 %), and the simulation would take several months to finish.
>

Yep. Too many workers for not enough work means everyone spends more
time coordinating than working. This is likely to improve in GROMACS 2020
(beta out shortly).

In contrast, when I assign the simulation task to only one GPU:
> gmx mdrun -v -gpu_id 0 -deffnm md
> the GPU utilization can reach 60-70%, and the simulation can be finished
> within a week. Even when I use only two GPUs:
>

Utilization is only a proxy - what you actually want to measure is the rate
of simulation, i.e. ns/day.

 gmx mdrun -v -gpu_id 0,2 -deffnm md
>
> the GPU utilizations are very low and the simulation is very slow.
>

That could be for a variety of reasons, which you could diagnose by looking
at the performance report at the end of the log file, and comparing
different runs.


> I think I may be misusing the GPUs for GROMACS simulations. Could you tell me
> what is the correct way to use multiple GPUs?
>

If you're happy running multiple simulations, then the easiest thing to do
is to use the existing multi-simulation support to do

mpirun -np 8 gmx_mpi mdrun -multidir dir0 dir1 dir2 ... dir7

and let mdrun handle the details. Otherwise you have to get involved in
assigning a subset of the CPU cores and GPUs to each job so that each runs
fast and does not conflict with the others. See the documentation for GROMACS for the
version you're running e.g.
http://manual.gromacs.org/documentation/current/user-guide/mdrun-performance.html#running-mdrun-within-a-single-node
.

You probably want to check out this webinar tomorrow
https://bioexcel.eu/webinar-more-bang-for-your-buck-improved-use-of-gpu-nodes-for-gromacs-2018-2019-09-05/
.

Mark


> Best regards


[gmx-users] The problem of utilizing multiple GPU

2019-09-04 Thread sunyeping
Dear everyone,

I am trying to run a simulation on a workstation with 72 cores and 8 GeForce 1080
GPUs.
When I do not assign a certain GPU with the command:
  gmx mdrun -v -deffnm md
all GPUs are used, but the utilization of each GPU is extremely low (only
1-2 %), and the simulation would take several months to finish.
In contrast, when I assign the simulation task to only one GPU:
gmx mdrun -v -gpu_id 0 -deffnm md
the GPU utilization can reach 60-70%, and the simulation can be finished within
a week. Even when I use only two GPUs:
 gmx mdrun -v -gpu_id 0,2 -deffnm md

the GPU utilizations are very low and the simulation is very slow.

I think I may be misusing the GPUs for GROMACS simulations. Could you tell me what is
the correct way to use multiple GPUs?

Best regards


Re: [gmx-users] REMD-error

2019-09-04 Thread Bratin Kumar Das
Thank you for your email, sir.

On Wed, Sep 4, 2019 at 2:42 PM Mark Abraham 
wrote:

> Hi,
>
> On Wed, 4 Sep 2019 at 10:47, Bratin Kumar Das <177cy500.bra...@nitk.edu.in
> >
> wrote:
>
> > Respected Mark Abraham,
> >   The command line and the job
> > submission script are given below
> >
> > #!/bin/bash
> > #SBATCH -n 130 # Number of cores
> >
>
> Per the docs, this is a guide to sbatch about how many (MPI) tasks you want
> to run. It's not a core request.
>
> #SBATCH -N 5   # no of nodes
> >
>
> This requires a certain number of nodes. So to implement both your
> instructions, MPI has to start 26 tasks per node. That would make sense if
> you had nodes with a multiple of 26 cores. My guess is that your nodes have a
> multiple of 16 cores, based on the error message. MPI saw that you asked to
> allocate more tasks than there are available cores, and decided not to set a
> number of OpenMP threads per MPI task, so that fell back on a default,
> which produced 16, which GROMACS can see doesn't make sense.
>
> If you want to use -N and -n, then you need to make a choice that makes
> sense for the number of cores per node. Easier might be to use -n 130 and
> -c 2 to express what I assume is your intent to have 2 cores per MPI task.
> Now slurm+MPI can pass that message along properly to OpenMP.
>
> Your other message about -ntomp can only have come from running gmx_mpi_d
> -ntmpi, so just a typo we don't need to worry about further.
>
> Mark
>
> #SBATCH -t 0-20:00:00 # Runtime in D-HH:MM
> > #SBATCH -p cpu # Partition to submit to
> > #SBATCH -o hostname_%j.out # File to which STDOUT will be written
> > #SBATCH -e hostname_%j.err # File to which STDERR will be written
> > #loading gromacs
> > module load gromacs/2018.4
> > #specifying work_dir
> > WORKDIR=/home/chm_bratin/GMX_Projects/REMD/4wbu-REMD-inst-clust_1/stage-1
> >
> >
> > mpirun -np 130 gmx_mpi_d mdrun -v -s remd_nvt_next2.tpr -multidir equil0
> > equil1 equil2 equil3 equil4 equil5 equil6 equil7 equil8 equil9 equil10
> > equil11 equil12 equil13 equil14 equil15 equil16 equil17 equil18 equil19
> > equil20 equil21 equil22 equil23 equil24 equil25 equil26 equil27 equil28
> > equil29 equil30 equil31 equil32 equil33 equil34 equil35 equil36 equil37
> > equil38 equil39 equil40 equil41 equil42 equil43 equil44 equil45 equil46
> > equil47 equil48 equil49 equil50 equil51 equil52 equil53 equil54 equil55
> > equil56 equil57 equil58 equil59 equil60 equil61 equil62 equil63 equil64
> > -deffnm remd_nvt -cpi remd_nvt.cpt -append
> >
> > On Wed, Sep 4, 2019 at 2:13 PM Mark Abraham 
> > wrote:
> >
> > > Hi,
> > >
> > > We need to see your command line in order to have a chance of helping.
> > >
> > > Mark
> > >
> > > On Wed, 4 Sep 2019 at 05:46, Bratin Kumar Das <
> > 177cy500.bra...@nitk.edu.in
> > > >
> > > wrote:
> > >
> > > > Dear all,
> > > > I am running one REMD simulation with 65 replicas. I am
> > using
> > > > 130 cores for the simulation. I am getting the following error.
> > > >
> > > > Fatal error:
> > > > Your choice of number of MPI ranks and amount of resources results in
> > > using
> > > > 16
> > > > OpenMP threads per rank, which is most likely inefficient. The
> optimum
> > is
> > > > usually between 1 and 6 threads per rank. If you want to run with
> this
> > > > setup,
> > > > specify the -ntomp option. But we suggest to change the number of MPI
> > > > ranks.
> > > >
> > > > when I use the -ntomp option, it throws another error:
> > > >
> > > > Fatal error:
> > > > Setting the number of thread-MPI ranks is only supported with
> > thread-MPI
> > > > and
> > > > GROMACS was compiled without thread-MPI
> > > >
> > > >
> > > > while GROMACS is compiled with thread-MPI...
> > > >
> > > > please help me in this regard.

[gmx-users] Channelrhodopsin topology

2019-09-04 Thread vicolls

Hello everyone,

I have run into some trouble with the topology setup for channelrhodopsin in a
membrane. It is the first such difficult system I have met in my short
scientific career, and in fact it is blocking my progress. I am not sure whether
the GROMACS mailing list is the right place to call for help, but perhaps I
will find some answers here.


Channelrhodopsin consists of a protein with a ligand (retinal) bound to the
protein, and there should be a membrane. I want to run the simulations in the
CHARMM force field. I found a nice topology created by Prof. Jochen Hub, but
unfortunately it is in an AMBER force field.


I attached the protein to the membrane via CHARMM-GUI, but I had to cut
off the retinal (CHARMM-GUI couldn't handle a ligand connected to the
protein). So I got a nice structure and topology for both the protein and the
membrane. Later I wanted to add the retinal back to the system; I did it
manually, by copying its coordinates into the .gro file and taking the topology
for the retinal from SwissParam.


There are two problems. One is that the retinal and the protein live on their
own; they don't see each other. On the tail of the retinal and the tail of Lys296
(to which the retinal should be connected) there are too many hydrogens instead
of a bond. I just deleted them manually, but I still don't have parameters
for these particular atoms. I don't know where to find them or how to get
them.


Also I have some mess in the topology, because some parts come from
CHARMM-GUI and the retinal from SwissParam. Any ideas on how to make nice
and manageable topology files? And how should I connect the retinal with
Lys296?


With best regards,
Wiktor



Re: [gmx-users] REMD-error

2019-09-04 Thread Mark Abraham
Hi,

On Wed, 4 Sep 2019 at 10:47, Bratin Kumar Das <177cy500.bra...@nitk.edu.in>
wrote:

> Respected Mark Abraham,
>   The command line and the job
> submission script are given below
>
> #!/bin/bash
> #SBATCH -n 130 # Number of cores
>

Per the docs, this is a guide to sbatch about how many (MPI) tasks you want
to run. It's not a core request.

#SBATCH -N 5   # no of nodes
>

This requires a certain number of nodes. So to implement both your
instructions, MPI has to start 26 tasks per node. That would make sense if
you had nodes with a multiple of 26 cores. My guess is that your nodes have a
multiple of 16 cores, based on the error message. MPI saw that you asked to
allocate more tasks than there are available cores, and decided not to set a
number of OpenMP threads per MPI task, so that fell back on a default,
which produced 16, which GROMACS can see doesn't make sense.

If you want to use -N and -n, then you need to make a choice that makes
sense for the number of cores per node. Easier might be to use -n 130 and
-c 2 to express what I assume is your intent to have 2 cores per MPI task.
Now slurm+MPI can pass that message along properly to OpenMP.
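
To illustrate, here is a minimal sketch of the batch header described above,
reusing the names from the script quoted below; whether 2 cores per task
actually fits depends on how many cores your nodes have:

#!/bin/bash
#SBATCH -n 130          # 130 MPI tasks (2 ranks per replica for 65 replicas)
#SBATCH -c 2            # 2 cores per MPI task, passed through to OpenMP
#SBATCH -t 0-20:00:00
#SBATCH -p cpu
module load gromacs/2018.4

mpirun -np 130 gmx_mpi_d mdrun -ntomp 2 -v -s remd_nvt_next2.tpr \
    -multidir equil{0..64} -deffnm remd_nvt -cpi remd_nvt.cpt -append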

Your other message about -ntomp can only have come from running gmx_mpi_d
-ntmpi, so just a typo we don't need to worry about further.

Mark

#SBATCH -t 0-20:00:00 # Runtime in D-HH:MM
> #SBATCH -p cpu # Partition to submit to
> #SBATCH -o hostname_%j.out # File to which STDOUT will be written
> #SBATCH -e hostname_%j.err # File to which STDERR will be written
> #loading gromacs
> module load gromacs/2018.4
> #specifying work_dir
> WORKDIR=/home/chm_bratin/GMX_Projects/REMD/4wbu-REMD-inst-clust_1/stage-1
>
>
> mpirun -np 130 gmx_mpi_d mdrun -v -s remd_nvt_next2.tpr -multidir equil0
> equil1 equil2 equil3 equil4 equil5 equil6 equil7 equil8 equil9 equil10
> equil11 equil12 equil13 equil14 equil15 equil16 equil17 equil18 equil19
> equil20 equil21 equil22 equil23 equil24 equil25 equil26 equil27 equil28
> equil29 equil30 equil31 equil32 equil33 equil34 equil35 equil36 equil37
> equil38 equil39 equil40 equil41 equil42 equil43 equil44 equil45 equil46
> equil47 equil48 equil49 equil50 equil51 equil52 equil53 equil54 equil55
> equil56 equil57 equil58 equil59 equil60 equil61 equil62 equil63 equil64
> -deffnm remd_nvt -cpi remd_nvt.cpt -append
>
> On Wed, Sep 4, 2019 at 2:13 PM Mark Abraham 
> wrote:
>
> > Hi,
> >
> > We need to see your command line in order to have a chance of helping.
> >
> > Mark
> >
> > On Wed, 4 Sep 2019 at 05:46, Bratin Kumar Das <
> 177cy500.bra...@nitk.edu.in
> > >
> > wrote:
> >
> > > Dear all,
> > > I am running one REMD simulation with 65 replicas. I am
> using
> > > 130 cores for the simulation. I am getting the following error.
> > >
> > > Fatal error:
> > > Your choice of number of MPI ranks and amount of resources results in
> > using
> > > 16
> > > OpenMP threads per rank, which is most likely inefficient. The optimum
> is
> > > usually between 1 and 6 threads per rank. If you want to run with this
> > > setup,
> > > specify the -ntomp option. But we suggest to change the number of MPI
> > > ranks.
> > >
> > > when I use the -ntomp option, it throws another error:
> > >
> > > Fatal error:
> > > Setting the number of thread-MPI ranks is only supported with
> thread-MPI
> > > and
> > > GROMACS was compiled without thread-MPI
> > >
> > >
> > > while GROMACS is compiled with thread-MPI...
> > >
> > > please help me in this regard.

Re: [gmx-users] REMD-error

2019-09-04 Thread Bratin Kumar Das
Respected Mark Abraham,
  The command line and the job
submission script are given below

#!/bin/bash
#SBATCH -n 130 # Number of cores
#SBATCH -N 5   # no of nodes
#SBATCH -t 0-20:00:00 # Runtime in D-HH:MM
#SBATCH -p cpu # Partition to submit to
#SBATCH -o hostname_%j.out # File to which STDOUT will be written
#SBATCH -e hostname_%j.err # File to which STDERR will be written
#loading gromacs
module load gromacs/2018.4
#specifying work_dir
WORKDIR=/home/chm_bratin/GMX_Projects/REMD/4wbu-REMD-inst-clust_1/stage-1


mpirun -np 130 gmx_mpi_d mdrun -v -s remd_nvt_next2.tpr -multidir equil0
equil1 equil2 equil3 equil4 equil5 equil6 equil7 equil8 equil9 equil10
equil11 equil12 equil13 equil14 equil15 equil16 equil17 equil18 equil19
equil20 equil21 equil22 equil23 equil24 equil25 equil26 equil27 equil28
equil29 equil30 equil31 equil32 equil33 equil34 equil35 equil36 equil37
equil38 equil39 equil40 equil41 equil42 equil43 equil44 equil45 equil46
equil47 equil48 equil49 equil50 equil51 equil52 equil53 equil54 equil55
equil56 equil57 equil58 equil59 equil60 equil61 equil62 equil63 equil64
-deffnm remd_nvt -cpi remd_nvt.cpt -append

On Wed, Sep 4, 2019 at 2:13 PM Mark Abraham 
wrote:

> Hi,
>
> We need to see your command line in order to have a chance of helping.
>
> Mark
>
> On Wed, 4 Sep 2019 at 05:46, Bratin Kumar Das <177cy500.bra...@nitk.edu.in
> >
> wrote:
>
> > Dear all,
> > I am running one REMD simulation with 65 replicas. I am using
> > 130 cores for the simulation. I am getting the following error.
> >
> > Fatal error:
> > Your choice of number of MPI ranks and amount of resources results in
> using
> > 16
> > OpenMP threads per rank, which is most likely inefficient. The optimum is
> > usually between 1 and 6 threads per rank. If you want to run with this
> > setup,
> > specify the -ntomp option. But we suggest to change the number of MPI
> > ranks.
> >
> > when I use the -ntomp option, it throws another error:
> >
> > Fatal error:
> > Setting the number of thread-MPI ranks is only supported with thread-MPI
> > and
> > GROMACS was compiled without thread-MPI
> >
> >
> > while GROMACS is compiled with thread-MPI...
> >
> > please help me in this regard.


Re: [gmx-users] REMD-error

2019-09-04 Thread Mark Abraham
Hi,

We need to see your command line in order to have a chance of helping.

Mark

On Wed, 4 Sep 2019 at 05:46, Bratin Kumar Das <177cy500.bra...@nitk.edu.in>
wrote:

> Dear all,
> I am running one REMD simulation with 65 replicas. I am using
> 130 cores for the simulation. I am getting the following error.
>
> Fatal error:
> Your choice of number of MPI ranks and amount of resources results in using
> 16
> OpenMP threads per rank, which is most likely inefficient. The optimum is
> usually between 1 and 6 threads per rank. If you want to run with this
> setup,
> specify the -ntomp option. But we suggest to change the number of MPI
> ranks.
>
> when I use the -ntomp option, it throws another error:
>
> Fatal error:
> Setting the number of thread-MPI ranks is only supported with thread-MPI
> and
> GROMACS was compiled without thread-MPI
>
>
> while GROMACS is compiled with thread-MPI...
>
> please help me in this regard.