Re: [gmx-users] replica exchange with GROMACS

2019-11-20 Thread David de Sancho
Hi
If I may answer your question, you are missing the name of your run input
file, which should be inside each of these directories. So if your tpr
files are called run.tpr and sit in the correct locations, adding
-s run.tpr to the command should do the trick. You can find more
instructions here:
http://manual.gromacs.org/documentation/current/user-guide/mdrun-features.html
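In short, something along these lines should work (assuming each rep_*
directory really does contain a run input file called run.tpr; the brace
expansion is just shorthand for rep_0 rep_1 ... rep_15):

 # one MPI rank per replica, exchange attempts every 100 steps
 mpirun -np 16 mdrun_mpi_d -s run.tpr -multidir rep_{0..15} -replex 100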
All the best,

David

On Wed, 20 Nov 2019 at 16:35, hind ahmed  wrote:

>
> Hello
> Could you please direct me to how I can run a replica exchange job with
> GROMACS? I used the settings below to submit the job on HEC; the job
> works, but with no replica exchange, and I got the note below.
> #run the md
> mpirun -np 16 mdrun_mpi_d -multidir rep_0 rep_1 rep_2 rep_3 rep_4
> rep_5 rep_6 rep_7 rep_8 rep_9 rep_10 rep_11 rep_12 rep_13 rep_14
> rep_15 -replex 100
> Is there any guide on how to run it right?
>
> NOTE: The number of threads is not equal to the number of (logical) cores
>
>   and the -pin option is set to auto: will not pin threads to cores.
>   This can lead to significant performance degradation.
>   Consider using -pin on (and -pinoffset in case you run multiple
> jobs).
>
>
> Thanks


-- 
David De Sancho <https://sites.google.com/view/daviddesanchoresearch>
Ramón y Cajal Fellow (UPV/EHU)
Donostia International Physics Center
Manuel Lardizabal Ibilbidea, 4
20018 San Sebastian, Spain
Tel: +34 943018527

[gmx-users] clashes after energy minimization - Ser2 multiple copies

2019-07-30 Thread David de Sancho
Hi all
I have been having trouble setting up a peptide+water simulation system due
to the appearance of clashes in an energy minimization.

Trying to pin down the origin of the problem I have found that a minimum
system that reproduces my problem is a box with multiple copies of an
unblocked serine dipeptide, which I am simulating using a combination of an
Amber force field and TIP3P water. I have created a gist in case anyone
can help:

https://gist.github.com/daviddesancho/00b23264fed0b6d20ec4e26e7d7810e7

At some point in the minimization, the energy drops asymptotically (also,
the forces become massive, ~1e+07). Surprisingly, this does not occur when
the number of copies of the dipeptide is N=1, 2 or 3. The resulting
minimized structure has a cycle formed by atoms in the C-terminal serine,
and one of the carboxylic oxygens overlaps with the alcoholic hydrogen.

So far, I have been able to reproduce this with different force fields from
the Amber family and the drop in the energy seems easy to rationalize,
based on the parameters. The sidechain hydrogen (HG) and the carboxylic
oxygen (OC1) have opposite charges, while the hydrogen has zero sigma and
epsilon. Hence the Lennard-Jones interaction of the hydrogen will be zero
and the (attractive) Coulomb term will be most negative at r_OH = 0 (see
the sketch below). No surprise, then, that they attract each other.
But why do they clash only when there are 4 molecules around?
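To make that argument explicit, here is a sketch of the pair potential I
have in mind (assuming the usual Lorentz-Berthelot combination rules of the
Amber ports; charges are written symbolically, not with their actual
values):

 V(r) = \frac{q_{HG}\,q_{OC1}}{4\pi\epsilon_0 r}
        + 4\epsilon_{ij}\left[\left(\frac{\sigma_{ij}}{r}\right)^{12}
        - \left(\frac{\sigma_{ij}}{r}\right)^{6}\right],
 \qquad \epsilon_{ij} = \sqrt{\epsilon_{HG}\,\epsilon_{OC1}} = 0

With epsilon_HG = 0 the repulsive wall vanishes, and since q_HG and q_OC1
have opposite signs V(r) goes to minus infinity as r goes to 0, which is
exactly the overlap the minimizer finds.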

I have checked different box sizes and initial configurations to no avail.
Also, adding water and ions to neutralize the box and increase charge
screening does not change the outcome qualitatively (although it makes the
visualization trickier).

As always, all help greatly appreciated,


David


Re: [gmx-users] decreased performance with free energy

2019-07-18 Thread David de Sancho
Thanks Szilárd
I have posted both files (mdp and log) in the Gist below for the free
energy simulation:
https://gist.github.com/daviddesancho/4abdc0d40e2355671ead7f8e40283b57
Might it have to do with the number of particles in the box that are affected
by the typeA -> typeB change?

David


Date: Wed, 17 Jul 2019 17:09:21 +0200
> From: Szilárd Páll
> To: Discussion list for GROMACS users 
> Subject: Re: [gmx-users] decreased performance with free energy
>
> Hi,
>
> Lower performance, especially with GPUs, is not unexpected, but what you report
> is unusually large. I suggest you post your mdp and log file, perhaps there
> are some things to improve.
>
> --
> Szilárd
>
>
> On Wed, Jul 17, 2019 at 3:47 PM David de Sancho 
> wrote:
>
> > Hi all
> > I have been doing some testing for Hamiltonian replica exchange using
> > Gromacs 2018.3 on a relatively simple system with 3000 atoms in a cubic
> > box.
> > For the modified Hamiltonian I have simply modified the water
> > interactions by generating a typeB atom type in the force-field file
> > ffnonbonded.itp with different parameters, and then creating a number
> > of tpr files for different lambda values as defined in the mdp files.
> > The only difference between the mdp files for a simple NVT run and for
> > the HREX runs is the following lines:
> >
> > > ; H-REPLEX
> > > free-energy = yes
> > > init-lambda-state = 0
> > > nstdhdl = 0
> > > vdw_lambdas = 0.0 0.05 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0
> >
> > I have tested for performance on the same machine and compared the
> > standard NVT run performance (~175 ns/day on 8 cores) with that for
> > the free energy tpr file (6.2 ns/day).
> > Is this performance loss what you would expect, or are there any
> > immediate changes you can suggest to improve things? I have found a
> > relatively old post on this on the GROMACS developers' Redmine
> > (https://redmine.gromacs.org/issues/742), but I am not sure whether it
> > is the exact same problem.
> > Thanks,
> >
> > David

[gmx-users] decreased performance with free energy

2019-07-17 Thread David de Sancho
Hi all
I have been doing some testing for Hamiltonian replica exchange using
Gromacs 2018.3 on a relatively simple system with 3000 atoms in a cubic
box.
For the modified Hamiltonian I have simply modified the water interactions
by generating a typeB atom type in the force-field file ffnonbonded.itp
with different parameters, and then creating a number of tpr files for
different lambda values as defined in the mdp files. The only difference
between the mdp files for a simple NVT run and for the HREX runs is the
following lines:

> ; H-REPLEX
> free-energy = yes
> init-lambda-state = 0
> nstdhdl = 0
> vdw_lambdas = 0.0 0.05 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0
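
For reference, this is roughly how such a set of tpr files can be generated
(a sketch only: the file names hrex.mdp, conf.gro and topol.top and the
rep_* directories are illustrative, and the template mdp is assumed to
contain the init-lambda-state = 0 line shown above):

 # one tpr per lambda state (12 vdw_lambdas values -> states 0..11)
 for i in $(seq 0 11); do
     sed "s/^init-lambda-state.*/init-lambda-state = $i/" hrex.mdp > hrex_$i.mdp
     gmx grompp -f hrex_$i.mdp -c conf.gro -p topol.top -o rep_$i/run.tpr
 done
 # the replicas can then be run with, e.g.:
 # mpirun -np 12 gmx_mpi mdrun -s run.tpr -multidir rep_{0..11} -replex 1000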

I have tested for performance on the same machine and compared the standard
NVT run performance (~175 ns/day on 8 cores) with that for the free energy
tpr file (6.2 ns/day).
Is this performance loss what you would expect, or are there any immediate
changes you can suggest to improve things? I have found a relatively old
post on this on the GROMACS developers' Redmine (https://redmine.gromacs.org/issues/742),
but I am not sure whether it is the exact same problem.
Thanks,

David

[gmx-users] continuation run segmentation fault

2014-07-24 Thread David de Sancho
Dear all
I am having some trouble continuing some runs with Gromacs 4.5.5 on our
local cluster. Surprisingly, the same system ran smoothly before on the
same number of nodes and cores. And even more surprisingly, if I reduce
the number of nodes to 1, with its 12 processors, it runs again.

And the script I am using to run the simulations looks something like this:

# Set some Torque options: class name and max time for the job. Torque
 # developed from a program called OpenPBS, hence all the PBS references
 # in this file
 #PBS -l nodes=4:ppn=12,walltime=24:00:00

source /home/dd363/src/gromacs-4.5.5/bin/GMXRC.bash
 application=/home/user/src/gromacs-4.5.5/bin/mdrun_openmpi_intel
 options="-s data/tpr/filename.tpr -deffnm data/filename -cpi data/filename"

 #! change the working directory (default is home directory)
 cd $PBS_O_WORKDIR
 echo Running on host `hostname`
 echo Time is `date`
 echo Directory is `pwd`
 echo PBS job ID is $PBS_JOBID
 echo This job runs on the following machines:
 echo `cat $PBS_NODEFILE | uniq`
 #! Run the parallel MPI executable
 #!export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib64:/usr/lib64
 echo Running mpiexec $application $options
 mpiexec $application $options
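
With the variables above, the command that actually gets executed is
essentially:

 mpiexec /home/user/src/gromacs-4.5.5/bin/mdrun_openmpi_intel \
     -s data/tpr/filename.tpr -deffnm data/filename -cpi data/filename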


And the error messages I am getting look something like this:

 [compute-0-11:09645] *** Process received signal ***
 [compute-0-11:09645] Signal: Segmentation fault (11)
 [compute-0-11:09645] Signal code: Address not mapped (1)
 [compute-0-11:09645] Failing at address: 0x10
 [compute-0-11:09643] *** Process received signal ***
 [compute-0-11:09643] Signal: Segmentation fault (11)
 [compute-0-11:09643] Signal code: Address not mapped (1)
 [compute-0-11:09643] Failing at address: 0xd0
 [compute-0-11:09645] [ 0] /lib64/libpthread.so.0 [0x38d300e7c0]
 [compute-0-11:09645] [ 1]
 /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/openmpi/mca_pml_ob1.so
 [0x2af2091443f9]
 [compute-0-11:09645] [ 2]
 /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/openmpi/mca_pml_ob1.so
 [0x2af209142963]
 [compute-0-11:09645] [ 3]
 /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/openmpi/mca_btl_sm.so
 [0x2af20996e33c]
 [compute-0-11:09645] [ 4]
 /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/libopen-pal.so.0(opal_progress+0x87)
 [0x2af20572cfa7]
 [compute-0-11:09645] [ 5]
 /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/libmpi.so.0
 [0x2af205219636]
 [compute-0-11:09645] [ 6]
 /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/openmpi/mca_coll_tuned.so
 [0x2af20aa2259b]
 [compute-0-11:09645] [ 7]
 /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/openmpi/mca_coll_tuned.so
 [0x2af20aa2a04b]
 [compute-0-11:09645] [ 8]
 /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/openmpi/mca_coll_tuned.so
 [0x2af20aa22da9]
 [compute-0-11:09645] [ 9]
 /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/libmpi.so.0(ompi_comm_split+0xcc)
 [0x2af205204dcc]
 [compute-0-11:09645] [10]
 /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/libmpi.so.0(MPI_Comm_split+0x3c)
 [0x2af205236f0c]
 [compute-0-11:09645] [11]
 /home/dd363/src/gromacs-4.5.5/lib/libgmx_mpi.so.6(gmx_setup_nodecomm+0x14b)
 [0x2af204b8ba6b]
 [compute-0-11:09645] [12]
 /home/dd363/src/gromacs-4.5.5/bin/mdrun_openmpi_intel(mdrunner+0x86c)
 [0x415aac]
 [compute-0-11:09645] [13]
 /home/dd363/src/gromacs-4.5.5/bin/mdrun_openmpi_intel(main+0x1928)
 [0x41d968]
 [compute-0-11:09645] [14] /lib64/libc.so.6(__libc_start_main+0xf4)
 [0x38d281d994]
 [compute-0-11:09643] [ 0] /lib64/libpthread.so.0 [0x38d300e7c0]
 [compute-0-11:09643] [ 1]
 /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/openmpi/mca_pml_ob1.so
 [0x2b56aca403f9]
 [compute-0-11:09643] [ 2]
 /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/openmpi/mca_pml_ob1.so
 [0x2b56aca3e963]
 [compute-0-11:09643] [ 3]
 /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/openmpi/mca_btl_sm.so
 [0x2b56ad26a33c]
 [compute-0-11:09643] [ 4]
 /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/libopen-pal.so.0(opal_progress+0x87)
 [0x2b56a9028fa7]
 [compute-0-11:09643] [ 5]
 /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/libmpi.so.0
 [0x2b56a8b15636]
 [compute-0-11:09643] [ 6]
 /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/openmpi/mca_coll_tuned.so
 [0x2b56ae31e59b]
 [compute-0-11:09643] [ 7]
 /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/openmpi/mca_coll_tuned.so
 [0x2b56ae32604b]
 [compute-0-11:09643] [ 8]
 /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/openmpi/mca_coll_tuned.so
 [0x2b56ae31eda9]
 [compute-0-11:09643] [ 9]
 /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/libmpi.so.0(ompi_comm_split+0xcc)
 [0x2b56a8b00dcc]
 [compute-0-11:09643] [10]
 /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/libmpi.so.0(MPI_Comm_split+0x3c)
 [0x2b56a8b32f0c]
 [compute-0-11:09643] [11]