[gmx-users] ERROR: Failed to load the OptiX shared library.

2019-08-02 Thread Iman Katouzian
Good day,

After installing VMD on Ubuntu Linux I encounter the error below related to my
GPU, and I have not yet been able to fix it. Can someone help me resolve this issue?
The error is:


info) No CUDA accelerator devices available.
Warning) Detected X11 'Composite' extension: if incorrect display occurs
Warning) try disabling this X server option.  Most OpenGL drivers
Warning) disable stereoscopic display when 'Composite' is enabled.
Info) OpenGL renderer: AMD RV710 (DRM 2.50.0 / 4.15.0-55-generic, LLVM
8.0.0)
Info)   Features: STENCIL MSAA(4) MDE CVA MTX NPOT PP PS GLSL(OVFS)
Info)   Full GLSL rendering mode is available.
Info)   Textures: 2-D (8192x8192), 3-D (512x512x512), Multitexture (8)
OptiXRenderer) ERROR: Failed to load the OptiX shared library.
OptiXRenderer) NVIDIA driver may be too old.
OptiXRenderer) Check/update NVIDIA driver
Aborted (core dumped)

Thanks.


-- 

Iman Katouzian

Ph.D. candidate of Food Process Engineering

Faculty of Food Science and Technology

University of Agricultural Sciences and Natural Resources, Gorgan, Iran
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] best performance on GPU

2019-08-02 Thread Maryam
Hi Paul
How can I run it on multiple nodes?
Thanks
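
Running across nodes requires a GROMACS build linked against an external MPI
library; the build info quoted further down reports thread_mpi, which is limited
to a single node. A minimal sketch of a multi-node job, assuming a SLURM cluster
and an MPI-enabled binary named gmx_mpi (both assumptions, not details from this
thread):

#!/bin/bash
#SBATCH --nodes=2               # assumed node count
#SBATCH --ntasks-per-node=8     # MPI ranks per node
#SBATCH --cpus-per-task=4       # OpenMP threads per rank

module load gromacs             # hypothetical module name on the cluster
srun gmx_mpi mdrun -s md.tpr -deffnm md -ntomp $SLURM_CPUS_PER_TASK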

On Fri., Aug. 2, 2019, 6:10 p.m. Paul Buscemi,  wrote:

> Why run MD on a single node?
>
> PB
>
> > On Aug 1, 2019, at 5:53 PM, Mark Abraham 
> wrote:
> >
> > Hi,
> >
> > We can't tell whether or what the problem is without more information.
> > Please upload your .log file to a file sharing service and post a link.
> >
> > Mark
> >
> >> On Fri, 2 Aug 2019 at 01:05, Maryam  wrote:
> >>
> >> Dear all
> >> I want to run a simulation of approximately 12000 atoms system in
> gromacs
> >> 2016.6 on GPU with the following machine structure:
> >> Precision: single Memory model: 64 bit MPI library: thread_mpi OpenMP
> >> support: enabled (GMX_OPENMP_MAX_THREADS = 32) GPU support: CUDA SIMD
> >> instructions: AVX2_256 FFT library:
> >> fftw-3.3.5-fma-sse2-avx-avx2-avx2_128-avx512 RDTSCP usage: enabled TNG
> >> support: enabled Hwloc support: disabled Tracing support: disabled Built
> >> on: Fri Jun 21 09:58:11 EDT 2019 Built by: julian@BioServer [CMAKE]
> Build
> >> OS/arch: Linux 4.15.0-52-generic x86_64 Build CPU vendor: AMD Build CPU
> >> brand: AMD Ryzen 7 1800X Eight-Core Processor Build CPU family: 23
> Model: 1
> >> Stepping: 1
> >> Number of GPUs detected: 1 #0: NVIDIA GeForce RTX 2080 Ti, compute cap.:
> >> 7.5, ECC: no, stat: compatible
> >> i used different commands to get the best performance and i dont know
> which
> >> point i am missing. the quickest time possible is got by this
> command:gmx
> >> mdrun -s md.tpr -nb gpu -deffnm MD -tunepme -v
> >> which is 10 ns/day! and it takes 2 months to end.
> >> though i used several commands to tune it like: gmx mdrun -ntomp 6 -pin
> on
> >> -resethway -nstlist 20 -s md.tpr -deffnm md -cpi md.cpt -tunepme -cpt 15
> >> -append -gpu_id 0 -nb auto.  In the gromacs website it is mentioned that
> >> with this properties I should be able to run it in  295 ns/day!
> >> could you help me find out what point i am missing that i can not reach
> the
> >> best performance level?
> >> Thank you


Re: [gmx-users] simulation on 2 gpus

2019-08-02 Thread Paul Buscemi
I run the same system and setup but with no NVLink. Maestro runs both GPUs at 100 
percent; GROMACS is typically at 50-60 percent and can do 600 ns/d on 2 atoms.

PB

> On Jul 25, 2019, at 9:30 PM, Kevin Boyd  wrote:
> 
> Hi,
> 
> I've done a lot of research/experimentation on this, so I can maybe get you
> started - if anyone has any questions about the essay to follow, feel free
> to email me personally, and I'll link it to the email thread if it ends up
> being pertinent.
> 
> First, there's some more internet resources to checkout. See Mark's talk at
> -
> https://bioexcel.eu/webinar-performance-tuning-and-optimization-of-gromacs/
> Gromacs development moves fast, but a lot of it is still relevant.
> 
> I'll expand a bit here, with the caveat that Gromacs GPU development is
> moving very fast and so the correct commands for optimal performance are
> both system-dependent and a moving target between versions. This is a good
> thing - GPUs have revolutionized the field, and with each iteration we make
> better use of them. The downside is that it's unclear exactly what sort of
> CPU-GPU balance you should look to purchase to take advantage of future
> developments, though the trend is certainly that more and more computation
> is being offloaded to the GPUs.
> 
> The most important consideration is that to get maximum total throughput
> performance, you should be running not one but multiple simulations
> simultaneously. You can do this through the -multidir option, but I don't
> recommend that in this case, as it requires compiling with MPI and limits
> some of your options. My run scripts usually use "gmx mdrun ... &" to
> initiate subprocesses, with combinations of -ntomp, -ntmpi, -pin
> -pinoffset, and -gputasks. I can give specific examples if you're
> interested.
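
A minimal sketch of the kind of run script described above (this is not Kevin's
actual script; the directory names, core counts, and GPU ids are assumptions for
a 2-GPU, 16-core machine, and -gputasks requires GROMACS 2018 or newer):

#!/bin/bash
# Two independent simulations, one per GPU, each pinned to its own set of cores.
cd run1
gmx mdrun -deffnm md -ntmpi 1 -ntomp 8 -pin on -pinoffset 0 -gputasks 0 &
cd ../run2
gmx mdrun -deffnm md -ntmpi 1 -ntomp 8 -pin on -pinoffset 8 -gputasks 1 &
wait    # block until both background mdrun processes have finished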
> 
> Another important point is that you can run more simulations than the
> number of GPUs you have. Depending on CPU-GPU balance and quality, you
> won't double your throughput by e.g. putting 4 simulations on 2 GPUs, but
> you might increase it up to 1.5x. This would involve targeting the same GPU
> with -gputasks.
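
For the four-runs-on-two-GPUs case mentioned above, a hedged sketch along the
same lines (directory layout and thread counts are again assumptions):

# Two runs share GPU 0 and two share GPU 1; one thread-MPI rank per run.
for i in 0 1 2 3; do
    ( cd run$i && gmx mdrun -deffnm md -ntmpi 1 -ntomp 4 -pin on \
        -pinoffset $((i*4)) -gputasks $((i/2)) ) &
done
wait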
> 
> Within a simulation, you should set up a benchmarking script to figure out
> the best combination of thread-mpi ranks and open-mp threads - this can
> have pretty drastic effects on performance. For example, if you want to use
> your entire machine for one simulation (not recommended for maximal



Re: [gmx-users] best performance on GPU

2019-08-02 Thread Paul Buscemi
Why run MD on a single node?

PB

> On Aug 1, 2019, at 5:53 PM, Mark Abraham  wrote:
> 
> Hi,
> 
> We can't tell whether or what the problem is without more information.
> Please upload your .log file to a file sharing service and post a link.
> 
> Mark
> 
>> On Fri, 2 Aug 2019 at 01:05, Maryam  wrote:
>> 
>> Dear all
>> I want to run a simulation of approximately 12000 atoms system in gromacs
>> 2016.6 on GPU with the following machine structure:
>> Precision: single Memory model: 64 bit MPI library: thread_mpi OpenMP
>> support: enabled (GMX_OPENMP_MAX_THREADS = 32) GPU support: CUDA SIMD
>> instructions: AVX2_256 FFT library:
>> fftw-3.3.5-fma-sse2-avx-avx2-avx2_128-avx512 RDTSCP usage: enabled TNG
>> support: enabled Hwloc support: disabled Tracing support: disabled Built
>> on: Fri Jun 21 09:58:11 EDT 2019 Built by: julian@BioServer [CMAKE] Build
>> OS/arch: Linux 4.15.0-52-generic x86_64 Build CPU vendor: AMD Build CPU
>> brand: AMD Ryzen 7 1800X Eight-Core Processor Build CPU family: 23 Model: 1
>> Stepping: 1
>> Number of GPUs detected: 1 #0: NVIDIA GeForce RTX 2080 Ti, compute cap.:
>> 7.5, ECC: no, stat: compatible
>> i used different commands to get the best performance and i dont know which
>> point i am missing. the quickest time possible is got by this command:gmx
>> mdrun -s md.tpr -nb gpu -deffnm MD -tunepme -v
>> which is 10 ns/day! and it takes 2 months to end.
>> though i used several commands to tune it like: gmx mdrun -ntomp 6 -pin on
>> -resethway -nstlist 20 -s md.tpr -deffnm md -cpi md.cpt -tunepme -cpt 15
>> -append -gpu_id 0 -nb auto.  In the gromacs website it is mentioned that
>> with this properties I should be able to run it in  295 ns/day!
>> could you help me find out what point i am missing that i can not reach the
>> best performance level?
>> Thank you
>> --
>> --
>> Gromacs Users mailing list
>> 
>> * Please search the archive at
>> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
>> posting!
>> 
>> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>> 
>> * For (un)subscribe requests visit
>> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
>> send a mail to gmx-users-requ...@gromacs.org.
>> 
> -- 
> Gromacs Users mailing list
> 
> * Please search the archive at 
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!
> 
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> 
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
> mail to gmx-users-requ...@gromacs.org.

-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] mdrun: In option s, required option was not provided and the default file 'topol' does not exist or not accessible and non-integer charges

2019-08-02 Thread Paul Buscemi
Run with -maxwarn 1. If it runs, then there is a deeper problem. If it does not run, 
it's probably a typo. I bet it's the latter.
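
For reference, -maxwarn is an option of gmx grompp rather than mdrun, so the
suggestion above would look something like this when regenerating the run input
(file names taken from the ls listing later in this thread; suppressing warnings
should only be done once you understand what the warning means):

gmx grompp -f nvt.mdp -c em.gro -p topol.top -o nvt.tpr -maxwarn 1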

PB

> On Aug 1, 2019, at 2:52 PM, Justin Lemkul  wrote:
> 
> 
> 
>> On 8/1/19 3:50 PM, Mohammed I Sorour wrote:
>> Dear Gromacs users,
>> 
>> I'm running MD simulation on a couple of DNA systems that only vary in
>> sequence. Most of the runs worked just fine, but surprisingly I have one
>> system that I got an error in the NVT equilibration step.
>> I'm following the
>> tutorialhttp://www.bevanlab.biochem.vt.edu/Pages/Personal/justin/gmx-tutorials/lysozyme/06_equil.html
>> 
>> Program: gmx mdrun, version 2016.3
>> Source file: src/gromacs/options/options.cpp (line 258)
>> Function:void gmx::Options::finish()
>> 
>> Error in user input:
>> Invalid input values
>>   In option s
>> Required option was not provided, and the default file 'topol' does not
>> exist or is not accessible.
>> The following extensions were tried to complete the file name:
>>   .tpr
>> 
>> I'm pretty sure that I have the .tpr  files in the local directory. I have
>> read the previous posts on the Gromacs mailing list, and I know that
>> it could be a problem with the topology file. The topology files look
>> good to me so far.
> 
> There's a typo in your command or the input file you think is there is not. 
> You didn't provide your mdrun command (please always do this) but I suspect 
> the former. If mdrun does not find the file you specify, it looks for the 
> default file name, which is topol.tpr. That's also not there, so you get a 
> fatal error.
> 
>> Here is the only thing I can suspect, but I don't know if this is the
>> cause, and I'm still wondering why: so when I generated my system
>> topology using pdb2gmx
>> 
>> 
>> "Now there are 3969 atoms and 124 residues



Re: [gmx-users] mdrun: In option s, required option was not provided and the default file 'topol' does not exist or not accessible and non-integer charges

2019-08-02 Thread Mohammed I Sorour
Thank you so much, Carlos; using the full path helped me figure out the
problem.


So it turned out to be a problem with the path.

When using cd ${PBS_O_WORKDIR}/,
the job was executed in my local directory --->
work/md/sequence_8_md/equilibration /nvt (when I created the equilibration
directory, I had added a trailing space by mistake).

Renaming the equilibration directory to remove that space --->
work/md/sequence_8_md/equilibration/nvt solved the problem, and the
calculation is now running.
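
For anyone hitting the same thing, a trailing space in a directory name is easy
to miss. A small sketch of how to spot and fix it from the shell (the path is
the one from this thread):

ls -d work/md/sequence_8_md/*/ | cat -A   # a stray space shows up just before the /
mv 'work/md/sequence_8_md/equilibration ' work/md/sequence_8_md/equilibration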


Thank you so much, everyone!!

On Fri, Aug 2, 2019 at 11:34 AM Carlos Navarro 
wrote:

> Did you try replacing the line
> cd ${PBS_O_WORKDIR}/
> with
> cd ‘YOUR_CURRENT_PATH’/
> ?
> Maybe as people already pointed out, the variable is not working properly.
> Maybe this could work.
> Best,
>
> ——
> Carlos Navarro Retamal
> Bioinformatic Engineering. PhD.
> Postdoctoral Researcher in Center of Bioinformatics and Molecular
> Simulations
> Universidad de Talca
> Av. Lircay S/N, Talca, Chile
> E: carlos.navarr...@gmail.com or cnava...@utalca.cl
>
> On August 2, 2019 at 5:20:33 PM, Mohammed I Sorour (
> mohammed.sor...@temple.edu) wrote:
>
> Yeah, I deeply appreciate your help. But any idea/recommendation why the
> same command doesn't work through the script? I can't run any jobs,
> especially such a big calculation, out of the queue system.
>
> On Fri, Aug 2, 2019 at 11:13 AM Justin Lemkul  wrote:
>
> >
> >
> > On 8/2/19 11:09 AM, Mohammed I Sorour wrote:
> > > This is the output
> > >
> > >
> > > Begin Batch Job Epilogue Sat Aug 2 09:07:19 EDT 2019
> > > Job ID: 341185
> > > Username: tuf73544
> > > Group: chem
> > > Job Name: NVT
> > > Session: 45173
> > > Limits: walltime=01:00:00,neednodes=1:ppn=28,nodes=1:ppn=28
> > > Resources:
> > >
> >
> cput=19:59:39,vmem=1986008kb,walltime=00:42:53,mem=198008kb,energy_used=0
> > > Queue: normal
> > > Account:
> > > Deleting /dev/shm/*...
> > > 
> > > End Batch Job Epilogue Sat Aug 2 09:08:39 EDT 2019
> > > 
> > > Command line:
> > > gmx mdrun -deffnm nvt
> > >
> > >
> > > Running on 1 node with total 28 cores, 28 logical cores
> > > Hardware detected:
> > > CPU info:
> > > Vendor: Intel
> > > Brand: Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz
> > > SIMD instructions most likely to fit this hardware: AVX2_256
> > > SIMD instructions selected at GROMACS compile time: SSE4.1
> > >
> > > Hardware topology: Full, with devices
> > >
> > > Compiled SIMD instructions: SSE4.1, GROMACS could use AVX2_256 on this
> > > machine, which is better.
> > >
> > > *Reading file nvt.tpr, VERSION 2016.3 (single precision)*
> > > Changing nstlist from 10 to 20, rlist from 1 to 1.029
> > >
> > > The number of OpenMP threads was set by environment variable
> > > OMP_NUM_THREADS to 1
> > >
> > > Will use 24 particle-particle and 4 PME only ranks
> > > This is a guess, check the performance at the end of the log file
> > > Using 28 MPI threads
> > > Using 1 OpenMP thread per tMPI thread
> > >
> > > starting mdrun 'DNA in water'
> > > 50 steps, 1000.0 ps.
> > >
> > > step 40 Turning on dynamic load balancing, because the performance
> loss
> > due
> > > to load imbalance is 4.1 %.
> > >
> > >
> > > Writing final coordinates.
> > >
> > > Average load imbalance: 1.5 %
> > > Part of the total run time spent waiting due to load imbalance: 1.2 %
> > > Steps where the load balancing was limited by -rdd, -rcon and/or -dds:
> > X 0
> > > % Y 0 % Z 0 %
> > > Average PME mesh/force load: 0.752
> > > Part of the total run time spent waiting due to PP/PME imbalance: 3.1
> %
> > >
> > >
> > > Core t (s) Wall t (s) (%)
> > > Time: 71996.002 2571.286 2800.0
> > > 42:51
> > > (ns/day) (hour/ns)
> > > Performance: 33.602 0.714
> >
> > This output indicates that the job finished successfully and did not
> > produce the original error you posted.
> >
> > -Justin
> >
> > --
> > ==
> >
> > Justin A. Lemkul, Ph.D.
> > Assistant Professor
> > Office: 301 Fralin Hall
> > Lab: 303 Engel Hall
> >
> > Virginia Tech Department of Biochemistry
> > 340 West Campus Dr.
> > Blacksburg, VA 24061
> >
> > jalem...@vt.edu | (540) 231-3129
> > http://www.thelemkullab.com
> >
> > ==
> >

Re: [gmx-users] mdrun: In option s, required option was not provided and the default file 'topol' does not exist or not accessible and non-integer charges

2019-08-02 Thread Carlos Navarro
Did you try replacing the line
cd ${PBS_O_WORKDIR}/
with
cd ‘YOUR_CURRENT_PATH’/
?
Maybe as people already pointed out, the variable is not working properly.
Maybe this could work.
Best,

——
Carlos Navarro Retamal
Bioinformatic Engineering. PhD.
Postdoctoral Researcher in Center of Bioinformatics and Molecular
Simulations
Universidad de Talca
Av. Lircay S/N, Talca, Chile
E: carlos.navarr...@gmail.com or cnava...@utalca.cl

On August 2, 2019 at 5:20:33 PM, Mohammed I Sorour (
mohammed.sor...@temple.edu) wrote:

Yeah, I deeply appreciate your help. But any idea/recommendation why the
same command doesn't work through the script? I can't run any jobs,
especially such a big calculation, out of the queue system.

On Fri, Aug 2, 2019 at 11:13 AM Justin Lemkul  wrote:

>
>
> On 8/2/19 11:09 AM, Mohammed I Sorour wrote:
> > This is the output
> >
> >
> > Begin Batch Job Epilogue Sat Aug 2 09:07:19 EDT 2019
> > Job ID: 341185
> > Username: tuf73544
> > Group: chem
> > Job Name: NVT
> > Session: 45173
> > Limits: walltime=01:00:00,neednodes=1:ppn=28,nodes=1:ppn=28
> > Resources:
> >
> cput=19:59:39,vmem=1986008kb,walltime=00:42:53,mem=198008kb,energy_used=0
> > Queue: normal
> > Account:
> > Deleting /dev/shm/*...
> > 
> > End Batch Job Epilogue Sat Aug 2 09:08:39 EDT 2019
> > 
> > Command line:
> > gmx mdrun -deffnm nvt
> >
> >
> > Running on 1 node with total 28 cores, 28 logical cores
> > Hardware detected:
> > CPU info:
> > Vendor: Intel
> > Brand: Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz
> > SIMD instructions most likely to fit this hardware: AVX2_256
> > SIMD instructions selected at GROMACS compile time: SSE4.1
> >
> > Hardware topology: Full, with devices
> >
> > Compiled SIMD instructions: SSE4.1, GROMACS could use AVX2_256 on this
> > machine, which is better.
> >
> > *Reading file nvt.tpr, VERSION 2016.3 (single precision)*
> > Changing nstlist from 10 to 20, rlist from 1 to 1.029
> >
> > The number of OpenMP threads was set by environment variable
> > OMP_NUM_THREADS to 1
> >
> > Will use 24 particle-particle and 4 PME only ranks
> > This is a guess, check the performance at the end of the log file
> > Using 28 MPI threads
> > Using 1 OpenMP thread per tMPI thread
> >
> > starting mdrun 'DNA in water'
> > 50 steps, 1000.0 ps.
> >
> > step 40 Turning on dynamic load balancing, because the performance loss
> due
> > to load imbalance is 4.1 %.
> >
> >
> > Writing final coordinates.
> >
> > Average load imbalance: 1.5 %
> > Part of the total run time spent waiting due to load imbalance: 1.2 %
> > Steps where the load balancing was limited by -rdd, -rcon and/or -dds:
> X 0
> > % Y 0 % Z 0 %
> > Average PME mesh/force load: 0.752
> > Part of the total run time spent waiting due to PP/PME imbalance: 3.1 %
> >
> >
> > Core t (s) Wall t (s) (%)
> > Time: 71996.002 2571.286 2800.0
> > 42:51
> > (ns/day) (hour/ns)
> > Performance: 33.602 0.714
>
> This output indicates that the job finished successfully and did not
> produce the original error you posted.
>
> -Justin
>
> --
> ==
>
> Justin A. Lemkul, Ph.D.
> Assistant Professor
> Office: 301 Fralin Hall
> Lab: 303 Engel Hall
>
> Virginia Tech Department of Biochemistry
> 340 West Campus Dr.
> Blacksburg, VA 24061
>
> jalem...@vt.edu | (540) 231-3129
> http://www.thelemkullab.com
>
> ==
>

[gmx-users] Gromacs 4.5.7 compatibility with Titan X GPU

2019-08-02 Thread Timothy Hurlburt
Hi,
I am trying to install GPU accelerated Gromacs 4.5.7 for use with implicit
solvent.
I am using an Nvidia GM200 [GeForce GTX TITAN X] GPU.

When I tried to install Gromacs with CUDA toolkit 3.1 and OpenMM 2.0 I get
this error: "SetSim copy to cSim failed invalid device symbol openMM".
-Based on this discussion
https://mailman-1.sys.kth.se/pipermail/gromacs.org_gmx-users/2013-January/077622.html
I presumed toolkit 3.1 is not compatible with my GPU, so I tried toolkit 7.5.

I installed Gromacs with CUDA toolkit 7.5 and OpenMM 2.0 without fatal
errors. However when I tried mdrun-gpu I got this fatal error: "The
requested platform "CUDA" could not be found."

I ran these commands
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/cuda/lib64"
export CUDA_HOME=/usr/local/cuda
mdrun-gpu -s run.tpr -deffnm run -v

Then I got this error message
Fatal error:
The requested platform "CUDA" could not be found.
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors

I am not sure if my GPU is incompatible or whether I am missing some flags.
Any help would be greatly appreciated.

Thanks
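
One thing that may be worth checking (an assumption, not something established
in this thread): the OpenMM-based mdrun-gpu in GROMACS 4.5.x loads the CUDA
platform from OpenMM's plugin directory, and "The requested platform CUDA could
not be found" can simply mean those plugins were not located. A sketch, assuming
OpenMM is installed under /usr/local/openmm:

export OPENMM_ROOT_DIR=/usr/local/openmm               # assumed install prefix
export OPENMM_PLUGIN_DIR=$OPENMM_ROOT_DIR/lib/plugins  # where OpenMM looks for platform plugins
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$OPENMM_ROOT_DIR/lib:/usr/local/cuda/lib64"
mdrun-gpu -s run.tpr -deffnm run -v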


Re: [gmx-users] mdrun: In option s, required option was not provided and the default file 'topol' does not exist or not accessible and non-integer charges

2019-08-02 Thread Mohammed I Sorour
Sounds great, thank you so much!!

On Fri, Aug 2, 2019 at 11:21 AM Justin Lemkul  wrote:

>
>
> On 8/2/19 11:18 AM, Mohammed I Sorour wrote:
> > Yeah, I deeply appreciate your help. But any idea/recommendation why the
> > same command doesn't work through the script? I can't run any jobs,
> > especially such a big calculation, out of the queue system.
>
> See my previous message about checking the working directory. Beyond
> that, this is not a GROMACS problem - your system runs fine. Your
> sysadmins are the people to ask about how to run jobs on your cluster.
>
> -Justin
>
> > On Fri, Aug 2, 2019 at 11:13 AM Justin Lemkul  wrote:
> >
> >>
> >> On 8/2/19 11:09 AM, Mohammed I Sorour wrote:
> >>> This is the output
> >>>
> >>>
> >>> Begin Batch Job Epilogue Sat Aug 2 09:07:19 EDT 2019
> >>> Job ID:   341185
> >>> Username: tuf73544
> >>> Group:chem
> >>> Job Name: NVT
> >>> Session:  45173
> >>> Limits:   walltime=01:00:00,neednodes=1:ppn=28,nodes=1:ppn=28
> >>> Resources:
> >>>
> >>
>  cput=19:59:39,vmem=1986008kb,walltime=00:42:53,mem=198008kb,energy_used=0
> >>> Queue:normal
> >>> Account:
> >>> Deleting /dev/shm/*...
> >>> 
> >>> End Batch Job Epilogue Sat Aug 2 09:08:39 EDT 2019
> >>> 
> >>> Command line:
> >>> gmx mdrun -deffnm nvt
> >>>
> >>>
> >>> Running on 1 node with total 28 cores, 28 logical cores
> >>> Hardware detected:
> >>> CPU info:
> >>>   Vendor: Intel
> >>>   Brand:  Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz
> >>>   SIMD instructions most likely to fit this hardware: AVX2_256
> >>>   SIMD instructions selected at GROMACS compile time: SSE4.1
> >>>
> >>> Hardware topology: Full, with devices
> >>>
> >>> Compiled SIMD instructions: SSE4.1, GROMACS could use AVX2_256 on this
> >>> machine, which is better.
> >>>
> >>> *Reading file nvt.tpr, VERSION 2016.3 (single precision)*
> >>> Changing nstlist from 10 to 20, rlist from 1 to 1.029
> >>>
> >>> The number of OpenMP threads was set by environment variable
> >>> OMP_NUM_THREADS to 1
> >>>
> >>> Will use 24 particle-particle and 4 PME only ranks
> >>> This is a guess, check the performance at the end of the log file
> >>> Using 28 MPI threads
> >>> Using 1 OpenMP thread per tMPI thread
> >>>
> >>> starting mdrun 'DNA in water'
> >>> 50 steps,   1000.0 ps.
> >>>
> >>> step 40 Turning on dynamic load balancing, because the performance loss
> >> due
> >>> to load imbalance is 4.1 %.
> >>>
> >>>
> >>> Writing final coordinates.
> >>>
> >>>Average load imbalance: 1.5 %
> >>>Part of the total run time spent waiting due to load imbalance: 1.2
> %
> >>>Steps where the load balancing was limited by -rdd, -rcon and/or
> -dds:
> >> X 0
> >>> % Y 0 % Z 0 %
> >>>Average PME mesh/force load: 0.752
> >>>Part of the total run time spent waiting due to PP/PME imbalance:
> 3.1 %
> >>>
> >>>
> >>>  Core t (s)   Wall t (s)(%)
> >>>  Time:71996.002 2571.286 2800.0
> >>>42:51
> >>>(ns/day)(hour/ns)
> >>> Performance:   33.6020.714
> >> This output indicates that the job finished successfully and did not
> >> produce the original error you posted.
> >>
> >> -Justin
> >>
> >> --
> >> ==
> >>
> >> Justin A. Lemkul, Ph.D.
> >> Assistant Professor
> >> Office: 301 Fralin Hall
> >> Lab: 303 Engel Hall
> >>
> >> Virginia Tech Department of Biochemistry
> >> 340 West Campus Dr.
> >> Blacksburg, VA 24061
> >>
> >> jalem...@vt.edu | (540) 231-3129
> >> http://www.thelemkullab.com
> >>
> >> ==
> >>
>
> --
> ==
>
> Justin A. Lemkul, Ph.D.
> Assistant Professor
> Office: 301 Fralin Hall
> Lab: 303 Engel Hall
>
> Virginia Tech Department of Biochemistry
> 340 West Campus Dr.
> Blacksburg, VA 24061
>
> jalem...@vt.edu | (540) 231-3129
> http://www.thelemkullab.com
>
> ==
>

Re: [gmx-users] Purpose of repeated improper dihedrals

2019-08-02 Thread Dawid das
OK, I see. Thanks again!

On Fri, 2 Aug 2019 at 17:07 Justin Lemkul  wrote:

>
>
> On 8/2/19 8:59 AM, Dawid das wrote:
> > Thank you for the answer. However, I still need to ask more.
> > So, let's take this time the
> > NR1  CPH1  CPH2  H
> > NR1  CPH2  CPH1  H
> >
> > example. For such quartets,  the improper dihedral parameters are defined
> > for HSD residue.
> > Now, the bonding is as follows in part of HSD:
> >H
> >|
> >  NR1
> >/\
> >   CPH1CPH2
> >
> > In my top file generated with pdb2gmx I can see the improper dihedral
> > entries for both
> > NR1  CPH1  CPH2  H
> > NR1  CPH2  CPH1  H
> > Also after dumping the tpr file I can see IDIHS entries for both
> > arrangement of atoms for the same molecule.
> >
> > According to Gromacs manual, the entry for improper dihedral i j k l is
> > understood as follows
> >  l
> >  |
> >  i
> >/   \
> >  j  k
> >
> > So maybe I'm blind but I still don't really understand  the purpose of
> > changing the middle atom types
> > if there is in fact only one way these four atoms are connected in HSD.
> The
> > only reason I see is to
> > make it more rigid because I have "doubled" improper.
>
> Again, I would not call it a "doubled" parameter, because they are
> clearly different in atom order. This usually implies a difference in
> stereochemistry; in this case, I don't know the history of why the
> impropers are assigned this way, but going back to the original CHARMM
> topologies there are in fact two impropers assigned around ND1, CD2, and
> CE1. The use of two terms with the same parameters but different
> connectivities is probably due to having to avoid some kind of
> asymmetry, though it seems unusual. Regardless, it is a faithful (and
> correct) representation of the force field and we have tested it for
> agreement between CHARMM and GROMACS.
>
> -Justin
>
> --
> ==
>
> Justin A. Lemkul, Ph.D.
> Assistant Professor
> Office: 301 Fralin Hall
> Lab: 303 Engel Hall
>
> Virginia Tech Department of Biochemistry
> 340 West Campus Dr.
> Blacksburg, VA 24061
>
> jalem...@vt.edu | (540) 231-3129
> http://www.thelemkullab.com
>
> ==
>

Re: [gmx-users] mdrun: In option s, required option was not provided and the default file 'topol' does not exist or not accessible and non-integer charges

2019-08-02 Thread Justin Lemkul




On 8/2/19 11:18 AM, Mohammed I Sorour wrote:

Yeah, I deeply appreciate your help. But any idea/recommendation why the
same command doesn't work through the script? I can't run any jobs,
especially such a big calculation, out of the queue system.


See my previous message about checking the working directory. Beyond 
that, this is not a GROMACS problem - your system runs fine. Your 
sysadmins are the people to ask about how to run jobs on your cluster.


-Justin


On Fri, Aug 2, 2019 at 11:13 AM Justin Lemkul  wrote:



On 8/2/19 11:09 AM, Mohammed I Sorour wrote:

This is the output


Begin Batch Job Epilogue Sat Aug 2 09:07:19 EDT 2019
Job ID:   341185
Username: tuf73544
Group:chem
Job Name: NVT
Session:  45173
Limits:   walltime=01:00:00,neednodes=1:ppn=28,nodes=1:ppn=28
Resources:


  cput=19:59:39,vmem=1986008kb,walltime=00:42:53,mem=198008kb,energy_used=0

Queue:normal
Account:
Deleting /dev/shm/*...

End Batch Job Epilogue Sat Aug 2 09:08:39 EDT 2019

Command line:
gmx mdrun -deffnm nvt


Running on 1 node with total 28 cores, 28 logical cores
Hardware detected:
CPU info:
  Vendor: Intel
  Brand:  Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz
  SIMD instructions most likely to fit this hardware: AVX2_256
  SIMD instructions selected at GROMACS compile time: SSE4.1

Hardware topology: Full, with devices

Compiled SIMD instructions: SSE4.1, GROMACS could use AVX2_256 on this
machine, which is better.

*Reading file nvt.tpr, VERSION 2016.3 (single precision)*
Changing nstlist from 10 to 20, rlist from 1 to 1.029

The number of OpenMP threads was set by environment variable
OMP_NUM_THREADS to 1

Will use 24 particle-particle and 4 PME only ranks
This is a guess, check the performance at the end of the log file
Using 28 MPI threads
Using 1 OpenMP thread per tMPI thread

starting mdrun 'DNA in water'
50 steps,   1000.0 ps.

step 40 Turning on dynamic load balancing, because the performance loss

due

to load imbalance is 4.1 %.


Writing final coordinates.

   Average load imbalance: 1.5 %
   Part of the total run time spent waiting due to load imbalance: 1.2 %
   Steps where the load balancing was limited by -rdd, -rcon and/or -dds:

X 0

% Y 0 % Z 0 %
   Average PME mesh/force load: 0.752
   Part of the total run time spent waiting due to PP/PME imbalance: 3.1 %


 Core t (s)   Wall t (s)(%)
 Time:71996.002 2571.286 2800.0
   42:51
   (ns/day)(hour/ns)
Performance:   33.6020.714

This output indicates that the job finished successfully and did not
produce the original error you posted.

-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==




--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==



Re: [gmx-users] mdrun: In option s, required option was not provided and the default file 'topol' does not exist or not accessible and non-integer charges

2019-08-02 Thread Mohammed I Sorour
Yeah, I deeply appreciate your help. But do you have any idea or recommendation
as to why the same command doesn't work through the script? I can't run any jobs,
especially such a big calculation, outside the queue system.

On Fri, Aug 2, 2019 at 11:13 AM Justin Lemkul  wrote:

>
>
> On 8/2/19 11:09 AM, Mohammed I Sorour wrote:
> > This is the output
> >
> >
> > Begin Batch Job Epilogue Sat Aug 2 09:07:19 EDT 2019
> > Job ID:   341185
> > Username: tuf73544
> > Group:chem
> > Job Name: NVT
> > Session:  45173
> > Limits:   walltime=01:00:00,neednodes=1:ppn=28,nodes=1:ppn=28
> > Resources:
> >
>  cput=19:59:39,vmem=1986008kb,walltime=00:42:53,mem=198008kb,energy_used=0
> > Queue:normal
> > Account:
> > Deleting /dev/shm/*...
> > 
> > End Batch Job Epilogue Sat Aug 2 09:08:39 EDT 2019
> > 
> > Command line:
> >gmx mdrun -deffnm nvt
> >
> >
> > Running on 1 node with total 28 cores, 28 logical cores
> > Hardware detected:
> >CPU info:
> >  Vendor: Intel
> >  Brand:  Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz
> >  SIMD instructions most likely to fit this hardware: AVX2_256
> >  SIMD instructions selected at GROMACS compile time: SSE4.1
> >
> >Hardware topology: Full, with devices
> >
> > Compiled SIMD instructions: SSE4.1, GROMACS could use AVX2_256 on this
> > machine, which is better.
> >
> > *Reading file nvt.tpr, VERSION 2016.3 (single precision)*
> > Changing nstlist from 10 to 20, rlist from 1 to 1.029
> >
> > The number of OpenMP threads was set by environment variable
> > OMP_NUM_THREADS to 1
> >
> > Will use 24 particle-particle and 4 PME only ranks
> > This is a guess, check the performance at the end of the log file
> > Using 28 MPI threads
> > Using 1 OpenMP thread per tMPI thread
> >
> > starting mdrun 'DNA in water'
> > 50 steps,   1000.0 ps.
> >
> > step 40 Turning on dynamic load balancing, because the performance loss
> due
> > to load imbalance is 4.1 %.
> >
> >
> > Writing final coordinates.
> >
> >   Average load imbalance: 1.5 %
> >   Part of the total run time spent waiting due to load imbalance: 1.2 %
> >   Steps where the load balancing was limited by -rdd, -rcon and/or -dds:
> X 0
> > % Y 0 % Z 0 %
> >   Average PME mesh/force load: 0.752
> >   Part of the total run time spent waiting due to PP/PME imbalance: 3.1 %
> >
> >
> > Core t (s)   Wall t (s)(%)
> > Time:71996.002 2571.286 2800.0
> >   42:51
> >   (ns/day)(hour/ns)
> > Performance:   33.6020.714
>
> This output indicates that the job finished successfully and did not
> produce the original error you posted.
>
> -Justin
>
> --
> ==
>
> Justin A. Lemkul, Ph.D.
> Assistant Professor
> Office: 301 Fralin Hall
> Lab: 303 Engel Hall
>
> Virginia Tech Department of Biochemistry
> 340 West Campus Dr.
> Blacksburg, VA 24061
>
> jalem...@vt.edu | (540) 231-3129
> http://www.thelemkullab.com
>
> ==
>


Re: [gmx-users] mdrun: In option s, required option was not provided and the default file 'topol' does not exist or not accessible and non-integer charges

2019-08-02 Thread Justin Lemkul




On 8/2/19 11:09 AM, Mohammed I Sorour wrote:

This is the output


Begin Batch Job Epilogue Sat Aug 2 09:07:19 EDT 2019
Job ID:   341185
Username: tuf73544
Group:chem
Job Name: NVT
Session:  45173
Limits:   walltime=01:00:00,neednodes=1:ppn=28,nodes=1:ppn=28
Resources:
  cput=19:59:39,vmem=1986008kb,walltime=00:42:53,mem=198008kb,energy_used=0
Queue:normal
Account:
Deleting /dev/shm/*...

End Batch Job Epilogue Sat Aug 2 09:08:39 EDT 2019

Command line:
   gmx mdrun -deffnm nvt


Running on 1 node with total 28 cores, 28 logical cores
Hardware detected:
   CPU info:
 Vendor: Intel
 Brand:  Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz
 SIMD instructions most likely to fit this hardware: AVX2_256
 SIMD instructions selected at GROMACS compile time: SSE4.1

   Hardware topology: Full, with devices

Compiled SIMD instructions: SSE4.1, GROMACS could use AVX2_256 on this
machine, which is better.

*Reading file nvt.tpr, VERSION 2016.3 (single precision)*
Changing nstlist from 10 to 20, rlist from 1 to 1.029

The number of OpenMP threads was set by environment variable
OMP_NUM_THREADS to 1

Will use 24 particle-particle and 4 PME only ranks
This is a guess, check the performance at the end of the log file
Using 28 MPI threads
Using 1 OpenMP thread per tMPI thread

starting mdrun 'DNA in water'
50 steps,   1000.0 ps.

step 40 Turning on dynamic load balancing, because the performance loss due
to load imbalance is 4.1 %.


Writing final coordinates.

  Average load imbalance: 1.5 %
  Part of the total run time spent waiting due to load imbalance: 1.2 %
  Steps where the load balancing was limited by -rdd, -rcon and/or -dds: X 0
% Y 0 % Z 0 %
  Average PME mesh/force load: 0.752
  Part of the total run time spent waiting due to PP/PME imbalance: 3.1 %


Core t (s)   Wall t (s)(%)
Time:71996.002 2571.286 2800.0
  42:51
  (ns/day)(hour/ns)
Performance:   33.6020.714


This output indicates that the job finished successfully and did not 
produce the original error you posted.


-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==



Re: [gmx-users] best performance on GPU

2019-08-02 Thread Maryam
Hi Mark
here is the link to md.log:
https://www.dropbox.com/s/4fuu5g68nwwzys4/MD.log?dl=0
Thank you!


On Thu, Aug 1, 2019 at 6:54 PM Mark Abraham 
wrote:

> Hi,
>
> We can't tell whether or what the problem is without more information.
> Please upload your .log file to a file sharing service and post a link.
>
> Mark
>
> On Fri, 2 Aug 2019 at 01:05, Maryam  wrote:
>
> > Dear all
> > I want to run a simulation of approximately 12000 atoms system in gromacs
> > 2016.6 on GPU with the following machine structure:
> > Precision: single Memory model: 64 bit MPI library: thread_mpi OpenMP
> > support: enabled (GMX_OPENMP_MAX_THREADS = 32) GPU support: CUDA SIMD
> > instructions: AVX2_256 FFT library:
> > fftw-3.3.5-fma-sse2-avx-avx2-avx2_128-avx512 RDTSCP usage: enabled TNG
> > support: enabled Hwloc support: disabled Tracing support: disabled Built
> > on: Fri Jun 21 09:58:11 EDT 2019 Built by: julian@BioServer [CMAKE]
> Build
> > OS/arch: Linux 4.15.0-52-generic x86_64 Build CPU vendor: AMD Build CPU
> > brand: AMD Ryzen 7 1800X Eight-Core Processor Build CPU family: 23
> Model: 1
> > Stepping: 1
> > Number of GPUs detected: 1 #0: NVIDIA GeForce RTX 2080 Ti, compute cap.:
> > 7.5, ECC: no, stat: compatible
> > i used different commands to get the best performance and i dont know
> which
> > point i am missing. the quickest time possible is got by this command:gmx
> > mdrun -s md.tpr -nb gpu -deffnm MD -tunepme -v
> > which is 10 ns/day! and it takes 2 months to end.
> >  though i used several commands to tune it like: gmx mdrun -ntomp 6 -pin
> on
> > -resethway -nstlist 20 -s md.tpr -deffnm md -cpi md.cpt -tunepme -cpt 15
> > -append -gpu_id 0 -nb auto.  In the gromacs website it is mentioned that
> > with this properties I should be able to run it in  295 ns/day!
> > could you help me find out what point i am missing that i can not reach
> the
> > best performance level?
> > Thank you


Re: [gmx-users] mdrun: In option s, required option was not provided and the default file 'topol' does not exist or not accessible and non-integer charges

2019-08-02 Thread Mohammed I Sorour
This is the output


Begin Batch Job Epilogue Sat Aug 2 09:07:19 EDT 2019
Job ID:   341185
Username: tuf73544
Group:chem
Job Name: NVT
Session:  45173
Limits:   walltime=01:00:00,neednodes=1:ppn=28,nodes=1:ppn=28
Resources:
 cput=19:59:39,vmem=1986008kb,walltime=00:42:53,mem=198008kb,energy_used=0
Queue:normal
Account:
Deleting /dev/shm/*...

End Batch Job Epilogue Sat Aug 2 09:08:39 EDT 2019

Command line:
  gmx mdrun -deffnm nvt


Running on 1 node with total 28 cores, 28 logical cores
Hardware detected:
  CPU info:
Vendor: Intel
Brand:  Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz
SIMD instructions most likely to fit this hardware: AVX2_256
SIMD instructions selected at GROMACS compile time: SSE4.1

  Hardware topology: Full, with devices

Compiled SIMD instructions: SSE4.1, GROMACS could use AVX2_256 on this
machine, which is better.

*Reading file nvt.tpr, VERSION 2016.3 (single precision)*
Changing nstlist from 10 to 20, rlist from 1 to 1.029

The number of OpenMP threads was set by environment variable
OMP_NUM_THREADS to 1

Will use 24 particle-particle and 4 PME only ranks
This is a guess, check the performance at the end of the log file
Using 28 MPI threads
Using 1 OpenMP thread per tMPI thread

starting mdrun 'DNA in water'
50 steps,   1000.0 ps.

step 40 Turning on dynamic load balancing, because the performance loss due
to load imbalance is 4.1 %.


Writing final coordinates.

 Average load imbalance: 1.5 %
 Part of the total run time spent waiting due to load imbalance: 1.2 %
 Steps where the load balancing was limited by -rdd, -rcon and/or -dds: X 0
% Y 0 % Z 0 %
 Average PME mesh/force load: 0.752
 Part of the total run time spent waiting due to PP/PME imbalance: 3.1 %


               Core t (s)   Wall t (s)        (%)
       Time:    71996.002     2571.286     2800.0
                       42:51
                 (ns/day)    (hour/ns)
Performance:        33.602        0.714

On Fri, Aug 2, 2019 at 11:00 AM John Whittaker <
johnwhitt...@zedat.fu-berlin.de> wrote:

> > Hi Justin,
> >
> > Yes, I'm using a queuing system with a submission script.
> >
> > #-l nodes=1:ppn=16
> >
> > #PBS -l walltime=10:00:00
> >
> > #PBS -q medium
> >
> > #PBS -N NVT
> >
> > #PBS -e out.err
> >
> > #PBS -o out
> >
> >
> >
> > module load gromacs
> >
> >
> > cd ${PBS_O_WORKDIR}/;
> >
> >
> > gmx mdrun -deffnm nvt
> >
> >
> > Well, based on your hint, I executed a trial mdrun job without using the
> > queue. It seems to be working well and reading the .tpr file, I had to
> > kill
> > the job once I made sure that it is reading the tpr file due to the
> > regulations of using the cluster out of the queue.
> > So, I should suspect that there is something wrong with executing the
> > script. It is worthy to note that I used the same script without any kind
> > of change multiple times and it's working well. This's more confusing for
> > me now, any hints?
>
> What is the output from the cluster? I'm guessing the output is in the
> file called "out" that's created each time the simulation fails.
>
>
> >
> > Thanks for the ionization/neutralization advice; surely I did that.
> >
> > Thanks,
> > Mohammed
> >
> > On Thu, Aug 1, 2019 at 9:22 PM Justin Lemkul  wrote:
> >
> >>
> >>
> >> On 8/1/19 7:04 PM, Mohammed I Sorour wrote:
> >> >> Hi,
> >> >
> >> > that's what the ls -l prints,
> >> >
> >> >
> >> >
> >> >
> >> >> ls -l
> >> >> total 193120
> >> >> drwxr-xr-x 2 tuf73544 chem 4096 Jul 26 17:20 amber99sb_dyes.ff
> >> >> -rw-r--r-- 1 tuf73544 chem 61421681 Jul 31 19:56 em.gro
> >> >> -rw-r--r-- 1 tuf73544 chem  191 Aug  1 15:04
> >> equilibration_NVT_script
> >> >> -rw-r--r-- 1 tuf73544 chem 61421681 Jul 31 19:11 full_solv_ions.gro
> >> >> -rw-r--r-- 1 tuf73544 chem 2164 Jul  8  2016 ions.itp
> >> >> -rw-r--r-- 1 tuf73544 chem 33866956 Jul 31 19:08 ions.tpr
> >> >> -rw-r--r-- 1 tuf73544 chem11962 Aug  1 14:57 mdout.mdp
> >> >> -rw-r--r-- 1 tuf73544 chem11962 Aug  1 14:56 #mdout.mdp.1#
> >> >> -rw-r--r-- 1 tuf73544 chem 1875 Jul 27 08:22 nvt.mdp
> >> >> -rw-r--r-- 1 tuf73544 chem 39455312 Aug  1 14:57 nvt.tpr
> >> >> -rw--- 1 tuf73544 chem 1032 Aug  1 15:04 out
> >> >> -rw--- 1 tuf73544 chem 2395 Aug  1 15:04 out.err
> >> >> -rw-r--r-- 1 tuf73544 chem38899 Jul 31 18:58
> >> posre_DNA_chain_A.itp
> >> >> -rw-r--r-- 1 tuf73544 chem39953 Jul 31 18:58
> >> posre_DNA_chain_B.itp
> >> >> -rw-r--r-- 1 tuf73544 chem 3215 Aug  1 14:57 residuetypes.dat
> >> >> -rw-r--r-- 1 tuf73544 chem 4873 Jul 18 13:03 specbond.dat
> >> >> -rw-r--r-- 1 tuf73544 chem69176 Jul 16 11:40 tip3p.gro
> >> >> -rw-r--r-- 1 tuf73544 chem   588482 Jul 31 18:58
> >> topol_DNA_chain_A.itp
> >> >> -rw-r--r-- 1 tuf73544 chem   589283 Jul 31 18:58
> >> topol_DNA_chain_B.itp
> >> >> -rw--- 1 tuf73544 chem 1264 Jul 31 19:10 topol.top
> >> 

Re: [gmx-users] mdrun: In option s, required option was not provided and the default file 'topol' does not exist or not accessible and non-integer charges

2019-08-02 Thread Justin Lemkul



On 8/2/19 10:49 AM, Mohammed I Sorour wrote:

Hi Justin,

Yes, I'm using a queuing system with a submission script.

#-l nodes=1:ppn=16

#PBS -l walltime=10:00:00

#PBS -q medium

#PBS -N NVT

#PBS -e out.err

#PBS -o out



module load gromacs


cd ${PBS_O_WORKDIR}/;


Can you verify that this command is putting you in the directory you 
think? The environment variable points to the directory from which the 
job was submitted, so if your submission script and input files are not 
in the same directory, you're telling the queue to move into 
$PBS_O_WORKDIR, where it fails to find nvt.tpr
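
A small sanity check along these lines can go directly into the submission
script (an illustrative sketch; the echo and ls lines are debugging additions,
the rest mirrors the script quoted above):

cd "${PBS_O_WORKDIR}" || exit 1   # stop immediately if the directory is wrong
echo "Running in: $(pwd)"         # ends up in the -o output file
ls -l nvt.tpr || exit 1           # confirm the input exists before starting mdrun

module load gromacs
gmx mdrun -deffnm nvt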


-Justin



gmx mdrun -deffnm nvt


Well, based on your hint, I executed a trial mdrun job without using the
queue. It seems to be working well and reading the .tpr file, I had to kill
the job once I made sure that it is reading the tpr file due to the
regulations of using the cluster out of the queue.
So, I should suspect that there is something wrong with executing the
script. It is worthy to note that I used the same script without any kind
of change multiple times and it's working well. This's more confusing for
me now, any hints?

Thanks for the ionization/neutralization advice; surely I did that.

Thanks,
Mohammed

On Thu, Aug 1, 2019 at 9:22 PM Justin Lemkul  wrote:



On 8/1/19 7:04 PM, Mohammed I Sorour wrote:

Hi,

that's what the ls -l prints,





ls -l
total 193120
drwxr-xr-x 2 tuf73544 chem 4096 Jul 26 17:20 amber99sb_dyes.ff
-rw-r--r-- 1 tuf73544 chem 61421681 Jul 31 19:56 em.gro
-rw-r--r-- 1 tuf73544 chem  191 Aug  1 15:04

equilibration_NVT_script

-rw-r--r-- 1 tuf73544 chem 61421681 Jul 31 19:11 full_solv_ions.gro
-rw-r--r-- 1 tuf73544 chem 2164 Jul  8  2016 ions.itp
-rw-r--r-- 1 tuf73544 chem 33866956 Jul 31 19:08 ions.tpr
-rw-r--r-- 1 tuf73544 chem11962 Aug  1 14:57 mdout.mdp
-rw-r--r-- 1 tuf73544 chem11962 Aug  1 14:56 #mdout.mdp.1#
-rw-r--r-- 1 tuf73544 chem 1875 Jul 27 08:22 nvt.mdp
-rw-r--r-- 1 tuf73544 chem 39455312 Aug  1 14:57 nvt.tpr
-rw--- 1 tuf73544 chem 1032 Aug  1 15:04 out
-rw--- 1 tuf73544 chem 2395 Aug  1 15:04 out.err
-rw-r--r-- 1 tuf73544 chem38899 Jul 31 18:58 posre_DNA_chain_A.itp
-rw-r--r-- 1 tuf73544 chem39953 Jul 31 18:58 posre_DNA_chain_B.itp
-rw-r--r-- 1 tuf73544 chem 3215 Aug  1 14:57 residuetypes.dat
-rw-r--r-- 1 tuf73544 chem 4873 Jul 18 13:03 specbond.dat
-rw-r--r-- 1 tuf73544 chem69176 Jul 16 11:40 tip3p.gro
-rw-r--r-- 1 tuf73544 chem   588482 Jul 31 18:58 topol_DNA_chain_A.itp
-rw-r--r-- 1 tuf73544 chem   589283 Jul 31 18:58 topol_DNA_chain_B.itp
-rw--- 1 tuf73544 chem 1264 Jul 31 19:10 topol.top
drwxr-xr-x 3 tuf73544 chem 4096 Aug  1 14:18 trial

Are you executing mdrun interactively, or via some kind of queuing
system with a submission script?

Also you should *not* be running dynamics on a system with such a net
charge. Add salt and neutralize! It's not the source of your problem,
though.

-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==




--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==


Re: [gmx-users] Purpose of repeated improper dihedrals

2019-08-02 Thread Justin Lemkul




On 8/2/19 8:59 AM, Dawid das wrote:

Thank you for the answer. However, I still need to ask more.
So, let's take this time the
NR1  CPH1  CPH2  H
NR1  CPH2  CPH1  H

example. For such quartets,  the improper dihedral parameters are defined
for HSD residue.
Now, the bonding is as follows in part of HSD:
   H
   |
 NR1
   /\
  CPH1CPH2

In my top file generated with pdb2gmx I can see the improper dihedral
entries for both
NR1  CPH1  CPH2  H
NR1  CPH2  CPH1  H
Also after dumping the tpr file I can see IDIHS entries for both
arrangement of atoms for the same molecule.

According to Gromacs manual, the entry for improper dihedral i j k l is
understood as follows
 l
 |
 i
   /   \
 j  k

So maybe I'm blind but I still don't really understand  the purpose of
changing the middle atom types
if there is in fact only one way these four atoms are connected in HSD. The
only reason I see is to
make it more rigid because I have "doubled" improper.


Again, I would not call it a "doubled" parameter, because they are 
clearly different in atom order. This usually implies a difference in 
stereochemistry; in this case, I don't know the history of why the 
impropers are assigned this way, but going back to the original CHARMM 
topologies there are in fact two impropers assigned around ND1, CD2, and 
CE1. The use of two terms with the same parameters but different 
connectivities is probably due to having to avoid some kind of 
asymmetry, though it seems unusual. Regardless, it is a faithful (and 
correct) representation of the force field and we have tested it for 
agreement between CHARMM and GROMACS.
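
For concreteness, the pair of entries under discussion would look roughly like
this in a pdb2gmx-generated [ dihedrals ] (improper, function type 2) section.
The atom names below are inferred from the HSD ring (ND1 has type NR1, CG type
CPH1, CE1 type CPH2, HD1 type H); real topologies list atom numbers rather than
names, so treat this purely as an illustration, not a quote from the topology:

[ dihedrals ]
; improper dihedrals
;   ai    aj    ak    al  funct     ; atom types
   ND1    CG   CE1   HD1      2     ; NR1 CPH1 CPH2 H
   ND1   CE1    CG   HD1      2     ; NR1 CPH2 CPH1 H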


-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==



Re: [gmx-users] mdrun: In option s, required option was not provided and the default file 'topol' does not exist or not accessible and non-integer charges

2019-08-02 Thread John Whittaker
> Hi Justin,
>
> Yes, I'm using a queuing system with a submission script.
>
> #-l nodes=1:ppn=16
>
> #PBS -l walltime=10:00:00
>
> #PBS -qmedium
>
> #PBS -N NVT
>
> #PBS -e out.err
>
> #PBS -o out
>
>
>
> module load gromacs
>
>
> cd ${PBS_O_WORKDIR}/;
>
>
> gmx mdrun -deffnm nvt
>
>
> Well, based on your hint, I executed a trial mdrun job without using the
> queue. It seemed to be working well and reading the .tpr file; I had to
> kill the job once I was sure it was reading the .tpr file, because of the
> rules about using the cluster outside the queue.
> So I suspect that something is wrong with how the script is executed. It
> is worth noting that I have used the same script, without any change,
> multiple times and it worked well. This is even more confusing for me now;
> any hints?

What is the output from the cluster? I'm guessing the output is in the
file called "out" that's created each time the simulation fails.


>
> Thanks for the ionization/neutralization advice; surely I did that.
>
> Thanks,
> Mohammed
>
> On Thu, Aug 1, 2019 at 9:22 PM Justin Lemkul  wrote:
>
>>
>>
>> On 8/1/19 7:04 PM, Mohammed I Sorour wrote:
>> >> Hi,
>> >
>> > that's what the ls -l prints,
>> >
>> >
>> >
>> >
>> >> ls -l
>> >> total 193120
>> >> drwxr-xr-x 2 tuf73544 chem 4096 Jul 26 17:20 amber99sb_dyes.ff
>> >> -rw-r--r-- 1 tuf73544 chem 61421681 Jul 31 19:56 em.gro
>> >> -rw-r--r-- 1 tuf73544 chem  191 Aug  1 15:04
>> equilibration_NVT_script
>> >> -rw-r--r-- 1 tuf73544 chem 61421681 Jul 31 19:11 full_solv_ions.gro
>> >> -rw-r--r-- 1 tuf73544 chem 2164 Jul  8  2016 ions.itp
>> >> -rw-r--r-- 1 tuf73544 chem 33866956 Jul 31 19:08 ions.tpr
>> >> -rw-r--r-- 1 tuf73544 chem11962 Aug  1 14:57 mdout.mdp
>> >> -rw-r--r-- 1 tuf73544 chem11962 Aug  1 14:56 #mdout.mdp.1#
>> >> -rw-r--r-- 1 tuf73544 chem 1875 Jul 27 08:22 nvt.mdp
>> >> -rw-r--r-- 1 tuf73544 chem 39455312 Aug  1 14:57 nvt.tpr
>> >> -rw--- 1 tuf73544 chem 1032 Aug  1 15:04 out
>> >> -rw--- 1 tuf73544 chem 2395 Aug  1 15:04 out.err
>> >> -rw-r--r-- 1 tuf73544 chem38899 Jul 31 18:58
>> posre_DNA_chain_A.itp
>> >> -rw-r--r-- 1 tuf73544 chem39953 Jul 31 18:58
>> posre_DNA_chain_B.itp
>> >> -rw-r--r-- 1 tuf73544 chem 3215 Aug  1 14:57 residuetypes.dat
>> >> -rw-r--r-- 1 tuf73544 chem 4873 Jul 18 13:03 specbond.dat
>> >> -rw-r--r-- 1 tuf73544 chem69176 Jul 16 11:40 tip3p.gro
>> >> -rw-r--r-- 1 tuf73544 chem   588482 Jul 31 18:58
>> topol_DNA_chain_A.itp
>> >> -rw-r--r-- 1 tuf73544 chem   589283 Jul 31 18:58
>> topol_DNA_chain_B.itp
>> >> -rw--- 1 tuf73544 chem 1264 Jul 31 19:10 topol.top
>> >> drwxr-xr-x 3 tuf73544 chem 4096 Aug  1 14:18 trial
>>
>> Are you executing mdrun interactively, or via some kind of queuing
>> system with a submission script?
>>
>> Also you should *not* be running dynamics on a system with such a net
>> charge. Add salt and neutralize! It's not the source of your problem,
>> though.
>>
>> -Justin
>>
>> --
>> ==
>>
>> Justin A. Lemkul, Ph.D.
>> Assistant Professor
>> Office: 301 Fralin Hall
>> Lab: 303 Engel Hall
>>
>> Virginia Tech Department of Biochemistry
>> 340 West Campus Dr.
>> Blacksburg, VA 24061
>>
>> jalem...@vt.edu | (540) 231-3129
>> http://www.thelemkullab.com
>>
>> ==
>>

Re: [gmx-users] mdrun: In option s, required option was not provided and the default file 'topol' does not exist or not accessible and non-integer charges

2019-08-02 Thread Mohammed I Sorour
Hi Justin,

Yes, I'm using a queuing system with a submission script.

#-l nodes=1:ppn=16

#PBS -l walltime=10:00:00

#PBS -qmedium

#PBS -N NVT

#PBS -e out.err

#PBS -o out



module load gromacs


cd ${PBS_O_WORKDIR}/;


gmx mdrun -deffnm nvt


Well, based on your hint, I executed a trial mdrun job without using the
queue. It seemed to be working well and reading the .tpr file; I had to kill
the job once I was sure it was reading the .tpr file, because of the rules
about using the cluster outside the queue.
So I suspect that something is wrong with how the script is executed. It is
worth noting that I have used the same script, without any change, multiple
times and it worked well. This is even more confusing for me now; any hints?

Thanks for the ionization/neutralization advice; surely I did that.

Thanks,
Mohammed

On Thu, Aug 1, 2019 at 9:22 PM Justin Lemkul  wrote:

>
>
> On 8/1/19 7:04 PM, Mohammed I Sorour wrote:
> >> Hi,
> >
> > that's what the ls -l prints,
> >
> >
> >
> >
> >> ls -l
> >> total 193120
> >> drwxr-xr-x 2 tuf73544 chem 4096 Jul 26 17:20 amber99sb_dyes.ff
> >> -rw-r--r-- 1 tuf73544 chem 61421681 Jul 31 19:56 em.gro
> >> -rw-r--r-- 1 tuf73544 chem  191 Aug  1 15:04
> equilibration_NVT_script
> >> -rw-r--r-- 1 tuf73544 chem 61421681 Jul 31 19:11 full_solv_ions.gro
> >> -rw-r--r-- 1 tuf73544 chem 2164 Jul  8  2016 ions.itp
> >> -rw-r--r-- 1 tuf73544 chem 33866956 Jul 31 19:08 ions.tpr
> >> -rw-r--r-- 1 tuf73544 chem11962 Aug  1 14:57 mdout.mdp
> >> -rw-r--r-- 1 tuf73544 chem11962 Aug  1 14:56 #mdout.mdp.1#
> >> -rw-r--r-- 1 tuf73544 chem 1875 Jul 27 08:22 nvt.mdp
> >> -rw-r--r-- 1 tuf73544 chem 39455312 Aug  1 14:57 nvt.tpr
> >> -rw--- 1 tuf73544 chem 1032 Aug  1 15:04 out
> >> -rw--- 1 tuf73544 chem 2395 Aug  1 15:04 out.err
> >> -rw-r--r-- 1 tuf73544 chem38899 Jul 31 18:58 posre_DNA_chain_A.itp
> >> -rw-r--r-- 1 tuf73544 chem39953 Jul 31 18:58 posre_DNA_chain_B.itp
> >> -rw-r--r-- 1 tuf73544 chem 3215 Aug  1 14:57 residuetypes.dat
> >> -rw-r--r-- 1 tuf73544 chem 4873 Jul 18 13:03 specbond.dat
> >> -rw-r--r-- 1 tuf73544 chem69176 Jul 16 11:40 tip3p.gro
> >> -rw-r--r-- 1 tuf73544 chem   588482 Jul 31 18:58 topol_DNA_chain_A.itp
> >> -rw-r--r-- 1 tuf73544 chem   589283 Jul 31 18:58 topol_DNA_chain_B.itp
> >> -rw--- 1 tuf73544 chem 1264 Jul 31 19:10 topol.top
> >> drwxr-xr-x 3 tuf73544 chem 4096 Aug  1 14:18 trial
>
> Are you executing mdrun interactively, or via some kind of queuing
> system with a submission script?
>
> Also you should *not* be running dynamics on a system with such a net
> charge. Add salt and neutralize! It's not the source of your problem,
> though.
>
> -Justin
>
> --
> ==
>
> Justin A. Lemkul, Ph.D.
> Assistant Professor
> Office: 301 Fralin Hall
> Lab: 303 Engel Hall
>
> Virginia Tech Department of Biochemistry
> 340 West Campus Dr.
> Blacksburg, VA 24061
>
> jalem...@vt.edu | (540) 231-3129
> http://www.thelemkullab.com
>
> ==
>

Re: [gmx-users] Purpose of repeated improper dihedrals

2019-08-02 Thread Dawid das
Thank you for the answer. However, I still need to ask more.
So, let's take this time the
NR1  CPH1  CPH2  H
NR1  CPH2  CPH1  H

example. For such quartets,  the improper dihedral parameters are defined
for HSD residue.
Now, the bonding is as follows in part of HSD:
         H
         |
        NR1
       /    \
   CPH1      CPH2

In my top file generated with pdb2gmx I can see the improper dihedral
entries for both
NR1  CPH1  CPH2  H
NR1  CPH2  CPH1  H
Also after dumping the tpr file I can see IDIHS entries for both
arrangement of atoms for the same molecule.

According to Gromacs manual, the entry for improper dihedral i j k l is
understood as follows
         l
         |
         i
       /   \
      j     k

So maybe I'm blind, but I still don't really understand the purpose of
swapping the middle atom types if there is in fact only one way these four
atoms are connected in HSD. The only reason I can see is that it makes the
group more rigid, because the improper is effectively "doubled".

Best regards,
Dawid Grabarek

On Fri, Aug 2, 2019 at 1:40 PM Justin Lemkul  wrote:

>
>
> On 8/2/19 2:38 AM, Dawid das wrote:
> > Dear All,
> >
> > Why are some of the improper parameters in CHARMM27 FF repeated with
> > the middle atom types in a different order, as in, e.g.
> >
> > HR1  NR1  NR2  CPH2   2   0.00   4.184
> > HR1  NR2  NR1  CPH2   2   0.00   4.184
>
> These aren't repeated parameters. They are different parameters
> corresponding to different connectivity.
>
> -Justin
>
> > while some are not, e.g.
> > HR3  CPH1  NR3  CPH1   2   0.00   8.368
> >
> > Best regards,
> > Dawid Grabarek
>
> --
> ==
>
> Justin A. Lemkul, Ph.D.
> Assistant Professor
> Office: 301 Fralin Hall
> Lab: 303 Engel Hall
>
> Virginia Tech Department of Biochemistry
> 340 West Campus Dr.
> Blacksburg, VA 24061
>
> jalem...@vt.edu | (540) 231-3129
> http://www.thelemkullab.com
>
> ==
>

Re: [gmx-users] Purpose of repeated improper dihedrals

2019-08-02 Thread Justin Lemkul




On 8/2/19 2:38 AM, Dawid das wrote:

Dear All,

Why are some of the improper parameters in CHARMM27 FF repeated with
the middle atom types in a different order, as in, e.g.

HR1  NR1  NR2  CPH2   2   0.00   4.184
HR1  NR2  NR1  CPH2   2   0.00   4.184


These aren't repeated parameters. They are different parameters 
corresponding to different connectivity.


-Justin


while some are not, e.g.
HR3  CPH1  NR3  CPH1   2   0.00   8.368

Best regards,
Dawid Grabarek


--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==



Re: [gmx-users] Gromacs-5.1.4 with CHARMM36 March 2019 RNA Residue Fatal Error

2019-08-02 Thread Justin Lemkul




On 8/2/19 12:22 AM, Joseph,Newlyn wrote:

Hello,


I'm running into the following error when trying to pdb2gmx my PDB file.


Program gmx pdb2gmx, VERSION 5.1.4
Source code file: 
/gpfs/apps/hpc.rhel7/Packages/Apps/Gromacs/5.1.4/Dist_514/gromacs-5.1.4/src/gromacs/gmxpreprocess/resall.c,
 line: 645

Fatal error:
Residue 'C' not found in residue topology database
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors?


I presume I'm naming my residues incorrectly, but upon closer inspection of the 
merged.rtp file within the forcefield, I see no section for RNA residues. I'm 
attempting to simulate an RNA that has the following as the first couple of
lines in the PDB:


In CHARMM, both DNA and RNA are named ADE, CYT, GUA, THY/URA and are 
generated as RNA. One then patches the RNA residue to become DNA by 
removing the 2'-OH. We can't do this in GROMACS, so there are fixed 
residue names.


RNA: ADE, CYT, GUA, URA
DNA: DA, DC, DG, DT

These are all in merged.rtp.
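
As a rough sketch of that renaming step (just an illustration, not an
official tool; it assumes the residue name sits in columns 18-20 of a
standard PDB file, so adjust if your file is formatted differently):

awk 'BEGIN { m["A"]="ADE"; m["C"]="CYT"; m["G"]="GUA"; m["U"]="URA" }
     /^(ATOM|HETATM)/ {
         rn = substr($0, 18, 3); gsub(/ /, "", rn)
         if (rn in m)
             $0 = substr($0, 1, 17) sprintf("%-3s", m[rn]) substr($0, 21)
     }
     { print }' rna.pdb > rna_renamed.pdb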

-Justin



REMARK  GENERATED BY CHARMM-GUI (HTTP://WWW.CHARMM-GUI.ORG) V2.0 ON OCT, 26. 
2018. JOB
REMARK  READ PDB, MANIPULATE STRUCTURE IF NEEDED, AND GENERATE TOPOLOGY FILE
REMARK   DATE:10/27/18  0:52: 0  CREATED BY USER: apache
ATOM  1  H5T   C A   1 -29.997 -20.428   3.250  1.00  0.00   H
ATOM  2  O5'   C A   1 -30.685 -20.928   3.695  1.00  0.00   O
ATOM  3  C5'   C A   1 -30.499 -22.286   3.303  1.00  0.00   C
ATOM  4  H5'   C A   1 -30.674 -22.932   4.190  1.00  0.00   H
ATOM  5 H5''   C A   1 -29.451 -22.408   2.957  1.00  0.00   H
ATOM  6  C4'   C A   1 -31.442 -22.699   2.192  1.00  0.00   C
ATOM  7  H4'   C A   1 -31.693 -23.777   2.284  1.00  0.00   H
ATOM  8  O4'   C A   1 -32.682 -21.940   2.275  1.00  0.00   O
ATOM  9  C1'   C A   1 -33.141 -21.611   0.975  1.00  0.00   C
ATOM 10  H1'   C A   1 -34.185 -21.975   0.869  1.00  0.00   H


Any help or suggestions?


Newlyn Joseph, M.S.
M.D. Candidate, Class of 2023
University of Connecticut School of Medicine
nejos...@uchc.edu | new.josep...@gmail.com
(203) 584-6402
sent from Outlook Web App


--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==



[gmx-users] structural clashes make the structure look like broken?

2019-08-02 Thread sunyeping
Dear all,

I am doing MD of a protein-DNA complex. There are some structural clashes 
between the DNA and the protein, but both of them look complete (Please see the 
figure at 
https://drive.google.com/file/d/1K7xiv4Lf7UEggNIykWxXX2WRtclJxv2q/view?usp=sharing.
 The clashes are indicated by the red arrows). However, after I prepared the
gro file with pdb2gmx, ran energy minimization, and did NVT and NPT
equilibration, I found that one of the DNA chains became broken (Please see the
figure at 
https://drive.google.com/file/d/1FtGPWjhnEqbWchmQaWEA3YpSqWca5Eyd/view?usp=sharing.
 The broken site of the DNA is indicated by the red circle). 

I think the break in the DNA is caused by the structural clash between the DNA
and the protein, and that the clashing atoms cannot be shown in the
visualization software (such as PyMOL and VMD).

I am wondering whether the structural clashes will eventually disappear in a
prolonged simulation or whether they will persist. If the MD simulation will not
remove the structural clashes, I should give up on it. What do you think? Do you
have any ideas for removing the structural clashes in the initial structure?
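
(For reference only, not from the original mail: one common way to relax
bad contacts before equilibration is a more cautious steepest-descent
minimization; the .mdp values below are just illustrative.)

integrator  = steep
emtol       = 100.0      ; convergence criterion, kJ mol^-1 nm^-1
emstep      = 0.001      ; small initial step size to handle close contacts
nsteps      = 50000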

Best regards.


Re: [gmx-users] simulation on 2 gpus

2019-08-02 Thread Stefano Guglielmo
 Kevin, Mark,
thanks for sharing advices and experience.
I am facing some strange behaviour trying to run with the two GPUs: there
are some combinations that "simply" make the system crash (the workstation
turns off after a few seconds of running); in particular, the following run:

gmx mdrun -deffnm run (-gpu_id 01) -pin on

"...
Using 16 MPI threads
Using 4 OpenMP threads per tMPI thread
On host pcpharm018 2 GPUs selected for this run.
Mapping of GPU IDs to the 16 GPU tasks in the 16 ranks on this node:

PP:0,PP:0,PP:0,PP:0,PP:0,PP:0,PP:0,PP:0,PP:1,PP:1,PP:1,PP:1,PP:1,PP:1,PP:1,PP:1
PP tasks will do (non-perturbed) short-ranged and most bonded interactions
on the GPU
Pinning threads with an auto-selected logical core stride of 1"

Running the following command works without crashing, with 1 tmpi and 32
omp threads on 1 gpu only:
gmx mdrun -deffnm run -gpu_id 01 -pin on -pinstride 1 -pinoffset 32 -ntmpi
1.
The most efficient way to run a single simulation seems to be:
gmx mdrun -deffnm run -gpu_id 0 -ntmpi 1 -ntomp 28
which makes 86 ns/day for a system of about 100K atoms (1000 res. protein
with membrane and water).

I also tried to run two independent simulations, and again with the
following commands the system crashes:
gmx mdrun -deffnm run1 -gpu_id 1 -pin on -pinstride 1 -pinoffset 32 -ntomp
32 -ntmpi 1
gmx mdrun -deffnm run0 -gpu_id 0 -pin on -pinstride 1 -pinoffset 0 -ntomp
32 -ntmpi 1

"...
Using 1 MPI thread
Using 32 OpenMP threads
1 GPU selected for this run.
Mapping of GPU IDs to the 2 GPU tasks in the 1 rank on this node:
  PP:1,PME:1
PP tasks will do (non-perturbed) short-ranged interactions on the GPU
PME tasks will do all aspects on the GPU
Applying core pinning offset 32."


Two runs can be carried out with the command:
gmx mdrun -deffnm run1 -gpu_id 1 -pin on -pinstride 1 -pinoffset 14 -ntmpi
1 -ntomp 28
gmx mdrun -deffnm run0 -gpu_id 0 -pin on -pinstride 1 -pinoffset 0 -ntmpi 1
-ntomp 28
or
gmx mdrun -deffnm run1 -gpu_id 1 -pin on -ntmpi 1 -ntomp 28
gmx mdrun -deffnm run0 -gpu_id 0 -pin on -ntmpi 1 -ntomp 28
In both cases there was a substantial degradation of performance, about
35-40 ns/day for the same system, with a GPU usage of 25-30%, compared to
50-55% for the single run on a single GPU, and well below the power cap.

I am also wondering if there is a way to "explicitly" pin threads to CPU
cores: if I understand the lscpu output (reported below) correctly, the
cores/threads are organized in non-consecutive blocks of eight threads; in
order to optimize performance, should this layout be respected when pinning?

"...
NUMA node0 CPU(s): 0-7,32-39
NUMA node1 CPU(s): 16-23,48-55
NUMA node2 CPU(s): 8-15,40-47
NUMA node3 CPU(s): 24-31,56-63
..."

Thanks
Stefano
-- 
Stefano GUGLIELMO PhD
Assistant Professor of Medicinal Chemistry
Department of Drug Science and Technology
Via P. Giuria 9
10125 Turin, ITALY
ph. +39 (0)11 6707178



On Fri, Jul 26, 2019 at 3:00 PM Kevin Boyd  wrote:

> Sure - you can do it 2 ways with normal Gromacs. Either run the simulations
> in separate terminals, or use ampersands to run them in the background of 1
> terminal.
>
> I'll give a concrete example for your threadripper, using 32 of your cores,
> so that you could run some other computation on the other 32. I typically
> make a bash variable with all the common arguments.
>
> Given tprs run1.tpr ...run4.tpr
>
> gmx_common="gmx mdrun -ntomp 8 -ntmpi 1 -pme gpu -nb gpu -pin on -pinstride
> 1"
> $gmx_common -deffnm run1 -pinoffset 32 -gputasks 00 &
> $gmx_common -deffnm run2 -pinoffset 40 -gputasks 00 &
> $gmx_common -deffnm run3 -pinoffset 48 -gputasks 11 &
> $gmx_common -deffnm run4 -pinoffset 56 -gputasks 11
>
> So run1 will run on cores 32-39, on GPU 0, run2 on cores 40-47 on the same
> GPU, and the other 2 runs will use GPU 1. Note the ampersands on the first
> 3 runs, so they'll go off in the background
>
> I should also have mentioned one peculiarity with running with -ntmpi 1 and
> -pme gpu, in that even though there's now only one rank (with nonbonded and
> PME both running on it), you still need 2 gpu tasks for that one rank, one
> for each type of interaction.
>
> As for multidir, I forget what troubles I ran into exactly, but I was
> unable to run some subset of simulations. Anyhow if you aren't running on a
> cluster, I see no reason to compile with MPI and have to use srun or slurm,
> and need to use gmx_mpi rather than gmx. The built-in thread-mpi gives you
> up to 64 threads, and can have a minor (<5% in my experience) performance
> benefit over MPI.
>
> Kevin
>
> On Fri, Jul 26, 2019 at 8:21 AM Gregory Man Kai Poon 
> wrote:
>
> > Hi Kevin,
> > Thanks for your very useful post.  Could you give a few command 

[gmx-users] Purpose of repeated improper dihedrals

2019-08-02 Thread Dawid das
Dear All,

Why are some of the improper parameters in CHARMM27 FF repeated with
the middle atom types in a different order, as in, e.g.

HR1  NR1  NR2  CPH2   2   0.00   4.184
HR1  NR2  NR1  CPH2   2   0.00   4.184

while some are not, e.g.
HR3  CPH1  NR3  CPH1   2   0.00   8.368

Best regards,
Dawid Grabarek