Re: [gmx-users] NTMPI / NTOMP combination: 10 threads not "reasonable" for GROMACS?

2019-05-10 Thread Téletchéa Stéphane

On 20/03/2019 at 22:42, Stéphane Téletchéa wrote:

Dear all,




Those CPUs are 10 cores + HT (so not 4 or 6). Is it only a warning?


Dear all,

Any answer from the core developers on this? Should I file a bug?
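
For context, hedged examples of explicit thread-count choices on one such 10-core + HT CPU (20 hardware threads); whether mdrun warns depends on the build and version, so treat these as illustrations only:

gmx mdrun -deffnm md                             # let mdrun pick (usually safest)
gmx mdrun -deffnm md -ntmpi 2 -ntomp 10 -pin on  # 2 ranks x 10 threads = all 20 hardware threads
gmx mdrun -deffnm md -ntmpi 1 -ntomp 10 -pin on  # one rank on the 10 physical cores only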

Best,

Stéphane

--
Assistant Professor in BioInformatics, UFIP, UMR 6286 CNRS, Team Protein 
Design In Silico
UFR Sciences et Techniques, 2, rue de la Houssinière, Bât. 25, 44322 
Nantes cedex 03, France

Tél : +33 251 125 636 / Fax : +33 251 125 632
http://www.ufip.univ-nantes.fr/ - http://www.steletch.org
--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] Help on MD performance, GPU has less load than CPU.

2017-07-13 Thread Téletchéa Stéphane

On 12/07/2017 at 18:15, Mark Abraham wrote:

Hi,

Sure. But who has data that shows that e.g. a free-energy calculation with
the defaults produces lower quality observables than you get with the
defaults?

Mark


Hi,

As defaults are defaults ... who knows :-) Getting numbers to back these 
assumptions is hard, and probably nobody wants to do this on a large 
scale ... But I'm too close to the holidays to argue the point right now!


Stéphane

--
Assistant Professor in BioInformatics, UFIP, UMR 6286 CNRS, Team Protein 
Design In Silico
UFR Sciences et Techniques, 2, rue de la Houssinière, Bât. 25, 44322 
Nantes cedex 03, France

Tél : +33 251 125 636 / Fax : +33 251 125 632
http://www.ufip.univ-nantes.fr/ - http://www.steletch.org

Re: [gmx-users] Help on MD performance, GPU has less load than CPU.

2017-07-12 Thread Téletchéa Stéphane

On 11/07/2017 at 15:24, Mark Abraham wrote:

Guessing wildly, the cost of your simulation is probably at least double
what the defaults would give, and for that cost, I'd want to know why.


Estimated colleague,

Since this is a wild guess, I'll add some guesses myself. I remember, 
"some time" back, using a tighter Ewald tolerance for AMBER simulations 
(around AMBER 4/5/6 ...), and I presume it was more common at the time. 
This may also be linked to the fact that AMBER uses a short 8 Å cut-off 
for electrostatics ...

Someone apparently "ill" at the time already raised this point in 2009:

http://gromacs.org_gmx-users.maillist.sys.kth.narkive.com/vTjpMdwU/gromacs-preformance-versus-amber

From memory, I remembered using 1e-6 for the Ewald tolerance in AMBER, 
and this is mentioned here:


http://ambermd.org/Questions/ewald.html

... apparently linked to DNA simulations, as found in JACS 117, 4193 (1995).

In short, this value may keep coming back for "historical" reasons 
(and misuse, of course).
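
For reference, the knob in question on the GROMACS side, as a minimal .mdp sketch: ewald-rtol defaults to 1e-5 in GROMACS, and tightening it to 1e-6 (the old AMBER habit) makes runs more expensive at the same cut-off.

coulombtype = PME
rcoulomb    = 1.0    ; nm
ewald-rtol  = 1e-5   ; GROMACS default; 1e-6 mimics the old AMBER habit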


Others may have additional comments :-)

Best,

Stéphane


--
Assistant Professor in BioInformatics, UFIP, UMR 6286 CNRS, Team Protein 
Design In Silico
UFR Sciences et Techniques, 2, rue de la Houssinière, Bât. 25, 44322 
Nantes cedex 03, France

Tél : +33 251 125 636 / Fax : +33 251 125 632
http://www.ufip.univ-nantes.fr/ - http://www.steletch.org

Re: [gmx-users] Gromacs on GPU

2017-06-16 Thread Téletchéa Stéphane

On 16/06/2017 at 20:07, Mohsen Ramezanpour wrote:

Thanks Justin.

So, can we say that simulations on CPU and GPU (as long as we use the same
version of GROMACS) are compatible?

If yes, is it okay to continue a simulation which was run on CPU (say,
to 100 ns) up to 500 ns using GPU?
Or should I start from t=0 with GPU?

It is important in my case, as the allocations on supercomputers change
from CPU to GPU. So I am not sure if I should start all over again or if
it is fine to continue.

Cheers


Dear Mohsen,

What I do daily is run on my "small" CPU-only workstation, then continue 
elsewhere once equilibration and the first production steps have worked well.

Go for GPU for a (big) boost in performance once those primary steps are 
OK :-)
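
A minimal sketch of such a continuation, assuming a checkpointed run and GROMACS 5.x-style commands (file names are placeholders):

gmx convert-tpr -s md.tpr -extend 400000 -o md_ext.tpr   # extend by 400 ns (times in ps)
gmx mdrun -s md_ext.tpr -cpi md.cpt                      # continue from the checkpoint on the GPU node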


Best,

Stéphane

--
Assistant Professor in BioInformatics, UFIP, UMR 6286 CNRS, Team Protein 
Design In Silico
UFR Sciences et Techniques, 2, rue de la Houssinière, Bât. 25, 44322 
Nantes cedex 03, France

Tél : +33 251 125 636 / Fax : +33 251 125 632
http://www.ufip.univ-nantes.fr/ - http://www.steletch.org

[gmx-users] Performance advice for newest Pascal architecture

2017-03-09 Thread Téletchéa Stéphane

Dear colleagues,

We are planning to invest in nodes for GROMACS-specific calculations, and 
trying to get the best for our bucks (as everyone is).


For now our decision comes down to nodes with the following 
configuration:


2 * Xeon E5-2630 v4
1 P100 or 2 * P5000 or 2 * K40
Cluster node interconnection: Intel OmniPath

Our systems will range from 50k to 200k atoms most of the time, 
using AMBER-99SB-ILDN and GROMACS 2016.1 or above.


I am aware of various benchmarks and recommendations like "Best Bang for 
Your Buck", but is there any reference (maybe internal) for the latest 
Pascal architecture, or any general advice for or against it?


Thanks a lot in advance for the feedback. If we are able to benchmark 
our systems on the different setups above, we'll share the results as 
far as the upstream vendor allows.


Stéphane

--
Assistant Professor in BioInformatics, UFIP, UMR 6286 CNRS, Team Protein 
Design In Silico
UFR Sciences et Techniques, 2, rue de la Houssinière, Bât. 25, 44322 
Nantes cedex 03, France

Tél : +33 251 125 636 / Fax : +33 251 125 632
http://www.ufip.univ-nantes.fr/ - http://www.steletch.org

Re: [gmx-users] topology

2017-02-27 Thread Téletchéa Stéphane

On 27/02/2017 at 13:20, Dayhoff, Guy wrote:

I have made changes to my topol.top file.

Now with my command for em.mdp I am getting 19 errors stating "NO DEFAULT
BOND TYPE, ANGLE TYPE, ETC."


Dear Guy,

Just to be sure: pay attention to the order of molecules in your top 
file. If your protein comes before the ligand in the gro file (then 
waters, ions, etc.), the includes in your top file (and the 
[ molecules ] section) must follow exactly the same order, for instance:


; Include generic force field
#include "forcefield.itp"

; Include ligand-specific force-field
#include "ligand.itp"

HTH,

Stéphane

--
Assistant Professor in BioInformatics, UFIP, UMR 6286 CNRS, Team Protein 
Design In Silico
UFR Sciences et Techniques, 2, rue de la Houssinière, Bât. 25, 44322 
Nantes cedex 03, France

Tél : +33 251 125 636 / Fax : +33 251 125 632
http://www.ufip.univ-nantes.fr/ - http://www.steletch.org

Re: [gmx-users] GROMACS 2016 release candidate available!

2016-07-13 Thread Téletchéa Stéphane

On 11/07/2016 at 16:22, Mark Abraham wrote:

  Please do build it and try it out on your important workload - if
it's broken when doing essential dynamics with velocity-verlet, on an AMD
GPU running OpenCL, with LJ-PME, you might be the only person who can help
us learn that!


Dear Mark and GROMACS developers,

I have downloaded and built it fine with (and without) CUDA.

So far I have tested on my machine and some tests fail, so I'll test 
again next week. The benchmarks are from 
http://www.gromacs.org/GPU_acceleration (RNase, Villin, ADH).


The failing tests are the non-CUDA gmx runs (the CUDA runs seem fine):

(gmx mdrun)  rnase_dodec_vsites pme_verlet_vsites.mdp  [no Performance line: run failed]
(mdrun_cuda) rnase_dodec_vsites pme_verlet_vsites.mdp  Performance: 131.732 0.182

(gmx mdrun)  rnase_dodec_vsites rf_verlet_vsites.mdp   [no Performance line: run failed]
(mdrun_cuda) rnase_dodec_vsites rf_verlet_vsites.mdp   Performance: 154.671 0.155
(gmx mdrun)  villin_vsites pme_verlet_vsites.mdp       [no Performance line: run failed]
(mdrun_cuda) villin_vsites pme_verlet_vsites.mdp       Performance: 351.318 0.068
(gmx mdrun)  villin_vsites rf_verlet_vsites.mdp        [no Performance line: run failed]
(mdrun_cuda) villin_vsites rf_verlet_vsites.mdp        Performance: 405.076 0.059

For the other tests it works OK (in each pair, the gmx run comes first and the mdrun_cuda run second; the two numbers are ns/day and hours/ns):
adh_cubic         pme_verlet_vsites.mdp  Performance:   8.349  2.875
adh_cubic         pme_verlet_vsites.mdp  Performance:  10.274  2.336
adh_cubic         rf_verlet_vsites.mdp   Performance:   8.505  2.822
adh_cubic         rf_verlet_vsites.mdp   Performance:  10.378  2.313
adh_cubic_vsites  pme_verlet_vsites.mdp  Performance:  18.047  1.330
adh_cubic_vsites  pme_verlet_vsites.mdp  Performance:  23.349  1.028
adh_cubic_vsites  rf_verlet_vsites.mdp   Performance:  22.428  1.070
adh_cubic_vsites  rf_verlet_vsites.mdp   Performance:  21.743  1.104
adh_dodec         pme_verlet_vsites.mdp  Performance:   9.643  2.489
adh_dodec         pme_verlet_vsites.mdp  Performance:  13.856  1.732
adh_dodec         rf_verlet_vsites.mdp   Performance:   9.355  2.565
adh_dodec         rf_verlet_vsites.mdp   Performance:  12.181  1.970
adh_dodec_vsites  pme_verlet_vsites.mdp  Performance:  19.975  1.201
adh_dodec_vsites  pme_verlet_vsites.mdp  Performance:  31.170  0.770
adh_dodec_vsites  rf_verlet_vsites.mdp   Performance:  27.170  0.883
adh_dodec_vsites  rf_verlet_vsites.mdp   Performance:  29.734  0.807
rnase_cubic       pme_verlet_vsites.mdp  Performance:  43.722  0.549
rnase_cubic       pme_verlet_vsites.mdp  Performance:  52.268  0.459
rnase_cubic       rf_verlet_vsites.mdp   Performance:  41.776  0.574
rnase_cubic       rf_verlet_vsites.mdp   Performance:  52.487  0.457
rnase_dodec       pme_verlet_vsites.mdp  Performance:  50.300  0.477
rnase_dodec       pme_verlet_vsites.mdp  Performance:  70.137  0.342
rnase_dodec       rf_verlet_vsites.mdp   Performance:  50.141  0.479
rnase_dodec       rf_verlet_vsites.mdp   Performance:  70.940  0.338

(this is a simple grep for 'Performance' in md.log).

The "bench" is launched using the following script:

#!/bin/bash

gmxver=2.016-rc1
rm -f "bench-$gmxver"

module load "gromacs/$gmxver"

for d in adh_cubic adh_cubic_vsites adh_dodec adh_dodec_vsites \
         rnase_cubic rnase_dodec rnase_dodec_vsites villin_vsites; do
    for p in pme_verlet_vsites.mdp rf_verlet_vsites.mdp; do
        cd "$d"
        gmx grompp -f "$p"
        # CPU run first, then the CUDA run; each mdrun rewrites md.log
        gmx mdrun -pin on
        echo "$d $p $(grep Performance md.log)" >> "../bench-$gmxver"
        mdrun_cuda -pin on
        echo "$d $p $(grep Performance md.log)" >> "../bench-$gmxver"
        cd ..
    done
done

I'll check on other systems, and again on this system, later on, but if 
this rings a bell I thought it would be helpful.


More information next Monday, probably.

Best,

Stéphane



--
Assistant Professor in BioInformatics, UFIP, UMR 6286 CNRS, Team Protein Design 
In Silico
UFR Sciences et Techniques, 2, rue de la Houssinière, Bât. 25, 44322 Nantes 
cedex 03, France
Tél : +33 251 125 636 / Fax : +33 251 125 632
http://www.ufip.univ-nantes.fr/ - http://www.steletch.org


Re: [gmx-users] gromacs 5.1.2 mdrun can't detect GPU

2016-04-22 Thread Téletchéa Stéphane

On 22/04/2016 at 14:33, Szilárd Páll wrote:

Additionally, I suggest amending the post to note that the GDK and NVML
that's picked up from it is optional and it is only useful with
Quadro/Tesla cards.


Dear Szilárd,

I hope I'll find time to produce a proper bug report, but using the 
Ubuntu binaries with the Ubuntu CUDA runtime at the exact same version, 
352.xx (same for CUDA and the NVIDIA driver), GROMACS failed to detect 
the GPU. Up to now I have not traced down where the problem lies.


I think this was already stated in the previous entry (section at the 
end: "To sum up: avoid Ubuntu packages and install upstream drivers and 
dependencies.")

But since it seems it was not clear enough, I have corrected some typos 
and added the disclaimer as requested. I hope the GROMACS developers do 
not feel "angry"; that was not my goal. I am a long-term GNU/Linux 
contributor and keen on exhaustive bug reports. The article was more a 
short recipe to get things working than a complaint.

I hope the clarifications on the web site are better now, and that they 
may help others get a properly working installation ...

Best,

Stéphane

--
Assistant Professor in BioInformatics, UFIP, UMR 6286 CNRS, Team Protein Design 
In Silico
UFR Sciences et Techniques, 2, rue de la Houssinière, Bât. 25, 44322 Nantes 
cedex 03, France
Tél : +33 251 125 636 / Fax : +33 251 125 632
http://www.ufip.univ-nantes.fr/ - http://www.steletch.org


Re: [gmx-users] gromacs 5.1.2 mdrun can't detect GPU

2016-04-22 Thread Téletchéa Stéphane

On 22/04/2016 at 01:50, treinz wrote:

cudaGetDeviceCount returned 35
-> CUDA driver version is insufficient for CUDA runtime version
Result = FAIL


Unfortunately, on any machine installed by now the setup is as 
described, and it works.

There is one machine where this setup was not updated, and YES, I get 
the same result as you:


./deviceQuery
./deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

cudaGetDeviceCount returned 35
-> CUDA driver version is insufficient for CUDA runtime version
Result = FAIL


On this machine the driver is rather old:

dpkg -l | grep nvidia
ii  nvidia-331             340.96-0ubuntu0.14.04.1  amd64  Transitional package for nvidia-331
ii  nvidia-340             340.96-0ubuntu0.14.04.1  amd64  NVIDIA binary driver - version 340.96
ii  nvidia-340-uvm         340.96-0ubuntu0.14.04.1  amd64  Transitional package for nvidia-340
ii  nvidia-libopencl1-340  340.96-0ubuntu0.14.04.1  amd64  NVIDIA OpenCL Driver and ICD Loader library
ii  nvidia-opencl-icd-340  340.96-0ubuntu0.14.04.1  amd64  NVIDIA OpenCL ICD
ii  nvidia-prime           0.6.2                    amd64  Tools to enable NVIDIA's Prime
ii  nvidia-settings        331.20-0ubuntu8          amd64  Tool for configuring the NVIDIA graphics driver


But I wrote my "tutorial" because the latest package available in Ubuntu 
is "nvidia-352 - NVIDIA binary driver - version 352.63", while CUDA 
apparently expects 352.79: a minor-revision mismatch in the driver is 
sufficient to have CUDA + GROMACS failing, and apparently this comes 
from CUDA :-)

So you have the answer: use the exact driver version your CUDA runtime 
expects, and nothing else ...
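
A quick consistency check along those lines (a sketch; deviceQuery comes from the CUDA samples):

cat /proc/driver/nvidia/version   # driver as seen by the kernel module
nvidia-smi | head -n 3            # driver as seen by the management tool
./deviceQuery | grep -i version   # driver vs runtime as seen by CUDA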


Best,

Stéphane




--
Assistant Professor in BioInformatics, UFIP, UMR 6286 CNRS, Team Protein Design 
In Silico
UFR Sciences et Techniques, 2, rue de la Houssinière, Bât. 25, 44322 Nantes 
cedex 03, France
Tél : +33 251 125 636 / Fax : +33 251 125 632
http://www.ufip.univ-nantes.fr/ - http://www.steletch.org


Re: [gmx-users] sugar puckering

2016-04-21 Thread Téletchéa Stéphane

On 21/04/2016 at 04:24, bharat gupta wrote:

Dear Gmx Users,

I am interested in calculating Cremer-Pople parameters for a trisaccharide
ligand, docked with a protein, from its simulation. I found one tool,
g_puckering, for calculating the parameters, but it was written for Gromacs
version 4.0.x and I am using version 5.0.4. I am not able to compile this
tool for my current version of gromacs. So can anybody tell me how I can
calculate such parameters in gromacs?



1 - contact the authors
2 - adjust the code (I'm interested if you do it ...)
3 - compile it against the version it was written for; if it only parses 
xtc files, the GROMACS version should not matter a lot ...
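
For option 3, a hedged sketch; the g_puckering flags are assumptions (it is a contributed tool whose exact interface I have not checked), and the install path is a placeholder:

# xtc files are version-agnostic, so an old side-by-side build can read them;
# flags assumed to follow the usual -f/-s/-n conventions:
/opt/gromacs-4.0.7/bin/g_puckering -f traj.xtc -s topol.tpr -n index.ndx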


Best,

Stéphane

--
Assistant Professor in BioInformatics, UFIP, UMR 6286 CNRS, Team Protein Design 
In Silico
UFR Sciences et Techniques, 2, rue de la Houssinière, Bât. 25, 44322 Nantes 
cedex 03, France
Tél : +33 251 125 636 / Fax : +33 251 125 632
http://www.ufip.univ-nantes.fr/ - http://www.steletch.org



Re: [gmx-users] creating representative structures

2016-03-20 Thread Téletchéa Stéphane

On 16/03/2016 at 16:54, Shyno Mathew wrote:

I wrote a tcl script to do cluster analysis, using the GROMOS method. My
results are slightly different from the g_cluster results. I see the
difference is coming from the RMSD values. For example, with g_cluster, "The
RMSD ranges from 0.149614 to 0.220387 nm"


However with the tcl script, and after converting RMSD from Å to nm, the
value ranges from 0.173347 to 0.234409 nm! I am using the same selection in
both cases for RMSD calculations.


Dear Shyno,

As Tsjerk mentioned earlier, you may have different selections, but
be very careful when you compare floating-point numbers, especially
in interpreted languages, since the printed precision is often not
very "reliable".
See the discussions at http://wiki.tcl.tk/11969 and 
http://wiki.tcl.tk/879, for instance ...


Either way you have to test :-)

Best,

Stéphane

--
Team Protein Design In Silico
UFIP, UMR 6286 CNRS, UFR Sciences et Techniques,
2, rue de la Houssinière, Bât. 25, Nantes cedex 03, France
Tél : +33 251 125 636 - Fax : +33 251 125 632
http://www.ufip.univ-nantes.fr/ - http://www.steletch.org


Re: [gmx-users] creating representative structures

2016-02-09 Thread Téletchéa Stéphane

On 08/02/2016 at 18:23, Shyno Mathew wrote:

I have a few questions regarding creating representative structures.


Dear Shyno,

For what it's worth, I have been playing with gmx cluster settings lately,
and found the "Jarvis-Patrick" method (default parameters) to give
me more accurate and "independent" snapshots of a 100 ns-long
simulation (150,000 atoms).

The bad news is that it took nearly 5 days to compute but, considering 
the results, this is really what I was looking for.
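
For reference, a sketch of the command I mean, with GROMACS 5.x syntax and placeholder file names (gmx cluster asks interactively for two index groups, fit and output):

gmx cluster -f traj.xtc -s topol.tpr -method jarvis-patrick \
            -cl clusters.pdb -g cluster.log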

HTH,

Stéphane

--
Assistant Professor in BioInformatics, UFIP, UMR 6286 CNRS, Team Protein Design 
In Silico
UFR Sciences et Techniques, 2, rue de la Houssinière, Bât. 25, 44322 Nantes 
cedex 03, France
Tél : +33 251 125 636 / Fax : +33 251 125 632
http://www.ufip.univ-nantes.fr/ - http://www.steletch.org


Re: [gmx-users] Short how-to for installing GROMACS with CUDA ...

2015-12-17 Thread Téletchéa Stéphane

On 17/12/2015 at 18:06, Szilárd Páll wrote:

PS: One more thing. If the CUDA SDK samples linked against the CUDA runtime
library (libcudart) did really work and gmx/mdrun did not (assuming the
same driver/kernel module), the only reasonable explanation I can think of
is that the two were using different runtimes. Note that GROMACS sets
RPATH, so it does not need nor is it affected by LD_LIBRARY_PATH tinkering
while the SDK samples need LD_LIBRARY_PATH to point to the correct
libcudart!


OK, I'll open a bug report on it. As for the purging of configuration 
files, of course I paid a lot of attention to this; again, I went 
straight to the point so that newcomers have an "executive résumé". 
But this seems strange to me too, so we'll continue on the bug report :-)

That will certainly be in January now, when the machine is no longer 
busy :-)


Best,

Stéphane

--
Assistant Professor, UFIP, UMR 6286 CNRS, Team Protein Design In Silico
UFR Sciences et Techniques, 2, rue de la Houssinière, Bât. 25, 44322 Nantes 
cedex 03, France
Tél : +33 251 125 636 / Fax : +33 251 125 632
http://www.ufip.univ-nantes.fr/ - http://www.steletch.org


[gmx-users] Short how-to for installing GROMACS with CUDA ...

2015-12-16 Thread Téletchéa Stéphane

Dear all,

I have struggled recently to get GROMACS aware of CUDA capabilities.
After searching for a "while" (one afternoon), I removed the Ubuntu-provided
drivers and packages (which worked in the past) and installed everything
from scratch. It seems this comes both from NVIDIA requiring the
"GPU Deployment Kit" in addition to the CUDA toolkit, and from GROMACS
only warning about the missing NVML (instead of failing when GPU 
compilation is requested).


In short, I have put the executive commands there:
http://www.steletch.org/spip.php?article89

As several messages on the list are about this "problem", I thought it 
would be helpful to all.

Best,

Stéphane

--
Assistant Professor, UFIP, UMR 6286 CNRS, Team Protein Design In Silico
UFR Sciences et Techniques, 2, rue de la Houssinière, Bât. 25, 44322 Nantes 
cedex 03, France
Tél : +33 251 125 636 / Fax : +33 251 125 632
http://www.ufip.univ-nantes.fr/ - http://www.steletch.org


Re: [gmx-users] How can I split .xtc file to two part?

2015-11-23 Thread Téletchéa Stéphane

On 23/11/2015 at 15:56, Hassan Aaryapour wrote:

for resolving this
problem, can I split the .xtc file into two parts? How?

Dear Hassan,

Unless you have a specific interest in the water molecules,
you should probably reduce the trajectory to the molecule of interest
by making a subselection (only the protein, for instance),
and then visualize that in VMD. You will get rid of the memory limit, 
since 16 GB is already large enough for most systems :-)
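
A minimal sketch, assuming a 5.x-style gmx driver (plain trjconv in 4.x) and placeholder file names:

# keep only the protein:
echo Protein | gmx trjconv -s topol.tpr -f traj.xtc -o protein.xtc

# or literally split the trajectory in two at t = 50 ns (times in ps):
gmx trjconv -s topol.tpr -f traj.xtc -e 50000 -o part1.xtc
gmx trjconv -s topol.tpr -f traj.xtc -b 50000 -o part2.xtc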

Best,

Stéphane

--
Assistant Professor, UFIP, UMR 6286 CNRS, Team Protein Design In Silico
UFR Sciences et Techniques, 2, rue de la Houssinière, Bât. 25, 44322 Nantes 
cedex 03, France
Tél : +33 251 125 636 / Fax : +33 251 125 632
http://www.ufip.univ-nantes.fr/ - http://www.steletch.org



Re: [gmx-users] 5ns simulation in 2 hours

2015-10-29 Thread Téletchéa Stéphane

On 29/10/2015 at 04:24, Sana Saeed wrote:

is it possible?

yes.

--
Assistant Professor, UFIP, UMR 6286 CNRS, Team Protein Design In Silico
UFR Sciences et Techniques, 2, rue de la Houssinière, Bât. 25, 44322 Nantes 
cedex 03, France
Tél : +33 251 125 636 / Fax : +33 251 125 632
http://www.ufip.univ-nantes.fr/ - http://www.steletch.org


Re: [gmx-users] Gromacs GPU got hang

2015-09-30 Thread Téletchéa Stéphane

On 29/09/2015 at 23:40, M Teguh Satria wrote:

Any of you experiencing similar problem ? Is there any way to
troubleshoot/debug to see the cause ? Because I didn't get any warning or
error message.


Hello,

This can be a driver issue (or hardware, think of temperature, dust, ...),
and happens to me from time to time.

The only solution I found was to reset the GPU (see the nvidia-smi 
options); if that is not sufficient you will have to reboot (and use a 
cold boot: turn the computer off for more than 30 s, then boot again).
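
A minimal sketch of the reset (it needs root, no processes using the GPU, and is only supported on some boards; GPU id 0 assumed):

sudo nvidia-smi --gpu-reset -i 0
nvidia-smi -q -d TEMPERATURE,ECC   # check temperature and error counters too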

If this happens too often, you may have a defective card; see your 
vendor in that case ...

Best,

Stéphane Téletchéa

--
Assistant Professor, UFIP, UMR 6286 CNRS, Team Protein Design In Silico
UFR Sciences et Techniques, 2, rue de la Houssinière, Bât. 25, 44322 Nantes 
cedex 03, France
Tél : +33 251 125 636 / Fax : +33 251 125 632
http://www.ufip.univ-nantes.fr/ - http://www.steletch.org



Re: [gmx-users] files related to commands in C or FORTRAN

2015-06-12 Thread Téletchéa Stéphane

On 12/06/2015 at 20:44, Justin Lemkul wrote:
If you're looking for the source code, download it from the GROMACS 
website. Packaged distributions don't include the source.


-Justin 


It's simpler than downloading the source: use the -devel or -dev 
package from the official repositories, for instance:


- https://apps.fedoraproject.org/packages/gromacs-devel/ in Fedora
- http://packages.ubuntu.com/trusty/amd64/gromacs-dev in Ubuntu

But of course, using the upstream tarball and reading the README and 
INSTALL files (among others) is certainly recommended :-)

Best,

Stéphane

--
Team Protein Design In Silico
UFIP, UMR 6286 CNRS, UFR Sciences et Techniques,
2, rue de la Houssinière, Bât. 25, Nantes cedex 03, France
Tél : +33 251 125 636 - Fax : +33 251 125 632
http://www.ufip.univ-nantes.fr/ - http://www.steletch.org



[gmx-users] Unable to download GROMACS 5.x series

2015-06-09 Thread Téletchéa Stéphane

Dear all,

It seems the latest GROMACS series has disappeared from the official 
download links (http://www.gromacs.org/Downloads); could this be fixed, 
please?

Thanks a lot in advance,

Stéphane Téletchéa

--
Assistant Professor, UFIP, UMR 6286 CNRS, Team Protein Design In Silico
UFR Sciences et Techniques, 2, rue de la Houssinière, Bât. 25, 44322 Nantes 
cedex 03, France
Tél : +33 251 125 636 / Fax : +33 251 125 632
http://www.ufip.univ-nantes.fr/ - http://www.steletch.org


Re: [gmx-users] NVIDIA GTX cards in Rackable servers, how do you do it ?

2015-02-24 Thread Téletchéa Stéphane

On 24/02/2015 at 13:29, David McGiven wrote:

I never benchmarked 64-core AMD nodes with GPUs. With an 80 k-atom test
system using a 2 fs time step I get:

24 ns/d on 64 AMD   cores (6272)
16 ns/d on 32 AMD   cores (6380)
36 ns/d on 32 AMD   cores (6380)   with 1x GTX 980
40 ns/d on 32 AMD   cores (6380)   with 2x GTX 980
27 ns/d on 20 Intel cores (2680v2)
52 ns/d on 20 Intel cores (2680v2) with 1x GTX 980
62 ns/d on 20 Intel cores (2680v2) with 2x GTX 980

I think "20 Intel cores" means 2 x 10-core CPUs.

But Szilárd just mentioned in this same thread:

If you can afford them get the 14/16 or 18 core v3 Haswells, those are
*really* fast, but a pair can cost as much as a decent car.


I know for sure GROMACS scales VERY well on 4 x 16-core latest-generation
AMD (Interlagos, Bulldozer, etc.) machines. But I have no experience with
Intel Xeon.


My experience with the latest GROMACS and FFTW built on my machine is 
that one should not count the hyperthreaded cores, only the real ones.

My system shows 24 cores (2 x E5-2620 v2 @ 2.10 GHz + NVIDIA Quadro 
K4000), but really only 12 physical cores.


Using pinning and running only one test at a time under optimized 
conditions, I used the benchmarks available at the GROMACS web site 
(ADH, RNase, villin, http://www.gromacs.org/GPU_acceleration).

My results were:

*** rnase_cubic
45.75 ns/day with -nt  6 and GPU on
47.10 ns/day with -nt 12 and GPU on
27.66 ns/day with -nt 24 and GPU on
35.31 ns/day with -nt 12 and GPU off
21.37 ns/day with -nt 24 and GPU off

The results are broadly similar in the other benchmarks: 6 cores + GPU 
is close to 12 cores + GPU, and both are faster than 24 threads ...


The difference in the GPU case is the average GPU usage, which is more 
than 85% during the test runs when not all processors are in use, while 
it drops to 50% if all cores are in use (from a rough observation of 
GPU usage with the nvidia-smi tool).


I have no explanation for the CPU-only benchmarks though, since I 
enabled and disabled pinning, ensured that only one job was running at a 
time, etc. I have not played much with splitting -nt between OpenMP and 
(thread-)MPI, since this machine is a single node.


Hope this helps in showing that more expensive may not always be the way to go ...

Best,

Stéphane

--
Lecturer, UFIP, UMR 6286 CNRS, Team Protein Design In Silico
UFR Sciences et Techniques, 2, rue de la Houssinière, Bât. 25, 44322 Nantes 
cedex 03, France
Tél : +33 251 125 636 / Fax : +33 251 125 632
http://www.ufip.univ-nantes.fr/ - http://www.steletch.org



Re: [gmx-users] NVIDIA GTX cards in Rackable servers, how do you do it ?

2015-02-24 Thread Téletchéa Stéphane

On 24/02/2015 at 17:18, Szilárd Páll wrote:

Thanks! Let me note that those observations are particular to your
machine. There are multiple factors that cumulatively affect the
multi-threaded scaling:
- physical vs HT threads
- crossing socket boundaries
- iteration time/data per thread
- CPU and GPU performance

In your case all of these factors are somewhat disadvantageous for
good scaling. You have two sockets, so your runs are crossing CPU
socket boundaries. The input is quite small, and with GPUs the
HyperThreading disadvantages can increase - especially with a slow GPU.

Also note:
- your Quadro 4000 can likely not keep up with the 12 CPU cores and
there is probably some "Wait GPU" time (see the log file)
- if you want to test 1 CPU + 1 GPU using HT vs not using it, you
should make sure to run with "-pinstride 1 -ntomp 12" in the
latter case!
- -nt is a partially deprecated/backward-compatibility flag and should
only be used if its meaning is "use this many tMPI or OpenMP threads
and decide which one is better", which is not the case here!

Cheers,
Sz.


Dear Szilárd,

Thanks for the information; this was a rapid bench, but I have all the 
logs if needed. I know this is tied to my system and setup, but if it 
can help others I'd be happy to extend my tests with the required 
parameters and add them to the wiki if needed.


Concerning the Wait GPU time, you are right, the numbers go from 8% to 
72.4% ...


Just let me know if you need more data and logs; I'd be happy to extend 
this benchmark to other computers available here with varied setups and 
hardware (including AMD), to share, on real cases, what an optimal 
setting for performance/best CPU throughput should be.


Best,

Stéphane

--
Lecturer, UFIP, UMR 6286 CNRS, Team Protein Design In Silico
UFR Sciences et Techniques, 2, rue de la Houssinière, Bât. 25, 44322 Nantes 
cedex 03, France
Tél : +33 251 125 636 / Fax : +33 251 125 632
http://www.ufip.univ-nantes.fr/ - http://www.steletch.org



Re: [gmx-users] Installing GTX 980 on linux - which distro should i use?

2014-12-22 Thread Téletchéa Stéphane

On 22/12/2014 at 17:47, Carlos Navarro Retamal wrote:

I just bought a workstation with 2 GTX 980 (in order to improve my simulations 
on gromacs), but I'm not able to install it properly.
I first started with CentOS 7, but my motherboard has an issue with this 
version, so I couldn't get past the boot step.


I would recommend Ubuntu, but choose the server edition, otherwise you 
may encounter problems installing the system. My choice would be 14.04, 
since this is an LTS release, so you know your system will be stable for 
some time.

If possible, for critical applications, use the NVIDIA binaries (driver 
and CUDA). Also be sure *not* to mix NVIDIA from the repositories with 
upstream NVIDIA, otherwise you'll have to remove some parts by hand 
(nvidia-common comes to mind, but also dkms-compiled binaries).

I would recommend against Scientific Linux or other CentOS-like 
distributions: they offer the recompiled version of the Red Hat upstream 
release for free, but without professional support, and with a lot of 
segfaults in their recompiled binaries (a simple 'dmesg' on installed 
computers would show you that).


Good luck with your installation; at first this can really be a headache.

Stéphane

--
Team Protein Design In Silico
UFIP, UMR 6286 CNRS, UFR Sciences et Techniques,
2, rue de la Houssinière, Bât. 25, Nantes cedex 03, France
Tél : +33 251 125 636 - Fax : +33 251 125 632
http://www.ufip.univ-nantes.fr/ - http://www.steletch.org



Re: [gmx-users] Regarding Gromacs 5.0.3 parallel computation

2014-12-10 Thread Téletchéa Stéphane

On 10/12/2014 at 07:22, Bikash Ranjan Sahoo wrote:

I tried to do a small simulation in Gromacs 4.5.5 using 30 cores for 200
ps. The computation time was 4.56 minutes. The command used was:
dplace -c 0-29 mdrun -v -s md.tpr -c md.gro -nt 30

Next I ran the same system using Gromacs 5.0.3. The command used was:
dplace -c 0-29 mpirun -np 30 mdrun_mpi -v -s md.tpr -c md.gro. The
simulation was extremely slow and took 37 minutes to complete only 200 ps
of MD.


Dear Bikash,

You are more or less benchmarking threads versus MPI, right?

Note also that dplace with such a round number (30) is probably not
optimal either, unless your number of processors is a multiple of 30 
(hexacores?).
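
To compare like for like, something along these lines; a sketch, assuming a single node and a thread-MPI (non-MPI) build of 5.0.3 available as gmx:

mdrun -nt 30 -v -s md.tpr -c md.gro                          # 4.5.5, 30 threads
gmx mdrun -ntmpi 30 -pin on -v -s md.tpr -c md.gro           # 5.0.3, comparable run, no mpirun needed
gmx mdrun -ntmpi 5 -ntomp 6 -pin on -v -s md.tpr -c md.gro   # same 30 threads, split ranks x OpenMP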


Best,

Stéphane

--
Team Protein Design In Silico
UFIP, UMR 6286 CNRS,
UFR Sciences et Techniques,
2, rue de la Houssinière, Bât. 25,
44322 Nantes cedex 03, France
Tél : +33 251 125 636
Fax : +33 251 125 632
http://www.ufip.univ-nantes.fr/ - http://www.steletch.org



Re: [gmx-users] Regarding Gromacs 5.0.3 parallel computation

2014-12-10 Thread Téletchéa Stéphane

On 10/12/2014 at 12:28, Bikash Ranjan Sahoo wrote:

Dear Dr. Stéphane,
 Thank you for your quick reply. How can I solve this? Can you please 
guide me to the right command to run mdrun as fast as the Gromacs 
4.5.5 -nt command? I am pasting the architecture of my cluster below. 
Kindly help me understand how I can modify the mdrun_mpi command to 
use multiple cores on my cluster.




Dear Bikash,

Why not use the -nt option in GROMACS 5.0.3 too?
My point was that you should use the same parameters when comparing 
performance ...


What I would do is:

mdrun -nt 30 -pin auto

Try first using the GROMACS mechanisms (-pin auto) rather than the 
dplace command, since in your case you ask dplace to split your job 
across different CPUs according to the diagram you showed. At the 
least, you could use the logical architecture for the dplace command:
http://techpubs.sgi.com/library/tpl/cgi-bin/getdoc.cgi?coll=linuxdb=bkssrch=fname=/SGI_Developer/LX_AppTune/sgi_html/ch05.html

Last, you should also first tune the PME load, using the -tunepme mdrun 
option (or the g_tune_pme tool) on a short run, to see whether the 
rather rough domain decomposition mdrun chooses in a first approach is 
optimal for your system.


You should also try to use multiples of the physical core count for 
maximal performance; in your case probably something like -nt 12, 
-nt 24 or -nt 36, since each CPU seems to be a 12-core ...
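
A couple of hedged command lines along those lines (thread-MPI build assumed; tune_pme drives separate short runs and usually needs an MPI-enabled mdrun):

mdrun -nt 24 -pin auto -v -s md.tpr -c md.gro   # a core-count multiple on 12-core CPUs
gmx tune_pme -np 24 -s md.tpr -launch           # optimize the PP/PME split (g_tune_pme in 4.x)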


See their respective manuals and command line helps for more info.

Best,

Stéphane

--
Team Protein Design In Silico
UFIP, UMR 6286 CNRS,
UFR Sciences et Techniques,
2, rue de la Houssinière, Bât. 25,
44322 Nantes cedex 03, France
Tél : +33 251 125 636
Fax : +33 251 125 632
http://www.ufip.univ-nantes.fr/ - http://www.steletch.org



Re: [gmx-users] Pressure Question

2014-11-06 Thread Téletchéa Stéphane

On 06/11/2014 at 06:16, Antonio Baptista wrote:


In particular, the virial-based instantaneous pressure (call it P') 
computed in simulations has its ensemble average equal to the 
thermodynamic pressure P (check any good book on molecular 
simulation). But, as others already pointed out, this P' is well known 
to show extremely large fluctuations, meaning that its average 
computed from the simulation usually has a very large statistical 
spread. In other words, although the ensemble average of P' is 
strictly equal to P, its simulation average is a random variable that 
often shows large deviations from P (especially for short 
simulations). To get an idea of what is an acceptable error for the 
average of P', you may look at its distribution histogram in the NPT 
simulation.


Dear Antonio,

Sorry if my message sounded aggressive when I said "totally 
irrelevant"; I will clarify my thoughts.


From a theoretical point of view, you are right: each ensemble is 
accessible.

From a biological point of view, though, the concept of fixing the 
volume is less reasonable: we live at constant pressure and 
temperature, and also at tightly controlled pH and salt concentrations.

The volume does vary, though, as you feel when the weather gets hot 
or cold.


My point was exactly what you are saying, in a more formal way than 
mine: "this P' is well-known to show extremely large fluctuations".

Digging a bit more into my feeling, I also found opposing arguments 
on the AMBER mailing list, for example here:

http://archive.ambermd.org/201103/0431.html

So I'll go back to my reading and update my views on current 
bleeding-edge simulation practice, taking into account all the recent 
code and force-field progress.

Best,

Stéphane

--
Team Protein Design In Silico
UFIP, UMR 6286 CNRS,
UFR Sciences et Techniques,
2, rue de la Houssinière, Bât. 25,
44322 Nantes cedex 03, France
Tél : +33 251 125 636
Fax : +33 251 125 632
http://www.ufip.univ-nantes.fr/ - http://www.steletch.org



Re: [gmx-users] Pressure Question

2014-11-05 Thread Téletchéa Stéphane

On 04/11/2014 at 18:00, Johnny Lu wrote:

Hi.

If my NVT simulation of a protein in 30k molecules of water has a pressure
of 11 bar (error 0.5 bar from g_energy), will the dynamics (not
distribution of conformations) change enough that the mechanism inferred
from this simulation be significantly more unreliable than the mechanism
inferred from a 1 bar simulation? (Will the reviewers cut my paper into
ribbons?)

Thanks again.


Hi,

Considering only the NVT parameters of your simulation, I would 
consider it totally irrelevant to talk about pressure where you 
constrain the volume. This value, or any other one, does not really 
have a meaning in this situation, and I have seen many variations in 
the pressure value in this canonical (NVT) ensemble without paying too 
much attention to it.

In an NPT simulation, you should then be able to recover a normal 
1 bar pressure, I think.


Do you have any reason to run an NPT simulation first and then an NVT 
one? I would personally let the system equilibrate in NVT, then switch 
to the more natural NPT, given that current code and force fields are 
now good enough in this ensemble.
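
For reference, a minimal sketch of the pressure-coupling lines involved in that switch; the values shown are common choices, not prescriptions:

; stage 1, NVT equilibration: no pressure coupling
pcoupl           = no

; stage 2, NPT production at 1 bar (replacing the line above):
pcoupl           = Parrinello-Rahman
pcoupltype       = isotropic
tau-p            = 2.0      ; ps
ref-p            = 1.0      ; bar
compressibility  = 4.5e-5   ; 1/bar, water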

Best,

Stéphane

--
Team Protein Design In Silico
UFIP, UMR 6286 CNRS,
UFR Sciences et Techniques,
2, rue de la Houssinière, Bât. 25,
44322 Nantes cedex 03, France
Tél : +33 251 125 636
Fax : +33 251 125 632
http://www.ufip.univ-nantes.fr/ - http://www.steletch.org
