Re: [gmx-users] confusion about implicit solvent
Thank you all for the replies; I'm now much less confused, and thank you, David, for the paper. I wanted to try implicit-solvent simulations because I need to speed up my simulations a bit, but I think I'll now try something else (coarse-grained / REMD).

cheers
Fra

On Mon, 23 Sep 2013, at 08:28 PM, David van der Spoel wrote:
> [...]

--
Francesco Carbone
PhD student
Institute of Structural and Molecular Biology
UCL, London
fra.carbone...@ucl.ac.uk
--
gmx-users mailing list    gmx-users@gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the www interface or send it to gmx-users-requ...@gromacs.org.
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
[gmx-users] confusion about implicit solvent
Good afternoon everybody,

I'm a bit confused about GROMACS performance with implicit solvent. I'm simulating a 1000-residue protein with explicit solvent, using both a CPU and a GPU cluster. With one GPU node (12 cores and 3 M2090 GPUs) I reach 10 ns/day, while with no GPUs and 144 cores I get 34 ns/day. Because I have several mutants (more than 50) I have to reduce the average simulation time, and I was considering different options, such as the use of implicit solvent. I tried on both clusters, using GROMACS 4.6 and 4.5, but the performance is terrible (1 day for 100 ps) compared to explicit solvent. I read all the other messages on the mailing list and the documentation, but the mix of old and new features/posts really confuses me.

Here (http://www.gromacs.org/Documentation/Acceleration_and_parallelization) it is said that with GPUs, GROMACS 4.5, and implicit solvent I should expect a substantial speedup. Here (http://www.gromacs.org/Documentation/Installation_Instructions_4.5/GROMACS-OpenMM#Benchmark_results.3a_GROMACS_CPU_vs_GPU) I found this sentence: "It is ultimately up to you as a user to decide what simulation setups to use, but we would like to emphasize the simply amazing implicit solvent performance provided by GPUs." I followed the advice found on the mailing list and read both the documentation (site and manual), but I can't figure out what I should do. How can you get such amazing performance?

I also found this answer in a post from last March (http://gromacs.5086.x6.nabble.com/Implicit-solvent-MD-is-not-fast-and-not-accurate-td5006659.html#none) that confuses me even more: "Performance issues are known. There are plans to implement the implicit solvent code for GPU and perhaps allow for better parallelization, but I don't know what the status of all that is. As it stands (and as I have said before on this list and to the developers privately), the implicit code is largely unproductive because the performance is terrible."
Should I drop the idea of using implicit solvent and try something else?

This is the set of parameters that I used (also with the -pd flag):

; Run parameters
integrator = sd
tinit      = 0
nsteps     = 5
dt         = 0.002
; Output control
nstxout    = 5000
nstvout    = 5000
nstlog     = 5000
nstenergy  = 5000
nstxtcout  = 5000
xtc_precision = 1000
energygrps = system
; Bond parameters
continuation         = no
constraints          = all-bonds
constraint_algorithm = lincs
lincs_iter           = 1
lincs_order          = 4
lincs_warnangle      = 30
; Neighborsearching
ns_type  = simple
nstlist  = 0
rlist    = 0
rcoulomb = 0
rvdw     = 0
; Electrostatics
coulombtype = cut-off
pbc         = no
comm_mode   = Angular
implicit_solvent     = GBSA
gb_algorithm         = OBC
nstgbradii           = 1.0
rgbradii             = 0
gb_epsilon_solvent   = 80
gb_dielectric_offset = 0.009
sa_algorithm         = Ace-approximation
sa_surface_tension   = 0.0054
; Temperature coupling
tcoupl  = v-rescale
tc_grps = System
tau_t   = 0.1
ref_t   = 310
; Velocity generation
gen_vel = yes
ld_seed = -1

thank you for the help.

cheers
Francesco

--
Francesco Carbone
PhD student
Institute of Structural and Molecular Biology
UCL, London
fra.carbone...@ucl.ac.uk
Re: [gmx-users] confusion about implicit solvent
Hi,

Admittedly, both the documentation on these features and the communication on the known issues with these aspects of GROMACS have been lacking. Here's a brief summary/explanation:

- GROMACS 4.5: implicit-solvent simulations are possible using mdrun-gpu, which is essentially mdrun + OpenMM; hence it has some limitations, most notably that it can only run on a single GPU. The performance, depending on the settings, can be up to 10x higher than on the CPU.

- GROMACS 4.6: the native GPU acceleration supports only explicit solvent. mdrun + OpenMM is still available (precisely for implicit-solvent runs), but it has been moved to the contrib section, which means it is not fully supported. Moreover, OpenMM support - unless somebody volunteers to maintain the mdrun-OpenMM interface - will be dropped in the next release.

I can't comment much on the implicit-solvent code on the CPU side, other than that there have been issues which AFAIK limit the parallelization to a rather small number of cores, so the achievable performance is also limited. I hope others can clarify this aspect.

Cheers,
--
Szilárd

On Mon, Sep 23, 2013 at 7:34 PM, Francesco <frac...@myopera.com> wrote:
> [...]
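For readers trying the 4.5 route above, an implicit-solvent GPU run with the OpenMM-backed mdrun-gpu was launched roughly along these lines. This is a sketch from memory, not an authoritative recipe: the file names are made up, and the exact -device option string should be checked against `mdrun-gpu -h` on your own installation.

```
# Hypothetical example of a single-GPU implicit-solvent run with
# GROMACS 4.5's mdrun-gpu (mdrun + OpenMM). File names and the
# -device option string are assumptions; verify with `mdrun-gpu -h`.
grompp -f implicit.mdp -c protein.gro -p topol.top -o run.tpr
mdrun-gpu -device "OpenMM:platform=Cuda,deviceid=0,memtest=15" -v -deffnm run
```

Note that this path runs entirely through OpenMM, so most of the regular mdrun parallelization flags do not apply and only a single GPU is used.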
Re: [gmx-users] confusion about implicit solvent
On 9/23/13 2:08 PM, Szilárd Páll wrote:
> [...]

I never got the implicit code to run on more than 2 CPUs, and as I recall Berk hard-coded this limit due to an issue involving constraints. It's been a couple of years since I tried anything with implicit solvent, since (1) the OpenMM support was so buggy and incomplete on the GPU, and (2) the code ran an order of magnitude slower on the CPU than its explicit-solvent counterpart.

-Justin

--
==
Justin A. Lemkul, Ph.D.
Postdoctoral Fellow
Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201
jalem...@outerbanks.umaryland.edu | (410) 706-7441
==
Re: [gmx-users] confusion about implicit solvent
On Mon, Sep 23, 2013 at 8:08 PM, Szilárd Páll <szilard.p...@cbr.su.se> wrote:
> [...]

IIRC the best 4.5 performance for CPU-only implicit solvent used infinite cut-offs and SIMD acceleration. The SIMD kernels are certainly broken in 4.6 (and IIRC were explicitly disabled at some point after 4.6.3). There is limited enthusiasm for fixing things (e.g. see parts of http://redmine.gromacs.org/issues/1292), but nobody with the skills has so far put in the time to do so. As always with an open-source project, if you want something, be prepared to roll up your sleeves and work, or hit your knees and pray! :-)

Mark
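For anyone following Mark's suggestion, "infinite cut-offs" in .mdp terms means an all-vs-all interaction calculation with neighbour searching disabled; setting the relevant options to 0 turns the cut-offs off. A minimal fragment, matching the values already present in Francesco's input:

```
; Infinite cut-offs for all-vs-all implicit solvent (no PBC);
; a value of 0 disables the cut-off for these options.
pbc      = no
ns_type  = simple
nstlist  = 0        ; no neighbour-list updates
rlist    = 0
rcoulomb = 0
rvdw     = 0
rgbradii = 0
```

With these settings every atom pair interacts directly, which is what the SIMD all-vs-all kernels mentioned above were written to accelerate.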
Re: [gmx-users] confusion about implicit solvent
On 2013-09-23 20:23, Justin Lemkul wrote:
> [...]

And finally, even though this is not what you were asking, and likely not what you wanted to hear either: with implicit solvent your results will not be general enough to be useful if, e.g., hydrogen bonds are important. I would like to recommend my latest paper, which shows how solvent entropy and enthalpy contribute in a complex manner to non-bonded interactions, in a way that implicit solvent never could capture: http://pubs.acs.org/doi/abs/10.1021/ct400404q

--
David van der Spoel, Ph.D., Professor of Biology
Dept. of Cell & Molec. Biol., Uppsala University.
Box 596, 75124 Uppsala, Sweden. Phone: +46184714205.
sp...@xray.bmc.uu.se    http://folding.bmc.uu.se