Re: [gmx-users] several runs
Hi,

The interference is just that the runs will be ridiculously slow. You want either to arrange to run one simulation at a time, or to manually allocate cores to the separate simulations, e.g. as described at http://www.gromacs.org/Documentation/Acceleration_and_parallelization#Pinning_threads_to_physical_cores

Mark

On Tue, Feb 24, 2015 at 8:40 PM, mah maz mahma...@gmail.com wrote:
> Hi Justin, thank you for your answer! [...]
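[For concreteness, here is a minimal sketch of the manual core allocation described at that link, assuming a 16-core node, GROMACS 4.6/5.0 command names, and hypothetical input files run1.tpr and run2.tpr:]

    # run two independent simulations side by side, each pinned to its own 8 cores
    mdrun -s run1.tpr -deffnm run1 -nt 8 -pin on -pinoffset 0 &
    mdrun -s run2.tpr -deffnm run2 -nt 8 -pin on -pinoffset 8 &
    wait    # neither run now competes with the other for cores

[Note that -pinoffset counts hardware threads; on a node with HyperThreading enabled, check the hardware-thread layout before choosing offsets.]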
Re: [gmx-users] several runs
Hi Justin,

Thank you for your answer! If you can help with any of these questions I would be very grateful: How can I tell whether the runs interfered with each other? How does pinning work? Are there any other ways you can recommend?

Thanks a lot

On Tue, Feb 24, 2015 at 10:14 PM, mah maz mahma...@gmail.com wrote:
> Is running simulations in several terminals problematic?

On Tue, Feb 24, 2015 at 8:42 PM, mah maz mahma...@gmail.com wrote:
> Dear all, how can I perform several simulations simultaneously (in Linux)? Thank you!
Re: [gmx-users] Computing Resource - Laptop
Sorry, I meant desktop; "laptop" must have been a mental error, as I'm looking for a new personal laptop.

-Douglas Grahame

-----Original Message-----
From: Szilárd Páll
Sent: February 24, 2015 3:20 PM
To: Discussion list for GROMACS users
Subject: Re: [gmx-users] Computing Resource - Laptop

Did you mean laptop, desktop or both? To be honest, I would not use laptops for anything but lightweight analysis tasks.
--
Szilárd

On Tue, Feb 24, 2015 at 6:10 PM, Douglas Grahame dgrah...@uoguelph.ca wrote:
> Hey everyone, I'm not sure if this is the place to post this or not, so my apologies if it is not. Our lab recently got some funds to put towards a desktop for molecular dynamics work, and we have a budget of approx. $4,000 CDN. Given that I am not an expert in the hardware area, nor do I have a ton of experience in the simulation area either, I wanted to see if there were any suggestions, resources, or experiences this mailing list may have, so that we can get the most out of our money. Primarily the computer will be used to run GROMACS for analysis and some small-scale simulation work. We do have access to supercomputing clusters, which will serve as the primary resource for modelling. Thanks for your help in advance!
> -Douglas Grahame
Re: [gmx-users] several runs
mdrun -multi works too if the runs are identical. Or write your own bash wrapper that calculates thread counts and applies pin offsets manually (see the link Mark posted for pinning info).
--
Szilárd

On Tue, Feb 24, 2015 at 8:40 PM, mah maz mahma...@gmail.com wrote:
> Hi Justin, thank you for your answer! [...]
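[A hedged sketch of both approaches, assuming GROMACS 4.6/5.0 naming and hypothetical input files topol0.tpr ... topol3.tpr:]

    # with an MPI-enabled build: four runs as one job; -multi appends
    # 0..3 to the -s base name, so this expects topol0.tpr ... topol3.tpr
    mpirun -np 4 mdrun_mpi -multi 4 -s topol.tpr

    # without MPI: a wrapper that divides the node between NSIMS runs
    NSIMS=4
    NT=$(( $(nproc) / NSIMS ))
    for i in $(seq 0 $(( NSIMS - 1 ))); do
        mdrun -s topol$i.tpr -deffnm run$i -nt $NT -pin on -pinoffset $(( i * NT )) &
    done
    wait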
Re: [gmx-users] several runs
Perhaps I can be of help here. Except for the smallest systems, GROMACS simulations are very compute-intensive, so a single machine is needed for a single job. In many instances several machines are required for a single job (provided you have a fast enough network). If you give a single machine more than one compute job, or if you require of it more computing power than it physically has (like asking for 8 CPUs when the machine only has 6), that's called oversubscribing the machine. It causes severe performance degradation, as the machine has more work than it can handle - not only in terms of CPU, but also because the input/output channels will saturate. So, if a single simulation requires all the resources of your machine, running several will make it unusable. It is a lot better to run a single job per machine.

Hope this helps.

Victor

2015-02-24 13:40 GMT-06:00 mah maz mahma...@gmail.com:
> Hi Justin, thank you for your answer! [...]
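[A quick way to check what a node can actually carry before launching jobs - a sketch for a Linux shell; note that nproc counts logical CPUs, which overstates the physical core count when HyperThreading is on:]

    # physical layout: sockets and cores per socket, ignoring HyperThreading
    lscpu | egrep 'Socket\(s\)|Core\(s\) per socket'
    # then never hand mdrun more threads than physical cores, e.g. on a 6-core box:
    mdrun -deffnm md -nt 6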
Re: [gmx-users] Doubt about energies in a very simple system
Hi,

In addition to Justin's points about comparing apples with apples on the same coordinates, by default in the Verlet scheme GROMACS also shifts the potential to be zero at the cutoff, so that the potential is actually the integral of the force (mentioned in manual section 3.4.2, along with a sea of other things, sorry). You can find several threads in the archive where people have tried to compare with other MD programs, chiefly ACEMD, IIRC. So that effect is more or less noteworthy depending on the system and settings.

Mark

On Tue, Feb 24, 2015 at 7:30 PM, Justin Lemkul jalem...@vt.edu wrote:
> On 2/24/15 11:42 AM, IÑIGO SAENZ wrote:
>> Hi, I designed a very simple system that is composed of only one glutamine, with tleap. I've transformed the corresponding .prmtop and .inpcrd into .top and .gro files, using a converter that I'm developing. I attach the .top, .gro and .mdp files.
>>
>> [ defaults ]
>> ; nbfunc comb-rule gen-pairs fudgeLJ fudgeQQ
>> 1 2 yes 0.5 0.8
>>
>> [ atomtypes ]
>> ; name bond_type mass charge ptype sigma epsilon ; Amb
>> AA AA 0.0 0.0 A 3.25000e-01 7.11280e-01 ; 1.82 0.1700
>> AB AB 0.0 0.0 A 1.06908e-01 6.56888e-02 ; 0.60 0.0157
>> AC AC 0.0 0.0 A 3.39967e-01 4.57730e-01 ; 1.91 0.1094
>> AD AD 0.0 0.0 A 2.47135e-01 6.56888e-02 ; 1.39 0.0157
>> AE AE 0.0 0.0 A 2.64953e-01 6.56888e-02 ; 1.49 0.0157
>> AF AF 0.0 0.0 A 3.39967e-01 3.59824e-01 ; 1.91 0.0860
>> AG AG 0.0 0.0 A 2.95992e-01 8.78640e-01 ; 1.66 0.2100
>>
>> [ moleculetype ]
>> ; name nrexcl
>> sys 0
>>
>> [ atoms ]
>> ; nr type resi res atom cgnr charge mass ; qtot
>> 1  AA 1 xxx AA  1 -0.516300 14.01000 ; -0.516
>> 2  AB 1 xxx AB  2  0.293600  1.00800 ; -0.223
>> 3  AC 1 xxx AC  3  0.039700 12.01000 ; -0.183
>> 4  AD 1 xxx AD  4  0.110500  1.00800 ; -0.073
>> 5  AC 1 xxx AC  5  0.056000 12.01000 ; -0.017
>> 6  AE 1 xxx AE  6 -0.017300  1.00800 ; -0.034
>> 7  AE 1 xxx AE  7 -0.017300  1.00800 ; -0.051
>> 8  AC 1 xxx AC  8  0.013600 12.01000 ; -0.038
>> 9  AE 1 xxx AE  9 -0.042500  1.00800 ; -0.080
>> 10 AE 1 xxx AE 10 -0.042500  1.00800 ; -0.123
>> 11 AF 1 xxx AF 11  0.805400 12.01000 ;  0.683
>> 12 AG 1 xxx AG 12 -0.818800 16.0     ; -0.136
>> 13 AG 1 xxx AG 13 -0.818800 16.0     ; -0.955
>> 14 AF 1 xxx AF 14  0.536600 12.01000 ; -0.418
>> 15 AG 1 xxx AG 15 -0.581900 16.0     ; -1.000
>>
>> [ pairs ]
>> 1 6 1
>> 1 7 1
>> 1 8 1
>> 1 15 1
>> 2 4 1
>> 2 5 1
>> 2 14 1
>> 3 9 1
>> 3 10 1
>> 3 11 1
>> 4 6 1
>> 4 7 1
>> 4 8 1
>> 4 15 1
>> 5 12 1
>> 5 13 1
>> 5 15 1
>> 6 9 1
>> 6 10 1
>> 6 11 1
>> 6 14 1
>> 7 9 1
>> 7 10 1
>> 7 11 1
>> 7 14 1
>> 8 14 1
>> 9 12 1
>> 9 13 1
>> 10 12 1
>> 10 13 1
>>
>> [ exclusions ]
>> 1 2 3 4 5 14
>> 2 1 3
>> 3 1 2 4 5 6 7 8 14 15
>> 4 1 3 5 14
>> 5 1 3 4 6 7 8 9 10 11 14
>> 6 3 5 7 8
>> 7 3 5 6 8
>> 8 3 5 6 7 9 10 11 12 13
>> 9 5 8 10 11
>> 10 5 8 9 11
>> 11 5 8 9 10 12 13
>> 12 8 11 13
>> 13 8 11 12
>> 14 1 3 4 5 15
>> 15 3 14
>>
>> [ system ]
>> sys
>>
>> [ molecules ]
>> ; Compound nmols
>> sys 1
>>
>> I have omitted the [ bonds ], [ angles ] and [ dihedrals ] sections because they aren't necessary for my question. Now the .gro:
>>
>> 15
>> 1 xxx AA  1 0.3326 0.1548 -0.
>> 1 xxx AB  2 0.3909 0.0724 -0.
>> 1 xxx AC  3 0.3970 0.2846 -0.
>> 1 xxx AD  4 0.3672 0.3400 -0.0890
>> 1 xxx AC  5 0.3577 0.3654 0.1232
>> 1 xxx AE  6 0.2497 0.3801 0.1241
>> 1 xxx AE  7 0.3877 0.3116 0.2131
>> 1 xxx AC  8 0.4267 0.4996 0.1195
>> 1 xxx AE  9 0.5347 0.4850 0.1186
>> 1 xxx AE 10 0.3967 0.5535 0.0296
>> 1 xxx AF 11 0.3874 0.5805 0.2429
>> 1 xxx AG 12 0.4595 0.5679 0.3454
>> 1 xxx AG 13 0.2856 0.6542 0.2334
>> 1 xxx AF 14 0.5486 0.2705 -0.
>> 1 xxx AG 15 0.6009 0.1593 -0.
>> 20.00 20.00 20.00
>>
>> and the SPE.mdp:
>>
>> dt = 0.001000
>> gen-vel = no
>> gen-temp = 0.00
>> pbc = xyz
>> integrator = md
>> nsteps = 0
>> constraints = none
>> constraint-algorithm = SHAKE
>> rlist = 1.20
>> rcoulomb = 1.20
>> rvdw = 1.20
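[For reference, the modifier Mark describes amounts to the following, a sketch with r_c denoting the cutoff:]

    V_{\mathrm{mod}}(r) = V(r) - V(r_c)  \quad (r \le r_c), \qquad  V_{\mathrm{mod}}(r) = 0  \quad (r > r_c)

[The forces are unchanged, but every pair within the cutoff contributes an extra -V(r_c) to the reported potential energy, which is one common reason cutoff single-point energies differ between codes that do and do not shift.]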
Re: [gmx-users] Doubt about energies in a very simple system
Hi Justin,

I always do the SPE as follows:

    grompp -f SPE.mdp -p sys.top -c sys.gro

and after that I simply execute mdrun; I didn't know about the mdrun -rerun function. Now I have done:

    mdrun -s topol.tpr -rerun sys.gro

but the energy results are exactly the same.

Thank you.

2015-02-24 19:30 GMT+01:00 Justin Lemkul jalem...@vt.edu:
> On 2/24/15 11:42 AM, IÑIGO SAENZ wrote:
>> Hi, I designed a very simple system that is composed of only one glutamine, with tleap. [.top and .gro as quoted in full in the previous message]
>>
>> and the SPE.mdp:
>>
>> dt = 0.001000
>> gen-vel = no
>> gen-temp = 0.00
>> pbc = xyz
>> integrator = md
>> nsteps = 0
>> constraints = none
>> constraint-algorithm = SHAKE
>> rlist = 1.20
>> rcoulomb = 1.20
>> rvdw = 1.20
>> coulombtype = Cut-off
>> vdwtype = Cut-off
>> vdw-modifier = None
>> coulomb-modifier = None
>> nstlog = 1
>> nstenergy = 1
>> nstcalcenergy = 1
>> cutoff-scheme = Verlet
>> comm-mode = Linear
>> nstcomm = 1
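[For completeness, a minimal single-point-energy sequence along these lines, with GROMACS 4.x/5.0 tool names and the file names from the post:]

    grompp -f SPE.mdp -p sys.top -c sys.gro -o spe.tpr
    mdrun -s spe.tpr -rerun sys.gro -deffnm spe
    # inspect the potential-energy terms written during the rerun
    echo Potential | g_energy -f spe.edr -o potential.xvg

[As observed in the post, a zero-step md run and a -rerun over the same single frame should indeed report the same energies; -rerun is mainly useful for re-evaluating many frames of an existing trajectory.]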
Re: [gmx-users] On the collective dynamics in terms of NMA and PCA
Hi James,

Imagine you have a landscape with a long valley, longer than it is broad. Now you go to the deepest point and determine the shallowest direction. Then you take the shallowest direction perpendicular to the first. That's NMA. Now you stand before the valley, and you roll a big ball down, which has the property of maintaining its kinetic energy. It goes down, past the lowest point, rolls up the other end, comes back, and so forth. After some time you take all the positions of the ball, determine the mean, and then determine the axis of the largest spread in the points. Then you determine the axis of the largest spread perpendicular to the first. That's PCA. You see that both give the same result, which is because they both reflect the same underlying landscape.

I hope this is clear. Otherwise, let me know.

Cheers,
Tsjerk

On Tue, Feb 24, 2015 at 11:20 AM, James Starlight jmsstarli...@gmail.com wrote:
> Dear Gromacs Users!
> I have a question regarding calculation of collective dynamics using normal mode analysis and principal component analysis, for the case where 1) NMA was performed based on just one reference structure and 2) PCA was performed on an MD trajectory in which each frame had been superimposed onto that reference structure. Eventually I found good correlations (meaning the same directions) between the lowest-frequency modes from 1) and the first PCs from 2), as obtained by means of the dot product of the two eigenvector sets. Could someone explain briefly why such a correlation exists? I know that the covariance matrix corresponds to the inverse of the Hessian, but I don't understand the physical meaning of that fact.
> Thanks for help,
> James

--
Tsjerk A. Wassenaar, Ph.D.
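[The formal version of this picture, for what it's worth: if the landscape is approximated as harmonic around the minimum x_0 with Hessian H, the Boltzmann ensemble is Gaussian and its covariance is fixed by H:]

    E(x) \approx \tfrac{1}{2}(x - x_0)^T H (x - x_0)
    \quad\Rightarrow\quad
    p(x) \propto e^{-E(x)/k_B T}
    \quad\Rightarrow\quad
    C = \langle (x - x_0)(x - x_0)^T \rangle = k_B T \, H^{-1}

[So C and H share eigenvectors, with eigenvalues related by \lambda_C = k_B T / \lambda_H: the softest normal modes (smallest Hessian eigenvalues) are exactly the largest-variance principal components, which is why the dot products come out well correlated.]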
[gmx-users] Application of electric field on membrane
Dear Developers,

I am new to GROMACS and have installed GROMACS 4.5.5. I have tried the membrane tutorial. I want to apply an intense pulsed electric field with a 100 ps rise time. How do I run the simulation? Do I have to modify the md.mdp file before the production MD simulation? Kindly help me run the simulation.

With Regards,
A. Petrishia
Department of ECE, College of Engineering, Guindy, Anna University, Chennai-600025
9444689919
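[A hedged sketch for the static-field part, assuming GROMACS 4.5.x mdp syntax: the E-x/E-y/E-z options take the number of cosines (only 1 is implemented), the field strength in V/nm, and a phase. To my knowledge a shaped pulse with a 100 ps rise time is not available out of the box in 4.5.5, so the strength and direction below are purely illustrative:]

    ; in md.mdp: constant electric field of 0.5 V/nm along z
    ; format: number-of-cosines  strength(V/nm)  phase
    E-z = 1 0.5 0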
Re: [gmx-users] Gromacs in windows 7 with GPU
On 24.02.2015 05:08, 라지브간디 wrote:
> I have a Windows 7 OS system which has an i7 CPU @ 3.40 GHz, 16 GB RAM and a newly installed NVIDIA GeForce GTX 960, with 1 TB memory. Since I am familiar with GROMACS on Linux systems, I am not able to install it in a Windows environment. Should I use Cygwin or Visual Studio? I tried Visual Studio using cmake (GUI) but didn't get it to work (errors occur while linking fftw3 or MKL).

Both Visual Studio and Cygwin compilations work fine. With Visual Studio, you are able to link against CUDA and use the GPU. To link fftw3f under Visual Studio, you have to build fftw3 first (or use the binaries available at http://www.fftw.org/install/windows.html, which are, to my knowledge, also multithreaded but not SSE2-optimized; they should work fine anyway). For GROMACS compilation under Windows, don't bother with MKL; just use the above fftw3 libraries, prepared according to the instructions there.

Extract the source tree into, e.g., somewhere/gromacs-4.6.7/. Then open a Visual Studio x64 development prompt (important) and change into your empty build directory:

    cd somewhere/gromacs-build

Issue cmake (put the command below into cmake.cmd or similar; the ^ are line continuations):

    cmake -G "Visual Studio 12 Win64" ^
      -DCMAKE_INSTALL_PREFIX=D:/Gromacs467 ^
      -DCMAKE_PREFIX_PATH=D:/Usr/x64 ^
      -DGMX_PREFER_STATIC_LIBS=ON ^
      -DFFTWF_LIBRARY=D:/Usr/x64/lib/libfftwf-3.3.lib ^
      -DGMX_GPU=ON ^
      ..\gromacs-4.6.7

Then, if the above step went smoothly, issue in the build directory:

    devenv Gromacs.sln /build Release ^
      /project ALL_BUILD /projectconfig Release ^
      /project INSTALL

Regards,
M.
Re: [gmx-users] On the collective dynamics in terms of NMA and PCA
Hi Tsjerk,

Thank you very much for the explanation! So in that case, choosing the best method for a given task will depend on the sensitivity of each method. For NMA, it rests on knowing that the initial structure lies within the deepest minimum among all its possible states on the energy landscape, i.e. we start looking for the softest transition pathways exactly from this point. For PCA, on the other hand, the results depend on full coverage of the analyzed trajectory, i.e. the rolling ball visits all possible states along its pathway. Is that correct?

In any case, it is not quite clear to me why the directions of the first PCs (the most collective motions) should at the same time be the softest (least energy-consuming) pathways.

Thanks for the suggestions again!

James

2015-02-24 11:42 GMT+01:00 Tsjerk Wassenaar tsje...@gmail.com:
> Oh, there's a minor correction to NMA. [...]
Re: [gmx-users] NVIDIA GTX cards in Rackable servers, how do you do it ?
Hey Harry,

Thanks for the caveat. Carsten Kutzner posted these results a few days ago. This is what he said:

"I never benchmarked 64-core AMD nodes with GPUs. With an 80 k atom test system using a 2 fs time step I get:
24 ns/d on 64 AMD cores 6272
16 ns/d on 32 AMD cores 6380
36 ns/d on 32 AMD cores 6380 with 1x GTX 980
40 ns/d on 32 AMD cores 6380 with 2x GTX 980
27 ns/d on 20 Intel cores 2680v2
52 ns/d on 20 Intel cores 2680v2 with 1x GTX 980
62 ns/d on 20 Intel cores 2680v2 with 2x GTX 980"

I think 20 Intel cores means 2 x 10 cores. But Szilárd just mentioned in this same thread: "If you can afford them get the 14/16 or 18 core v3 Haswells, those are *really* fast, but a pair can cost as much as a decent car."

I know for sure GROMACS scales VERY well on 4 x 16-core machines with the latest AMD processors (Interlagos, Bulldozer, etc.), but I have no experience with Intel Xeon. Let's see what others can say.

BR,
D

2015-02-24 13:17 GMT+01:00 Harry Mark Greenblatt harry.greenbl...@weizmann.ac.il:
> Dear David,
> We did some tests with Gromacs and other programs on CPUs with core counts up to 16 per socket, and found that after about 12 cores, jobs/threads begin to interfere with each other. In other words, there was a performance penalty when using core counts above 12. I don't have the details in front of me, but you should at the very least get a test machine and try running your simulations for short periods with 10, 12, 14, 16 and 18 cores in use, to see how GROMACS behaves with these processors (unless someone has done these tests and can confirm that GROMACS has no issues with 16- or 18-core CPUs).
> Harry
> [...]
Re: [gmx-users] On the collective dynamics in terms of NMA and PCA
Oh, there's a minor correction to NMA. You actually determine the steepest gradient first, and then the steepest gradient perpendicular to the first direction. That's why the smallest normal modes correspond to the largest eigenvectors :)

Cheers,
Tsjerk

On Tue, Feb 24, 2015 at 11:31 AM, Tsjerk Wassenaar tsje...@gmail.com wrote:
> Hi James,
> Imagine you have a landscape with a long valley, longer than it is broad. [...]

--
Tsjerk A. Wassenaar, Ph.D.
Re: [gmx-users] NVIDIA GTX cards in Rackable servers, how do you do it ?
Hi Szilárd,

Thank you very much for your great advice.

2015-02-20 19:03 GMT+01:00 Szilárd Páll pall.szil...@gmail.com:
> On Fri, Feb 20, 2015 at 2:17 PM, David McGiven davidmcgiv...@gmail.com wrote:
>> Dear Gromacs users and developers,
>> We are thinking about buying a new cluster of ten or twelve 1U/2U machines with 2 Intel Xeon CPUs of 8-12 cores each, some of the 2600v2 or v3 series. The details are not yet clear; we'll see.
>
> If you can afford them get the 14/16 or 18 core v3 Haswells, those are *really* fast, but a pair can cost as much as a decent car. Get IVB (v2) if it saves you a decent amount of money compared to v3. The AVX2 with FMA of the Haswell chips is great, but if you run GROMACS with GPUs on them, my guess is that a higher-frequency v2 will be more advantageous than the v3's AVX2 support. Won't swear on this as I have not tested thoroughly.

According to an email exchange I had with Carsten Kutzner, for the kind of simulations we would like to run (see below), lower-frequency v2's give a better performance-to-price ratio. For instance, we can get from a national reseller:

2U server (Supermicro rebranded, I guess)
2 x E5-2699V3 18c 2.3 GHz
64 GB DDR4
2 x GTX 980 (certified for the server)
- 13,400 EUR (sans VAT)

2U server (Supermicro rebranded, I guess)
2 x E5-2695V2 12c 2.4 GHz
64 GB DDR3
2 x GTX 980 (certified for the server)
- 9,140 EUR (sans VAT)

Does that qualify as saving a decent amount of money to go for the v2? I don't think so, also because we care about rack space: fewer servers, but potent ones. The latest Haswells are way too overpriced for us.

We want to run molecular dynamics simulations of transmembrane proteins inside a POPC lipid bilayer, in a system with ~10^5 atoms, of which almost 1/3 are water molecules, employing the usual conditions with PME for electrostatics and cutoffs for LJ interactions. I think we'll go for the v3 version.

>> I've been told on this list that NVIDIA GTX cards offer the best performance/price ratio for GROMACS 5.0.
>
> Yes, that is the case.

>> However, I am wondering: how do you use GTX cards in rackable servers? GTX cards are consumer grade, for personal workstations, gaming, and so on, and it's nearly impossible to get a server manufacturer like HP, Dell, Supermicro, etc. to certify that those cards will function properly in their servers.
>
> Certification can be an issue - unless you buy many and can cut a deal with a company. There are some companies that do certify servers, but AFAIK most/all are US-based. I won't post a long advertisement here, but you can find many names if you browse NVIDIA's GPU computing site (and as a matter of fact the AMBER GPU site is quite helpful in this respect too). You can consider getting vanilla server nodes and plugging the GTX cards in yourself. In general, I can recommend Supermicro; they have pretty good value servers from 1 to 4U. The easiest is to use the latter because GTX cards will just fit vertically, but it will be a serious waste of rack space. With a bit of tinkering you may be able to get GTX cards into 3U, but you'll either need cards with connectors on the back or 90-degree-angled 4-pin PCIe power cables. Otherwise you can only fit the cards with PCIe risers; I have no experience with that setup, but I know some people build denser machines with GTX cards.
>
> Cheers,
> --
> Szilárd

>> What are your views about this? Thanks.
>> Best Regards
Re: [gmx-users] NVIDIA GTX cards in Rackable servers, how do you do it ?
Dear David,

> Hey Harry,
> Thanks for the caveat. Carsten Kutzner posted these results a few days ago. [benchmark numbers quoted in full earlier in the thread]
> I think 20 Intel cores means 2 x 10 cores each.

Yes, the 2680v2 is a 10-core processor. The interference I mentioned is not apparent across sockets, only within them.

> But Szilárd just mentioned in this same thread: "If you can afford them get the 14/16 or 18 core v3 Haswells, those are *really* fast, but a pair can cost as much as a decent car."

Perhaps he has seen some real results that do not show issues at 16 or 18 cores/socket, in which case they would be advantageous, if one can afford them. I am only going on what the manager of our cluster mentioned to me from his tests. But his tests were based on many different software packages, so perhaps GROMACS is less affected, or not affected at all.

Harry

-------------------------------------------
Harry M. Greenblatt
Associate Staff Scientist
Dept of Structural Biology, Weizmann Institute of Science
234 Herzl St., Rehovot, 76100 Israel
Phone: 972-8-934-3625 / Facsimile: 972-8-934-4159
harry.greenbl...@weizmann.ac.il
Re: [gmx-users] NVIDIA GTX cards in Rackable servers, how do you do it ?
2015-02-24 15:46 GMT+01:00 Szilárd Páll pall.szil...@gmail.com:
>> Perhaps he has seen some real results that do not show issues at 16 or 18 cores/socket [...] But his tests were based on many different software packages, so perhaps Gromacs is less/not affected.
>
> OK, that's an entirely different claim than the one you made initially. I dare say that it is dangerous to mix performance observations of many software packages - especially with that of GROMACS.

Totally agree.
[gmx-users] Announcement: release of FESetup 1.1
Dear Gromacs community,

We are pleased to announce the release of FESetup 1.1. FESetup is a tool to automate the setup of alchemical relative free energy simulations, like thermodynamic integration (TI), and it can also be used for general simulation setup (equilibration). The tool will automatically parameterise ligands (AM1/BCC) and map all atoms of each mutational pair (for codes supporting the single-topology paradigm). Supported molecular simulation packages implementing free energy simulation are currently GROMACS, AMBER and Sire (all these codes are hybrid single/dual topology). General simulation setup through an abstract MD engine is available for AMBER, GROMACS, NAMD and DL_POLY. Supported force fields are all modern AMBER force fields, including GAFF.

Future plans include extending the code to support other popular biomolecular simulation software, additional force fields and parameterisation schemes. We particularly aim at automation where it makes sense and is possible, at ease of use, and at robustness of the code.

Please find the software at http://www.hecbiosim.ac.uk/repo/download/2-software/3-fesetup

Kind regards,
Hannes Loeffler.
Re: [gmx-users] NVIDIA GTX cards in Rackable servers, how do you do it ?
On 24/02/2015 13:29, David McGiven wrote:
> I never benchmarked 64-core AMD nodes with GPUs. [benchmark numbers quoted in full earlier in the thread]

My experience with the latest GROMACS and FFTW built on my machine is that one should not count the hyperthreaded cores, but only the real cores. My system reports 24 cores (E5-2620 v2 @ 2.10 GHz + NVIDIA K4000), but really has only 12 physical cores. Using pinning, and running only one test system at a time under optimized conditions, I used the benchmarks available at the GROMACS web site (ADH, rnase, villin; http://www.gromacs.org/GPU_acceleration). My results were:

*** rnase_cubic
45.75 ns/day with -nt 6 and GPU on
47.10 ns/day with -nt 12 and GPU on
27.66 ns/day with -nt 24 and GPU on
35.31 ns/day with -nt 12 and GPU off
21.37 ns/day with -nt 24 and GPU off

The results are more or less similar in the other benchmarks: 6 cores + GPU is close to 12 cores + GPU, and both are faster than 24 threads. The difference in the GPU case is the average GPU usage, which is more than 85% during the test runs when not all processors are in use, while it drops to 50% if all logical cores are in use (from a rough observation with the nvidia-smi tool). I have no explanation for the CPU-only benchmarks though, since I have enabled and disabled pinning, ensured that only one job was running at a time, etc. I have not played a lot with -nt, either OpenMP or MPI, since this machine is a single node.

Hope this helps in showing that more expensive may not be the way...

Best,
Stéphane
--
Lecturer, UFIP, UMR 6286 CNRS, Team Protein Design In Silico
UFR Sciences et Techniques, 2, rue de la Houssinière, Bât. 25, 44322 Nantes cedex 03, France
Tel: +33 251 125 636 / Fax: +33 251 125 632
http://www.ufip.univ-nantes.fr/ - http://www.steletch.org
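[As a side note, GPU utilization can be sampled less roughly than by eyeballing; a sketch assuming a reasonably recent NVIDIA driver:]

    # log GPU and memory-controller utilization once per second during a run
    nvidia-smi --query-gpu=timestamp,utilization.gpu,utilization.memory --format=csv -l 1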
Re: [gmx-users] g_rdf
Dear Justin,

Thanks. I have another question, about the coordination number (the cumulative number). I am computing the RDF between two types of atoms: the first group is Ca ions, and the second group is the side-chain oxygens of a peptide at a solid/liquid interface. I don't understand the integration of the RDF. I expected the number at the first minimum of the RDF to be around 2-3, but I got a very small value, around 0.1! Then I tried the -surf mol option, because all the oxygens belong to one molecule, and the value increased to 1.6! I don't understand the physics behind this normalization and integration. Could you explain it to me? When we have two groups of atoms, is the cumulative number the average coordination number between the two groups?

Regards,
Leila

On Mon, Feb 23, 2015 at 1:39 PM, Justin Lemkul jalem...@vt.edu wrote:
> On 2/23/15 6:50 AM, leila salimi wrote:
>> Dear all, I have a question regarding the radial distribution function. Does the g_rdf program work for a non-cubic cell, e.g. triclinic?
>
> Yes.
>
> -Justin
>
> --
> Justin A. Lemkul, Ph.D.
> Ruth L. Kirschstein NRSA Postdoctoral Fellow
> Department of Pharmaceutical Sciences, School of Pharmacy
> Health Sciences Facility II, Room 629, University of Maryland, Baltimore
> 20 Penn St., Baltimore, MD 21201
> jalem...@outerbanks.umaryland.edu | (410) 706-7441
> http://mackerell.umaryland.edu/~jalemkul
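[For reference, a hedged sketch of the two calculations being compared, assuming GROMACS 4.x/5.0's g_rdf and an index file containing the two groups (file and group names hypothetical):]

    # -cn writes the cumulative number n(r): the running integral of the RDF
    # weighted by the average density of the second group, i.e. the average
    # number of group-2 atoms within distance r of a group-1 atom
    g_rdf -f traj.xtc -s topol.tpr -n index.ndx -o rdf.xvg -cn rdf_cn.xvg
    # with -surf mol, distances are measured to the nearest atom of the whole
    # molecule, which changes the normalization and hence the numbers
    g_rdf -f traj.xtc -s topol.tpr -n index.ndx -surf mol -o rdf_surf.xvg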
Re: [gmx-users] NVIDIA GTX cards in Rackable servers, how do you do it ?
On Tue, Feb 24, 2015 at 2:09 PM, Harry Mark Greenblatt harry.greenbl...@weizmann.ac.il wrote:
> Perhaps he has seen some real results that do not show issues at 16 or 18 cores/socket, in which case they would be advantageous, if one can afford them. I am only going on what the manager of our cluster mentioned to me from his tests. But his tests were based on many different software packages, so perhaps Gromacs is less/not affected.

OK, that's an entirely different claim than the one you made initially. I dare say that it is dangerous to mix performance observations of many software packages - especially with that of GROMACS. However, again, if you have the data, please share it.

--
Szilárd
[gmx-users] modified residues and residual charge
Dear All,

I was wondering how people deal with the issue of generating special amino acids in Gaussian and then being left with residual charge upon removing the blocking groups. The process I follow is:

1. Build the modified amino acid (the side chain is non-standard) in Gaussian, with blocking groups on the main-chain N and carbonyl carbon (acetyl and -NHCH3, respectively), and run Gaussian.
2. Take the resulting structure and feed it into antechamber from AmberTools. One can use the charges from Gaussian, or allow antechamber to assign RESP or other charges (I haven't tried the others).
3. Edit the resulting mol2 file, remove the blocking groups, and use parmchk and tleap to output Amber force field files.
4. Use acpype to convert to GROMACS-type files.
5. Modify a copy of the force field files to include this new amino acid type.

The problem arises when removing the blocking groups, since one is left with a residual fractional charge on the residue (about -0.1 in this case). I could simply divide this charge among all the main-chain atoms (add 0.025 to each), but perhaps there is a better way.

Thanks

Harry

-------------------------------------------
Harry M. Greenblatt
Associate Staff Scientist
Dept of Structural Biology, Weizmann Institute of Science
234 Herzl St., Rehovot, 76100 Israel
Phone: 972-8-934-3625 / Facsimile: 972-8-934-4159
harry.greenbl...@weizmann.ac.il
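[For concreteness, a command-line sketch of steps 2-4 above, with flags from AmberTools of that era; file names are hypothetical, and -nc must match the net charge of the capped molecule:]

    # step 2: derive RESP charges from the Gaussian output
    antechamber -i modres.log -fi gout -o modres.mol2 -fo mol2 -c resp -nc 0
    # step 3: after editing out the caps, check for missing GAFF parameters
    parmchk -i modres_trimmed.mol2 -f mol2 -o modres.frcmod
    # (tleap then writes modres.prmtop / modres.inpcrd from the mol2 + frcmod)
    # step 4: convert the Amber files to a GROMACS topology and coordinates
    acpype -p modres.prmtop -x modres.inpcrd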
[gmx-users] How to perform simulation between initial and final structures
Dear All,

I am working on a GTPase protein, and I have two PDB structures of the protein, in the GTP-bound and GDP-bound states, which have two different conformations. Now I want to perform a simulation between the two structures to see how the conformational change takes place. Is it possible to perform a simulation keeping the initial and final structures fixed? Please kindly provide me with the procedure.

Thank you in advance,
--
Ananya Chatterjee,
Senior Research Fellow (SRF),
Department of Biological Science,
IISER-Kolkata.
Re: [gmx-users] NVIDIA GTX cards in Rackable servers, how do you do it ?
On Tue, Feb 24, 2015 at 12:32 PM, David McGiven davidmcgiv...@gmail.com wrote:
> Hi Szilárd,
> Thank you very much for your great advice.
> [...]
> According to an email exchange I had with Carsten Kutzner, for the kind of simulations we would like to run (see below), lower-frequency v2's give a better performance-to-price ratio.

That's quite likely the case. Plot the price vs #cores x base frequency and that will give you a reasonably good idea about _expected_ performance vs price.

> For instance, we can get from a national reseller:
> 2U server: 2 x E5-2699V3 18c 2.3 GHz, 64 GB DDR4, 2 x GTX 980 (certified for the server) - 13,400 EUR (sans VAT)
> 2U server: 2 x E5-2695V2 12c 2.4 GHz, 64 GB DDR3, 2 x GTX 980 (certified for the server) - 9,140 EUR (sans VAT)
> Does that qualify as saving a decent amount of money to go for the v2? I don't think so, also because we care about rack space: fewer servers, but potent ones. The latest Haswells are way too overpriced for us.

Well, if you think that almost 50% extra cost is worth it, go for it! However, let me add a few notes/warnings:
* The Xeon v3's clock is deceiving (borderline lie from Intel); in AVX mode those 2699V3-s run at around 1.9 GHz. At that point the difference between the two CPUs quite likely becomes <=25%, and if you took an E5-2697v2, which should be only a couple of hundred more than the 2695v2, the difference would likely become even smaller.
* Instead of the E5-2699V3, I think you may be better off with the E5-2697 v3 - especially if both drop the clock by 400 MHz in AVX mode.

> We want to run molecular dynamics simulations of transmembrane proteins inside a POPC lipid bilayer, in a system with ~10^5 atoms, of which almost 1/3 are water molecules, employing the usual conditions with PME for electrostatics and cutoffs for LJ interactions. I think we'll go for the v3 version.

Will be a sweet setup, let us know the performance when you have the machine!

> I've been told on this list that NVIDIA GTX cards offer the best performance/price ratio for GROMACS 5.0.

Yes, that is the case.

> However, I am wondering: how do you use GTX cards in rackable servers? [...]

Certification can be an issue - unless you buy many and can cut a deal with a company. There are some companies that do certify servers, but AFAIK most/all are US-based. I won't post a long advertisement here, but you can find many names if you browse NVIDIA's GPU computing site (and as a matter of fact the AMBER GPU site is quite helpful in this respect too). You can consider getting vanilla server nodes and plugging the GTX cards in yourself. In general, I can recommend Supermicro; they have pretty good value servers from 1 to 4U. The easiest is to use the latter because GTX cards will just fit vertically, but it will be a serious waste of rack space. With a bit of tinkering you may be able to get GTX cards into 3U, but you'll either need cards with connectors on the back or 90-degree-angled 4-pin PCIe power cables. Otherwise you can only fit the cards with PCIe risers; I have no experience with that setup, but I know some people build denser machines with GTX cards.

Cheers,
--
Szilárd
Re: [gmx-users] NVIDIA GTX cards in Rackable servers, how do you do it ?
On 2/24/15 8:09 AM, Harry Mark Greenblatt wrote: BSD Dear David, Hey Harry, Thanks for the caveat. Carsten Kutzner posted these results a few days ago. This is what he said: I never benchmarked 64-core AMD nodes with GPUs. With an 80 k atom test system using a 2 fs time step I get:
  24 ns/d on 64 AMD cores 6272
  16 ns/d on 32 AMD cores 6380
  36 ns/d on 32 AMD cores 6380 with 1x GTX 980
  40 ns/d on 32 AMD cores 6380 with 2x GTX 980
  27 ns/d on 20 Intel cores 2680v2
  52 ns/d on 20 Intel cores 2680v2 with 1x GTX 980
  62 ns/d on 20 Intel cores 2680v2 with 2x GTX 980
I think 20 Intel cores means 2 x 10 cores each. Yes, the 2680v2 is a 10-core processor. The interference I mentioned is not apparent across sockets, only within. But Szilard just mentioned in this same thread: If you can afford them get the 14/16 or 18 core v3 Haswells, those are *really* fast, but a pair can cost as much as a decent car. Perhaps he has seen some real results that do not show issues at 16 or 18 cores/socket, in which case they would be advantageous, if one can afford them. I am only going on what the manager of our cluster mentioned to me in his tests. But his tests were based on many different software packages, so perhaps Gromacs is less/not affected. When running multiple jobs simultaneously on multi-core nodes, was pinning done properly so the jobs don't interfere with one another? -Justin -- == Justin A. Lemkul, Ph.D. Ruth L. Kirschstein NRSA Postdoctoral Fellow Department of Pharmaceutical Sciences School of Pharmacy Health Sciences Facility II, Room 629 University of Maryland, Baltimore 20 Penn St. Baltimore, MD 21201 jalem...@outerbanks.umaryland.edu | (410) 706-7441 http://mackerell.umaryland.edu/~jalemkul ==
Re: [gmx-users] NVIDIA GTX cards in Rackable servers, how do you do it ?
On Tue, Feb 24, 2015 at 1:17 PM, Harry Mark Greenblatt harry.greenbl...@weizmann.ac.il wrote: BSD Dear David, We did some tests with Gromacs and other programs on CPUs with core counts up to 16 per socket, and found that after about 12 cores, jobs/threads begin to interfere with each other. In other words, there was a performance penalty when using core counts above 12. I don't have the details in front of me, but you should at the very least get a test machine and try running your simulations for short periods with 10, 12, 14, 16 and 18 cores in use to see how Gromacs behaves with these processors (unless someone has done these tests, and can confirm that Gromacs has no issues with 16 or 18 core CPUs). Please share the details, because it is in our interest to understand and address such issues if they are reproducible. However, note that I've run on CPUs with up to 18 cores (and up to 36-96 threads per socket) and in most cases the multi-threaded code scales quite well - as long as it is not combined with DD/MPI. There are some known multi-threaded scaling issues that are being addressed for 5.1, but without log files it's hard to know what the nature of the performance penalty you mention is. Note: HyperThreading and SMT in general change the situation, but that's a different topic. -- Szilárd [...]
Re: [gmx-users] NVIDIA GTX cards in Rackable servers, how do you do it ?
On Tue, Feb 24, 2015 at 3:44 PM, Téletchéa Stéphane stephane.teletc...@univ-nantes.fr wrote: On 24/02/2015 13:29, David McGiven wrote: I never benchmarked 64-core AMD nodes with GPUs. With an 80 k atom test system using a 2 fs time step I get:
  24 ns/d on 64 AMD cores 6272
  16 ns/d on 32 AMD cores 6380
  36 ns/d on 32 AMD cores 6380 with 1x GTX 980
  40 ns/d on 32 AMD cores 6380 with 2x GTX 980
  27 ns/d on 20 Intel cores 2680v2
  52 ns/d on 20 Intel cores 2680v2 with 1x GTX 980
  62 ns/d on 20 Intel cores 2680v2 with 2x GTX 980
I think 20 Intel cores means 2 x 10 cores each. But Szilard just mentioned in this same thread: If you can afford them get the 14/16 or 18 core v3 Haswells, those are *really* fast, but a pair can cost as much as a decent car. I know for sure Gromacs scales VERY well on 4 x 16 core latest AMD (Interlagos, Bulldozer, etc.) machines, but I have no experience with Intel Xeon. My experience with the latest Gromacs and FFTW built on my machine is that one should not count the hyperthreaded cores, but only the real cores. My system reports 24 cores (E5-2620 v2 @ 2.10GHz + NVIDIA K4000), but really has only 12 real cores. Using pinning, and running only one test system at a time under optimized conditions, I used the benchmarks available at the Gromacs web site (ADH, rnase, villin, http://www.gromacs.org/GPU_acceleration). My results were (rnase_cubic):
  45.75 ns/day with -nt 6 and GPU on
  47.10 ns/day with -nt 12 and GPU on
  27.66 ns/day with -nt 24 and GPU on
  35.31 ns/day with -nt 12 and GPU off
  21.37 ns/day with -nt 24 and GPU off
The results are more or less similar in the other benchmarks: 6 cores + GPU is close to 12 cores + GPU, and faster than 24 cores... The difference in the GPU case is the average GPU usage, which is more than 85% during the test runs when not all processors are in use, while it drops to 50% if all cores are in use (from a rough observation of GPU usage with the nvidia-smi tool). I have no explanation for the CPU-only benchmarks, though, since I have enabled or disabled pinning, ensured that only one job was running at a time, etc. I have not played a lot with -nt, either omp or mpi, since this machine is a single node. Hope this helps in showing that more expensive may not be the way... Thanks! Let me note that those observations are particular to your machine. There are multiple factors that cumulatively affect the multi-threaded scaling: physical vs HT threads; crossing socket boundaries; iteration time/data per thread; GPU use and GPU performance. In your case all of these factors are somewhat disadvantageous for good scaling. You have two sockets, so your runs are crossing CPU socket boundaries. The input is quite small, and with GPUs the HyperThreading disadvantages can increase - especially with a slow GPU. Also note: - your Quadro K4000 can likely not keep up with the 12 CPU cores and there is probably some Wait GPU time (see log file); - if you want to test 1 CPU + 1 GPU using HT vs not using it, make sure to run with -pinstride 1 -ntomp 12 in the latter case! - -nt is a partially deprecated/backward-compatibility flag and should only be used if its meaning, "use this many tMPI or OpenMP threads and decide which one is better", is what you want - which is not the case here! Cheers, Sz. Best, Stéphane -- Lecturer, UFIP, UMR 6286 CNRS, Team Protein Design In Silico, UFR Sciences et Techniques, 2, rue de la Houssinière, Bât. 25, 44322 Nantes cedex 03, France. Tél : +33 251 125 636 / Fax : +33 251 125 632 http://www.ufip.univ-nantes.fr/ - http://www.steletch.org
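[To make the thread/pinning comparison above concrete - a minimal sketch, assuming a GROMACS 4.6/5.0-style thread-MPI mdrun on a 12-core/24-thread node like Stéphane's; topol.tpr and md.log are hypothetical file names, and the log prints the actual thread-to-core mapping, which is worth checking:

  mdrun -s topol.tpr -ntomp 24 -pin on               # all 24 hardware threads (HT in use)
  mdrun -s topol.tpr -ntomp 12 -pin on -pinstride 1  # 12 threads, stride 1, as suggested above
  grep "Wait GPU" md.log                             # how long the CPU sat waiting for the GPU

A large Wait GPU share in the cycle-accounting table means the GPU, not the CPU thread count, is the bottleneck.]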
[gmx-users] Doubt about energies in a very simple system
Hi, I designed a very simple system that is composed of only one glutamine, with tleap. I've transformed the corresponding .prmtop and .inpcrd into .top and .gro files, using a converter that I'm developing. I attach the .top, .gro and .mdp files:

[ defaults ]
; nbfunc  comb-rule  gen-pairs  fudgeLJ  fudgeQQ
  1       2          yes        0.5      0.8

[ atomtypes ]
; name  bond_type  mass  charge  ptype  sigma        epsilon      ; Amb
  AA    AA         0.0   0.0     A      3.25000e-01  7.11280e-01  ; 1.82  0.1700
  AB    AB         0.0   0.0     A      1.06908e-01  6.56888e-02  ; 0.60  0.0157
  AC    AC         0.0   0.0     A      3.39967e-01  4.57730e-01  ; 1.91  0.1094
  AD    AD         0.0   0.0     A      2.47135e-01  6.56888e-02  ; 1.39  0.0157
  AE    AE         0.0   0.0     A      2.64953e-01  6.56888e-02  ; 1.49  0.0157
  AF    AF         0.0   0.0     A      3.39967e-01  3.59824e-01  ; 1.91  0.0860
  AG    AG         0.0   0.0     A      2.95992e-01  8.78640e-01  ; 1.66  0.2100

[ moleculetype ]
; name  nrexcl
  sys   0

[ atoms ]
;  nr  type  resi  res  atom  cgnr  charge     mass      ; qtot
   1   AA    1     xxx  AA     1    -0.516300  14.01000  ; -0.516
   2   AB    1     xxx  AB     2     0.293600   1.00800  ; -0.223
   3   AC    1     xxx  AC     3     0.039700  12.01000  ; -0.183
   4   AD    1     xxx  AD     4     0.110500   1.00800  ; -0.073
   5   AC    1     xxx  AC     5     0.056000  12.01000  ; -0.017
   6   AE    1     xxx  AE     6    -0.017300   1.00800  ; -0.034
   7   AE    1     xxx  AE     7    -0.017300   1.00800  ; -0.051
   8   AC    1     xxx  AC     8     0.013600  12.01000  ; -0.038
   9   AE    1     xxx  AE     9    -0.042500   1.00800  ; -0.080
  10   AE    1     xxx  AE    10    -0.042500   1.00800  ; -0.123
  11   AF    1     xxx  AF    11     0.805400  12.01000  ;  0.683
  12   AG    1     xxx  AG    12    -0.818800  16.0      ; -0.136
  13   AG    1     xxx  AG    13    -0.818800  16.0      ; -0.955
  14   AF    1     xxx  AF    14     0.536600  12.01000  ; -0.418
  15   AG    1     xxx  AG    15    -0.581900  16.0      ; -1.000

[ pairs ]
   1   6  1
   1   7  1
   1   8  1
   1  15  1
   2   4  1
   2   5  1
   2  14  1
   3   9  1
   3  10  1
   3  11  1
   4   6  1
   4   7  1
   4   8  1
   4  15  1
   5  12  1
   5  13  1
   5  15  1
   6   9  1
   6  10  1
   6  11  1
   6  14  1
   7   9  1
   7  10  1
   7  11  1
   7  14  1
   8  14  1
   9  12  1
   9  13  1
  10  12  1
  10  13  1

[ exclusions ]
   1   2  3  4  5  14
   2   1  3
   3   1  2  4  5  6  7  8  14  15
   4   1  3  5  14
   5   1  3  4  6  7  8  9  10  11  14
   6   3  5  7  8
   7   3  5  6  8
   8   3  5  6  7  9  10  11  12  13
   9   5  8  10  11
  10   5  8  9  11
  11   5  8  9  10  12  13
  12   8  11  13
  13   8  11  12
  14   1  3  4  5  15
  15   3  14

[ system ]
sys

[ molecules ]
; Compound  nmols
  sys       1

I have omitted the [ bonds ], [ angles ] and [ dihedrals ] sections because they aren't necessary for my question. Now the .gro:

15
    1 xxx   AA    1   0.3326   0.1548  -0.
    1 xxx   AB    2   0.3909   0.0724  -0.
    1 xxx   AC    3   0.3970   0.2846  -0.
    1 xxx   AD    4   0.3672   0.3400  -0.0890
    1 xxx   AC    5   0.3577   0.3654   0.1232
    1 xxx   AE    6   0.2497   0.3801   0.1241
    1 xxx   AE    7   0.3877   0.3116   0.2131
    1 xxx   AC    8   0.4267   0.4996   0.1195
    1 xxx   AE    9   0.5347   0.4850   0.1186
    1 xxx   AE   10   0.3967   0.5535   0.0296
    1 xxx   AF   11   0.3874   0.5805   0.2429
    1 xxx   AG   12   0.4595   0.5679   0.3454
    1 xxx   AG   13   0.2856   0.6542   0.2334
    1 xxx   AF   14   0.5486   0.2705  -0.
    1 xxx   AG   15   0.6009   0.1593  -0.
  20.00  20.00  20.00

and the SPE.mdp:

dt                    = 0.001000
gen-vel               = no
gen-temp              = 0.00
pbc                   = xyz
integrator            = md
nsteps                = 0
constraints           = none
constraint-algorithm  = SHAKE
rlist                 = 1.20
rcoulomb              = 1.20
rvdw                  = 1.20
coulombtype           = Cut-off
vdwtype               = Cut-off
vdw-modifier          = None
coulomb-modifier      = None
nstlog                = 1
nstenergy             = 1
nstcalcenergy         = 1
cutoff-scheme         = Verlet
comm-mode             = Linear
nstcomm               = 1
continuation          = yes

My question is that when I execute a single-point energy (SPE) calculation of the original system in ACEMD, I obtain the following VDW and Coulomb energies:

  LJ            15.5754 kJ/mol
  COULOMB      123.8486 kJ/mol

and in Gromacs:

  LJ-14         14.3788 kJ/mol
  Coulomb-14   207.034  kJ/mol
  LJ (SR)       29.9432 kJ/mol
  Coulomb (SR) 165.181  kJ/mol

Question/Observation number 1: Gromacs LJ (SR) includes LJ-14 energies in its result, no? I mean, LJ (SR) = the energy of the LJ interactions of those
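[For anyone wanting to reproduce this kind of term-by-term comparison, a minimal sketch of a single-point evaluation with GROMACS 4.x/5.0-style tools; the file names are hypothetical:

  grompp -f SPE.mdp -c conf.gro -p topol.top -o spe.tpr
  mdrun -s spe.tpr -deffnm spe
  g_energy -f spe.edr    # then select LJ-14, Coulomb-14, LJ (SR), Coulomb (SR)

With nsteps = 0, the md integrator evaluates the energy of the input configuration exactly once, which is what makes the comparison against the ACEMD single-point values meaningful.]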
Re: [gmx-users] several runs
Is running simulations in several terminals problematic? On Tue, Feb 24, 2015 at 8:42 PM, mah maz mahma...@gmail.com wrote: Dear all, How can I perform several simulations simultaneously (in linux)? thank you!
Re: [gmx-users] several runs
On 2/24/15 1:44 PM, mah maz wrote: Is running simulations in several terminals problematic? Usually. Unless you keep them from interfering with each other with pinning, the performance will degrade badly. Though if you're doing multiple runs on a normal desktop, performance isn't going to be good anyway... -Justin On Tue, Feb 24, 2015 at 8:42 PM, mah maz mahma...@gmail.com wrote: Dear all, How can I perform several simulations simultaneously (in linux)? thank you!
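[To make the pinning advice concrete - a minimal sketch for two concurrent runs on a 16-core node, assuming a GROMACS 4.6/5.0 thread-MPI mdrun and hypothetical run1.tpr/run2.tpr inputs:

  # give each job its own 8 cores; -pinoffset keeps them off each other's cores
  mdrun -s run1.tpr -deffnm run1 -ntomp 8 -pin on -pinoffset 0 &
  mdrun -s run2.tpr -deffnm run2 -ntomp 8 -pin on -pinoffset 8 &
  wait

Without -pin on and distinct -pinoffset values, the two jobs can drift onto the same cores and each may lose far more than half its speed.]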
[gmx-users] several runs
Dear all, How can I perform several simulations simultaneously (in linux)? thank you!
Re: [gmx-users] Application of electric field on membrane
On 2015-02-24 10:26, petrishia petrishia wrote: Dear Developers, I am new to Gromacs. I have installed Gromacs 4.5.5 and have tried the membrane tutorial. I want to apply an intense pulsed electric field with a 100 ps rise time. How do I run the simulation? Do I have to modify the md.mdp file before the production MD simulation? Kindly help me to run the simulation. With Regards, A. Petrishia, Department of ECE, College of Engineering, Guindy, Anna University, Chennai-600025, 9444689919 This is implemented in gromacs but not well documented. You can check it here: https://gerrit.gromacs.org/#/c/4458/ -- David van der Spoel, Ph.D., Professor of Biology, Dept. of Cell & Molec. Biol., Uppsala University. Box 596, 75124 Uppsala, Sweden. Phone: +46184714205. sp...@xray.bmc.uu.se http://folding.bmc.uu.se
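[For the static-field case that already exists in the 4.x series, the field is set in the .mdp file; a sketch, assuming the old E_* option syntax (the pulsed, time-dependent variant is what the change linked above adds, so it is not available in 4.5.5):

  ; constant electric field of 0.2 V/nm along z (across a z-normal bilayer)
  ; format: number-of-cosines (only 1 supported)  amplitude (V/nm)  phase
  E_z = 1 0.2 0

The amplitude and direction here are illustrative only; choose them for your own system.]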
Re: [gmx-users] NVIDIA GTX cards in Rackable servers, how do you do it ?
On 24/02/2015 17:18, Szilárd Páll wrote: [...] Dear Szilard, Thanks for the information; this was a rapid bench, but I have all the logs if needed. I know this is bound to my system and setup, but if it can help others, I'd be happy to extend my tests with the required parameters and add them to the wiki if needed. Concerning the Wait GPU time, you are right: the numbers go from 8% to 72.4%... Just let me know if you need more data and logs; I'd be happy to extend this benchmark to some other computers available here with variable setups and hardware (including AMD too), to share on real cases what the optimal settings are for the best performance/throughput. Best, Stéphane -- Lecturer, UFIP, UMR 6286 CNRS, Team Protein Design In Silico, UFR Sciences et Techniques, 2, rue de la Houssinière, Bât. 25, 44322 Nantes cedex 03, France. Tél : +33 251 125 636 / Fax : +33 251 125 632 http://www.ufip.univ-nantes.fr/ - http://www.steletch.org
[gmx-users] Computing Resource - Laptop
Hey everyone, I'm not sure if this is the place to post this or not, so my apologies if it is not. Our lab recently got some funds to put towards a desktop for molecular dynamics work, and we have a budget of approx. $4,000 CDN for the laptop. Given that I am not an expert in the hardware area, nor do I have a ton of experience in the simulation area either, I wanted to see if there were any suggestions, resources, or even experiences that this mailing list may have, so that we can get the most out of our money. Primarily the computer will be used to run GROMACS and be used for analysis and some small-scale simulation work. We do have access to supercomputing clusters, which will serve as the primary resource for modelling. Thanks for your help in advance! -Douglas Grahame
[gmx-users] GROMOS vs. CHARMM dihedrals
Hi all, I was wondering about the interchangeability of proper dihedral parameters between the GROMOS and CHARMM force fields. I am building a new, non-protein residue, and I would like to use parameters from my GROMOS structure for the CHARMM simulation. Specifically, I would like to use the proper dihedral definitions from GROMOS in the CHARMM simulation. Is it OK to do so? Thank you all in advance, B.
Re: [gmx-users] GROMOS vs. CHARMM dihedrals
On 2/24/15 1:19 PM, Bianca Villavicencio wrote: Hi all, I was wondering about the interchangeability of proper dihedral parameters between the GROMOS and CHARMM force fields. I am building a new, non-protein residue, and I would like to use parameters from my GROMOS structure for the CHARMM simulation. Specifically, I would like to use the proper dihedral definitions from GROMOS in the CHARMM simulation. Is it OK to do so? No. Dihedrals, in general, are poorly transferable even between similar chemical groups within a force field. Trying to transfer them between totally different force fields is unreasonable. -Justin
Re: [gmx-users] Energy minimization for Inositol pyrophosphate
Hello, I have understood my mistakes that I reported in previous queries in this thread, and now I am proceeding more systematically. I have generated force-field parameters for Inositol pyrophosphate with the help of the PRODRG server (as mentioned in the tutorials) as a residue, and added it to aminoacids.rtp. I was able to reformat atoms, bonds, bond angles, dihedrals and impropers by imitating the format of the definitions of other residues. But I am not able to find a format for [ pairs ], which I believe is important for my experiment. Particular pairs defined in the ffnonbonded.itp file of GROMOS54A7 do not have type codes, unlike dihedrals. If I have two columns for the individual atoms involved in the interaction (e.g. O35 <tab> P15 <tab> ?), how do I point the program to the corresponding c6 and c12 values in ffnonbonded.itp? Secondly, the parameter file generated by PRODRG does not contain all the possible dihedrals in Inositol pyrophosphate. Are those interactions ignored because of some criteria, or should I add them manually? From: ashish.bih...@outlook.com To: gmx-us...@gromacs.org Subject: RE: Energy minimization for Inositol pyrophosphate Date: Fri, 30 Jan 2015 12:25:43 +0530 Hello, I understand that .itp is a topology file, but the SwissParam pack does not have a .gro file. How do I create solvated.gro (which has to be used in grompp)? Is there a way to convert .itp into .top and .gro? The GROMACS tutorial does not say whether .itp can substitute for the other two. From: ashish.bih...@outlook.com To: gmx-us...@gromacs.org Subject: Energy minimization for Inositol pyrophosphate Date: Thu, 29 Jan 2015 16:53:56 +0530 Hello all, I have generated a PDB file for Inositol pyrophosphate using PyMOL to build the structure from scratch. I have converted the PDB to mol2 with Open Babel and submitted it to SwissParam (http://swissparam.ch/), which returned a zip file containing various files of the same name (.itp, .rtp, .crd, .par, .psf, .pdb, .mol2). 1. Which of these files can be used as a force field to generate a topology? How do I include that file? pdb2gmx -f IP7270115.pdb -ff XXX -water spc 2. The regular force fields among the options do not recognize IP7, as its atoms cannot be associated with protein/DNA/RNA; all the force fields are based on amino acid residues and nucleotides. Is there an atom-based force field available for GROMACS which I can place in the ff directory and include in the ff list (like MMFF)? The SwissParam help page says that I have to change the UNK1/LIG notation in the pdb file to CHARMM27 terminology for those atoms. I am having a hard time figuring out what names these atoms should be assigned, since all the atoms are unknown. I am attaching the pack returned by SwissParam and the version of the pdb file with some name modifications. Thank you.
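[On the [ pairs ] format question - a minimal sketch of how such a section usually looks in a molecule's .top/.itp, with hypothetical atom numbers: grompp takes just the two atom indices plus function type 1 and resolves the c6/c12 from the [ pairtypes ] section of ffnonbonded.itp, so no explicit pointer is needed; explicit values may optionally be appended to override the lookup:

  [ pairs ]
  ;  ai   aj  funct   (optional: c6  c12)
     15   35  1       ; e.g. a P15-O35 1-4 pair; parameters come from [ pairtypes ]

Note that ai/aj are the atom numbers from the [ atoms ] section, not atom names.]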
Re: [gmx-users] several runs
Dear Mark, Victor, Szilard, So many thanks for your helpful comments! Cheers On Tue, Feb 24, 2015 at 11:10 PM, mah maz mahma...@gmail.com wrote: [...]
Re: [gmx-users] several runs
On Tue, Feb 24, 2015 at 10:54 PM, Victor Rosas Garcia rosas.vic...@gmail.com wrote: Perhaps I can be of help here. Except for the smallest systems, GROMACS simulations are very compute-intensive, so a single machine is needed for a single job. In many instances, several machines are required for a single job (provided you have a fast enough network). If you give a single machine more than one compute job, or if you require of it more computing power than it physically has (like asking for 8 CPUs when the machine only has 6), that's called oversubscribing the machine. It causes severe performance degradation, as the machine has more work than it can handle - not only in terms of CPU, but also because the input/output channels will saturate. So, if a single simulation requires all the resources in your machine, running several will make it unusable. That is true, and most of the above is in general reasonable, but... It is a lot better to run a single job per machine. This very much depends on the kind of simulations you're doing and the machine you're using! [Most often] Parallelizing isn't free and scaling isn't perfect! Hence, if you don't have a big enough problem and/or you anyway have multiple runs to do on a fixed amount of hardware, you may as well do them in parallel and benefit from more efficient but narrower runs, with the end result of increased aggregate simulation throughput. Think of a single-molecule solvation free energy calculation with FEP. You'll have 10-20 simulations to run, but the input will likely be quite small (a few thousand atoms). If you have a 4-socket machine with 16 cores per socket, you will definitely not want to parallelize this tiny system across 64 cores, but rather run e.g. 16 runs on 4 cores each. The same applies even if you have only 8 cores! Running 8 single-core runs will give you better aggregate throughput than running eight runs sequentially on 8 cores each. So to conclude, it is *not always* better to run a single job per machine - especially if you have multiple independent runs to do on a limited amount of resources. Cheers, -- Szilárd Hope this helps. Victor 2015-02-24 13:40 GMT-06:00 mah maz mahma...@gmail.com: [...]
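[To make the FEP example above concrete - a sketch of 16 concurrent 4-core runs on a 64-core node, assuming a thread-MPI GROMACS build and hypothetical fep_00.tpr ... fep_15.tpr inputs:

  # one 4-core job per lambda window, pinned to its own block of cores
  for i in $(seq 0 15); do
    d=$(printf "fep_%02d" $i)
    mdrun -s $d.tpr -deffnm $d -ntomp 4 -pin on -pinoffset $((4*i)) &
  done
  wait

With a real-MPI build, mdrun -multidir can achieve the same in one command; the hand-rolled loop just shows the core-allocation logic explicitly.]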
Re: [gmx-users] Umbrella Sampling Dissociation Constant
Error in equation - see the equation at the bottom of this wiki page: http://en.wikipedia.org/wiki/Binding_constant From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se [gromacs.org_gmx-users-boun...@maillist.sys.kth.se] on behalf of Alexander Law [alexander@pg.canterbury.ac.nz] Sent: Wednesday, February 25, 2015 1:21 PM To: gmx-us...@gromacs.org Subject: [gmx-users] Umbrella Sampling Dissociation Constant Is it possible to use the Gibbs free energy value produced from an umbrella sampling experiment to calculate the dissociation constant? The PMF difference between the energy minimum and the plateau region is -18 kcal mol^-1; what is the Kd value? Is the following equation applicable: Kd = e^(ΔG/RT)? Many Thanks, Alex
[gmx-users] Umbrella Sampling Dissociation Constant
Is it possible to use the Gibbs free energy value produced from an umbrella sampling experiment to calculate the dissociation constant? The PMF difference between the energy minimum and the plateau region is -18 kcal mol^-1; what is the Kd value? Is the following equation applicable: Kd = e^(ΔG/RT)? Many Thanks, Alex
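[As a pure arithmetic illustration of that equation - assuming T = 298 K and a 1 M standard state, and ignoring the standard-state/volume corrections that a rigorous PMF-to-Kd conversion requires (presumably the error being pointed out above):

  RT ≈ 0.593 kcal/mol at 298 K
  Kd = c° · e^(ΔG/RT) = (1 M) · e^(-18/0.593) ≈ e^(-30.4) ≈ 7 x 10^-14 M

i.e. a ΔG of -18 kcal/mol would correspond to a Kd in the tens of femtomolar.]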
Re: [gmx-users] Computing Resource - Laptop
How many? What kind of simulations and analysis (asking to know if you'll need many cores, few fast cores, GPUs, etc.)? 4000 CAD is a quite decent sum; it should get you at least 2 fast workstations. -- Szilárd On Tue, Feb 24, 2015 at 9:36 PM, Douglas Grahame dgrah...@uoguelph.ca wrote: [...]
Re: [gmx-users] Beyond the KALP15 in DPPC tutorial and doing analysis in GROMACS
Dear gmx-users, Ok Justin, here is the information you asked for. See Figure 3 for the antimicrobial bacterial-type membrane disruption models that the antimicrobial peptide maximin 3 might be using. Note that all models require the interaction of several maximin 3 molecules, not just one, with the membrane. Once I figure out how to get good data with one maximin 3 molecule, I might have time to use GROMACS to simulate several maximin 3 molecules to try to find the correct model. Then future work would be to use GROMACS to figure out how to decrease the toxicity of maximin 3 to mammalian-type membranes enough that maximin 3 would be a viable antibiotic in topical, injectable, and pill form. Also, I put on Dropbox my folder with the files I completed the KALP15 in DPPC tutorial with: https://www.dropbox.com/s/pn2xzsoxs7n7uag/KALP.zip?dl=0

Antimicrobial Peptides (AMPs). AMPs have four general mechanisms for antimicrobial activity, not including their antiviral activity. The first mechanism is thought to be the killing mechanism of the majority of eukaryotic AMPs; therefore, since Bombina maxima is a eukaryote, maximin 3 is probably using the first mechanism. The first mechanism is the formation of ion channels or pores across the cytoplasmic membrane of bacteria, which causes membrane perturbation, dissipation of the electrochemical gradient across the cell membrane, and loss of cell content (Parisien, Allain, Zhang, Mandeville, Lan, 2007). The other three mechanisms are used by other antimicrobial peptides. The second mechanism is inhibition of cell wall biosynthesis (Parisien, Allain, Zhang, Mandeville, Lan, 2007). The third mechanism kills bacteria by the AMP having RNase or DNase activity (Parisien, Allain, Zhang, Mandeville, Lan, 2007). The fourth mechanism is used by phage tail-like bacteriocins to kill other bacteria (Bacteriocin, n.d.) through specific binding of bacteriocins to the bacterial receptor, which provokes depolarization and perforation of the cytoplasmic membrane, inducing membrane perturbations (Parisien, Allain, Zhang, Mandeville, Lan, 2007). Of the first mechanism (formation of ion channels or pores), the three most cited models of antimicrobial activity are the barrel-stave, carpet, and toroidal pore models (Chan, Prenner, Vogel, 2006). In the barrel-stave model (Figure 3, part A), the AMP spans the membrane and forms a pore lined with peptides such that the hydrophobic side of the AMP is exposed to the lipid and the hydrophilic portion of the AMP is exposed to the interior of the barrel (Brogden K. A., 2005), and the pore dissipates proton gradients, etc. In the carpet model (Figure 3, part B), the AMPs line up parallel to the membrane surface and form a peptide carpet. This is followed by a detergent-like action induced by the AMPs that causes pore formation by ejecting micelles. In the toroidal pore model (Figure 3, part C), pores of various lifetimes are created, containing AMPs as well as lipid molecules that are curved inwards towards the pore in a continuous fashion from the surface of the membrane. After transient pore formation using the heads of the lipids to form the interior edge (Bertelsen, Dorosz, Hansen, Nielsen, Vosegaard, 2012), the AMPs end up in both leaflets of the bilayer, which presents a mechanism of shuttling the peptides inside. Longer-lived toroidal pores may have a lethal effect similar in mechanism to barrel-stave pores. In the molecular electroporation model (Figure 3, part D), the cationic AMPs associate with the bacterial membrane and generate an electrical potential difference across the membrane. When the potential difference reaches 0.2 V, it is thought that pores will be generated through electroporation. The sinking raft model (Figure 3, part E) proposes that binding of the amphipathic AMPs causes a mass imbalance and, consequently, an increase in local membrane curvature. As the AMPs self-associate, they sink into the membrane, creating transient pores which result in the AMPs residing in both leaflets after their resolution. All these antimicrobial mechanisms derive from hydrophobic, hydrophilic, and charged interactions of the AMPs with the membrane (Paulson, 2013). https://www.dropbox.com/s/33l10m423mqqpu2/AMPactivity.jpg?dl=0 Figure 3: Five models of the first mechanism (formation of ion channels or pores) of AMP activity (Chan, Prenner, Vogel, 2006). Red represents a hydrophilic surface, while blue represents a hydrophobic surface. A, B, and C all start from the same conformation, with the AMPs associating with the bacterial membrane (top left). (A) barrel-stave model, (B) carpet model, (C) toroidal pore model, (D) molecular electroporation model, (E) sinking raft model. Previous Work: The purpose of the research is to continue the work of the previous students on this project.