Re: [gmx-users] failing of g_membed
Hi, As I'm sure you know from reading up on membed, it works by scaling the embedded protein to be tiny and turning off its interactions. The most likely issue is that you didn't give those instructions correctly. Double-check the documentation, and use a recent version of GROMACS to pick up the bug fixes made since your version, which I can see is several years old. Mark

On Tue, Jan 8, 2019 at 9:21 AM Netaly Khazanov wrote:
> Hi All,
> I have an aligned structure of protein+ligands in the membrane (a mixture of DPPC and DOPC).
> The structure is solvated with water:
> gmx solvate -cp protein_ATP_membrane_box.gro -cs -o protein_ATP_membrane_box_solv.gro -p topol1.top
> and I used it as the input file for g_membed:
> grompp_d -c protein_ATP_membrane_box_solv.gro -p topol1.top -f membed.mdp -o membed.tpr -n
> g_membed_d -f membed.tpr -p topol1.top -o membed_out.trr -x membed_out.xtc -c membed_out.gro -e -n -xyinit 0.1 -xyend 1.0 -nxy 1000
> or
> g_membed_d -f membed.tpr -p topol1.top -o membed_out.trr -x membed_out.xtc -c membed_out.gro -e -n -xyinit 0.1 -xyend 1.0 -nxy 1000 -zinit 1.1 -zend 1.0 -nz 100
> It fails with:
> Fatal error:
> Step 0, time 0 (ps) LINCS WARNING
> relative constraint deviation after LINCS:
> rms 175118.647912, max 39729618.834230 (between atoms 86781 and 86783)
> bonds that rotated more than 90 degrees:
> Too many LINCS warnings (9639)
> If you know what you are doing you can adjust the lincs warning threshold in your mdp file
> I tried reducing the time step; it didn't help.
> Where should I look in my structure? It is obvious that there are a lot of overlapping atoms of protein with membrane/water.
> Thank you for any help.
> Netaly
--
Gromacs Users mailing list

* Please search the archive at http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a mail to gmx-users-requ...@gromacs.org.
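A quick way to confirm the overlap hypothesis before (re)running membed is to scan the solvated .gro file for very close atom pairs. A minimal sketch, assuming the standard fixed-column .gro format (atom name in columns 11-15, coordinates in nm starting at column 21) and an arbitrary 0.05 nm clash cutoff; adjust the cutoff to taste:

```python
def read_gro_coords(lines):
    """Parse atom records from a .gro file (fixed-column format):
    title line, atom count, then one atom per line."""
    natoms = int(lines[1])
    atoms = []
    for line in lines[2:2 + natoms]:
        name = line[10:15].strip()
        x, y, z = float(line[20:28]), float(line[28:36]), float(line[36:44])
        atoms.append((name, x, y, z))
    return atoms

def clashes(atoms, cutoff=0.05):
    """Brute-force O(n^2) scan for atom pairs closer than `cutoff` nm.
    Fine as a sanity check; use a cell list for systems of this size
    (~90k atoms) if it is too slow."""
    out = []
    for i in range(len(atoms)):
        for j in range(i + 1, len(atoms)):
            dx = atoms[i][1] - atoms[j][1]
            dy = atoms[i][2] - atoms[j][2]
            dz = atoms[i][3] - atoms[j][3]
            if dx * dx + dy * dy + dz * dz < cutoff * cutoff:
                out.append((i + 1, j + 1))  # 1-based atom indices, as in GROMACS output
    return out
```

Note this ignores periodic images, so contacts across the box boundary are missed; it is only meant to locate the gross protein/membrane/water overlaps described above.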
Re: [gmx-users] Gromacs 5.1.4 with GTX 780TI on Ubuntu 16.04; upgraded with GTX1080TI
Hi, Things will go smoothly if your CUDA driver and runtime are both at least as recent as your device requires. Your runtime is 8.0 in some cases, and the 384 driver is not very recent. Mark

On Wed, Jan 9, 2019 at 1:58 AM paul buscemi wrote:
> > On Jan 8, 2019, at 6:29 PM, paul buscemi wrote:
> >
> > I just built from a similar situation but also went to Ubuntu Mint Tara 19, CUDA runtime 10 (used the Nvidia web site .run version, not the deb -- do not install the driver from the toolkit; add the 410 driver from the PPA). The system is quite happy. Forgot to add: use gcc-6, and GROMACS 2019. Under 3 hrs for the entire installation. It's really fairly painless.
> >
> > I believe I ran across some information suggesting that the mixture of runtime 8, CUDA driver 9 and Ubuntu 16 is not a good mix. I'll try to look for it later if you need further information.
> >
> > Paul
> >
> >> On Jan 8, 2019, at 2:14 PM, David van der Spoel wrote:
> >>
> >> Den 2019-01-08 kl. 20:33, skrev Adarsh V. K.:
> >>> Dear all,
> >>> recently upgraded Gromacs 5.1.4 with GTX 780TI on Ubuntu 16.04 with a new GPU GTX1080TI. CUDA from 7.5 to 8. Driver 384.
> >>> Problem: GPU not detected during MD run. Details are as follows:
> >>
> >> Try upgrading to gromacs 2019.
> >>
> >>> [deviceQuery output snipped; quoted in full in the original post below]
> >>> deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 9.0, CUDA Runtime Version = 8.0, NumDevs = 1, Device0 = GeForce GTX 1080 Ti
> >>> Result = PASS
> >>
> >> --
> >> David van der Spoel, Ph.D., Professor of Biology
> >> Head of Department, Cell & Molecular Biology, Uppsala University.
> >> Box 596, SE-75124 Uppsala, Sweden. Phone: +46184714205.
> >> http://www.icm.uu.se
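The driver/runtime relationship Mark describes can be checked mechanically: a CUDA driver can run binaries built against an equal or older runtime, so the driver version deviceQuery reports must be at least the runtime version GROMACS was built against. A minimal parsing sketch, assuming the version-line format shown in the quoted deviceQuery output:

```python
import re

def parse_cuda_versions(devicequery_output):
    """Extract (driver, runtime) version tuples from deviceQuery output."""
    m = re.search(
        r"CUDA Driver Version / Runtime Version\s+(\d+)\.(\d+)\s*/\s*(\d+)\.(\d+)",
        devicequery_output)
    if not m:
        raise ValueError("no CUDA version line found")
    driver = (int(m.group(1)), int(m.group(2)))
    runtime = (int(m.group(3)), int(m.group(4)))
    return driver, runtime

def driver_supports_runtime(driver, runtime):
    """A CUDA driver can run code built against an equal or older runtime."""
    return driver >= runtime
```

For the output quoted in this thread (driver 9.0, runtime 8.0) the pair is compatible, which suggests the "0 compatible GPUs" message comes from the GROMACS build itself (e.g. built without CUDA support, or against the older toolkit) rather than from a driver/runtime mismatch.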
Re: [gmx-users] Shell platform for GROMACS on Windows?
Hi, There's no reason it shouldn't work, but we don't test it. I have heard reports that setting thread affinity doesn't work under Cygwin, and that will make mdrun slow. Mark

On Wed, Jan 9, 2019 at 2:47 AM Neena Susan Eappen <neena.susaneap...@mail.utoronto.ca> wrote:
> Thank you Wahab,
>
> Is there any issue with using Cygwin on Windows with GROMACS?
>
> Neena
>
> From: Neena Susan Eappen
> Sent: Tuesday, January 8, 2019 4:07 AM
> To: gromacs.org_gmx-users@maillist.sys.kth.se
> Subject: [gmx-users] Shell platform for GROMACS on Windows?
>
> Hello GROMACS users,
> To run GROMACS on Windows, what is the best shell interface? Cygwin, or anything else?
>
> Thank you,
> Neena
Re: [gmx-users] How can I install same gromacs on few servers
Hi, If you have a heterogeneous cluster and want optimal performance, then you will need multiple GROMACS installations. Putting them on a network-shared filesystem and mounting the correct one for each machine is the efficient approach, but in some cases a local installation on each machine will perform better. Mark

On Tue, Jan 8, 2019 at 3:44 PM Shlomit Afgin wrote:
> Hi,
> We have a few CentOS 7 servers that use the same application stack.
> I compiled Gromacs on one of them, and I want to be able to run Gromacs from all of them.
> When I run it from a server that Gromacs was not compiled on, I get:
> Illegal instruction
> What is the best way to install Gromacs for all of them?
> Shlomit
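"Illegal instruction" typically means the binary was built with SIMD instructions (chosen via the -DGMX_SIMD CMake option) that the other CPUs lack. A sketch of the per-machine selection Mark describes, picking the most capable available build from a host's CPU feature flags; the install paths and the simplified SIMD ladder are hypothetical and should be adjusted to the builds you actually made:

```python
# Most capable first: (GMX_SIMD-style name, required /proc/cpuinfo flag).
# This ladder is illustrative, not the full set GROMACS supports.
SIMD_LADDER = [
    ("AVX2_256", "avx2"),
    ("AVX_256", "avx"),
    ("SSE4.1", "sse4_1"),
]

def cpu_flags(cpuinfo_text):
    """Extract the feature-flag set from /proc/cpuinfo contents."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return set(line.split(":", 1)[1].split())
    return set()

def pick_build(flags, installs):
    """Return the install prefix of the most capable build this CPU
    supports; `installs` maps SIMD names to install prefixes."""
    for simd, needed in SIMD_LADDER:
        if needed in flags and simd in installs:
            return installs[simd]
    raise RuntimeError("no compatible GROMACS build for this CPU")
```

On each server, a login script could read /proc/cpuinfo, call pick_build, and prepend the chosen prefix's bin directory to PATH. Alternatively, building the one shared installation with the lowest common SIMD level avoids the crash at some cost in speed.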
Re: [gmx-users] REMD Plots
Hi Shan, I am not quite sure, but it sounds like you want to plot the mobility of the 30 replicas in temperature space over the REMD simulation. If that is the case, you can use the data in the replica_temp.xvg file to plot replica index vs. REMD step: the first column in the file corresponds to the REMD step, and the 2nd to 31st columns correspond to the mobility of replicas 0 to 29. Hope this helps. cheers Joel

On Wed, 9 Jan 2019 at 13:23, Shan Jayasinghe wrote:
> Dear Gromacs users,
>
> How do we plot a graph of temperature vs. swap step number for a REMD simulation with 30 systems? I have already generated the replica_temp.xvg and replica_index.xvg files using the demux.pl script.
>
> Thank you.
>
> Best Regards
> Shan Jayasinghe

--
Joel Baffour Awuah
PhD Candidate
Institute for Frontier Materials
Deakin University
Waurn Ponds, 3126 VIC
Australia
+61450070635
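The column layout Joel describes can be turned into plottable series with a few lines. A minimal sketch, assuming the usual .xvg conventions ('#' comment and '@' formatting lines, then whitespace-separated numeric columns):

```python
def read_xvg(text):
    """Parse an .xvg file body into a list of float rows, skipping
    '#' comment and '@' formatting lines."""
    rows = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line[0] in "#@":
            continue
        rows.append([float(v) for v in line.split()])
    return rows

def replica_trajectory(rows, replica):
    """Extract (step, value) pairs for one replica: column 0 is the
    REMD step, columns 1..N hold replicas 0..N-1."""
    return [(row[0], row[replica + 1]) for row in rows]
```

Feeding the (step, value) pairs for each of the 30 replicas to any plotting tool (matplotlib, xmgrace, gnuplot) with step on x gives the replica-mobility plot discussed above.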
[gmx-users] Running Replica Exchange with Solute Tempering (REST2) with CHARMM36M
Hello, I am trying to figure out how to run a Replica Exchange with Solute Tempering (REST2) simulation using the CHARMM36m force field. The only method that I have found so far involves using Plumed (which would have worked for me). However, the instructions I found (https://plumed.github.io/doc-v2.4/user-doc/html/hrex.html) explicitly say that it will not work with CHARMM cmap. Thus, I was wondering if anyone knows an alternative method of running REST2 simulations in GROMACS in general, or how to solve the cmap problem mentioned in the instructions. Also, if anyone knows anything about a parallel continuous simulated tempering (PCST) implementation in GROMACS, that would also be helpful. Thank you in advance, Aram
[gmx-users] REMD Plots
Dear Gromacs users, How do we plot a graph of temperature vs. swap step number for a REMD simulation with 30 systems? I have already generated the replica_temp.xvg and replica_index.xvg files using the demux.pl script. Thank you. Best Regards Shan Jayasinghe
Re: [gmx-users] Shell platform for GROMACS on Windows?
Thank you Wahab, Is there any issue with using Cygwin on Windows with GROMACS? Neena

From: Neena Susan Eappen
Sent: Tuesday, January 8, 2019 4:07 AM
To: gromacs.org_gmx-users@maillist.sys.kth.se
Subject: [gmx-users] Shell platform for GROMACS on Windows?

Hello GROMACS users, To run GROMACS on Windows, what is the best shell interface? Cygwin, or anything else? Thank you, Neena
Re: [gmx-users] Gromacs 5.1.4 with GTX 780TI on Ubuntu 16.04; upgraded with GTX1080TI
> On Jan 8, 2019, at 6:29 PM, paul buscemi wrote:
>
> I just built from a similar situation but also went to Ubuntu Mint Tara 19, CUDA runtime 10 (used the Nvidia web site .run version, not the deb -- do not install the driver from the toolkit; add the 410 driver from the PPA). The system is quite happy. Forgot to add: use gcc-6, and GROMACS 2019. Under 3 hrs for the entire installation. It's really fairly painless.
>
> I believe I ran across some information suggesting that the mixture of runtime 8, CUDA driver 9 and Ubuntu 16 is not a good mix. I'll try to look for it later if you need further information.
>
> Paul
>
>> On Jan 8, 2019, at 2:14 PM, David van der Spoel wrote:
>>
>> Den 2019-01-08 kl. 20:33, skrev Adarsh V. K.:
>>> Dear all,
>>> recently upgraded Gromacs 5.1.4 with GTX 780TI on Ubuntu 16.04 with a new GPU GTX1080TI. CUDA from 7.5 to 8. Driver 384.
>>> Problem: GPU not detected during MD run. Details are as follows:
>>
>> Try upgrading to gromacs 2019.
>>
>>> [deviceQuery output snipped; quoted in full in the original post below]
>>> deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 9.0, CUDA Runtime Version = 8.0, NumDevs = 1, Device0 = GeForce GTX 1080 Ti
>>> Result = PASS
Re: [gmx-users] Gromacs 5.1.4 with GTX 780TI on Ubuntu 16.04; upgraded with GTX1080TI
I just built from a similar situation, but also went to Ubuntu Mint Tara 19 and CUDA runtime 10 (used the Nvidia web site .run version, not the deb -- do not install the driver from the toolkit), added the 410 driver from the PPA, and the system is quite happy. I believe I ran across some information suggesting that the mixture of runtime 8, CUDA driver 9 and Ubuntu 16 is not a good mix. I'll try to look for it later if you need further information. Paul

> On Jan 8, 2019, at 2:14 PM, David van der Spoel wrote:
>
> Den 2019-01-08 kl. 20:33, skrev Adarsh V. K.:
>> Dear all,
>> recently upgraded Gromacs 5.1.4 with GTX 780TI on Ubuntu 16.04 with a new GPU GTX1080TI. CUDA from 7.5 to 8. Driver 384.
>> Problem: GPU not detected during MD run. Details are as follows:
>
> Try upgrading to gromacs 2019.
>
>> [deviceQuery output snipped; quoted in full in the original post below]
>> deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 9.0, CUDA Runtime Version = 8.0, NumDevs = 1, Device0 = GeForce GTX 1080 Ti
>> Result = PASS
Re: [gmx-users] Gromacs 5.1.4 with GTX 780TI on Ubuntu 16.04; upgraded with GTX1080TI
Den 2019-01-08 kl. 20:33, skrev Adarsh V. K.:
> Dear all,
> recently upgraded Gromacs 5.1.4 with GTX 780TI on Ubuntu 16.04 with a new GPU GTX1080TI. CUDA from 7.5 to 8. Driver 384.
> Problem: GPU not detected during MD run. Details are as follows:

Try upgrading to gromacs 2019.

> [deviceQuery output snipped; quoted in full in the original post below]
> deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 9.0, CUDA Runtime Version = 8.0, NumDevs = 1, Device0 = GeForce GTX 1080 Ti
> Result = PASS
[gmx-users] Gromacs 5.1.4 with GTX 780TI on Ubuntu 16.04; upgraded with GTX1080TI
Dear all,
recently upgraded Gromacs 5.1.4 with GTX 780TI on Ubuntu 16.04 with a new GPU GTX1080TI. CUDA from 7.5 to 8. Driver 384.
Problem: GPU not detected during MD run. Details are as follows:

1) GROMACS reports: Running on 1 node with total 8 cores, 8 logical cores, 0 compatible GPUs
Hardware detected:
But deviceQuery reports as follows:

2) ./deviceQuery
./deviceQuery Starting...
 CUDA Device Query (Runtime API) version (CUDART static linking)
Detected 1 CUDA Capable device(s)

Device 0: "GeForce GTX 1080 Ti"
  CUDA Driver Version / Runtime Version:          9.0 / 8.0
  CUDA Capability Major/Minor version number:     6.1
  Total amount of global memory:                  11169 MBytes (11711807488 bytes)
  (28) Multiprocessors, (128) CUDA Cores/MP:      3584 CUDA Cores
  GPU Max Clock rate:                             1658 MHz (1.66 GHz)
  Memory Clock rate:                              5505 Mhz
  Memory Bus Width:                               352-bit
  L2 Cache Size:                                  2883584 bytes
  Maximum Texture Dimension Size (x,y,z):         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
  Maximum Layered 1D Texture Size, (num) layers:  1D=(32768), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers:  2D=(32768, 32768), 2048 layers
  Total amount of constant memory:                65536 bytes
  Total amount of shared memory per block:        49152 bytes
  Total number of registers available per block:  65536
  Warp size:                                      32
  Maximum number of threads per multiprocessor:   2048
  Maximum number of threads per block:            1024
  Max dimension size of a thread block (x,y,z):   (1024, 1024, 64)
  Max dimension size of a grid size (x,y,z):      (2147483647, 65535, 65535)
  Maximum memory pitch:                           2147483647 bytes
  Texture alignment:                              512 bytes
  Concurrent copy and kernel execution:           Yes with 2 copy engine(s)
  Run time limit on kernels:                      Yes
  Integrated GPU sharing Host Memory:             No
  Support host page-locked memory mapping:        Yes
  Alignment requirement for Surfaces:             Yes
  Device has ECC support:                         Disabled
  Device supports Unified Addressing (UVA):       Yes
  Device PCI Domain ID / Bus ID / location ID:    0 / 1 / 0
  Compute Mode:
    < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 9.0, CUDA Runtime Version = 8.0, NumDevs = 1, Device0 = GeForce GTX 1080 Ti
Result = PASS
[gmx-users] How can I install same gromacs on few servers
Hi, We have a few CentOS 7 servers that use the same application stack. I compiled Gromacs on one of them, and I want to be able to run Gromacs from all of them. When I run it from a server that Gromacs was not compiled on, I get:

Illegal instruction

What is the best way to install Gromacs for all of them? Shlomit
[gmx-users] calculate interaction energy between two proteins
hi, I want to calculate the interaction energy between two proteins. My system contains the two proteins, waters, and ions. In my .mdp file I have:

energygrps = PROTEIN W ION

I am using this command:

gmx energy -f ../dynamic.edr -s ../eq.gro -o ener.xvg

and the following options come up:

 1 Bond             2 G96Angle         3 Proper-Dih.      4 Improper-Dih.
 5 LJ-(SR)          6 Coulomb-(SR)     7 Potential        8 Kinetic-En.
 9 Total-Energy    10 Temperature     11 Pressure        12 Constr.-rmsd
13 Box-X           14 Box-Y           15 Box-Z           16 Volume
17 Density         18 pV              19 Enthalpy        20 Vir-XX
21 Vir-XY          22 Vir-XZ          23 Vir-YX          24 Vir-YY
25 Vir-YZ          26 Vir-ZX          27 Vir-ZY          28 Vir-ZZ
29 Pres-XX         30 Pres-XY         31 Pres-XZ         32 Pres-YX
33 Pres-YY         34 Pres-YZ         35 Pres-ZX         36 Pres-ZY
37 Pres-ZZ         38 #Surf*SurfTen   39 Box-Vel-XX      40 Box-Vel-YY
41 Box-Vel-ZZ
42 Coul-SR:Protein-Protein   43 LJ-SR:Protein-Protein
44 Coul-SR:Protein-W         45 LJ-SR:Protein-W
46 Coul-SR:Protein-ION       47 LJ-SR:Protein-ION
48 Coul-SR:W-W               49 LJ-SR:W-W
50 Coul-SR:W-ION             51 LJ-SR:W-ION
52 Coul-SR:ION-ION           53 LJ-SR:ION-ION
54 T-Protein       55 T-W             56 T-ION
57 Lamb-Protein    58 Lamb-W          59 Lamb-ION

If I select 42 and 43, will that give me the total energy considering both proteins? Or what should I correctly select?
Thanking you,
Shahee Islam
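Once gmx energy has written the two selected terms (42 and 43 above) to ener.xvg, the short-range interaction energy per frame is simply the sum of the Coulomb-SR and LJ-SR columns. A minimal sketch, assuming the usual .xvg layout (time in the first column, then one column per selected term in selection order, with '#'/'@' header lines):

```python
def short_range_interaction(xvg_text):
    """Sum Coul-SR + LJ-SR per frame from a gmx energy .xvg file whose
    data columns are: time, Coul-SR, LJ-SR (in that selection order).
    Returns a list of (time, total_sr_energy) pairs in kJ/mol."""
    out = []
    for line in xvg_text.splitlines():
        line = line.strip()
        if not line or line[0] in "#@":
            continue
        t, coul, lj = (float(v) for v in line.split()[:3])
        out.append((t, coul + lj))
    return out
```

Note that with a single Protein energy group covering both chains, the Protein-Protein terms include intra-protein contributions as well; to isolate the protein-protein interaction, the two proteins would need to be separate energy groups.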
Re: [gmx-users] Shell platform for GROMACS on Windows?
On 08.01.2019 05:07, Neena Susan Eappen wrote:
> Hello GROMACS users,
> To run GROMACS on Windows OS, what is the best shell interface? Cygwin? or anything else?

On Windows, nowadays (Windows 10) GROMACS is compiled and run either:
- as a native Windows program through the MS Visual Studio 2017 x64 tool-chain and Windows cmake (https://cmake.org/download/). This allows (in theory) for GPU-accelerated runs (Nvidia CUDA), but that appears broken since v2018 (and the v2019 beta). Without CUDA, compilation and running worked flawlessly last time I checked.
- within the Windows Subsystem for Linux (WSL), which provides a quasi-native Ubuntu 18.04 running on the Windows file system (https://docs.microsoft.com/en-us/windows/wsl/install-win10) and allows for a "standard Linux build". WSL also has bash and every other tool available in Ubuntu.
M.
[gmx-users] modified peptide parametrization
Dear Gromacs users, I would like to simulate a protein with a peptidic ligand. However, the ligand contains modified amino acids such as a phosphothreonine, a naphthylalanine and a piperidine carboxylic acid. I am not familiar with modified peptides, so can anyone point me to a webserver (analogous to CGenFF or antechamber for small molecules, but for modified peptides), or guide me on the path to follow for such a parametrization? Many thanks, Nawel
--
Dr Nawel Mele
T: +33 (0) 634443794 (Fr)
   +44 (0) 7704331840 (UK)
[gmx-users] failing of g_membed
Hi All, I have an aligned structure of protein+ligands in the membrane (a mixture of DPPC and DOPC). The structure is solvated with water:

gmx solvate -cp protein_ATP_membrane_box.gro -cs -o protein_ATP_membrane_box_solv.gro -p topol1.top

and I used it as the input file for g_membed:

grompp_d -c protein_ATP_membrane_box_solv.gro -p topol1.top -f membed.mdp -o membed.tpr -n
g_membed_d -f membed.tpr -p topol1.top -o membed_out.trr -x membed_out.xtc -c membed_out.gro -e -n -xyinit 0.1 -xyend 1.0 -nxy 1000

or

g_membed_d -f membed.tpr -p topol1.top -o membed_out.trr -x membed_out.xtc -c membed_out.gro -e -n -xyinit 0.1 -xyend 1.0 -nxy 1000 -zinit 1.1 -zend 1.0 -nz 100

It fails with:

Fatal error:
Step 0, time 0 (ps) LINCS WARNING
relative constraint deviation after LINCS:
rms 175118.647912, max 39729618.834230 (between atoms 86781 and 86783)
bonds that rotated more than 90 degrees:
Too many LINCS warnings (9639)
If you know what you are doing you can adjust the lincs warning threshold in your mdp file

I tried reducing the time step; it didn't help. Where should I look in my structure? It is obvious that there are a lot of overlapping atoms of protein with membrane/water. Thank you for any help.
Netaly