Re: [gmx-users] Gromacs 2019.0, simulations using 2 GPUs: RTX 2080Ti

2019-12-12 Thread Dave M
Hi Paul, I just jumped into this discussion, but I am wondering: is CUDA_VISIBLE_DEVICES equivalent to providing -gpu_id to mdrun? Also, my multiple simulations run slower on the same node with multiple GPUs, e.g. on a node with 4 GPUs and 64 CPUs: mpirun -np 1 mdrun -ntomp 24 -gpu_id 0 -pin on mpirun
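For context, the two mechanisms are related but not identical: CUDA_VISIBLE_DEVICES restricts which devices the process can see at the driver level, while -gpu_id selects among the GPUs GROMACS has already detected. A minimal sketch (flags as in the thread; the -deffnm name is hypothetical):

    CUDA_VISIBLE_DEVICES=0 gmx mdrun -ntomp 24 -pin on -deffnm md   # hide all but GPU 0 from the process
    gmx mdrun -ntomp 24 -gpu_id 0 -pin on -deffnm md                # select GPU 0 among detected devices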

Re: [gmx-users] Gromacs 2019.0, simulations using 2 GPUs: RTX 2080Ti

2019-12-12 Thread Paul bauer
Hello, the error you are getting at the end means that your simulation likely does not use PME, or uses it in a way that is not implemented to run on the GPU. You can still run the nonbonded calculations on the GPU; just remove the -pme gpu flag. For running different simulations on your
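A sketch of the suggested change (file names hypothetical):

    # keep nonbonded work on the GPU but let PME fall back to the CPU
    gmx mdrun -nb gpu -deffnm md
    # instead of the failing: gmx mdrun -nb gpu -pme gpu -deffnm md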

[gmx-users] Error: Atomtype CH2 not found

2019-12-12 Thread Muthusankar
Dear Gromacs users, I am simulating a protein-ligand complex and ran the grompp command before adding ions to the system. I got the error *Fatal error:* (file: ligand.itp) Atomtype *CH2 not found*. *Command used:* gmx grompp -f ions.mdp -c protein_box.gro -p protein.top -o ions.tpr Please
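For reference, grompp raises this error when an atom type used in ligand.itp is not declared in any [ atomtypes ] block read before it; a hedged sketch of the kind of declaration the force field must provide (the numbers below are placeholders, not real parameters):

    [ atomtypes ]
    ; name  at.num  mass    charge  ptype  sigma (nm)  epsilon (kJ/mol)
      CH2   6       14.027  0.000   A      0.3905      0.4937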

Re: [gmx-users] Gromacs 2019.0, simulations using 2 GPUs: RTX 2080Ti

2019-12-12 Thread Pragati Sharma
Thanks Nikhil. About the second question: it is actually implemented, as you can see in the link below; however, I cannot run these commands without error. https://mailman-1.sys.kth.se/pipermail/gromacs.org_gmx-users/2019-July/126012.html On Fri, Dec 13, 2019 at 12:28 PM Nikhil Maroli wrote: >

Re: [gmx-users] Gromacs 2019.0, simulations using 2 GPUs: RTX 2080Ti

2019-12-12 Thread Nikhil Maroli
You can assign part of the cores and 1 GPU to one job, and another part with a separate command. For example: 1. gmx mdrun -ntmpi XX -ntomp YY -gpu_id K1 2. gmx mdrun -ntmpi XX2 -ntomp YY2 -gpu_id K2 The second part of the question is related to the implementation of such calculations on the GPU, which is
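A concrete sketch for a 64-core, 4-GPU node (core counts and file names are illustrative), pinning each job to its own cores so they do not contend:

    gmx mdrun -ntmpi 1 -ntomp 32 -gpu_id 0 -pin on -pinoffset 0  -deffnm job1 &
    gmx mdrun -ntmpi 1 -ntomp 32 -gpu_id 1 -pin on -pinoffset 32 -deffnm job2 &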

Re: [gmx-users] Center of mass motion removal

2019-12-12 Thread Dallas Warren
One way to do that (maintain the "center" of the droplet in the frame) is to remove those molecules that exit and return to the droplet, and then use the remaining molecules as the index group to determine the COM on which things are then centered. Catch ya, Dr. Dallas Warren Drug Delivery,
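A sketch of that workflow (group and file names hypothetical): build an index group containing only the molecules that stay in the droplet, then center on its COM:

    gmx make_ndx -f conf.gro -o core.ndx     # define a group, e.g. "DropletCore"
    gmx trjconv -f traj.xtc -s topol.tpr -n core.ndx -center -o centered.xtc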

[gmx-users] Gromacs 2019.0, simulations using 2 GPUs: RTX 2080Ti

2019-12-12 Thread Pragati Sharma
Hello all, I am running a polymer melt with 10 atoms, 2 fs time step, PME, on a workstation with these specifications: 2x Intel Xeon 6128 3.4 GHz (2666 MHz) 6-core CPU, 2x16 GB DDR4 RAM, 2x RTX 2080Ti 11 GB. I have installed *GPU and thread_mpi* enabled gromacs 2019.0 using: cmake ..

Re: [gmx-users] Center of mass motion removal

2019-12-12 Thread Alex
Dear Justin, As you recommended, I invoked trjconv -pbc cluster and trjconv -center separately, and I also used a proper reference .gro file (where everything is intact) for the .tpr file. Here are the commands I invoked, in order, on my trajectory: 1. gmx trjconv -f orig.xtc -o mol.xtc -pbc mol ...
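For reference, a sketch of the two-step sequence being described (file names as in the thread where given, otherwise hypothetical):

    gmx trjconv -f orig.xtc -s ref.tpr -pbc cluster -o clustered.xtc
    gmx trjconv -f clustered.xtc -s ref.tpr -center -pbc mol -o mol.xtc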

Re: [gmx-users] How to assign PME ranks to particular nodes?

2019-12-12 Thread Mark Abraham
Hi, On Thu., 12 Dec. 2019, 20:27 Marcin Mielniczuk, wrote: > Hi, > > I'm running Gromacs on a heterogeneous cluster, with one node > significantly faster than the other. Therefore, I'd like to achieve the > following setup: > * run 2 or 3 PP processes and 1 PME process on the faster node (with a

Re: [gmx-users] gmx spatial error

2019-12-12 Thread Dallas Warren
Get more memory or process a smaller amount (atoms or trajectory). http://manual.gromacs.org/documentation/current/user-guide/run-time-errors.html#out-of-memory-when-allocating Catch ya, Dr. Dallas Warren Drug Delivery, Disposition and Dynamics Monash Institute of Pharmaceutical Sciences,
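A sketch of the second option, trimming the trajectory before the analysis (frame range and stride are illustrative):

    gmx trjconv -f traj.xtc -s topol.tpr -b 0 -e 10000 -dt 100 -o small.xtc
    gmx spatial -f small.xtc -s topol.tpr -nab 100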

Re: [gmx-users] Trajectory guided by density maps

2019-12-12 Thread Ramon Guixà
Hi Christian, I see, ok. Thanks a lot for your feedback, really helpful! Best, Ramon On Thu, Dec 12, 2019 at 1:22 PM Christian Blau wrote: > Hi Ramon, > > > You would not need to keep your system constrained in the state that you > started from to see a transition. Pushing state > A, based

[gmx-users] How to assign PME ranks to particular nodes?

2019-12-12 Thread Marcin Mielniczuk
Hi, I'm running Gromacs on a heterogeneous cluster, with one node significantly faster than the other. Therefore, I'd like to achieve the following setup: * run 2 or 3 PP processes and 1 PME process on the faster node (with a lower number of OpenMP threads) * run 2 PP processes on the slower node
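One possible approach, sketched under the assumption of OpenMPI and a hostfile listing the slower node's slots first: with -ddorder pp_pme the separate PME rank is placed after all PP ranks, so it lands on the node holding the last slot:

    mpirun -np 6 --hostfile hosts.txt gmx_mpi mdrun -npme 1 -ddorder pp_pme -deffnm md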

Re: [gmx-users] How to properly use tune_pme?

2019-12-12 Thread Marcin Mielniczuk
Hi! I've just realized that my reply stayed in drafts. This worked for me, thanks a lot! By the way, if anyone else encounters such a problem: tune_pme works out of the box only if a shared filesystem is present. Otherwise, the modified tpr files need to be copied manually between the nodes. Regards,
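A minimal sketch of a tune_pme invocation on a shared filesystem (rank count and wrapper command are illustrative):

    # benchmarks several PME-rank settings by launching mdrun repeatedly
    gmx tune_pme -np 8 -s topol.tpr -mdrun 'gmx_mpi mdrun'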

[gmx-users] setting distance restraints between atoms belonging to different residues

2019-12-12 Thread Sadaf Rani
Hi gromacs users, I want to restrain distances, angles, and dihedrals between atoms of two different residues, in my case a protein atom and a ligand atom. Could anybody suggest the right way of doing it? I have set up the topology file of the complex as below: ; Include forcefield parameters #include
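One common way to do this is an [ intermolecular_interactions ] section at the very end of the .top (after [ molecules ]), using global atom numbers; a hedged sketch with hypothetical indices and force constants (bond type 10 is the flat-bottom restraint potential; angle type 1 and dihedral type 2 are the harmonic forms):

    [ intermolecular_interactions ]
    [ bonds ]
    ; ai    aj    type  low   up1   up2   k
      1500  2301  10    0.45  0.50  0.55  1000
    [ angles ]
    ; ai    aj    ak    type  theta0  k
      1499  1500  2301  1     110.0   100
    [ dihedrals ]
    ; ai    aj    ak    al    type  phi0    k
      1498  1499  1500  2301  2     -105.0  100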

Re: [gmx-users] GPU performance, regarding

2019-12-12 Thread RAHUL SURESH
Hi Mark, "Multiple energy groups is not implemented for GPUs, falling back to the CPU. For better performance, run on the GPU without energy groups and then do gmx mdrun -rerun option on the trajectory with an energy group .tpr file." I got this line in the log file. gmx_mpi mdrun -v -deffnm md
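For reference, a sketch of the rerun workflow the log message suggests (file names hypothetical; the second .mdp is the one containing the energygrps line):

    gmx mdrun -deffnm md                                        # production run, no energy groups
    gmx grompp -f grps.mdp -c md.gro -p topol.top -o rerun.tpr  # .mdp with energygrps set
    gmx mdrun -s rerun.tpr -rerun md.xtc -deffnm rerun          # recompute energies on the saved trajectory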

Re: [gmx-users] GPU performance, regarding

2019-12-12 Thread Mark Abraham
Hi, On Thu, 12 Dec 2019 at 15:03, Paul Buscemi wrote: > What does nvidia-smi tell you? > That won't inform - GROMACS isn't saying it can't find GPUs. It's saying it can't run on them because something Rahul asked for isn't implemented. Mark PB > > > On Dec 12, 2019, at 7:30 AM, John

Re: [gmx-users] GPU performance, regarding

2019-12-12 Thread Paul Buscemi
What does nvidia-smi tell you? PB > On Dec 12, 2019, at 7:30 AM, John Whittaker > wrote: > > Hi, > >> Hi Users. >> >> I am simulating a peptide of 40 residues with small molecules using oplsaa >> ff in Gromacs 2018.2 installed in a CUDA environment. The workstation has >> 16 cores and 2

Re: [gmx-users] GPU performance, regarding

2019-12-12 Thread Mark Abraham
Hi, On Thu, 12 Dec 2019 at 14:35, Mateusz Bieniek wrote: > Hi Gromacs, > > A small digression: ideally Gromacs would make it clearer in the error > message which part is not implemented for the GPUs. > Indeed, and in this case it is supposed to have already been written to the log file

Re: [gmx-users] Gromacs 2019.4 compilation with GPU support

2019-12-12 Thread Mark Abraham
Hi, I suspect that you have multiple versions of hwloc on your system, and somehow the environment is different at cmake time and make time (e.g. different modules loaded?). If so, don't do that. Otherwise, cmake -DGMX_HWLOC=off will work well enough. I've proposed a probable fix for future 2019
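A sketch of the suggested configuration, i.e. the original command from the thread with hwloc detection switched off:

    cmake3 .. -DCMAKE_INSTALL_PREFIX=~/gromacs/gmx2019.4 -DGMX_BUILD_OWN_FFTW=ON \
              -DGMX_GPU=ON -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda -DGMX_HWLOC=off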

Re: [gmx-users] GPU performance, regarding

2019-12-12 Thread Mateusz Bieniek
Hi Gromacs, A small digression: ideally Gromacs would make it clearer in the error message which part is not implemented for the GPUs. Thanks, Mat On Thu, 12 Dec 2019 at 13:01, RAHUL SURESH wrote: > Hi John > > Thank you and adding here the mdp settings > > title =

Re: [gmx-users] GPU performance, regarding

2019-12-12 Thread RAHUL SURESH
Hi John, thank you; adding here the .mdp settings:
title      = Protein-gas molecule interaction MD simulation
; Run parameters
integrator = md
nsteps     = 2
dt         = 0.002    ; 2 fs
; Output control
nstxout    = 5000
nstvout    = 5000
nstenergy  = 5000

Re: [gmx-users] GPU performance, regarding

2019-12-12 Thread John Whittaker
Hi, > Hi Users. > > I am simulating a peptide of 40 residues with small molecules using oplsaa > ff in Gromacs 2018.2 installed in a CUDA environment. The workstation has > 16 cores and 2 1080Ti cards. On execution of the command gmx_mpi mdrun -v > -deffnm > md for 100 ns it shows no usage of the GPU card.

[gmx-users] GPU performance, regarding

2019-12-12 Thread RAHUL SURESH
Hi Users. I am simulating a peptide of 40 residues with small molecules using oplsaa ff in Gromacs 2018.2, installed in a CUDA environment. The workstation has 16 cores and 2 1080Ti cards. On execution of the command gmx_mpi mdrun -v -deffnm md for 100 ns, it shows no usage of the GPU card. For the command
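One way to check whether the GPUs are usable at all is to request the offload explicitly, which makes mdrun fail loudly if it cannot comply (a sketch; the id string maps work to both cards):

    gmx_mpi mdrun -v -deffnm md -nb gpu -gpu_id 01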

Re: [gmx-users] Trajectory guided by density maps

2019-12-12 Thread Christian Blau
Hi Ramon, You would not need to keep your system constrained in the state that you started from to see a transition. Pushing state A, based on a crystal structure, into state B, described by a density, should give you just the transition you want. If you would like to choose an arbitrary

[gmx-users] question regarding gmx helix orientation

2019-12-12 Thread SHAHEE ISLAM
Hi, I want to calculate the tilt angle of a helix against the bilayer normal. First I made an index file which contains the backbone of the residues of the large helix. I am using this command: *gmx helixorient -s *.tpr -f *.xtc -n helix.ndx -oaxis -ocenter -orise -oradius -otwist -obending -otilt -orot*
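A hedged alternative for the tilt specifically: gmx gangle can measure the angle between a vector defined by index-file atom pairs (e.g. the first and last backbone atoms of the helix) and the z axis taken as the bilayer normal (file names are illustrative):

    gmx gangle -f traj.xtc -s topol.tpr -n helix.ndx -g1 vector -g2 z -oav tilt.xvg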

[gmx-users] Gromacs 2019.4 compilation with GPU support

2019-12-12 Thread bonjour899
Hello, I'm trying to install gromacs-2019.4 with GPU support, but it always fails. I ran cmake as: cmake3 .. -DCMAKE_INSTALL_PREFIX=~/gromacs/gmx2019.4 -DGMX_BUILD_OWN_FFTW=ON -DGMX_GPU=ON -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda -DGMX_HWLOC=ON This works, but when installing I always get an error

Re: [gmx-users] Trajectory guided by density maps

2019-12-12 Thread Ramon Guixà
Hi Christian, Ok, so if I can't apply forces from two density maps at the same time, then I don't see how I can perform a guided transition. I mean, I figured I would need to gradually switch on one map while switching off the other, but for this I need to apply both restraints at the same time

Re: [gmx-users] Trajectory guided by density maps

2019-12-12 Thread Christian Blau
Hi Ramon, This feature will be released in January ;) and is in a beta version, soon to be a release candidate, so you are definitely an early adopter. Most of the parameters are set by default in the .mdp file if you do not set them yourself; you'll see them also in
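For orientation, a sketch of the density-guided .mdp options as they appear in the 2020 beta documentation (values are illustrative, not recommendations):

    density-guided-simulation-active                     = yes
    density-guided-simulation-group                      = Protein
    density-guided-simulation-similarity-measure         = inner-product
    density-guided-simulation-force-constant             = 1e9
    density-guided-simulation-reference-density-filename = target.mrc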

Re: [gmx-users] What is the use of -nice option in gmx mdrun?

2019-12-12 Thread Justin Lemkul
On 12/12/19 12:49 AM, atb files wrote: Hello Experts, what is the use of the -nice option in gmx mdrun? What does it do? When should one use it? What are its benefits? -Yogesh (sent using Zoho Mail) https://en.wikipedia.org/wiki/Nice_(Unix) -Justin --
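For completeness, a one-line sketch: -nice passes a Unix scheduling priority, so a value of 19 (the lowest priority) keeps a background run from starving interactive processes:

    gmx mdrun -nice 19 -deffnm md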

Re: [gmx-users] Trajectory guided by density maps

2019-12-12 Thread Ramon Guixà
Hi Christian, thanks a lot for your answer. Good to know this makes sense to someone else too. Now I have two questions out of ignorance (never used this newly implemented feature before): 1) Can I apply two sets of forces (restraints) at the same time, namely one for each density map? 2) Would

Re: [gmx-users] Signal: Floating point exception (8) Signal code: Floating point divide-by-zero (3)

2019-12-12 Thread Dave M
Hi Paul, Thanks. My replies below: On Thu, Dec 12, 2019 at 2:32 AM Paul bauer wrote: > Hello, > > I'll have a look later today. > Can you also give us information about your build configuration? > If I am not wrong you mean this: -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=on

Re: [gmx-users] Signal: Floating point exception (8) Signal code: Floating point divide-by-zero (3)

2019-12-12 Thread Paul bauer
Hello, I'll have a look later today. Can you also give us information about your build configuration? If you could upload a log file from mdrun to Redmine it should be enough. Thanks Paul On 12/12/2019 11:28, Dave M wrote: Hi Mark, Thanks, just opened a new issue. All my simulations are

Re: [gmx-users] Trajectory guided by density maps

2019-12-12 Thread Christian Blau
Hi Ramon, This is definitely a way to obtain a transition path. One thing to consider here is that you deliberately "throw away" already perfect information about your target structure, so it might be harder to reach. On the other hand, you might not care to have an exact overlap with the

Re: [gmx-users] Signal: Floating point exception (8) Signal code: Floating point divide-by-zero (3)

2019-12-12 Thread Dave M
Hi Mark, Thanks, just opened a new issue. All my simulations are performed using gromacs2019.4 and, to be consistent in reporting the results, I would like to keep the same version. But this specific case is creating trouble. What is the best I can do? If I use a different gromacs version for this specific case

[gmx-users] gmx spatial error

2019-12-12 Thread Apramita Chand
Dear All, I'm coming across the error "Failed to calloc -9223372036854775808 elements of size 8 for bin" while running gmx spatial. I have three or four systems, and for the others the -nab option with 100 worked fine, but here, even after increasing -nab to 1000, it shows insufficient memory. What could

Re: [gmx-users] Signal: Floating point exception (8) Signal code: Floating point divide-by-zero (3)

2019-12-12 Thread Mark Abraham
Hi, That sounds very much like a bug, but it's hard to say where it comes from. Can you please open an issue at https://redmine.gromacs.org/ and attach your .tpr files plus a log file from a failing run and the above stack trace? Mark On Thu, 12 Dec 2019 at 08:37, Dave M wrote: > Hi All, > >