Re: [gmx-users] COMPEL question: Channel filter outside membrane, how to orient compartment boundaries

2020-04-24 Thread Kutzner, Carsten
Hi Erik, I am not sure that the CompEL code as it is can deal with the setup you are describing (but I may be wrong). Below are a few thoughts and things you might want to try. > On 24.04.2020 at 00:05, Erik Henze wrote: > > Hi, > I am attempting to study permeation events in an ion channel

Re: [gmx-users] Various questions related to Gromacs performance tuning

2020-03-28 Thread Kutzner, Carsten
> On 26.03.2020 at 17:00, Tobias Klöffel wrote: > > Hi Carsten, > > > On 3/24/20 9:02 PM, Kutzner, Carsten wrote: >> Hi, >> >>> On 24.03.2020 at 16:28, Tobias Klöffel wrote: >>> >>> Dear all, >>> I am very new to Gromacs

Re: [gmx-users] Various questions related to Gromacs performance tuning

2020-03-24 Thread Kutzner, Carsten
Hi, > On 24.03.2020 at 16:28, Tobias Klöffel wrote: > > Dear all, > I am very new to Gromacs, so maybe some of my problems are very easy to fix :) > Currently I am trying to compile and benchmark GROMACS on AMD Rome CPUs; the > benchmarks are taken from: >

Re: [gmx-users] Maximising Hardware Performance on Local node: Optimal settings

2019-12-04 Thread Kutzner, Carsten
Hi, > On 04.12.2019 at 17:53, Matthew Fisher wrote: > > Dear all, > > We're currently running some experiments with a new hardware configuration > and attempting to maximise performance from it. Our system contains 1x V100 > and 2x 12-core (24 logical) Xeon Silver 4214 CPUs which, after

Re: [gmx-users] How to properly use tune_pme?

2019-11-22 Thread Kutzner, Carsten
Hi, > On 21.11.2019 at 17:15, Marcin Mielniczuk wrote: > > Hi, > > I'm trying to make use of tune_pme to find out the optimal number of PME > ranks. My command line is: > UCX_LOG_LEVEL=info MPIRUN="mpirun" ./gmx_mpi tune_pme -v -s > ../tip4p_min2 -mdrun "./gmx_mpi mdrun" -np 4 > When started

[gmx-users] Question about default auto setting of mdrun -pin

2019-10-17 Thread Kutzner, Carsten
Hi, is it intended that the thread-MPI version of mdrun 2018 does pin to its core if started with -nt 1 -pin auto? Carsten

Re: [gmx-users] ion flux counter

2019-09-27 Thread Kutzner, Carsten
...compartment. Carsten > I could use a piece of CompEL > code to modify it for ion flux counting, maybe you have some suggestions? > > Harut > > On Thu, Sep 26, 2019 at 6:36 PM Kutzner, Carsten wrote: > >> >> >>> On 26.09.2019 at 16:24, Harutyun Sa

Re: [gmx-users] ion flux counter

2019-09-26 Thread Kutzner, Carsten
> On 26.09.2019 at 16:24, Harutyun Sahakyan wrote: > > Dear Gromacs users, > > The computational electrophysiology protocol has some analysis tools allowing > one to count ion flux etc. I have used an external electric field to simulate > the process of ion permeation through a membrane channel.

Re: [gmx-users] benchmark of RIB

2019-09-03 Thread Kutzner, Carsten
Hi tarzan p, > On 02.09.2019 at 15:57, tarzan p wrote: > > Hi all, I am sure many of you guys would have run a benchmark of the RIB system as > given at https://www.mpibpc.mpg.de/grubmueller/bench > > > I could manage 12 ns/day with a workstation with 2 x Gold 6148 (20 cores each) > and 2 x V100

Re: [gmx-users] Analysing multiple cavities in ion Channel through CompEl simulations

2019-08-20 Thread Kutzner, Carsten
Hi Carlos, > On 20.08.2019 at 13:12, Carlos Navarro wrote: > > Dear gmx-users, > I'm currently investigating ion permeability in a channel that displays > multiple cavities through CompEL simulations. Unfortunately one can just > define a single cylinder per membrane. Yes, this counting of ion

Re: [gmx-users] what's the maximum number of atoms that GROMACS can simulate?

2019-07-09 Thread Kutzner, Carsten
Hi Zhang, > On 9. Jul 2019, at 15:16, 张驭洲 wrote: > > Hello, > > I want to know if there is a maximum number of atoms that GROMACS can > simulate, or if there are any bugs in the code that cause errors in > computing the coordinates of atoms when the number of atoms is very large? I'm >

Re: [gmx-users] simulation of water gradient across membrane protein

2019-07-02 Thread Kutzner, Carsten
> On 2. Jul 2019, at 10:35, Phuong Tran wrote: > > Hi all, > > I was wondering if I can use the computational electrophysiology package in > Gromacs with the double membrane setup to simulate a system that has > different numbers of waters in and out. This would need to use the grand >

Re: [gmx-users] Xeon W family vs scalable

2019-06-09 Thread Kutzner, Carsten
Hi, don’t spend all your money on a CPU - for high GROMACS performance the GPU is just as important. I would recommend adding an RTX 2070/80 GPU to the workstation and getting a cheaper CPU. This will most likely give you a significantly higher GROMACS performance. See https://arxiv.org/abs/1903.05918

Re: [gmx-users] Computational electrophysiology (compEL) setup issues (Kutzner, Carsten)

2019-05-29 Thread Kutzner, Carsten
Dear Francesco, > On 29. May 2019, at 10:01, Francesco Petrizzelli > wrote: > >> A cyl0-r (and cyl1-r) of 0.7 nm is too small for a pore radius of 3 A to >> reliably track the ions, some might sneak through your channel without being >> recorded in the cylinder. Rather choose this value a

Re: [gmx-users] Computational electrophysiology (compEL) setup issues

2019-05-27 Thread Kutzner, Carsten
Dear Francesco, > On 27. May 2019, at 16:31, Francesco Petrizzelli > wrote: > > Dear gmx users, > > I'm trying to simulate change in the ion flux upon mutations using > Computational electrophysiology (compEL). > I have used packmol-memgen in order to generate the double bilayer using a >

Re: [gmx-users] »Computational Electrophysiology«

2019-04-03 Thread Kutzner, Carsten
Hi Harutyun, > On 3. Apr 2019, at 00:08, Harutyun Sahakyan wrote: > > Dear Gromacs users, > > I am trying to simulate membrane potential using the computational > electrophysiology protocol. > > I modeled a protein-membrane system with CHARMM-GUI, equilibrated it, and ran an > MD simulation. After some

Re: [gmx-users] how to increase GMX_OPENMP_MAX_THREADS

2019-02-26 Thread Kutzner, Carsten
Hi, > On 25. Feb 2019, at 21:15, Schulz, Roland wrote: > > That's not my experience. In my experience, for single-node runs, it is > usually faster to use 72 OpenMP threads rather than using 72 (t)MPI threads. Wouldn’t a combination of OpenMP threads * (t)MPI threads be faster, e.g. 8x9?
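A minimal sketch of such a hybrid launch on a single 72-core node (thread counts and file name are illustrative, not taken from the thread):
  gmx mdrun -s topol.tpr -ntmpi 9 -ntomp 8 -pin on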

Re: [gmx-users] Compatibility of gromacs 2018.3 with Cuda 10

2019-02-13 Thread Kutzner, Carsten
> On 14. Feb 2019, at 08:19, Sujit Sarkar wrote: > > Dear gmx-users, > Is the GROMACS 2018.3 installation supported by the CUDA 10.0 toolkit? Yes. Carsten > Thanks, > Sujit

Re: [gmx-users] offloading PME to GPUs

2019-02-07 Thread Kutzner, Carsten
Hi, > On 7. Feb 2019, at 15:17, jing liang wrote: > > Hi, > > thanks for this information. I wonder if PME offload has been implemented > for simulations on more than > one node? I tried the following command for running on two nodes > (with 4 ranks each > and 4 OpenMP threads) > > mpirun -np 8
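For reference, a minimal multi-node launch of this kind might look as follows, assuming a GROMACS version (2018 or later) in which PME offload to a GPU is restricted to a single PME rank; the node layout and file name are illustrative:
  mpirun -np 8 gmx_mpi mdrun -s topol.tpr -ntomp 4 -nb gpu -pme gpu -npme 1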

Re: [gmx-users] GROMACS Infrastructure

2019-01-16 Thread Kutzner, Carsten
Hi Nam Pho, > On 16. Jan 2019, at 01:40, Nam Pho wrote: > > Hello GROMACS Users, > > My name is Nam and I support a campus supercomputer for which one of the > major applications is GROMACS. I was curious if anyone has optimized > servers for cost and has a blueprint for that, yes, we did! We

Re: [gmx-users] Restarting a simulation: failed to lock the log file

2019-01-07 Thread Kutzner, Carsten
> On 7. Jan 2019, at 11:55, morpheus wrote: > > Hi, > > I am running simulations on a cluster that terminates jobs after a hard > wall clock time limit. Normally this is not a problem as I just restart the > simulations using -cpi state.cpt but for the last batch of simulations I > got (for
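A typical restart invocation for this situation, sketched with illustrative file names: -noappend writes new output files instead of appending, which side-steps the log-file locking problem (the trajectory parts can later be joined with gmx trjcat).
  gmx mdrun -s topol.tpr -cpi state.cpt -noappend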

Re: [gmx-users] Simulation Across Mulitple Nodes with GPUs and PME

2018-12-19 Thread Kutzner, Carsten
Hi, > On 18. Dec 2018, at 18:04, Zachary Wehrspan wrote: > > Hello, > > > I have a quick question about how GROMACS 2018.5 distributes GPU resources > across multiple nodes all running one simulation. Reading the > documentation, I think it says that only 1 GPU can be assigned to the PME >

Re: [gmx-users] using dual CPU's

2018-12-14 Thread Kutzner, Carsten
> On 13. Dec 2018, at 22:31, pbusc...@q.com wrote: > > Carsten, > > A possible issue... > > I compiled gmx 18.3 with gcc-5 (CUDA 9 seems to run normally). Should I > recompile with gcc-6.4? I don’t think that this will make a huge impact (but maybe you get a few extra percent performance)

Re: [gmx-users] using dual CPU's

2018-12-13 Thread Kutzner, Carsten
...shorter runs. Carsten > my results were a compilation of 4-5 runs, each under slightly different > conditions on two computers. All with the same outcome - that is, ugh! Mark > had asked for the log outputs, indicating some useful conclusions could be > drawn from them. > > Pa

Re: [gmx-users] using dual CPU's

2018-12-12 Thread Kutzner, Carsten
Hi Paul, > On 12. Dec 2018, at 15:36, pbusc...@q.com wrote: > > Dear users (one more try) > > I am trying to use 2 GPU cards to improve modeling speed. The computer > described in the log files is used to iron out models, and I am using it to learn > how to use two GPU cards before purchasing
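A minimal sketch of a two-GPU run on a single node with a thread-MPI build (rank and thread counts are illustrative):
  gmx mdrun -s topol.tpr -ntmpi 2 -ntomp 12 -gpu_id 01
  # with the 2018 series, a finer per-task mapping can be given via -gputasks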

Re: [gmx-users] Essential dynamics

2018-06-26 Thread Kutzner, Carsten
Hi Shreyas, > On 26. Jun 2018, at 15:21, Shreyas Kaptan wrote: > > Dear All, > > I have a question regarding the make_edi tool. I used it some time back to > restrain my structures to a particular projection value of an eigenvector > with the -restrain keyword. However, I seem to have

Re: [gmx-users] Error when for gmx tune_pme on Cray

2018-04-10 Thread Kutzner, Carsten
> On 10. Apr 2018, at 18:13, Viveca Lindahl wrote: > > Thanks. It's running now. I just had a typo in my gmx_mpi variable that > gave me the last error I posted. Using the non-mpi binary for tune_pme > solved it. BTW you might find the tune_pme -ntpr 1 switch useful if

Re: [gmx-users] Error when for gmx tune_pme on Cray

2018-04-10 Thread Kutzner, Carsten
Hi Viveca, > On 10. Apr 2018, at 15:10, Viveca Lindahl wrote: > > Hi users, > > I never used gmx tune_pme before and thought I'd try. On a Cray machine, > using aprun instead of mpirun, I did > > args="/cfs/klemming/nobackup/v/vivecal/programs/gromacs/2018.1/bin/gmx >

Re: [gmx-users] Reg. mdrun error: not all ion group molecules consist of 3 atoms

2018-03-16 Thread Kutzner, Carsten
Hi, > On 16. Mar 2018, at 02:39, 가디 장데부 고라크스나트 wrote: > > Hello, I am performing the computational electrophysiology GROMACS tutorial. I > successfully passed grompp for the CompEL setup but failed with a Fatal error at > the mdrun step: not all ion group molecules consist of 3

Re: [gmx-users] tune_pme error with GROMACS 2018

2018-02-13 Thread Kutzner, Carsten
> was no issue). > > Best, > Dan > > On Mon, Feb 12, 2018 at 8:32 AM, Kutzner, Carsten <ckut...@gwdg.de> wrote: > >> Hi Dan, >> >>> On 11. Feb 2018, at 20:13, Daniel Kozuch <dan.koz...@gmail.com> wrote: >>> >>> Hello,

Re: [gmx-users] tune_pme error with GROMACS 2018

2018-02-12 Thread Kutzner, Carsten
Hi Dan, > On 11. Feb 2018, at 20:13, Daniel Kozuch wrote: > > Hello, > > I was recently trying to use the tune_pme tool with GROMACS 2018 with the > following command: > > gmx tune_pme -np 84 -s my_tpr.tpr -mdrun 'gmx mdrun' Maybe you need to compile gmx without MPI (so
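A sketch of how such a call can look when the driving gmx binary is built without MPI while the benchmarked mdrun is the MPI version; the binary names and the MPIRUN setting are illustrative and follow the usage shown elsewhere in this archive:
  export MPIRUN=mpirun
  gmx tune_pme -np 84 -s my_tpr.tpr -mdrun 'gmx_mpi mdrun'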

Re: [gmx-users] Worse GROMACS performance with better specs?

2018-01-10 Thread Kutzner, Carsten
Dear Jason, 1.) we have observed a similar behavior comparing Intel Silver 4114 against E5-2630v4 processors in a server with one GTX 1080Ti. Both CPUs have 10 cores and run at 2.2 GHz. Using our standard benchmark systems (see https://arxiv.org/abs/1507.00898) we were able to get 74.4 ns/day

Re: [gmx-users] Performance gains with AVX_512 ?

2017-12-12 Thread Kutzner, Carsten
on. Thanks a lot for the info! Best, Carsten > > Cheers, > > -- > Szilárd > > On Tue, Dec 12, 2017 at 3:07 PM, Kutzner, Carsten <ckut...@gwdg.de> wrote: > >> Hi, >> >> what are the expected performance benefits of AVX_512 SIMD instructions

[gmx-users] Performance gains with AVX_512 ?

2017-12-12 Thread Kutzner, Carsten
Hi, what are the expected performance benefits of AVX_512 SIMD instructions on Intel Skylake processors, compared to AVX2_256? In many cases, I see a significantly (15 %) higher GROMACS 2016 / 2018b2 performance when using AVX2_256 instead of AVX_512. I would have guessed that AVX_512 is at least
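For anyone who wants to compare the two instruction sets on their own hardware, the SIMD flavor can be chosen at configure time; a minimal sketch (build directory layout is illustrative):
  cmake .. -DGMX_SIMD=AVX2_256    # or -DGMX_SIMD=AVX_512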

Re: [gmx-users] gencof - creating inverted double bilayer systems

2017-09-30 Thread Kutzner, Carsten
Hi, > On 30. Sep 2017, at 09:50, David van der Spoel wrote: > > On 29/09/17 15:10, Carlos Navarro wrote: >> Dear all, >> I’m currently running computational electrophysiology simulations which >> need double bilayer systems. >> I know that with genconf -f input.gro -nbox 1

Re: [gmx-users] CompEL bulk offset parameter unavailable in gromacs 5.1.4?

2017-06-18 Thread Kutzner, Carsten
> On 18. Jun 2017, at 13:40, Vries, de, H.W. > wrote: > > dear all, > > I am employing the computational electrophysiology scheme in gromacs 5.1.4. > I want to set a bulk-offset parameter, such that the scheme only does > position exchanges in a region that is

Re: [gmx-users] Problems with IMD in Gromacs 2016.3

2017-06-15 Thread Kutzner, Carsten
Hi Charlie, I just made a quick check with 5.1 and 2016 and I also see the problem that you described. For me IMD works with 5.1, but not with 2016, but I don't know why. Could you file a bug report? Thank you, Carsten > On 14. Jun 2017, at 18:05, Charles Laughton >

Re: [gmx-users] Computational electrophysiology issues with membrane leakage

2017-04-11 Thread Kutzner, Carsten
Hi, the output file gives you the numbers of the atoms that seem to move from compartment A to B without passing a channel. I would look at their trajectories and inspect which path they actually take. The channel trimer also needs to be whole at the beginning of the simulation, otherwise the

Re: [gmx-users] g_membed or alternative on GROMACS 2016

2017-03-23 Thread Kutzner, Carsten
Hi, what was g_membed has been integrated into mdrun and can be used with gmx mdrun -membed settings.dat ... Look for the membed-related options in the output of gmx mdrun -h, or check the documentation, e.g.
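A sketch of what such an embedding run can look like; the file names are illustrative, and the index file is assumed to contain the group of the molecule to be embedded:
  gmx grompp -f membed.mdp -c protein_in_membrane.gro -p topol.top -n index.ndx -o membed.tpr
  gmx mdrun -s membed.tpr -membed membed.dat -mn index.ndx -v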

Re: [gmx-users] Computational electrophysiology, implicit solvent and coarse-grained system: atom/molecule definitions?

2017-03-15 Thread Kutzner, Carsten
Hi, > On 14 Mar 2017, at 11:18, Vries, de, H.W. > wrote: > > Dear all, > > I am currently trying to run the computational electrophysiology scheme on > an implicit solvent, coarse-grained system by introducing a little > workaround: > > In the manual, it is

Re: [gmx-users] domain decomposition error in the energy minimization step

2017-01-13 Thread Kutzner, Carsten
> Another interesting thing is that no one has any experience on this topic on > the gmx user mailing list :( > > Any suggestions will be appreciated. > > Thanks in advance. > >> On 12 Jan 2017, at 18:34, "Kutzner, Carsten" <ckut...@gwdg.de> wrote: >> >

Re: [gmx-users] domain decomposition error in the energy minimization step

2017-01-12 Thread Kutzner, Carsten
> gen_seed = -1 > gen_temp = 298.15 > > ;FREE ENERGY > free-energy = yes > init-lambda = 1 > delta-lambda = 0 > sc-alpha = 0.3 > sc-power = 1 > sc-sigma = 0.25 > sc-coul

Re: [gmx-users] domain decomposition error in the energy minimization step

2017-01-11 Thread Kutzner, Carsten
Dear Qasim, those kinds of domain decomposition 'errors' can happen when you try to distribute an MD system among too many MPI ranks. There is a minimum cell length for each domain decomposition cell in each dimension, which depends on the chosen cutoff radii and possibly other inter-atomic
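Two usual ways around this, sketched with illustrative rank and thread counts: use fewer MPI ranks, or keep the rank count low and fill the remaining cores with OpenMP threads.
  mpirun -np 4 gmx_mpi mdrun -s topol.tpr -ntomp 8
  gmx mdrun -s topol.tpr -ntmpi 4 -ntomp 8    # thread-MPI build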

Re: [gmx-users] Computational Electrophysiology: how about implicit solvent?

2016-10-18 Thread Kutzner, Carsten
Hi Henry, > On 18 Oct 2016, at 14:45, Vries, de, H.W. > wrote: > > Dear all, > > > In the newest versions of GROMACS the computational electrophysiology > method (CompEL) is implemented through the swapping of solvent molecules > with ions, thus providing a

Re: [gmx-users] which version would be faster?

2016-09-30 Thread Kutzner, Carsten
> On 30 Sep 2016, at 16:31, Albert wrote: > > Hello: > > I've got a GPU workstation with two GPUs. I am just wondering which version > will give the better performance? The MPI version, the thread version, or the > thread-openMP version? There are 20 threads (Intel(R)

Re: [gmx-users] Multi GPU Workstation with 4 Way SLI

2016-07-08 Thread Kutzner, Carsten
Hi, have a look at https://www.mpibpc.mpg.de/15070156/Kutzner_2015_JCC.pdf which should answer your questions! Best, Carsten > On 08 Jul 2016, at 12:11, Nikhil Maroli wrote: > > Dear all, > > > > We have a budget of $12,000. for GPU Workstation. We will be running

Re: [gmx-users] electrophysiology builder

2016-06-15 Thread Kutzner, Carsten
> On 15 Jun 2016, at 17:12, Nikhil Maroli wrote: > > Dear all, > I wanted to study the transport properties of a channel protein in the lipid > bilayer. Is there any server or tool to make sandwiches of the lipid bilayer > for studying the transport properties? The supporting

Re: [gmx-users] installation error gromacs-imd

2016-06-09 Thread Kutzner, Carsten
...the master version of GROMACS that you tried to compile? For that version you will probably have to do some more adaptations of your .mdp file. There is no IMD for 4.5 and 4.6; you will need to use 5.0 or later. Carsten > > Do you have an idea about the error? > > Thank you > Pada t

Re: [gmx-users] installation error gromacs-imd

2016-06-09 Thread Kutzner, Carsten
Hi, the error message you see has nothing to do with IMD. Have you tried to install a Gromacs 5.0 or 5.1 or 2016 version? These should all work out of the box with IMD. Best, Carsten > On 09 Jun 2016, at 04:35, Andrian Saputra wrote: > > Dear gromacs users > > i

Re: [gmx-users] interpreting the output of gmx tune_pme

2016-06-08 Thread Kutzner, Carsten
Hi, > On 07 Jun 2016, at 22:00, jing liang wrote: > > Hi, > > the output of "gmx tune_pme" in the perf.out file reports the following line at > the end: > > Line tpr PME ranks Gcycles Av. Std.dev. ns/day PME/f > DD grid > > how can I interpret

Re: [gmx-users] Ion flux through membrane protein

2016-05-26 Thread Kutzner, Carsten
Hi Maximilien, depending on what exactly the questions are that you like to address, the double-membrane setup as used in computational electrophysiology setups might be helpful for you. There is a section in the GROMACS PDF manual about that, and there are these two papers:

Re: [gmx-users] gromacs on multiple gpus

2016-05-10 Thread Kutzner, Carsten
Hi, you can use the -gpu_id command line parameter to mdrun to map available GPUs to MPI ranks. In the simplest case with two MPI ranks: mdrun -gpu_id 01 You will probably get much more performance by using more MPI ranks (check out 10.1002/jcc.24030). E.g. for 6 MPI ranks use -gpu_id 000111
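Spelled out as full command lines (a sketch; rank counts and the input file name are illustrative, and -ntmpi assumes a thread-MPI build):
  gmx mdrun -s topol.tpr -ntmpi 2 -gpu_id 01
  gmx mdrun -s topol.tpr -ntmpi 6 -gpu_id 000111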

Re: [gmx-users] Compiling g_correlation

2016-04-27 Thread Kutzner, Carsten
> On 27 Apr 2016, at 19:42, Jorge Fernández de Cossío Díaz > wrote: > > I compiled gromacs 3.3.4, and pointed the g_correlation Makefile to its > directory, But compilation of g_correlation still complains that it can't > find "fatal.h". Any ideas? it has been renamed

Re: [gmx-users] Bash scripting and Gromacs

2016-04-26 Thread Kutzner, Carsten
ct_04_01.html Good luck :) Carsten > > Thanks! > > J. > > 2016-04-26 12:45 GMT+02:00 Kutzner, Carsten <ckut...@gwdg.de>: >> >>> On 26 Apr 2016, at 11:22, James Starlight <jmsstarli...@gmail.com> wrote: >>> >>> Exactly th

Re: [gmx-users] Bash scripting and Gromacs

2016-04-26 Thread Kutzner, Carsten
lex_conf7/md_resp_complex_conf?.tpr Carsten > > the same for trr > > J. > > 2016-04-26 11:11 GMT+02:00 Kutzner, Carsten <ckut...@gwdg.de>: >> >>> On 26 Apr 2016, at 11:00, James Starlight <jmsstarli...@gmail.com> wrote: >>> >>> Hel

Re: [gmx-users] Bash scripting and Gromacs

2016-04-26 Thread Kutzner, Carsten
> On 26 Apr 2016, at 11:00, James Starlight wrote: > > Hello, > > I am facing the following problem: > > I am trying to make a small script which will loop over several folders > corresponding to the individual simulations and process each trajectory, > searching for them by a keyword
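A minimal bash sketch of such a loop, using the conf-numbered naming that appears later in this thread; directory and file names are illustrative, and the trjconv call stands in for whatever per-trajectory processing is needed:
  for dir in complex_conf*; do
      tpr=$(ls "$dir"/*.tpr | head -n 1)    # first run input file in this folder
      trr=$(ls "$dir"/*.trr | head -n 1)
      echo "Processing $dir"
      echo "Protein" | gmx trjconv -s "$tpr" -f "$trr" -o "$dir"/protein.xtc
  done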

Re: [gmx-users] Umbrella sampling along PCA eigenvector using make_edi

2016-03-31 Thread Kutzner, Carsten
Dear Hendrik, that indeed looks a bit strange. If you provide the necessary input files for a test, I could have a look at what might be wrong. You can put them somewhere for download or email them directly to me (not to the list, they will not be accepted). Carsten > On 30 Mar 2016, at 16:11,

Re: [gmx-users] GPU configuration suggestions

2016-03-22 Thread Kutzner, Carsten
Hi Nikhil, please take a look at http://onlinelibrary.wiley.com/doi/10.1002/jcc.24030/full A great deal of your questions are answered there! Best, Carsten > On 22 Mar 2016, at 06:52, Nikhil Maroli wrote: > > Dear all, > > we would like to purchase one GPU enabled

Re: [gmx-users] Optimal hardware for running Gromacs

2016-03-12 Thread Kutzner, Carsten
Dear David, I think you will find answers to many of your questions in the following publication: http://onlinelibrary.wiley.com/doi/10.1002/jcc.24030/full Best, Carsten > On 12 Mar 2016, at 02:52, David Berquist wrote: > > I'm looking into building a desktop

Re: [gmx-users] pp/pme ratio differences between gromacs 5.0.x and 5.1.x

2016-02-08 Thread Kutzner, Carsten
Hi, you could try to manually set the number of PME nodes with -npme in 5.1. Does that reproduce the 5.0 performance? Carsten > On 07 Feb 2016, at 19:57, Johannes Wagner wrote: > > hey guys, > came across an issue with pp/pme ratio difference from 5.0 to 5.1. For a

Re: [gmx-users] g_correlation

2015-12-10 Thread Kutzner, Carsten
Hi, > On 10 Dec 2015, at 08:59, Felix W.-H. Weng wrote: > > Dear all: > > Has anyone succeeded in installing the command g_correlation in GROMACS > 4.5.5? Does it work with 3.3? > File obtained from http://www.mpibpc.mpg.de/grubmueller/g_correlation > I tried installing

Re: [gmx-users] g_correlation

2015-12-10 Thread Kutzner, Carsten
> Wei-Hsiang Weng (翁偉翔), Master > Department of Life Sciences > Tzu-Chi University, Taiwan > > 0975-232-245 (C) > E-mail: weiweng...@gmail.com > 104726...@gms.tcu.edu.tw > > 2015-12-10 16:4

Re: [gmx-users] Bug in option parsing?

2015-11-17 Thread Kutzner, Carsten
https://gerrit.gromacs.org/#/c/5346/ should solve the issue. Carsten > On 17 Nov 2015, at 10:53, Åke Sandgren <ake.sandg...@hpc2n.umu.se> wrote: > > Nope, doesn't help > > On 11/17/2015 10:09 AM, Kutzner, Carsten wrote: >> Hi, >> >> there have b

Re: [gmx-users] Bug in option parsing?

2015-11-17 Thread Kutzner, Carsten
Hi, there have been changes in the way the command-line options are parsed. You could put the argument to -npstring in quotation marks. I think -npstring " -n" (including the space before the "-n") is accepted by the parser. Carsten > On 17 Nov 2015, at 08:58, Åke Sandgren

Re: [gmx-users] problem of compiling

2015-11-03 Thread Kutzner, Carsten
> On 03 Nov 2015, at 11:30, Albert wrote: > > Hello: > > I am trying to compile Gromacs-5.0.7 with command: > > > CC=mpicc CXX=mpicxx F77=mpif90 FC=mpif90 LDF90=mpif90 > CMAKE_PREFIX_PATH=/home/albert/install/intel/mkl/include/fftw:/home/albert/install/intel/impi >
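For comparison, an MPI build configuration of that era might be sketched as follows; the install paths are illustrative, and the MKL/FFTW hint depends on the local installation:
  CC=mpicc CXX=mpicxx cmake .. \
      -DGMX_MPI=ON \
      -DCMAKE_PREFIX_PATH=/home/albert/install/intel/mkl/include/fftw
  make -j 8 && make install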

Re: [gmx-users] Pull Code Error "unknown left hand"

2015-10-05 Thread Kutzner, Carsten
Hi, take a look at the mdout.mdp grompp output file. There you should see how the pull-related parameters are called in the version you were using. Maybe you just need to use minus signs instead of underscores. Best, Carsten > On 05 Oct 2015, at 04:25, Stella Nickerson

Re: [gmx-users] Input files for performance analysis

2015-09-04 Thread Kutzner, Carsten
the ’supplements’ box on the right hand side of http://www.mpibpc.mpg.de/grubmueller/kutzner/publications Best, Carsten > > > On Wed, Aug 26, 2015 at 1:16 PM, Kutzner, Carsten <ckut...@gwdg.de> wrote: > >> Hi, >> >>> On 25 Aug 2015, at 20:23, Sabyasachi Sahoo

Re: [gmx-users] Input files for performance analysis

2015-08-26 Thread Kutzner, Carsten
Hi, On 25 Aug 2015, at 20:23, Sabyasachi Sahoo ssahoo.i...@gmail.com wrote: Hello all, I have good enough experience in high-performance and parallel computing and would like to find out the bottlenecks in various phases of GROMACS. Can anyone please give me links to ready-to-run input files

Re: [gmx-users] no domain decomposition error while simulating membrane protein

2015-08-07 Thread Kutzner, Carsten
Hi, if you have a small simulation system or constraints over long distances (more than 1 nm) it could very well be that no domain decomposition can be found. Try to run using fewer domains (i.e. MPI ranks), or switch over to Gromacs 4.6 or 5.0, where you can use multiple OpenMP threads per MPI rank, thereby

Re: [gmx-users] no domain decomposition error while simulating membrane protein

2015-08-06 Thread Kutzner, Carsten
Dear Shabana, which version of GROMACS are you using? Carsten On 06 Aug 2015, at 11:24, shabana yasmeen shabana.yasmee...@gmail.com wrote: Dear users! I am working on a membrane protein simulation but at volume coupling I get an error of no domain decomposition. I tried a lot but it ended

Re: [gmx-users] essential dynamics and flooding

2015-08-06 Thread Kutzner, Carsten
Hi Oskar, On 03 Aug 2015, at 11:27, Oskar Berntsson oskar.bernts...@gu.se wrote: Dear all, I am simulating a protein and I want to sample conformations that are not commonly sampled using an equilibrium simulation. As far as I figure, what I want to use is essential dynamics. I

Re: [gmx-users] High speed performance

2015-08-03 Thread Kutzner, Carsten
Hi Asma, whether your performance is good/expected depends on some other parameters you have not mentioned. What time step length do you use? What electrostatics method? What do you mean by “CPU with 5 nodes?”, I assume you are using 5 MPI ranks on a 40 core CPU? http://arxiv.org/abs/1507.00898

Re: [gmx-users] submitting gmx cluster

2015-07-31 Thread Kutzner, Carsten
Hi Aniko, On 31 Jul 2015, at 10:52, Lábas Anikó labasan...@gmail.com wrote: Dear Gromacs Users, I would like to submit my 'gmx cluster' job, but I don't know how I can define on my command line the following two options, which normally appear on the screen when I run gmx cluster on my
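The usual way to supply the interactively requested index groups in a batch job is to pipe them in on stdin; a sketch, where the group numbers and file names are illustrative:
  echo "1 1" | gmx cluster -f traj.xtc -s topol.tpr -method gromos -cutoff 0.2 -g cluster.log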

Re: [gmx-users] performance of e5 2630 CPU with gtx-titan GPU

2015-07-29 Thread Kutzner, Carsten
Hi Netaly, in this study http://arxiv.org/abs/1507.00898 are GROMACS performance evaluations for many CPU/GPU combinations. Although your combination is not among them, you could try to estimate its performance from similar setups. There is for example an E5-1620 CPU with a TITAN GPU. Although

Re: [gmx-users] Best step to run simulation over GPU?

2015-07-26 Thread Kutzner, Carsten
On 27 Jul 2015, at 04:53, 라지브간디 ra...@kaist.ac.kr wrote: Thanks for the info. Can I put the command as below if I want to run 3 simulations on a machine that has 24 processors with 1 GPU? mdrun -deffnm first -multi 1 mdrun -deffnm second -multi 2 mdrun

Re: [gmx-users] Best step to run simulation over GPU?

2015-07-24 Thread Kutzner, Carsten
Hi, On 24 Jul 2015, at 08:48, RJ ra...@kaist.ac.kr wrote: Dear gmx, I have a single PC that contains 24 threads with a GTX 980Ti. I would like to know how I can run 2 or 3 simulations at the same time on the above-mentioned PC so that each runs at a similar speed. Simply try it out :) It could be
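One way to try this out is to start the runs side by side, each pinned to its own set of cores while sharing the single GPU; a sketch with illustrative thread counts for a 24-thread machine:
  gmx mdrun -deffnm first  -ntmpi 1 -ntomp 8 -pin on -pinoffset 0  -gpu_id 0 &
  gmx mdrun -deffnm second -ntmpi 1 -ntomp 8 -pin on -pinoffset 8  -gpu_id 0 &
  gmx mdrun -deffnm third  -ntmpi 1 -ntomp 8 -pin on -pinoffset 16 -gpu_id 0 &
  wait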

Re: [gmx-users] Enforced rotation errors

2015-07-24 Thread Kutzner, Carsten
suggestions are gratefully appreciated. Thanks Anthony Dr Anthony Nash Department of Chemistry University College London On 20/07/2015 15:53, Kutzner, Carsten ckut...@gwdg.de wrote: Dear Anthony, the problem you are experiencing with the 'flex' rotation potential could be related

Re: [gmx-users] Enforced rotation errors

2015-07-20 Thread Kutzner, Carsten
Dear Anthony, the problem you are experiencing with the ‘flex’ rotation potential could be related to the rotation group moving too far along the direction of the rotation vector. As the slabs are fixed in space for V_flex, the rotation group may after some time enter a region where no reference

Re: [gmx-users] Expected Performance of a GPU workstation

2015-07-15 Thread Kutzner, Carsten
Hi Jason, you might want to take a look at this study: http://arxiv.org/abs/1507.00898 Best, Carsten On 15 Jul 2015, at 06:44, Jason Loo Siau Ee jasonsiauee@taylors.edu.my wrote: Dear Gromacs users, I'm thinking about purchasing a GPU workstation for some simulation work, and

Re: [gmx-users] 4 Titan X or 2?

2015-05-22 Thread Kutzner, Carsten
On 22 May 2015, at 15:51, Albert mailmd2...@gmail.com wrote: Hello: I am going to perform MD simulation for a typical biological system with 60,000-80,000 atoms in all. Amber FF would be used for the whole system. I am just wondering: will 4x Titan X be much faster than 2x Titan X? That

Re: [gmx-users] 4 Titan X or 2?

2015-05-22 Thread Kutzner, Carsten
Asus GTX TITAN X, 12 GB GDDR5 Asus GTX TITAN X, 12 GB GDDR5 Asus GTX TITAN X, 12 GB GDDR5 Supermicro Server tower / rack housing, 4 x GPU, 8 x hotswap bays 2000W redundant power supply On 05/22/2015 04:04 PM, Kutzner, Carsten wrote: That depends on what hardware you want to pair

Re: [gmx-users] Running Multiple simulation in single PC

2015-04-30 Thread Kutzner, Carsten
On 30 Apr 2015, at 12:04, RJ ra...@kaist.ac.kr wrote: Thanks Carsten. I already compiled with openmpi and it works fine. Great! But note that OpenMPI and OpenMP are different things. OpenMPI is an MPI library, which is not required for parallel runs on single nodes, since the Gromacs

Re: [gmx-users] Running Multiple simulation in single PC.

2015-04-29 Thread Kutzner, Carsten
On 29 Apr 2015, at 10:59, RJ ra...@kaist.ac.kr wrote: Dear all, I have a 64-bit PC with 16 processors and a GTX 460 GPU and want to run multiple simulations. One simulation (~120 aa length) takes about 4-5 days for 100 ns, whereas when I submit the same protein for another 100 ns, it shows it

Re: [gmx-users] no error on missing cpt file

2015-04-29 Thread Kutzner, Carsten
Hi JIom, On 29 Apr 2015, at 11:50, gromacs query gromacsqu...@gmail.com wrote: Dear All, mdrun is not giving any error about a missing cpt file. It runs using the tpr from initial time zero. Sometimes my job gets killed and I need to use the cpt file in some script, but if the cpt is not found my job
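If the restart is driven from a script, the checkpoint can be tested for explicitly so the job aborts instead of silently starting again from time zero; a minimal bash sketch with illustrative file names:
  if [ -f state.cpt ]; then
      gmx mdrun -s topol.tpr -cpi state.cpt
  else
      echo "state.cpt not found - refusing to restart from time zero" >&2
      exit 1
  fi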

Re: [gmx-users] 35% Variability in Simulation Performance

2015-04-29 Thread Kutzner, Carsten
Hi Mitchell, On 30 Apr 2015, at 03:04, Mitchell Dorrell m...@udel.edu wrote: Hi all, I just ran the same simulation twice (ignore the difference in filenames), and got very different results. Obviously, I'd like to reproduce the faster simulation. I expect this probably has to do with

Re: [gmx-users] Running Multiple simulation in single PC.

2015-04-29 Thread Kutzner, Carsten
On 30 Apr 2015, at 04:14, 라지브간디 ra...@kaist.ac.kr wrote: Dear Carsten, Thank you for the suggestion. Do I need to compile with OpenMP even if I don't use cluster nodes? As I said, it's a single PC with 16 CPUs and 1 GPU. It makes sense if you want to use all 16 cores together with a

Re: [gmx-users] Optimal GPU setup for workstation with Gromacs 5

2015-04-25 Thread Kutzner, Carsten
Hi JJ, On 24 Apr 2015, at 10:53, Jingjie Yeo (IHPC) ye...@ihpc.a-star.edu.sg wrote: Hi Carsten, Thank you so much

Re: [gmx-users] Optimal GPU setup for workstation with Gromacs 5

2015-04-24 Thread Kutzner, Carsten

Re: [gmx-users] Optimal GPU setup for workstation with Gromacs 5

2015-04-23 Thread Kutzner, Carsten
Hi, On 23 Apr 2015, at 08:03, Jingjie Yeo (IHPC) ye...@ihpc.a-star.edu.sg wrote: Dear all, My workstation specs are 2 x Intel Xeon E5-2695v2 2.40 GHz, 12 Cores. I would like to combine this with an optimal GPU setup for Gromacs 5 running simulations with millions of atoms. May I know

Re: [gmx-users] ftp.gromacs.org not working now

2015-04-22 Thread Kutzner, Carsten
Try wget ftp://ftp.gromacs.org/pub/gromacs/gromacs-4.6.7.tar.gz Carsten On 22 Apr 2015, at 13:30, Vytautas Rakeviius vytautas1...@yahoo.com wrote: ftp://ftp.gromacs.org/pub/gromacs/gromacs-4.6.7.tar.gz

Re: [gmx-users] help: Gromacs

2015-04-20 Thread Kutzner, Carsten
Hi, What hardware to buy for GROMACS also depends a bit on what kind of MD simulations you intend to run on it (how big? one single simulation or lots of smaller ones?), and of course on how much money you can invest. The two choices you provide are very different in price; the more expensive