Hi Erik,
I am not sure that the CompEL code as it is can deal with the setup
you are describing (but I may be wrong).
Below are a few thoughts and things you might want to try.
> On 24.04.2020 at 00:05, Erik Henze wrote:
>
> Hi,
> I am attempting to study permeation events in an ion channel
> On 26.03.2020 at 17:00, Tobias Klöffel wrote:
>
> Hi Carsten,
>
>
> On 3/24/20 9:02 PM, Kutzner, Carsten wrote:
>> Hi,
>>
>>> On 24.03.2020 at 16:28, Tobias Klöffel wrote:
>>>
>>> Dear all,
>>> I am very new to Gromacs
Hi,
> On 24.03.2020 at 16:28, Tobias Klöffel wrote:
>
> Dear all,
> I am very new to Gromacs so maybe some of my problems are very easy to fix:)
> Currently I am trying to compile and benchmark gromacs on AMD rome cpus, the
> benchmarks are taken from:
>
Hi,
> On 04.12.2019 at 17:53, Matthew Fisher wrote:
>
> Dear all,
>
> We're currently running some experiments with a new hardware configuration
> and attempting to maximise performance from it. Our system contains 1x V100
> and 2x 12 core (24 logical) Xeon Silver 4214 CPUs which, after
Hi,
> On 21.11.2019 at 17:15, Marcin Mielniczuk wrote:
>
> Hi,
>
> I'm trying to make use of tune_pme to find out the optimal number of PME
> ranks. My command line is:
> UCX_LOG_LEVEL=info MPIRUN="mpirun" ./gmx_mpi tune_pme -v -s
> ../tip4p_min2 -mdrun "./gmx_mpi mdrun" -np 4
> When started
Hi,
is it intended that the thread-MPI version of mdrun 2018 pins to its core
when started with -nt 1 -pin auto?
Carsten
> I could use a piece of the CompEL
> code and modify it for ion flux counting; maybe you have some suggestions?
>
> Harut
>
> On 26.09.2019 at 16:24, Harutyun Sahakyan wrote:
>
> Dear Gromacs users,
>
> The computational electrophysiology protocol has some analysis tools that allow
> counting ion flux etc. I have used an external electric field to simulate
> the process of ion permeation through a membrane channel.
Hi tarzan p,
> On 02.09.2019 at 15:57, tarzan p wrote:
>
> Hi all, I am sure many of you have run a benchmark of the RIB system as
> given at https://www.mpibpc.mpg.de/grubmueller/bench
>
>
> I could manage 12ns/day with a workstation with 2 x Gold 6148 (20 cores each)
> and 2 x V100
Hi Carlos,
> On 20.08.2019 at 13:12, Carlos Navarro wrote:
>
> Dear gmx-users,
> I'm currently investigating ion permeability in a channel that displays
> multiple cavities through CompEL simulations. Unfortunately one can only
> define a single cylinder per membrane.
Yes, this counting of ion
Hi Zhang,
> On 9. Jul 2019, at 15:16, 张驭洲 wrote:
>
> Hello,
>
> I want to know if there is a maximum number of atoms that GROMACS can
> simulate, or if there are any bugs in the code that cause errors in
> computing the coordinates of atoms when the number of atoms is very large? I'm
>
> On 2. Jul 2019, at 10:35, Phuong Tran wrote:
>
> Hi all,
>
> I was wondering if I can use the computational electrophysiology package in
> Gromacs with the double membrane setup to simulate a system that has
> different numbers of waters in and out. This would need to use the grand
>
Hi,
don't spend all your money on a CPU; for high GROMACS performance
the GPU is just as important. I would recommend adding an RTX 2070/2080 GPU
to the workstation and getting a cheaper CPU. This will most likely
give you significantly higher GROMACS performance.
See https://arxiv.org/abs/1903.05918
Dear Francesco,
> On 29. May 2019, at 10:01, Francesco Petrizzelli
> wrote:
>
>> A cyl0-r (and cyl1-r) of 0.7 nm is too small for a pore radius of 3 Å to
>> reliably track the ions; some might sneak through your channel without being
>> recorded in the cylinder. Rather choose this value a
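For reference, these cylinder dimensions are set in the swap section of the .mdp file. A minimal sketch with illustrative values (option names as in the 2018/2019 .mdp format):
swapcoords = Z        ; do the ion/water position swapping along z
cyl0-r     = 1.0      ; nm, radius of the channel 0 cylinder
cyl0-up    = 0.75     ; nm, extension of the cylinder above the channel center
cyl0-down  = 0.75     ; nm, extension below the channel center
cyl1-r     = 1.0      ; same for the second channel
cyl1-up    = 0.75
cyl1-down  = 0.75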
Dear Francesco,
> On 27. May 2019, at 16:31, Francesco Petrizzelli
> wrote:
>
> Dear gmx users,
>
> I'm trying to simulate changes in the ion flux upon mutations using
> computational electrophysiology (CompEL).
> I have used packmol-memgen in order to generate the double bilayer using a
>
Hi Harutyun,
> On 3. Apr 2019, at 00:08, Harutyun Sahakyan wrote:
>
> Dear Gromacs users,
>
> I am trying to simulate the membrane potential using the computational
> electrophysiology protocol.
>
> I modeled a protein-membrane system with CHARMM-GUI, equilibrated it, and ran an
> MD simulation. After some
Hi,
> On 25. Feb 2019, at 21:15, Schulz, Roland wrote:
>
> That's not my experience. In my experience, for single-node runs, it is
> usually faster to use 72 OpenMP threads rather than using 72 (t)MPI threads.
wouldn’t a combination of OpenMP threads * (t)MPI threads be faster, e.g. 8x9?
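On a 72-core node, such a combination can be requested like this (a sketch; the 8x9 split is just one possibility):
gmx mdrun -ntmpi 8 -ntomp 9 -s topol.tpr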
> On 14. Feb 2019, at 08:19, Sujit Sarkar wrote:
>
> Dear gmx-users,
> Is Gromacs 2018.3 installation supported by Cuda 10.0 toolkit?
Yes.
Carsten
> Thanks,
> Sujit
Hi,
> On 7. Feb 2019, at 15:17, jing liang wrote:
>
> Hi,
>
> thanks for this information. I wonder if PME offload has been implemented
> for multi-node simulations? I tried the following command for running on
> two nodes (with 4 ranks each
> and 4 OpenMP threads):
>
> mpirun -np 8
Hi Nam Pho,
> On 16. Jan 2019, at 01:40, Nam Pho wrote:
>
> Hello GROMACS Users,
>
> My name is Nam and I support a campus supercomputer for which one of the
> major applications is GROMACS. I was curious if anyone has optimized
> servers for cost and has a blueprint for that,
yes, we did!
We
> On 7. Jan 2019, at 11:55, morpheus wrote:
>
> Hi,
>
> I am running simulations on a cluster that terminates jobs after a hard
> wall clock time limit. Normally this is not a problem as I just restart the
> simulations using -cpi state.cpt but for the last batch of simulations I
> got (for
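For reference, a typical restart from a checkpoint looks like this (a sketch; file names are placeholders, and -maxh lets mdrun stop cleanly before the wall-clock limit):
gmx mdrun -s topol.tpr -cpi state.cpt -maxh 23.5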
Hi,
> On 18. Dec 2018, at 18:04, Zachary Wehrspan wrote:
>
> Hello,
>
>
> I have a quick question about how GROMACS 2018.5 distributes GPU resources
> across multiple nodes all running one simulation. Reading the
> documentation, I think it says that only 1 GPU can be assigned to the PME
>
> On 13. Dec 2018, at 22:31, pbusc...@q.com wrote:
>
> Carsten,
>
> A possible issue...
>
> I compiled gmx 18.3 with gcc-5 (CUDA 9 seems to run normally). Should I
> recompile with gcc-6.4?
I don’t think that this will make a huge impact (but maybe you get a few extra
percent performance)
Carsten
> my results were a compilation of 4-5 runs, each under slightly different
> conditions, on two computers. All with the same outcome, that is: ugh! Mark
> had asked for the log outputs, indicating some useful conclusions could be
> drawn from them.
>
> Pa
Hi Paul,
> On 12. Dec 2018, at 15:36, pbusc...@q.com wrote:
>
> Dear users ( one more try )
>
> I am trying to use 2 GPU cards to improve modeling speed. The computer
> described in the log files is used to iron out models, and I am using it to learn
> how to use two GPU cards before purchasing
Hi Shreyas,
> On 26. Jun 2018, at 15:21, Shreyas Kaptan wrote:
>
> Dear All,
>
> I have a question regarding the make_edi tool. I used it some time back to
> restrain my structures to a particular projection value of an eigenvector
> with the -restrain keyword. However, I seem to have
> On 10. Apr 2018, at 18:13, Viveca Lindahl wrote:
>
> Thanks. It's running now. I just had a typo in my gmx_mpi variable that
> gave me the last error I posted. Using the non-mpi binary for tune_pme
> solved it.
BTW you might find the tune_pme -ntpr 1 switch useful if
Hi Viveca,
> On 10. Apr 2018, at 15:10, Viveca Lindahl wrote:
>
> Hi users,
>
> I never used gmx tune_pme before and thought I'd try. On a Cray machine,
> using aprun instead of mpirun, I did
>
> args="/cfs/klemming/nobackup/v/vivecal/programs/gromacs/2018.1/bin/gmx
>
Hi,
> On 16. Mar 2018, at 02:39, 가디 장데부 고라크스나트 wrote:
>
> Hello, I am performing the computational electrophysiology GROMACS tutorial. I
> successfully passed grompp for the CompEL setup but failed with a fatal error at the
> mdrun step: not all ion group molecules consist of 3
Hi Dan,
> On 11. Feb 2018, at 20:13, Daniel Kozuch wrote:
>
> Hello,
>
> I was recently trying to use the tune_pme tool with GROMACS 2018 with the
> following command:
>
> gmx tune_pme -np 84 -s my_tpr.tpr -mdrun 'gmx mdrun'
Maybe you need to compile gmx without MPI (so
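A combination that usually works is a non-MPI gmx binary driving an MPI-enabled mdrun, roughly like this (a sketch; binary names depend on your installation):
export MPIRUN=mpirun
gmx tune_pme -np 84 -s my_tpr.tpr -mdrun 'gmx_mpi mdrun'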
Dear Jason,
1.)
we have observed a similar behavior comparing Intel Silver 4114 against
E5-2630v4
processors in a server with one GTX 1080Ti. Both CPUs have 10 cores and run at
2.2 GHz. Using our standard benchmark systems (see
https://arxiv.org/abs/1507.00898)
we were able to get 74.4 ns/day
Thanks a lot for the info!
Best,
Carsten
Hi,
what are the expected performance benefits of AVX_512 SIMD instructions
on Intel Skylake processors, compared to AVX2_256? In many cases, I see
a significantly (15 %) higher GROMACS 2016 / 2018b2 performance when using
AVX2_256 instead of AVX_512. I would have guessed that AVX_512 is at least
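For such comparisons, the SIMD level can be chosen at configure time via GMX_SIMD (a sketch; both values are valid choices on Skylake):
cmake .. -DGMX_SIMD=AVX2_256    # or -DGMX_SIMD=AVX_512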
Hi,
> On 30. Sep 2017, at 09:50, David van der Spoel wrote:
>
> On 29/09/17 15:10, Carlos Navarro wrote:
>> Dear all,
>> I’m currently running computational electrophysiology simulations which
>> need double bilayer systems.
>> I know that with genconf -f input.gro -nbox 1
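For reference, stacking two copies of a bilayer system along z can be done roughly like this (a sketch; file names are placeholders):
gmx genconf -f single_bilayer.gro -nbox 1 1 2 -o double_bilayer.gro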
> On 18. Jun 2017, at 13:40, Vries, de, H.W.
> wrote:
>
> dear all,
>
> I am employing the computational electrophysiology scheme in gromacs 5.1.4.
> I want to set a bulk-offset parameter, such that the scheme only does
> position exchanges in a region that is
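For reference, these offsets live in the swap section of the .mdp file; a minimal sketch with illustrative values, assuming a GROMACS version whose swap section supports them:
bulk-offsetA = 0.0     ; offset of the swap layer from the compartment A midplane
bulk-offsetB = -0.2    ; illustrative value for compartment B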
Hi Charlie,
I just made a quick check with 5.1 and 2016 and I also see the problem that you
described. For me, IMD works with 5.1 but not with 2016, though I don't know why.
Could you file a bug report?
Thank you,
Carsten
> On 14. Jun 2017, at 18:05, Charles Laughton
>
Hi,
the output file gives you the numbers of the atoms that seem to move from
compartment A to B without passing a channel. I would look at their
trajectories and inspect which path they actually take.
The channel trimer also needs to be whole at the beginning of the
simulation, otherwise the
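One way to inspect such atoms is to put them into an index group and extract their coordinates (a sketch; the atom numbers are hypothetical and would come from the swap output):
gmx select -s topol.tpr -select 'atomnr 1234 5678' -on leak.ndx
gmx trjconv -f traj.xtc -s topol.tpr -n leak.ndx -o leak_paths.pdb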
Hi,
what was g_membed has been integrated into mdrun and can be used with
gmx mdrun -membed settings.dat ... Look for the membed-related output of
gmx mdrun -h,
or check the documentation, e.g.
Hi,
> On 14 Mar 2017, at 11:18, Vries, de, H.W.
> wrote:
>
> Dear all,
>
> I am currently trying to run the computational electrophysiology scheme on
> an implicit solvent, coarse-grained system by introducing a little
> workaround:
>
> In the manual, it is
> Another interesting thing is that no one has any experience on this topic on
> the gmx user mailing list :(
>
> Any suggestions will be appreciated.
>
> Thanks in advance.
>
>> On 12 Jan 2017, at 18:34, "Kutzner, Carsten" <ckut...@gwdg.de> wrote:
>>
>
> gen_seed = -1
> gen_temp = 298.15
>
> ;FREE ENERGY
> free-energy = yes
> init-lambda = 1
> delta-lambda = 0
> sc-alpha = 0.3
> sc-power = 1
> sc-sigma = 0.25
> sc-coul
Dear Qasim,
those kinds of domain decomposition 'errors' can happen when you
try to distribute an MD system among too many MPI ranks. There is
a minimum cell length for each domain decomposition cell in each
dimension, which depends on the chosen cutoff radii and possibly
other inter-atomic
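A quick check is simply to request fewer ranks, e.g. (a sketch):
mpirun -np 16 gmx_mpi mdrun -s topol.tpr    # fewer ranks means larger DD cells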
Hi Henry,
> On 18 Oct 2016, at 14:45, Vries, de, H.W.
> wrote:
>
> Dear all,
>
>
> In the newest versions of GROMACS the computational electrophysiology
> method (CompEL) is implemented through the swapping of solvent molecules
> with ions, thus providing a
> On 30 Sep 2016, at 16:31, Albert wrote:
>
> Hello:
>
> I've got a GPU workstation with two GPUs. I am just wondering which version
> will give better performance? The MPI version, the thread version, or the
> thread-OpenMP version? There are 20 threads (Intel(R)
Hi,
have a look at
https://www.mpibpc.mpg.de/15070156/Kutzner_2015_JCC.pdf
which should answer your questions!
Best,
Carsten
> On 08 Jul 2016, at 12:11, Nikhil Maroli wrote:
>
> Dear all,
>
>
>
> We have a budget of $12,000 for a GPU workstation. We will be running
> On 15 Jun 2016, at 17:12, Nikhil Maroli wrote:
>
> Dear all,
> I wanted to study the transport properties of a channel protein in the lipid
> bilayer. Is there any server or tool to make sandwiches of the lipid bilayer
> for studying the transport properties?
The supporting
e master version of GROMACS that you tried to compile?
For that version you will probably have to do some more adaptations of
your .mdp file.
There is no IMD for 4.5 and 4.6; you will need to use 5.0 or later.
Carsten
>
> Do you have any idea about the error?
>
> Thank you
> Pada t
Hi,
the error message you see has nothing to do with IMD. Have you tried
to install a Gromacs 5.0 or 5.1 or 2016 version? These should all
work out of the box with IMD.
Best,
Carsten
> On 09 Jun 2016, at 04:35, Andrian Saputra wrote:
>
> Dear gromacs users
>
> i
Hi,
> On 07 Jun 2016, at 22:00, jing liang wrote:
>
> Hi,
>
> the output of "gmx tune_pme" in the perf.out file reports the following line at
> the end:
>
> Line   tpr   PME ranks   Gcycles Av.   Std.dev.   ns/day   PME/f   DD grid
>
> how can I interpret
Hi Maximilien,
depending on what exactly the questions are that you would like to address,
the double-membrane setup as used in computational electrophysiology
might be helpful for you.
There is a section in the GROMACS PDF manual about that, and there are these
two papers:
Hi,
you can use the -gpu_id command line parameter to mdrun to map available
GPUs to MPI ranks. In the simplest case with two MPI ranks:
mdrun -gpu_id 01
You will probably get much more performance by using more MPI ranks
(check out 10.1002/jcc.24030). E.g. for 6 MPI ranks use -gpu_id 000111
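Put together, a full command line could look like this (a sketch; the per-rank mapping string refers to the pre-2018 -gpu_id syntax used above):
gmx mdrun -ntmpi 6 -gpu_id 000111 -s topol.tpr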
> On 27 Apr 2016, at 19:42, Jorge Fernández de Cossío Díaz
> wrote:
>
> I compiled gromacs 3.3.4, and pointed the g_correlation Makefile to its
> directory, but compilation of g_correlation still complains that it can't
> find "fatal.h". Any ideas?
it has been renamed
ct_04_01.html
Good luck :)
Carsten
>
> Thanks!
>
> J.
>
> 2016-04-26 12:45 GMT+02:00 Kutzner, Carsten <ckut...@gwdg.de>:
>>
>>> On 26 Apr 2016, at 11:22, James Starlight <jmsstarli...@gmail.com> wrote:
>>>
>>> Exactly th
lex_conf7/md_resp_complex_conf?.tpr
Carsten
>
> the same for trr
>
> J.
>
> 2016-04-26 11:11 GMT+02:00 Kutzner, Carsten <ckut...@gwdg.de>:
>>
>>> On 26 Apr 2016, at 11:00, James Starlight <jmsstarli...@gmail.com> wrote:
>>>
>>> Hel
> On 26 Apr 2016, at 11:00, James Starlight wrote:
>
> Hello,
>
> I am faced with the following problem:
>
> I am trying to make a small script which will loop over several folders
> corresponding to the individual simulations and process each trajectory,
> searching them by the keyword
Dear Hendrik,
that indeed looks a bit strange. If you provide the necessary input
files for a test, I could have a look at what might be wrong.
You can put them somewhere for download or email them directly
to me (not to the list, they will not be accepted).
Carsten
> On 30 Mar 2016, at 16:11,
Hi Nikhil,
please take a look at
http://onlinelibrary.wiley.com/doi/10.1002/jcc.24030/full
Many of your questions are answered there!
Best,
Carsten
> On 22 Mar 2016, at 06:52, Nikhil Maroli wrote:
>
> Dear all,
>
> we would like to purchase one GPU enabled
Dear David,
I think you will find answers to many of your questions in the following
publication:
http://onlinelibrary.wiley.com/doi/10.1002/jcc.24030/full
Best,
Carsten
> On 12 Mar 2016, at 02:52, David Berquist wrote:
>
> I'm looking into building a desktop
Hi,
you could try to manually set the number of PME nodes with -npme in 5.1.
Does that reproduce the 5.0 performance?
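Something along these lines (a sketch; the number of separate PME ranks to try depends on your run):
mpirun -np 16 gmx_mpi mdrun -npme 4 -s topol.tpr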
Carsten
> On 07 Feb 2016, at 19:57, Johannes Wagner wrote:
>
> hey guys,
> came across an issue with pp/pme ratio difference from 5.0 to 5.1. For a
Hi,
> On 10 Dec 2015, at 08:59, Felix W.-H. Weng wrote:
>
> Dear all:
>
> Has anyone succeeded in installing the command g_correlation in GROMACS
> 4.5.5?
Does it work with 3.3?
> File obtained from http://www.mpibpc.mpg.de/grubmueller/g_correlation
> I tried installing
https://gerrit.gromacs.org/#/c/5346/
should solve the issue.
Carsten
> On 17 Nov 2015, at 10:53, Åke Sandgren <ake.sandg...@hpc2n.umu.se> wrote:
>
> Nope, doesn't help
>
> On 11/17/2015 10:09 AM, Kutzner, Carsten wrote:
>> Hi,
>>
>> there have b
Hi,
there have been changes in the way the command line options are parsed.
You could put the argument to -npstring in quotation marks. I think
-npstring " -n"
(including the space before the "-") is accepted by the parser.
Carsten
> On 17 Nov 2015, at 08:58, Åke Sandgren
> On 03 Nov 2015, at 11:30, Albert wrote:
>
> Hello:
>
> I am trying to compile Gromacs-5.0.7 with command:
>
>
> CC=mpicc CXX=mpicxx F77=mpif90 FC=mpif90 LDF90=mpif90
> CMAKE_PREFIX_PATH=/home/albert/install/intel/mkl/include/fftw:/home/albert/install/intel/impi
>
Hi,
take a look at the mdout.mdp grompp output file. There you should see
how the pull-related parameters are named in the version you are using.
Maybe you just need to use minus signs instead of underscores.
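For example (a sketch; mdout.mdp then lists all parameters in the spelling this GROMACS version understands):
gmx grompp -f pull.mdp -c conf.gro -p topol.top -po mdout.mdp -o pull.tpr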
Best,
Carsten
> On 05 Oct 2015, at 04:25, Stella Nickerson
the 'supplements' box on the right-hand side of
http://www.mpibpc.mpg.de/grubmueller/kutzner/publications
Best,
Carsten
>
>
> On Wed, Aug 26, 2015 at 1:16 PM, Kutzner, Carsten <ckut...@gwdg.de> wrote:
>
>> Hi,
>>
>>> On 25 Aug 2015, at 20:23, Sabyasachi Sahoo
Hi,
On 25 Aug 2015, at 20:23, Sabyasachi Sahoo ssahoo.i...@gmail.com wrote:
Hello all,
I have good enough experience in high performance and parallel computing
and would like to find out bottlenecks in various phases of GROMACS. Can
anyone please give me links to ready-to-run input files
Hi,
if you have a small simulation system or constraints over long distances (> 1 nm),
it could very well be that no domain decomposition can be found.
Try to run using fewer domains (i.e. MPI ranks), or switch over to Gromacs 4.6
or 5.0, where you can use multiple OpenMP threads per MPI rank, thereby
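For example, one can use fewer domains at the same total core count (a sketch for a 32-core node; the binary name depends on your installation):
mpirun -np 4 gmx_mpi mdrun -ntomp 8 -s topol.tpr    # 4 DD domains x 8 OpenMP threads each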
Dear Shabana,
which version of GROMACS are you using?
Carsten
On 06 Aug 2015, at 11:24, shabana yasmeen shabana.yasmee...@gmail.com wrote:
Dear users!
I am working on a membrane protein simulation, but at volume coupling I get
an error of no domain decomposition. I tried a lot but it ended
Hi Oskar,
On 03 Aug 2015, at 11:27, Oskar Berntsson oskar.bernts...@gu.se wrote:
Dear all,
I am simulating a protein and I want to sample conformations that are not
commonly sampled using an equilibrium simulation. As far as I figure, what I
want to use is essential dynamics.
I
Hi Asma,
whether your performance is good/expected depends on some other
parameters you have not mentioned. What time step length do you
use? What electrostatics method? What do you mean by "CPU with
5 nodes"? I assume you are using 5 MPI ranks on a 40-core CPU?
http://arxiv.org/abs/1507.00898
Hi Aniko,
On 31 Jul 2015, at 10:52, Lábas Anikó labasan...@gmail.com wrote:
Dear Gromacs Users,
I would like to submit my 'gmx cluster' job, but I don't know how I can
define on my command line the following two options, which normally appear
on the screen when I run gmx cluster on my
Hi Netaly,
this study http://arxiv.org/abs/1507.00898 contains GROMACS performance
evaluations for many CPU/GPU combinations. Although your combination
is not among them, you could try to estimate its performance from
similar setups.
There is for example an E5-1620 CPU with a TITAN GPU. Although
On 27 Jul 2015, at 04:53, 라지브간디 ra...@kaist.ac.kr wrote:
Thanks for the info.
Can I put the command as below if I want to run 3 simulations on a machine which has 24
processors with 1 GPU?
mdrun -deffnm first -multi 1
mdrun -deffnm second -multi 2
mdrun
Hi,
On 24 Jul 2015, at 08:48, RJ ra...@kaist.ac.kr wrote:
Dear gmx,
I have a single PC containing 24 threads with a GTX 980Ti.
I would like to know how I can run 2 or 3 simulations at the same time on the
above-mentioned PC with similar speed.
Simply try it out :)
It could be
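One way is to start three independent mdrun processes with non-overlapping core pinning, all sharing the single GPU (a sketch; thread counts and offsets are illustrative):
gmx mdrun -deffnm first  -nt 8 -pin on -pinoffset 0  -gpu_id 0 &
gmx mdrun -deffnm second -nt 8 -pin on -pinoffset 8  -gpu_id 0 &
gmx mdrun -deffnm third  -nt 8 -pin on -pinoffset 16 -gpu_id 0 &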
suggestions are gratefully appreciated.
Thanks
Anthony
Dr Anthony Nash
Department of Chemistry
University College London
On 20/07/2015 15:53, Kutzner, Carsten ckut...@gwdg.de wrote:
Dear Anthony,
the problem you are experiencing with the 'flex' rotation potential
could be related
Dear Anthony,
the problem you are experiencing with the ‘flex’ rotation potential
could be related to the rotation group moving too far along the direction
of the rotation vector. Since for V_flex the slabs are fixed in space,
the rotation group may after some time enter a region where no reference
Hi Jason,
you might want to take a look at this study:
http://arxiv.org/abs/1507.00898
Best,
Carsten
On 15 Jul 2015, at 06:44, Jason Loo Siau Ee jasonsiauee@taylors.edu.my
wrote:
Dear Gromacs users,
I'm thinking about purchasing a GPU workstation for some simulation work, and
On 22 May 2015, at 15:51, Albert mailmd2...@gmail.com wrote:
Hello:
I am going to perform MD simulation for a typical biological system with
60,000-80,000 atoms in all. Amber FF would be used for the whole system.
I am just wondering: will 4x Titan X be much faster than 2x Titan X?
That
Asus GTX TITAN X, 12 GB GDDR5
Asus GTX TITAN X, 12 GB GDDR5
Asus GTX TITAN X, 12 GB GDDR5
Supermicro Server tower / rack housing, 4 x GPU, 8 x hotswap bays
2000W redundant power supply
On 05/22/2015 04:04 PM, Kutzner, Carsten wrote:
That depends on what hardware you want to pair
On 30 Apr 2015, at 12:04, RJ ra...@kaist.ac.kr wrote:
Thanks Carsten.
I already compiled with openmpi and it works fine.
Great! But note that OpenMPI and OpenMP are different things.
OpenMPI is an MPI library, which is not required for parallel runs
on single nodes, since the Gromacs
On 29 Apr 2015, at 10:59, RJ ra...@kaist.ac.kr wrote:
Dear all,
I have a 64-bit PC with 16 processors and a GTX 460 GPU and want to run
multiple simulations.
One simulation (~120 aa length) for 100 ns takes about 4-5 days, whereas when
I subject the same protein to 100 ns, it shows it
Hi JIom,
On 29 Apr 2015, at 11:50, gromacs query gromacsqu...@gmail.com wrote:
Dear All,
mdrun is not giving any error about missing cpt file. It runs using tpr
from initial time zero. Sometimes my job gets killed and I need to use the cpt
file in a script, but if the cpt is not found my job
Hi Mitchell,
On 30 Apr 2015, at 03:04, Mitchell Dorrell m...@udel.edu wrote:
Hi all, I just ran the same simulation twice (ignore the difference in
filenames), and got very different results. Obviously, I'd like to
reproduce the faster simulation. I expect this probably has to do with
On 30 Apr 2015, at 04:14, 라지브간디 ra...@kaist.ac.kr wrote:
Dear Carsten,
Thank you for suggestion.
Do I need to compile with OpenMP even if I don't use cluster nodes? As I
said, it's a single PC with 16 CPUs and 1 GPU.
It makes sense if you want to use all 16 cores together with a
Hi JJ,
On 24 Apr 2015, at 10:53, Jingjie Yeo (IHPC) ye...@ihpc.a-star.edu.sg
wrote:
Hi Carsten,
Thank you so much
Hi,
On 23 Apr 2015, at 08:03, Jingjie Yeo (IHPC) ye...@ihpc.a-star.edu.sg wrote:
Dear all,
My workstation specs are 2 x Intel Xeon E5-2695v2 2.40 GHz, 12 Cores. I would
like to combine this with an optimal GPU setup for Gromacs 5 running
simulations with millions of atoms. May I know
Try
wget ftp://ftp.gromacs.org/pub/gromacs/gromacs-4.6.7.tar.gz
Carsten
On 22 Apr 2015, at 13:30, Vytautas Rakeviius vytautas1...@yahoo.com wrote:
ftp://ftp.gromacs.org/pub/gromacs/gromacs-4.6.7.tar.gz
Hi,
What hardware to buy for GROMACS also depends a bit on what kind of MD
simulations
you intend to run on it (how big? one single simulation or lots of smaller
ones?),
and of course on how much money you can invest.
The two choices you provide are very different in price; the more expensive