On Wed, Apr 11, 2012 at 6:16 PM, Mark Abraham wrote:
> On 12/04/2012 1:42 AM, haadah wrote:
>>
>> Could you clarify what you mean by "Sounds like an MPI configuration
>> program.
>> I'd get a test program running on 18 cores before worrying about anything
>> else."? My problem is that i can't get
Hi,
> thought you can help with this...
I'd appreciate it if you kept future conversation on the mailing list.
> The GROMACS website says Geforce GTX 560 Ti is supported. How about GTX 560?
Yes, they are. All Fermi-class cards are supported.
--
Szilárd
> Thanks,
> G
>
> --
> Gaurav Goel, PhD
>
On a single quad 16-core Interlagos node with a system size of 34k
atoms I get 55-60 ns/day, and even with 24k atoms ~70 ns/day (solvated
protein, amber99 + tip3p). On three of these nodes I think you can get
close to 100 ns/day, but it will be close to the scaling limit.
Have you considered vsites with
Just try it and you'll see; it's not like -v is a dangerous command-line option!
--
Szilárd
On Fri, Mar 30, 2012 at 1:10 AM, Acoot Brett wrote:
> Dear All,
>
> There is a tutorial says the function of "-v" in mdrun is to make the md
> process visible in the screen, which is correct.
>
> However fr
Hi Hubert,
> With a coworker, we recently developed a plugin Vi to manipulate easily
> Gromacs files.
> It enables syntax highlighting for Gromacs files. For the moment, it works
> with mdp, gro, top/itp and ndx files.
> It contains also macros to comment/uncomment easily selections of a file.
Th
The docs actually tell you:
"Native GPU acceleration is supported with the verlet cut-off scheme
(not with the group scheme) with PME, reaction-field, and plain
cut-off electrostatics."
(http://www.gromacs.org/Documentation/Parallelization_and_acceleration#GPU_acceleration)
Use the cutoff-scheme = Verlet option in your mdp file.
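For reference, a minimal mdp fragment matching the quoted requirement (illustrative settings only, not a complete input file):

```
; native GPU acceleration requires the Verlet scheme
cutoff-scheme   = Verlet
coulombtype     = PME    ; reaction-field and plain cut-off also work
```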
Hi Sara,
The bad performance you are seeing is most probably caused by the
combination of the new AMD "Interlagos" CPUs, compiler, and operating
system, and it is very likely that the old Gromacs version also
contributes.
In practice these new CPUs don't perform as well as expected, but that
is partly
I fully agree with David, it's great to have independent benchmarks!
In fact, the previous version of the report has already been of
great use for us; we have referred to its results on a few occasions.
--
Szilárd
On Thu, Mar 15, 2012 at 2:37 PM, Hannes Loeffler
wrote:
> Dear all,
>
> we
Hi,
Could you compile in Debug mode and run mdrun-gpu in gdb? It will tell
you more about the location and type of the exception.
--
Szilárd
On Mon, Mar 12, 2012 at 8:47 AM, TH Chew wrote:
> Hi all,
>
> I am trying to get the GPU version of gromacs running. I manage to install
> OpenMM, compile and i
Hi,
That sounds like a memory leak. Could you please file a bug report in
Redmine, preferably with:
- configure options (+config.log or CMakeCache.txt would be ideal)
- compiler version, external library versions (fft, blas, lapack)
- input (tpr at least)
- full mdrun command line
Cheers,
--
Szilá
Hi Efrat,
It indeed looks like a memory leak.
Could you please file a bug on redmine.gromacs.org?
Cheers,
--
Szilárd
On Sun, Mar 4, 2012 at 12:21 PM, Efrat Exlrod wrote:
> Hi Szilard,
>
> Thanks for your reply.
> I used your script and I think it does look as a memory leak. Please look at
>
Hi,
First I thought that there might be a memory leak, which could have
caused this if you ran for a really long time. However, I've just run
the very same benchmark (dhfr with PME) for one hour, monitored the
memory usage and I couldn't see any change whatsoever (see the plot
attached).
I've attach
Hi Adam,
> I'm trying to run a mdrun-gpu simulation on a 64-bit Ubuntu system.
> I'm using a 15GB GTX 580 NVIDIA GPU card with all the appropriate drivers
> and cuda toolkit.
> However, when I run the command:
>
> mdrun-gpu -s inpufile.tpr -c inputfile.gro -x outputfile.xtc -g
> outputfile.log -n
19, 2012 at 8:45 PM, aiswarya pawar
> wrote:
>>
>> Has the tesla card got to do anything with the error. Am using Nvidia
>> Tesla S1070 1U server.
>>
>>
>> On Thu, Jan 19, 2012 at 8:37 PM, Szilárd Páll
>> wrote:
>>>
>>> And sort
yone knows what's going wrong.
>>
>>
>> No, but you should start trying to simplify what you're doing to see where
>> the problem lies. Does normal mdrun work?
>>
>> Mark
>>
>>
>> Thanks
>> Sent from my BlackBerry® on Reliance Mo
Hi,
Most of those are just warnings, the only error I see there comes from
the shell, probably an error in your script.
Cheers,
--
Szilárd
On Wed, Jan 18, 2012 at 12:27 PM, aiswarya pawar
wrote:
> Hi users,
>
> Am running mdrun on gpu . I receive an error such as=
>
> WARNING: This run will g
Hi,
Teslas should be superior when it comes to reliability, but otherwise,
for Gromacs 4.6 GeForce cards are perfectly fine and no Tesla-specific
features will provide performance benefits. The only exception is
GPUDirect for InfiniBand which might not work with consumer boards --
, although I don
palardo
> Dept. Quimica Fisica, Univ. de Sevilla (Spain)
>
> On Tue, 29 Nov 2011 22:04:08 +0100, Szilárd Páll wrote:
>>
>> Hi Andrzej,
>>
>>> One more question: will a ratio of gpu/cpu units and cores be of
>>> importance
>>> in next gromacs releases
Hi,
I tried a Pathscale 4.0.12 nightly and, except for a few warnings,
compilation went fine. I don't have 4.0.11 around, though.
However, mdrun segfaults at the very end of the run while generating
the cycle and time counter table. I don't have time to look into this,
but I'll get back to the issue wh
Hi,
Pathscale seems to be as fast as gcc 4.5 on AMD Barcelona and the
-march=barcelona option unfortunately doesn't seem to help much.
However, I didn't try any other compiler optimization options.
We do have several Magny-Cours machines around we can benchmark on,
but thanks for the offer!
Chee
On Thu, Dec 1, 2011 at 4:49 PM, Teemu Murtola wrote:
> On Thu, Dec 1, 2011 at 16:46, Szilárd Páll wrote:
>>> With the pgi compiler, I am most concerned about this floating point
>>> overflow warning:
>>>
>>> ...
>>> [ 19%] Building C object s
Hi,
I've personally never heard of anybody using gromacs compiled with PGI.
> I am using a new cluster of Xeons and, to get the most efficient
> compilation, I have compiled gromacs-4.5.4 separately with the intel,
> pathscale, and pgi compilers.
I did try Pathscale a few months ago and AFAIR it
Hi Andrzej,
> One more question: will a ratio of gpu/cpu units and cores be of importance
> in next gromacs releases ? at the moment the code uses one core per gpu
> unit, wright ? When the code is gpu parallel how can this change ?
Yes, it will. We use both CPU & GPU and load balance between the
Thanks for the info.
--
Szilárd
On Tue, Nov 29, 2011 at 11:50 AM, Andrzej Rzepiela
wrote:
> Hey,
>
> Thank you for the info. The data that I obtained for comparison was
> performed with GTX580, 4 fs timestep and heavy hydrogen atoms instead of
> constraints, as you suspected. For dhfr with PME
allation and usage?
>
> Thanks a lot!
>
> Sincerely yours,
>
> Jones
>
>
>>
>> --
>> Szilárd
>>
>>
>>
>> On Sun, Nov 27, 2011 at 11:25 PM, Alexey Shvetsov
>> wrote:
>> > Hi!
>> >
>&g
> Will it use CUDA or OpenCL? Second one will be more common since it will
> work with wider range of platfroms (cpu, gpu, fcpga)
>
> Szilárd Páll wrote on 27.11.2011 at 23:50:
>>
>> Native acceleration = not relying on external libraries. ;)
>>
>> --
>> Szilárd
On 2011-11-27 12:10:47PM -0600, Szilárd Páll wrote:
>> Hi Andrzej,
>>
>> GROMACS 4.6 is work in progress, it will have native CUDA acceleration
>> with multi-GPU support along a few other improvements. You can expect
>> a speedup in the ballpark of 3x. We will soon hav
Hi Andrzej,
GROMACS 4.6 is work in progress, it will have native CUDA acceleration
with multi-GPU support along a few other improvements. You can expect
a speedup in the ballpark of 3x. We will soon have the code available
for testing.
I'm a little skeptical about the 5x of ACEMD. What setting di
Hi,
I don't remember any incident related to tools crashing, but I do
recall a problem which initially was attributed to a known gcc 4.1 bug
(http://redmine.gromacs.org/issues/431), but it turned out to be a GB
bug.
However, knowing that there is such a nasty bug in gcc 4.1, we thought
it's bette
On Tue, Nov 8, 2011 at 11:59 PM, Mark Abraham wrote:
> On 8/11/2011 11:35 PM, Szilárd Páll wrote:
>
>> Hi,
>>
>> There have been quite some discussion on the topic of GROMACS on
>> Cygwin so please search the mailing list for information.
>>
>
>
Hi,
There has been quite some discussion on the topic of GROMACS on
Cygwin, so please search the mailing list for information.
Some of that information might not have made it into the wiki
(http://goo.gl/ALQuC) - especially as the page appears to have been
untouched for the last 7 months. [Which is a pity a
On Mon, Oct 31, 2011 at 1:06 PM, Mark Abraham wrote:
> On 30/10/2011 8:56 AM, Mirco Wahab wrote:
>>
>> On 24.10.2011 23:23, Szilárd Páll wrote:
>>>
>>> I've just realized that both you and the similar report you linked to
>>> were using CMake 2.
ind entering it as a bug in redmine.gromacs.org. I'll look
into the issue in the coming days.
Cheers,
--
Szilárd
On Sat, Oct 29, 2011 at 11:56 PM, Mirco Wahab
wrote:
> On 24.10.2011 23:23, Szilárd Páll wrote:
>>
>> I've just realized that both you and the similar repor
oint have no atoms in
> VMD. So, that's probably not a good thing.
Wow, that sounds crazy. What driver version are you using? Try
updating your device driver + nvidia-settings - I've been using
285.05.05/09 without problems.
--
Szilárd
> Thanks,
> Matt
>
> On Mon,
Hi,
> Thank you very much. I removed -xHOST from CFLAGS and FFLAGS, and now it
> runs correctly.
Good that it worked. Still, it's bizarre that icc failed at compiling
the code it generated...
FYI: removing the flag might result in slightly slower binaries, but
the difference should be quite small
Hi,
Firstly, you're not using the latest version and there might have been
a fix for your issue in the 4.5.5 patch release.
Secondly, you should check the http://redmine.gromacs.org bugtracker
to see what bugs have been fixed in 4.5.5 (ideally the target version
should tell). You can also just do
I've just realized that both you and the similar report you linked to
were using CMake 2.8.3. If you don't succeed could you try another
CMake version?
--
Szilárd
On Mon, Oct 24, 2011 at 11:14 PM, Szilárd Páll wrote:
> Please keep all discussions on the mailing list! Also, I'
Please keep all discussions on the mailing list! I'm also CC-ing
the gmx-devel list; maybe somebody over there has a better idea of
what causes your CMake issue.
>>> //Flags used by the compiler during all build types
>>> CMAKE_C_FLAGS:STRING=' -msse2 -ip -funroll-all-loops -std=gnu99 '
>>>
>>>
Hi Matt,
Yes, you should use the "force-device=yes" option; the patch which was
meant to update the list of compatible GPUs didn't make it into 4.5.5.
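For what it's worth, the option goes into mdrun-gpu's -device string; as far as I recall from the 4.5 GPU docs the invocation looks roughly like this (the option names here are from memory, so double-check them against mdrun-gpu -h):

```shell
# Hypothetical invocation; the command is printed rather than executed here.
DEVICE_OPTS="OpenMM:platform=Cuda,deviceid=0,force-device=yes"
echo mdrun-gpu -device "$DEVICE_OPTS" -s topol.tpr
```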
Cheers,
--
Szilárd
On Sun, Oct 23, 2011 at 10:24 PM, Matt Larson wrote:
> I am having an error trying to use a compiled mdrun-gpu on my GPU set
Hi,
The error messages are all referring to SSE 4.1 packed integer min/max
operations not being recognized. I assume that these were enabled by
the "-xHOST" compiler option, and icc automatically generated these
instructions - the files it's complaining about are even temporary
files.
Could it be
> --- [CMakeCache.txt] -
>
> ...
>
> //Flags used by the compiler during all build types
> CMAKE_CXX_FLAGS:STRING=' -msse2 -ip -funroll-all-loops -std=gnu99 '
>
> //Flags used by the compiler during release builds.
> CMAKE_CXX_FLAGS_RELEASE:STRING=-mtune
Hi Nathalia,
Right, gcc 4.1 is quite controversial, as there is a bug in it which
is thought to be causing mdrun crashes. So you'd better stay away from
4.1 as well as from other old gcc versions. I'd recommend 4.5 or 4.6,
as these have gotten really good, even compared to icc - at least when
it comes to
erformance point of view the 570 is way better, and depending on the
use case even a 560 can be a decent and cheap option.
--
Szilárd
> On Wed, Oct 12, 2011 at 9:54 AM, Szilárd Páll wrote:
>> Dear Stephan,
>>
>>> Radeons work as well. You can put a 3-4 GPU board together w
Dear Stephan,
> Radeons work as well. You can put a 3-4 GPU board together with the highest
> end AMD or Intel chip for 3K, plus 16G RAM if you look around for a day or
> two, but the cooling is the main problem (with 1/4 the price radeons Vs. GTI
> cards), so one has to take cooling into acco
Hi Gregory,
I am not very familiar with the cloud computing offerings, but as far
as I know, in general they are not a very cheap solution when it comes
to relatively low usage (non-massive enterprise use).
If you need it only for your own research, you might be better
off with applying for
Hi,
Based on the message it seems that autom4te (part of the autoconf
tools) can't write some temporary file to the standard temp location
/tmp. That would be quite strange, as if the temp directory is not
there, $TMPDIR should be defined; but I suspect it's not, otherwise
autom4te would have picked it
Hi,
I have not followed the entire discussion so I might be completely
wrong, but I may be able to fill in some gaps.
> Firstly, including config.h inside the fortran .F kernel files for power6 is
> causing problems with
> their parsing using xlf. adding -WF,-qfpp didn't help. Had to provide a
> modified x
It is true that on Intel CPUs with HT you get up to a 10-15% speedup
if you also use all virtual cores, compared to running only as many
threads as there are real cores. Additionally, as the OS reports all
virtual processors, Gromacs will use all of them by default, i.e. will
run with 8 threads.
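The "all virtual processors" default comes straight from the OS-reported count; a one-line illustration (os.cpu_count() counts logical, i.e. HT, cores):

```python
import os

# os.cpu_count() reports *logical* processors: a quad-core CPU with HT
# shows up as 8, which is the thread count mdrun then defaults to.
logical_cores = os.cpu_count()
print(logical_cores)
```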
Hi,
Gromacs 4.5.x works only with OpenMM 2.x. You might be able to use
CUDA 4.0, but you will probably have to recompile OpenMM from source.
Cheers,
--
Szilárd
On Fri, Aug 19, 2011 at 2:23 AM, Park, Jae Hyun nmn wrote:
> Dear GMX users,
>
>
>
> I am installing GMX 4.5.3 with GPU.
>
> But, the
Hi,
Tesla cards won't give you much benefit when it comes to running the
current Gromacs. Additionally, I can tell you this much: it won't
change in the future either. The only advantages of the C20x0-s are
ECC and double precision - which is ATM not supported in Gromacs on
GPUs anyway.
Gromacs
Hi,
As this is not a development-related question I'm moving the
discussion to the user's list. Future replies should be sent *only* to
gmx-users@gromacs.org.
As Axel pointed out, the list of CUDA-compatible devices is much
broader than the list of cards we label compatible. The compatibility
che
I think you made the right decision! :)
--
Szilárd
On Fri, Jul 8, 2011 at 12:50 PM, Андрей Гончар wrote:
> Thanks a lot!
> Now we decided to use gromacs under linux and the installation of
> gromacs and gromacs-gpu has passed without errors
> Problem is solved :)
>
> 201
how-to about step-by-step compiling of gpu-accelerated gromacs under
> windows, because now i'm totally confused... Thanks in advance!
>
> 2011/6/30 Szilárd Páll :
>> Dear Andrew,
>>
>> Compiling on Windows was tested only using MSVC and I have no idea if
>> it works
Additionally, if you care about a few percent extra performance, you
should use gcc 4.5 or 4.6 for compiling Gromacs as well as FFTW
(unless you have a bleeding-edge OS which was built with any of these
latest gcc versions). While you might not see a lot of improvement in
mdrun performance (wrt gcc
Dear Andrew,
Compiling on Windows was tested only using MSVC and I have no idea
whether it works under cygwin. You should just try; both cmake and gcc
are available for cygwin, so you might be lucky and get mdrun-gpu
compiled without any additional effort.
All binaries on the Gromacs webpage _are
Hi,
That compiler is ancient (though it might have SSE2 support), as is
the OS, I guess (RHEL 3?). Still, the CPU does support SSE2, so if
you can get gcc 4.1 or later on it you should still be able to compile
and run the code without a problem.
--
Szilárd
2011/5/26 Hsin-Lin Chiang :
> Hi,
Hi Hsin-Lin,
Your problem is caused by a missing header file included by the
nonbonded SSE kernels, as indicated by the first error in your
output:
nb_kernel400_ia32_sse.c:22:23: emmintrin.h: No such file or directory
This header is needed for SSE and SSE2, but for some reason you
Hi Matt,
Just wanted to warn you that AFAIK Gromacs 4.5.x was not tested with
gcc 4.0.1, we've done testing only with 4.1 and later. This doesn't
mean that it shouldn't work, but if you encounter anything strange,
the first thing you should do is to get a newer gcc.
Also, your tMPI errors might b
Hi,
You should really use fftw, it's *much* faster! (Btw: fftpack comes
with Gromacs.)
I'm not sure what versions of Gromacs are available through the RHEL
packages, but as installing from source is not very difficult
(especially for a sysadmin) you might be better off with getting a
fresh install
> nstlist is more directly related to dt, and is often connected to the force
> field, as well. Values of 5-10 are standard. Setting nstlist = 1 is
> usually only necessary for EM, not MD. Excessive updating of the neighbor
> list can result in performance loss, I believe.
Not only can, but it
Just some hints:
- Do the target platforms (32/64 bit) of the fftw libs and the Gromacs
build match?
- Try doing a make clean first and/or reconfiguring with
--enable-shared. Not sure that it will help, but I vaguely remember
something.
- Try CMake.
Cheers,
--
Szilárd
On Tue, May 10, 2011
Hi,
Good that you managed to solve the issue!
Even though it might have been caused by a bug in the autoconf
scripts, as these are the final days of autoconf support in
Gromacs, I see a very slim chance that it will get
investigated/fixed.
--
Szilárd
On Tue, May 10, 2011 at 4:19 PM, wrote
Hi,
> Is there any common problem for an compilation with the intel compiler
> suite 12.0 and FFTW 3.2.2/Gromacs 4.5.4
Not that I know of, I've never had/heard about such issues.
I just checked and I couldn't reproduce your issue. Tried with both
autoconf and CMake and with:
Intel Compilers: v12.
Hi Claus,
> be supported in future versions. Yet, in the openmm website,
> the new version of openmm (3.0 that is) is supposed to support both cuda and
> opencl framework alongside gromacs:
> (https://simtk.org/project/xml/downloads.xml?group_id=161)
What do you mean by "alongside gromacs"?
> 1)
Hi,
> Thanks for the reply. The part I modified is the implicit solvent
> part, particularly the Still model. Also I modified a part of nonbonded
> kernel, not sse ones. So I assumed I have to link GROMACS with OPENMM
> using customGBforce?
You mean that you modified the CUDA kernel inside OpenMM
Hi Sebastian,
Are you sure that the NVIDIA driver works and is compatible with the
CUDA version you have? What does "nvidia-smi -L -a" output? Have you
tried running something, for instance an SDK sample?
Cheers,
--
Szilárd
2011/4/18 SebastianWaltz :
> Dear gromacs user,
>
> I have some proble
Hi,
As Justin said, it's probably the imbalance which is causing the
slowdown. You can take a look at the statistics at the end of the log
file. One simple thing you could do is to compare the log file of your
long, gradually slowing down run with a shorter run to see which part
takes more time.
gt;
> Jordi Inglés
>
> El 29/03/11 13:00, Szilárd Páll escribió:
>>
>> Hi,
>>
>> I've diff-ed the cache file you sent me against the one I generated
>> and I couldn't see anything relevant. Neither does the verbose CMake
>> output suggest anything
ctory:
>
> -rwxr-xr-x 1 root root 3433307 Mar 29 08:31 mdrun-gpu
>
> I attach also the CMakeCache list if it can help.
>
> Thanks for any advise!
>
> jordi inglés
>
> El 28/03/11 21:50, Szilárd Páll escribió:
>>
>> Hi Jordi,
>>
>> I've never
Hi Jordi,
I've never seen this error or anything similar, but I can give you
some hints.
The CMake build system first generates binaries in the build directory
which are also runnable from there (they are linked against libs
located in the build tree). When you do a "make install[-mdrun]",
however, these
> I am a bit confused on the g_bar part, when it says -f expects multiple
> dhdl files. Do we need to run still multiple independent simulations
> using different foreign_lambda values? I do not see why we should run
> independent simulations, if we use for couple-lambda0 and couple-lambda1
> vdw-q
> 1- I am wondering for each interval say
>
> Interval 2:
> init_lambda = 0.05
> foreign_lambda = 0 0.1
>
> how this foreign_lambda 0 is related to the init_lambda = 0 from interval 1
> and the same for foreign_lambda = 0.1 in interval 2 and init_lambda = 0.1 in
> interval 3? Is there a physical me
Hi,
What is the error you are getting? What is unfortunate about
temperature coupling?
Have you checked out the part of the documentation, especially the
supported features on GPUs part
(http://www.gromacs.org/gpu#Supported_features)?
--
Szilárd
On Mon, Mar 7, 2011 at 3:43 PM, kala wrote:
>
Dear Natalia,
The current mdrun-gpu (which btw uses OpenMM) is capable of running a
single simulation, on a single node, using a single GPU only.
--
Szilárd
On Fri, Mar 4, 2011 at 6:28 PM, Nathalia Garces wrote:
> Hello,
> I want to know if it is possible to use "mdrun-gpu" with the comma
Hi,
There are two things you should test:
a) Does your NVIDIA driver + CUDA setup work? Try to run a different
CUDA-based program, e.g. you can get the CUDA SDK and compile one of
the simple programs like deviceQuery or bandwidthTest.
b) If the above works, try to compile OpenMM from source with
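Check (a) can be scripted roughly like this (a sketch; it assumes the driver's nvidia-smi tool is installed, and the SDK sample names are the usual deviceQuery/bandwidthTest):

```shell
# (a) driver check: list the GPUs if the NVIDIA driver tools are present
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi -L
else
    echo "nvidia-smi not found: NVIDIA driver missing or not on PATH"
fi
# (b) would then be compiling and running the SDK's deviceQuery sample
```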
hip, your max clockrate tends to be lower.
> >As such, its really important to know how your jobs are bound so that
> >you can order a cluster configuration that'll be best for that job.
>
>
> Cheers, Maryam
>
> --- On *Tue, 18/1/11, Szilárd Páll
>
> * wrote
Hi,
Although the question is a bit fuzzy, I might be able to give you a
useful answer.
From what I see in the whitepaper of the PowerEdge M710 blades, among
other (not so interesting :) OS-es, Dell provides the option of Red
Hat or SUSE Linux as factory-installed OS-es. If you have any of
these
Hi,
Currently there is no concrete plan to implement FEP on GPUs. AFAIK
there is an OpenMM plugin which could be integrated, but I surely
don't have time to work on that and I don't know of anyone else
working on it. Contribution would be welcome, though!
Regards,
--
Szilárd
On Thu, Nov 11, 20
Hi,
I've never seen/had my hands on the Tesla T10 so I didn't know that's
the name it reports. I'll fix this for the next release. Rest assured
that on this hardware Gromacs-GPU should run just fine.
On the other hand, your driver version is very strange: CUDA Driver
Version = 4243455, while it s
Hi Solomon,
Just stumbled upon your mail and I thought you could still use an
answer to your question.
First of all, as you've probably read on the Gromacs-GPU page, a) you
need a high-performance GPU to achieve good performance (in comparison
to the CPU) -- that's the reason for the strict compat
Hi,
Tesla C1060 and S1070 are definitely supported, so it's strange
that you get that warning. The only thing I can think of is that for
some reason the CUDA runtime reports the name of the GPUs other than
C1060/S1070. Could you please run deviceQuery from the SDK and
provide the output h
Hi,
If you take a look at the mdp file, it becomes obvious that the
simulation length is infinite:
nsteps = -1
This is useful for a benchmarking setup where you want to run e.g. a
~10 min case, in which case you'd use the "-maxh 0.167" mdrun option.
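Put together, a sketch of such a benchmarking setup (the file names are hypothetical):

```
; bench.mdp -- "infinite" run, stopped by a wall-clock limit instead:
nsteps = -1
; then launch with e.g.:  mdrun -s bench.tpr -maxh 0.167   (~10 min)
```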
Cheers,
--
Szilárd
On Tue, Nov 23, 201
Hi Solomon,
> [100%] Building C object src/kernel/CMakeFiles/mdrun.dir/md_openmm.c.o
> Linking CXX executable mdrun-gpu
> ld: warning: in /usr/local/openmm/lib/libOpenMM.dylib, file was built for
> i386 which is not the architecture being linked (x86_64)
The above linker message clearly states wh
You can try the systems we provided on the GROMACS-GPU page:
http://www.gromacs.org/gpu#GPU_Benchmarks
--
Szilárd
On Sat, Nov 6, 2010 at 12:59 AM, lin hen wrote:
> Yeah, I think my problem is the input, but I don't have the .mpd file, I am
> using the existing input which has no problem with
Hi,
If you have installed fftw3 in the standard location it should work
out of the box. Otherwise, you have to set LDFLAGS and CPPFLAGS to
the library and include locations, respectively.
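A sketch of the non-standard-location case (the FFTW prefix below is hypothetical; substitute your own install location):

```shell
# Point the Gromacs configure step at a non-standard FFTW3 install.
FFTW_HOME=/opt/fftw-3.2.2
export CPPFLAGS="-I${FFTW_HOME}/include"
export LDFLAGS="-L${FFTW_HOME}/lib"
echo "CPPFLAGS=${CPPFLAGS} LDFLAGS=${LDFLAGS}"
# fftw itself must have been configured with --enable-float
# (producing libfftw3f) for a single-precision Gromacs build.
```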
However, there's one more thing I can think of: did you make sure that
you compiled fftw3 in single precision?
Hi Renato,
First of all, what you're seeing is pretty normal, especially as you
have a CPU that is crossing the border of insane :) Why is it normal?
The PME algorithms are simply not very well suited for current GPU
architectures. With an ill-suited algorithm you won't be able to
Hi,
> Does anyone have an idea about what time the Gmx 4.5.2 will be released?
Soon; if everything goes well, in a matter of days.
> And in 4.5.2, would the modified tip5p.itp in charmm27 force field be the
> same as that in current git version?
The git branch release-4-5-patches is the branch
Dear Igor,
Your output looks _very_ weird; it seems as if CMake internal
variable(s) were not initialized, and I have no clue how that could
have happened - the build generator works just fine for me. The only
thing I can think of is that maybe your CMakeCache is corrupted.
Could you please rerun cma
Hi,
The beta versions are all outdated, could you please use the latest
source distribution (4.5.1) instead (or git from the
release-4-5-patches branch)?
The instructions are here:
http://www.gromacs.org/gpu#Compiling_and_custom_installation_of_GROMACS-GPU
>> The requested platform "CUDA" could n
I think this mail belongs to the user's list, CC-d will continue the
discussion there.
--
Szilárd
2010/10/5 Igor Leontyev :
> Dear gmx-developers,
> My first attempt to start GPU-version of gromacs has no success. The reason
> is that grompp turns off setting of electrostatics overriding them b
> If using Tcoupl and Pcoupl = no and then I can compare mdrun x mdrun-gpu,
> being my gpu ~2 times slower than only one core. Well, I definitely don't
> intended to use mdrun-gpu but I am surprised that it performed that bad (OK,
> I am using a low-end GPU, but sander_openmm seems to work fine and
Hi David,
Are you sure you're using gcc 3.4? Because if you are, I'd strongly
suggest that you switch to 4.x!
Cheers,
--
Szilárd
On Tue, Sep 14, 2010 at 5:21 PM, David Parcej
wrote:
> Hi all.
> I have a problem building the double (but not single) precision version of
> gromacs 4.5 on an AMD
Hi Alan,
I assume this is still the same issue (same machine/OS) as you
reported last time.
Could you provide some details about the version of OS, compiler,
CUDA, OpenMM you're using?
I'll look into the problem and get back to you if I figure out something.
Cheers,
--
Szilárd
Hi,
> But when use "mdrun -h", the -nt does not exist.
> So can the -nt option be used in mdrun?
I just checked, and if you have threads turned on (!) when building
Gromacs, then -nt does show up on the help page (mdrun -h)! Otherwise,
it's easy to check whether you have a thread-enabled build or not: just
Hi,
Indeed, the custom cmake target "install-mdrun" was designed to
install only the mdrun binary, and it does not install the libraries
it is linked against when BUILD_SHARED_LIBS=ON.
I'm not completely sure that this is actually a bug, but to me it
smells like one. I'll file a bug report and w
Hi,
FYI, now building the GPU accelerated version in a non-clean build
tree (used to build the CPU version) should work as well as the other
way around.
_However_, be warned that in the latter case the CPU build-related
parameters do _not_ get reset to their default values (e.g.
GMX_ACCELERATION
Hi Mark,
I've just tried the link you mentioned and it seems to work. Could you
try again?
Cheers,
--
Szilárd
On Tue, Aug 31, 2010 at 9:53 AM, Mark Cheeseman
wrote:
> Hello,
>
> I am trying to download Version 4.0.5 but the FTP server keeps timing out.
> Is there a problem?
>
> Thanks,
> Mark
Hi,
The message is quite clear about what happened: mdrun received a
TERM signal and therefore it stopped (see: man 7 signal or
http://linux.die.net/man/7/signal).
Figuring out who sent a TERM signal to your mdrun and why
will be your task, but I can think of 2 basic scenarios: (I) yo
Hi,
Could you provide the compiler versions you used? I really hope it's
not gcc 4.1.x again...
Cheers,
--
Szilárd
On Thu, Aug 5, 2010 at 8:26 PM, Elio Cino wrote:
>
> Since the charmm force field has some instances with large charge groups
> (grompp warns you for it) it is advisable to use a
Hi Chris,
Though I'm repeating myself, for the sake of not leaving this post
unanswered (btw, reposts should be avoided as much as possible!):
First of all, as Rossen said, the <=2.6.4 is a typo, it was meant to
be >=2.6.4, it _should_ work with 2.8.0 (I took the FindCUDA.cmake
script from the 2.