Justin Lemkul wrote:
> My first guess would be a buggy MPI implementation. I can't comment on
> hardware specs, but usually the random failures seen in mdrun_mpi are a
> result of some generic MPI failure. What MPI are you using?

I am using the OpenMPI package, version 1.4.3, one of three MPI
implementations included in the standard repositories of Ubuntu Linux 11.10.
I can also obtain MPICH2 and the gromacs-mpich package without jumping
through too many hoops. LAM appears to be available as well, but if GROMACS
needs a special package to interface with LAM, it's not in the repositories.
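In case it helps with diagnosis, here is a sketch of how I'm checking which MPI stack is actually in play before blaming GROMACS. The `mpi_version` helper is just something I wrote for illustration, not a real tool, and it assumes OpenMPI's "mpirun (Open MPI) X.Y.Z" banner format; MPICH2 prints a different banner, so the grep would come up empty there.

```shell
# Report the version that "mpirun" on the PATH identifies itself as.
# Assumes OpenMPI's banner format: "mpirun (Open MPI) 1.4.3".
mpi_version() {
    mpirun --version 2>/dev/null | head -n 1 | grep -oE '[0-9]+\.[0-9]+\.[0-9]+'
}

mpi_version || echo "mpirun not found, or banner not recognized"

# Optionally, check which MPI library the mdrun_mpi binary was linked
# against (binary name as shipped in the Ubuntu packages):
command -v mdrun_mpi >/dev/null &&
    ldd "$(command -v mdrun_mpi)" | grep -i libmpi || true
```

If the library reported by ldd doesn't match the mpirun on the PATH, that mismatch alone could explain random segfaults.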

Alternatively, I could drop the external MPI for now and just use the new
multi-threaded GROMACS defaults. However, I was trying to prepare for longer
runs on a cluster, and if those runs are going to crash, I had better know
about it now.
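Concretely, the two modes I'm comparing would be invoked along these lines (GROMACS 4.5-era binary names; `topol` is a placeholder run name, and `build_mdrun_cmd` is a helper of my own for illustration, not GROMACS tooling):

```shell
# Assemble the command line for either run mode, without executing it.
# mode: "threads" (built-in thread parallelism) or "mpi" (external OpenMPI).
build_mdrun_cmd() {
    mode=$1
    nprocs=$2
    if [ "$mode" = "mpi" ]; then
        # MPI-parallel binary launched through mpirun
        echo "mpirun -np $nprocs mdrun_mpi -deffnm topol"
    else
        # thread-parallel mdrun, no external MPI involved
        echo "mdrun -nt $nprocs -deffnm topol"
    fi
}

build_mdrun_cmd threads 4   # prints: mdrun -nt 4 -deffnm topol
build_mdrun_cmd mpi 8       # prints: mpirun -np 8 mdrun_mpi -deffnm topol
```

Running the same .tpr both ways on one node would at least tell me whether the crashes follow the MPI layer or GROMACS itself.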



--
View this message in context: 
http://gromacs.5086.n6.nabble.com/Segmentation-fault-mdrun-mpi-tp5001601p5001776.html
Sent from the GROMACS Users Forum mailing list archive at Nabble.com.