On Wed, Jun 9, 2010 at 7:58 AM, Jeff Squyres wrote:
> On Jun 8, 2010, at 12:33 PM, David Turner wrote:
>
> > Please verify: if using openib BTL, the only threading model is
> > MPI_THREAD_SINGLE?
>
> Up to MPI_THREAD_SERIALIZED.
>
> > Is there a timeline for full support of MPI_THREAD_MULTIPLE in Open
> > MPI's openib BTL?
>
> IBM has been making some good strides
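For reference, a minimal sketch of requesting a threading level and checking what the library actually granted (standard MPI calls only; the printed messages are just illustrative):

    /* thread_level.c: request MPI_THREAD_MULTIPLE, report what was granted.
       Build with the usual wrapper, e.g.: mpicc thread_level.c */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided;

        /* Ask for the highest level; the library may grant less,
           e.g. MPI_THREAD_SERIALIZED with the openib BTL. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

        if (provided >= MPI_THREAD_MULTIPLE)
            printf("Got MPI_THREAD_MULTIPLE.\n");
        else if (provided == MPI_THREAD_SERIALIZED)
            printf("Threads must take turns making MPI calls.\n");
        else
            printf("Keep MPI calls in one thread only.\n");

        MPI_Finalize();
        return 0;
    }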
Hi all,
Please verify: if using openib BTL, the only threading model
is MPI_THREAD_SINGLE?
Is there a timeline for full support of MPI_THREAD_MULTIPLE
in Open MPI's openib BTL?
Thanks!
--
Best regards,
David Turner
User Services Group        email: dptur...@lbl.gov
NERSC Division
I once had a crash in libpthread, something like the one below. The
very un-obvious cause was a stack overflow on subroutine entry (a large
automatic array).
HTH,
Douglas.
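A minimal C illustration of that failure mode (the 64 MB size is only an example; anything above the stack limit, ulimit -s, will do):

    /* stack_smash.c: crashes on entry to big_frame() when the stack
       limit is below ~64 MB -- the segfault appears far from the real
       cause, much like the libpthread crash described above. */
    #include <string.h>

    static void big_frame(void)
    {
        char buf[64 * 1024 * 1024];   /* large automatic array */
        memset(buf, 0, sizeof(buf));  /* touch it so the overflow hits */
    }

    int main(void)
    {
        big_frame();
        return 0;
    }

The Fortran analogue is a large local (automatic) array in a subroutine; making it ALLOCATABLE, or raising the stack limit, avoids the crash.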
On Wed, Mar 04, 2009 at 03:04:20PM -0500, Jeff Squyres wrote:
> On Feb 27, 2009, at 1:56 PM, Mahmoud Payami wrote:
>
> > I am using intel lc_prof-11 (and its own MKL) and have built
> > openmpi-1.3.1 with configure options: "FC=ifort F77=ifort CC=icc
> > CXX=icpc". Then I have built my application.
> > The Linux box is a 2x AMD64 quad-core. In the middle of running my
> > application
Dear All,
I am using intel lc_prof-11 (and its own MKL) and have built openmpi-1.3.1
with configure options: "FC=ifort F77=ifort CC=icc CXX=icpc". Then I have
built my application.
The Linux box is a 2x AMD64 quad-core. In the middle of running my
application (after some 15 iterations), I receive the message and it stops.
Dear All,
I have installed openmpi-1.3.1 (with the defaults) and built my
application.
The Linux box is a 2x AMD64 quad-core. In the middle of running my
application, I receive the message and it stops.
I tried to configure Open MPI using "--disable-mpi-threads" but it
automatically assumes "posix".
This
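One quick check, independent of configure flags, is to ask the installed library at run time what it actually provides (a sketch using only the standard MPI_Query_thread call):

    /* query_thread.c: print the thread level the library granted. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided;

        MPI_Init(&argc, &argv);      /* plain MPI_Init implies SINGLE */
        MPI_Query_thread(&provided);
        printf("provided = %d (MPI_THREAD_SINGLE = %d)\n",
               provided, MPI_THREAD_SINGLE);
        MPI_Finalize();
        return 0;
    }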
Open MPI currently has minimal use of hidden "progress" threads, but
we will likely be experimenting with more usage of them over time
(previous MPI implementations have shown that progress threads can be
a big performance win for large messages, although they do tend to
add a bit of latency).
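To make the idea concrete, here is a hypothetical user-level version of the trick (not how Open MPI implements it internally): a helper thread repeatedly calls MPI_Test so a large nonblocking transfer keeps moving while the main thread computes. It asks for MPI_THREAD_MULTIPLE so the helper may call MPI while the main thread is free to do the same:

    /* progress_sketch.c: user-level "progress thread" illustration. */
    #include <mpi.h>
    #include <pthread.h>
    #include <stdlib.h>

    static MPI_Request req;

    static void *progress(void *arg)
    {
        int flag = 0;
        while (!flag)                  /* poke the library until done */
            MPI_Test(&req, &flag, MPI_STATUS_IGNORE);
        return NULL;
    }

    int main(int argc, char **argv)
    {
        int provided, rank, n = 100 * 1024 * 1024;
        char *buf = malloc(n);
        pthread_t tid;

        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            MPI_Isend(buf, n, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &req);
            pthread_create(&tid, NULL, progress, NULL);
            /* ... long computation overlapping the transfer ... */
            pthread_join(tid, NULL);
        } else if (rank == 1) {
            MPI_Recv(buf, n, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        }

        free(buf);
        MPI_Finalize();
        return 0;
    }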
I have used POSIX threads and Open MPI without problems on our Opteron
2216 cluster (4 cores per node). Moving to core-level parallelization
with multithreading resulted in significant performance gains.
Sam Adams wrote:
> I have been looking, but I haven't really found a good answer about
> system level threading.
I have been looking, but I haven't really found a good answer about
system level threading. We are about to get a new cluster of
dual-processor quad-core nodes, or 8 cores per node. Traditionally I
would just tell MPI to launch two processes per dual-processor
single-core node, but with eight cores
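For the hybrid approach (fewer MPI processes, threads within the node), a minimal sketch; the 8-thread count matches the nodes described above, and the workers make no MPI calls, so MPI_THREAD_FUNNELED is enough:

    /* hybrid.c: one MPI process per node, 8 worker threads per process.
       Launch one process per node, e.g. with mpirun's --bynode option. */
    #include <mpi.h>
    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 8   /* one per core on the nodes described above */

    static void *work(void *arg)
    {
        /* ... per-core computation; no MPI calls here, so
           MPI_THREAD_FUNNELED is sufficient ... */
        return arg;
    }

    int main(int argc, char **argv)
    {
        int provided, rank;
        pthread_t tid[NTHREADS];
        long i;

        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        for (i = 0; i < NTHREADS; i++)
            pthread_create(&tid[i], NULL, work, (void *)i);
        for (i = 0; i < NTHREADS; i++)
            pthread_join(tid[i], NULL);

        printf("rank %d: %d threads finished\n", rank, NTHREADS);
        MPI_Finalize();
        return 0;
    }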