...l.cern.ch>
Sent by: users-boun...@open-mpi.org
Date: 10/29/2008 12:36 PM
Reply-To: Open MPI Users <us...@open-mpi.org>
To: Open MPI Users <us...@open-mpi.org>
Subject: Re: [OMPI users] Working with a CellBlade cluster
Thank you very much, Mi and Lenny, for your detailed replies.
I believe I can summarize the info to allow for
'Working with a QS22 CellBlade cluster' like this:
- Yes, messages are efficiently handled with "-mca btl openib,sm,self"
- Better to go to the OMPI-1.3 version ASAP
- It is currently
Sent by: users-boun...@open-mpi.org
Date: 10/23/2008 01:52 PM
Reply-To: Open MPI Users <us...@open-mpi.org>
To: "Open MPI Users" <us...@open-mpi.org>
Subject: Re: [OMPI users] Working with a CellBlade cluster

From: "Lenny Verkhovsky" <lenny.verkhov...@gmail.com>
Sent by: users-boun...@open-mpi.org
Date: 10/23/2008 05:48 AM
Reply-To: Open MPI Users <us...@open-mpi.org>
Hi, Lenny,
So the rank file map will be supported in Open MPI 1.3? I'm using
Open MPI 1.2.6 and did not find the parameter "rmaps_rank_file_".
Do you have an idea when Open MPI 1.3 will be available? Open MPI 1.3
has quite a few features I'm looking for.
Thanks,
Mi
1. MCA BTL parameters
With "-mca btl openib,self", both messages between two Cell processors on
one QS22 and messages between two QS22s go through IB.
With "-mca btl openib,sm,self", messages within one QS22 go through shared
memory, and messages between QS22s go through IB.
Depending on the message
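The two BTL selections above can be sketched as command lines (a sketch only; the executable name `./app` and the process count are placeholders, not taken from the thread):

```shell
# All traffic over InfiniBand, even between the two Cell sockets of one QS22:
mpirun -np 4 -mca btl openib,self ./app

# Shared memory for on-blade messages, InfiniBand between blades:
mpirun -np 4 -mca btl openib,sm,self ./app
```

Note that "self" must always be included in the list so a process can deliver messages to itself.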
Hi,
If I understand you correctly, the most suitable way to do it is with the
processor affinity (paffinity) and rankfile support that we have in Open
MPI 1.3.
However, usually the OS distributes processes evenly between sockets by
itself.
There is still no formal FAQ, due to multiple reasons, but you can read how to
use it in the
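A rankfile pinning one rank to each socket of a two-socket QS22 might look like this (a sketch assuming Open MPI 1.3's rankfile syntax; the host name and slot numbering are placeholders, not from the thread):

```
rank 0=qs22-node1 slot=0
rank 1=qs22-node1 slot=1
```

It would then be passed to mpirun with something like `mpirun -np 2 -rf ./rankfile ./app`, where `-rf` is the Open MPI 1.3 rankfile option.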
Working with a CellBlade cluster (QS22), the requirement is to have one
instance of the executable running on each socket of the blade (there are 2
sockets). The application is of the 'domain decomposition' type, and each
instance often needs to send/receive data to/from both the remote blades