I've looked in more detail at the current two MPI_Alltoallv algorithms
and wanted to raise a couple of ideas.

Firstly, the new default "pairwise" algorithm:
* There is no optimisation for sparse/empty messages, compared to the old
basic "linear" algorithm.
* The attached "pairwise-nop" patch
>>>> aunch by supplying appropriate MCA parameters to orterun (a.k.a.
>>>> mpirun and mpiexec).
>>>>
>>>> There is also a largely undocumented feature of the "tuned" collective
>>>> component where a dynamic rules file can be supplied. In the file a
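For readers following along, here is what such an invocation looks like. The parameter names below are from the "tuned" coll component of Open MPI 1.6; `./my_app` and `./rules.txt` are placeholders, and you should confirm the parameters on your own build with `ompi_info --param coll tuned` (this is a config sketch, not output from the thread):

```shell
# Force the old basic "linear" alltoallv instead of the 1.6.1 "pairwise"
# default (algorithm 1 = basic linear, 2 = pairwise in the tuned component):
mpirun --mca coll_tuned_use_dynamic_rules 1 \
       --mca coll_tuned_alltoallv_algorithm 1 \
       -np 16 ./my_app

# Alternatively, point the tuned component at a dynamic rules file:
mpirun --mca coll_tuned_use_dynamic_rules 1 \
       --mca coll_tuned_dynamic_rules_filename ./rules.txt \
       -np 16 ./my_app
```

The rules-file route lets the algorithm choice vary by communicator and message size rather than being fixed globally.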
Sent: Wednesday, December 19, 2012 5:31 PM
To: Open MPI Users
Subject: Re: [OMPI users] MPI_Alltoallv performance regression 1.6.0 to
1.6.1

On 19/12/12 11:08, Paul Kapinos wrote:
> Did you *really* want to dig into code just in order to switch a
> default communication algorithm?

No, I didn't want to, but wit
RWTH Aachen University, Center for Computing and Communication
Rechen- und Kommunikationszentrum der RWTH Aachen
Seffenter Weg 23, D 52074 Aachen (Germany)
-----Original Message-----
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org]
On Behalf Of Number Cruncher
Sent: Thursday, November 15, 2012 5:37 PM
To: Open MPI Users
Subject: [OMPI users] MPI_Alltoallv performance regression 1.6.0 to 1.6.1
I've noticed a very significant (100%) slow down for MPI_Alltoallv calls
as of version 1.6.1.
* This is most noticeable for high-frequency exchanges over 1Gb ethernet
where process-to-process message sizes are fairly small (e.g. 100kbyte)
and much of the exchange matrix is sparse.
* 1.6.1