On Fri, 6 Jan 2006, Graham E Fagg wrote:
> > Looks like the problem is somewhere in the tuned collectives?
> > Unfortunately I need a logfile with exactly those :(
> >
> > Carsten
>
> I hope not. Carsten, can you send me your configure line (not the whole
> log) and any other things you set in
On Jan 6, 2006, at 8:13 AM, Carsten Kutzner wrote:
Looks like the problem is somewhere in the tuned collectives?
Unfortunately I need a logfile with exactly those :(
FWIW, we just activated these tuned collectives on the trunk (which
will eventually become the 1.1.x series; the tuned
On Jan 4, 2006, at 2:08 PM, Anthony Chan wrote:
Either my program quits without writing the logfile (and without
complaining) or it crashes in MPI_Finalize. I get the message
"33 additional processes aborted (not shown)".
This is not an MPE error message. If the logging crashes in
On Wed, 4 Jan 2006, Carsten Kutzner wrote:
> On Tue, 3 Jan 2006, Anthony Chan wrote:
>
> > MPE/MPE2 logging (or clog/clog2) does not impose any limitation on the
> > number of processes. Could you explain what difficulty or error
> > message you encountered when using >32 processes?
>
>
Hi Graham,
here are the all-to-all test results with the modification to the decision
routine you suggested yesterday. Now the routine behaves nicely for 128
and 256 float messages on 128 CPUs! For the other sizes one probably wants
to keep the original algorithm, since it is faster there.
On Tue, 3 Jan 2006, Carsten Kutzner wrote:
> On Tue, 3 Jan 2006, Graham E Fagg wrote:
>
> > Do you have any tools such as Vampir (or its Intel equivalent) available
> > to get a time line graph ? (even jumpshot of one of the bad cases such as
> > the 128/32 for 256 floats below would help).
>
>
Hi Carsten
I have also tried the tuned alltoalls and they are really great!! Only for
a very few message sizes, in the case of 4 CPUs per node, did one of my
own alltoalls perform better. Are these tuned collectives ready to be used
for production runs?
We are actively testing them on larger systems
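For reference, the tuned collective module is selected at run time through Open MPI's MCA parameters. A sketch of the invocation might look like the following; the parameter names are believed correct for the tuned module of this era, but should be double-checked with `ompi_info`, and `./my_alltoall_benchmark` is a hypothetical program name:

```shell
# Select the tuned collective component and force a specific
# alltoall algorithm (numeric algorithm IDs can be inspected with
# `ompi_info --param coll tuned`).
mpirun -np 128 \
    --mca coll tuned \
    --mca coll_tuned_use_dynamic_rules 1 \
    --mca coll_tuned_alltoall_algorithm 2 \
    ./my_alltoall_benchmark
```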
On Tue, 20 Dec 2005, George Bosilca wrote:
> On Dec 20, 2005, at 3:19 AM, Carsten Kutzner wrote:
>
> >> I don't see how you deduce that adding barriers increases the
> >> congestion? It increases the latency for the all-to-all, but for me
> >
> > When I do an all-to-all a lot of times, I see that
On Dec 20, 2005, at 3:19 AM, Carsten Kutzner wrote:
I don't see how you deduce that adding barriers increases the
congestion? It increases the latency for the all-to-all, but for me
When I do an all-to-all a lot of times, I see that the time for a
single all-to-all varies a lot. My time