Thanks Ralph! In future, I'll try and remember to follow up on these things
:)
Cheers,
Richard
On 10 April 2014 11:16, Ralph Castain wrote:
> Not really - it's the responsibility of the developer to file the CMR.
> Some folks are good about it, and some aren't. In this
Okay. Thanks for having a look Ralph!
For future reference, is there a better process I can follow if I find
bugs like this, to make sure they don't get forgotten?
Thanks,
Richard
On 10 April 2014 00:39, Ralph Castain wrote:
> Wow - that's an ancient one. I'll see if
Wow - that's an ancient one. I'll see if it can be applied to 1.8.1. These
things don't automatically go across - it requires that someone file a request
to move it - and I think this commit came into the trunk after we branched for
the 1.7 series.
On Apr 9, 2014, at 12:05 PM, Richard Shaw
I'm not sure I ever replied to this to say (very belatedly) that the patch
works perfectly!
However, I just wanted to ask: what is the progress on getting this into a
released version? I'm not particularly sure of the details of the Open MPI
development process - I've noticed that it's still in the
Thanks George, I'm glad it wasn't just me being crazy. I'll try and test that
one soon.
Cheers,
Richard
On Tuesday, 24 July, 2012 at 6:28 PM, George Bosilca wrote:
> Richard,
>
> Thanks for identifying this issue and for the short example. I can confirm
> your original understanding was
Richard,
Thanks for identifying this issue and for the short example. I can confirm your
original understanding was right, the upper bound should be identical on all
ranks. I just pushed a patch (r26862), let me know if this fixes your issue.
Thanks,
george.
On Jul 24, 2012, at 17:27 ,
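For what it's worth, here is a small Python sketch (no MPI involved) of the 1d CYCLIC(blocksize) ownership rule, illustrating why the upper bound matters: each rank's "natural" span of owned elements differs, but under the reading George confirms above, the darray type's extent should be the full global span, identical on every rank. The helper name `cyclic_indices` is hypothetical, for illustration only, not an MPI call.

```python
# Sketch of a 1d CYCLIC(blocksize) distribution (no MPI needed).
# cyclic_indices is a hypothetical helper, for illustration only.

def cyclic_indices(gsize, blocksize, nprocs, rank):
    """Global indices owned by `rank` when gsize elements are dealt
    out in blocks of `blocksize`, round-robin over nprocs ranks."""
    return [i for i in range(gsize)
            if (i // blocksize) % nprocs == rank]

gsize, blocksize, nprocs, elem = 10, 2, 3, 8  # e.g. 8-byte doubles

for rank in range(nprocs):
    owned = cyclic_indices(gsize, blocksize, nprocs, rank)
    # "Natural" span of this rank's data, which varies by rank:
    natural_span = (max(owned) + 1) * elem
    print(rank, owned, natural_span)

# Under the reading above, the darray type's extent should instead be
# the full global span, identical on every rank:
print(gsize * elem)  # -> 80
```

With gsize=10, blocksize=2, nprocs=3, rank 0 owns indices [0, 1, 6, 7], rank 1 owns [2, 3, 8, 9], and rank 2 owns [4, 5], so the per-rank natural spans (64, 80, 48 bytes) disagree while the spec's upper bound (80 bytes) does not.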
I've been speaking offline to Jonathan Dursi about this problem. And it does
seem to be a bug.
The same problem crops up in a simplified 1d-only case (test case attached). In
this instance the specification seems to be comprehensible - looking at the PDF
copy of the MPI-2.2 spec, p92-93, the
Hello,
I'm getting thoroughly confused trying to work out what is the correct extent
of a block-cyclic distributed array type (created with MPI_Type_create_darray),
and I'm hoping someone can clarify it for me.
My expectation is that calling MPI_Get_extent on this type should return the
size
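Reading the thread from the bottom up, the expectation stated in the original question can be sketched as a one-liner (this is a plain-Python illustration under the interpretation later confirmed in the thread, not an MPI call; `expected_darray_extent` is a hypothetical helper name):

```python
from math import prod

def expected_darray_extent(gsizes, base_extent):
    """Hypothetical helper: the extent (in bytes) one would expect
    MPI_Get_extent to report for a darray type, under the reading
    that the extent spans the full global array and is therefore
    identical on every rank."""
    return prod(gsizes) * base_extent

# e.g. a 6 x 4 global array of a base type with an 8-byte extent
print(expected_darray_extent([6, 4], 8))  # -> 192, on every rank
```

The point of the exercise is that this value depends only on the global sizes and the base type, never on the calling rank or the block-cyclic parameters.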