Don't change it yet. There's still fairly furious debate about
exactly this issue for MPI-2.2. So I wouldn't say that the Forum has
agreed to it yet. :-)
On Nov 19, 2008, at 3:17 PM, Shiqing Fan wrote:
Hi Martin,
Thanks for the information. So it will be changed also for memchecker.
Regards,
Shiqing
The (non-modifying) access to a send buffer was agreed for MPI standard 2.2, not version 2.1; see the MPI 2.2 Wiki:
https://svn.mpi-forum.org/trac/mpi-forum-web/wiki/MpiTwoTwoWikiPage
Hi George,
4) Well, this sounds reasonable, but according to the MPI-1 standard
(see page 40 for non-blocking send/recv; a more detailed explanation is on
page 30):
"A nonblocking send call indicates that the system may start copying
data out of the send buffer. The sender should */not access*/ a
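The rule being quoted can be illustrated with a minimal sketch (hypothetical program, run with at least two ranks; under MPI-1 the marked read is disallowed, while the MPI-2.2 proposal discussed above would permit read-only access):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, buf[4] = {1, 2, 3, 4};
    MPI_Request req;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        MPI_Isend(buf, 4, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
        /* The send is in flight here.  MPI-1 forbids ANY access to buf
         * until MPI_Wait completes -- even this read-only one, which is
         * exactly what Open MPI's memchecker flags.  The MPI-2.2 change
         * under discussion would make read access legal. */
        printf("first element: %d\n", buf[0]);   /* read-only access */
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    } else if (rank == 1) {
        MPI_Recv(buf, 4, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
    return 0;
}
```

Writing to buf between MPI_Isend and MPI_Wait remains illegal under both versions; only the read restriction is at issue.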
On Nov 19, 2008, at 10:18, François PELLEGRINI wrote:
4) Well, this sounds reasonable, but according to the MPI-1 standard
(see page 40 for non-blocking send/recv; a more detailed explanation is on
page 30):
"A nonblocking send call indicates that the system may start copying
data out of the send buffer."
Bonjour Shiqing,
Shiqing Fan wrote:
> Dear François,
>
> Thanks a lot for your report, it's really a great help for us. :-)
No problem. Your software helps me too, so as soon as you have fixes
and new builds please tell me, so that I can try again.
> For the issues:
> 1) When you got "Conditional jump" [...]
Dear François,
Thanks a lot for your report, it's really a great help for us. :-)
For the issues:
1) When you got "Conditional jump" errors, it normally means that some
uninitialized (or undefined) values were used. The parameters passed
into PMPI_Init_thread might contain uninitialized values.
Hello all,
I am the main developer of the Scotch parallel graph partitioning
package, which uses both MPI and Posix Pthreads. I have been doing
a great deal of testing of my program on various platforms and
libraries, searching for potential bugs (there may still be some ;-) ).
The new memchecker
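For anyone wanting to reproduce this kind of testing, a sketch of how Open MPI's memchecker is typically enabled and used (install prefix, Valgrind path, and application name are placeholders):

```shell
# Build Open MPI with memchecker support:
./configure --prefix=$HOME/ompi-mc --enable-debug --enable-memchecker \
            --with-valgrind=/usr/local
make && make install

# Run the application (here "scotch_test", a placeholder) under Valgrind:
mpirun -np 2 valgrind --tool=memcheck ./scotch_test
```

With memchecker compiled in, Open MPI annotates its internal buffers so that Valgrind can flag accesses to send buffers of in-flight nonblocking operations, which is the behavior being debated in this thread.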