Hi,

On 08/07/16 12:52 PM, Edgar Gabriel wrote:
The default MPI I/O library has changed in the 2.x release to OMPIO for most file systems.

Ok, right now I am doing I/O on my own hard drive, but I can test over NFS easily. For Lustre, I will have to produce a reduced example out of our test suite...
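If it helps the comparison, I suppose I can also rerun the failing cases with the io component pinned explicitly, something like the lines below (assuming the ROMIO component in the 2.x series is named romio314, and with ./our_test standing in for one of our test binaries):

  mpirun --mca io ompio    -np 8 ./our_test    # new 2.x default
  mpirun --mca io romio314 -np 8 ./our_test    # force the bundled ROMIO instead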

I can look into that problem, any chance to get access to the testsuite that you mentioned?

Yikes! That sounds interesting, but it would be difficult to arrange... our in-house code is not public... :/

However, I have proposed (to myself) to set up a nightly build of Open MPI (see http://www.open-mpi.org/community/lists/users/2016/06/29515.php) so I can report problems before releases are made...

Anyway, I will work on a little script to automate the MPI+PETSc+InHouseCode combination, so I can give you feedback as soon as you propose a patch for me to test...

I hope this will be convenient enough for you...

Thanks!

Eric


Thanks
Edgar


On 7/8/2016 11:32 AM, Eric Chamberland wrote:
Hi,

I am testing the 2.x release candidate for the first time.

I get a segmentation violation when calling MPI_File_write_all_end(MPI_File
fh, const void *buf, MPI_Status *status).

The "special" thing, may be that in the faulty test cases, there are
processes that haven't written anything, so they a a zero length buffer,
so the second parameter (buf) passed is a null pointer.

Until now this was a valid call; has it changed?
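To make the pattern explicit, here is a minimal sketch of what our code does (illustrative names only, not the real test; odd ranks write nothing, so they pass count = 0 and buf = NULL):

/* write_all_end_repro.c -- minimal sketch (illustrative, not our real test):
   ranks that have nothing to write call the split collective with
   count == 0 and buf == NULL. */
#include <mpi.h>
#include <stddef.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "out.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* each rank owns one int slot in the file */
    MPI_File_set_view(fh, (MPI_Offset)(rank * sizeof(int)), MPI_INT, MPI_INT,
                      "native", MPI_INFO_NULL);

    int value = rank;
    /* only even ranks actually write; odd ranks participate with an
       empty (NULL, count = 0) buffer */
    const void *buf = (rank % 2 == 0) ? (const void *)&value : NULL;
    int         cnt = (rank % 2 == 0) ? 1 : 0;

    MPI_Status status;
    MPI_File_write_all_begin(fh, buf, cnt, MPI_INT);
    MPI_File_write_all_end(fh, buf, &status);  /* crash observed here with the 2.x rc */

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}

Run with a few ranks, e.g. mpirun -np 4 ./write_all_end_repro; the odd ranks then reach MPI_File_write_all_end() with buf == NULL, which is where we see the segmentation violation with the 2.x candidate.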

Thanks,

Eric

FWIW: we have been using our test suite (~2000 nightly tests) successfully
with openmpi-1.{6,8,10}.* and MPICH for many years...