Hello all
[I’m sharing this on the OMPI mailing lists (as well as the PMIx one), as PMIx
has become tightly integrated into the OMPI code since v2.0 was released.]
The PMIx Community will once again be hosting a Birds-of-a-Feather meeting at
SuperComputing. This year, however, will be a little different
Dave,
Thank you for your detailed report and testing; that is indeed very helpful. We
will definitely have to do something.
Here is what I think would be doable:
a) if we detect a Lustre file system without flock support, we can print an
error message. Completely disabling MPI I/O
On Mon, 2018-10-15 at 12:21 +0100, Dave Love wrote:
> For what it's worth, I found the following from running ROMIO's tests
> with OMPIO on Lustre mounted without flock (or localflock). I used
> 48
> processes on two nodes with Lustre for tests which don't require a
> specific number.
>
> OMPIO f
For what it's worth, I found the following from running ROMIO's tests
with OMPIO on Lustre mounted without flock (or localflock). I used 48
processes on two nodes with Lustre for tests which don't require a
specific number.
OMPIO fails tests atomicity, misc, and error on ext4; it additionally
fai