Can you please give the attached patch a try?
In my environment, it fixes your test case.
Based on previous tests posted here, it is likely that a similar bug needs
to be fixed for other filesystems.
On 6/15/2016 12:42 AM, Nicolas Joly wrote:
At work, I do
As of the latest master, I am getting this with e
On Wed, Jun 22, 2016 at 1:06 AM, Jeff Squyres (jsquyres) wrote:
> Abhishek --
> Could you send the full output from your mtt client run with the --verbose
> flag enabled?
> If you'd prefer not to send it to the public list, send it directly to me
> and Josh Hursey (IBM).
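For reference, the invocation could look something like the line below
(assuming the classic Perl client, where --file points at your INI file; the
file names here are placeholders):

    client/mtt --file your-config.ini --verbose 2>&1 | tee mtt-verbose.txt

The key part is the --verbose flag mentioned above; capturing stdout/stderr
with tee just makes it easy to attach the full output.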
> On Jun 21, 2016, at 6:48 AM, Jeff Squyres (jsquyres) wrote:
> On Jun 20, 2016, at 4:15 PM, Audet, Martin wrote:
> But now, since we have to live with memory registration issues, what changes
> should be made to a standard Linux distro so that Open MPI can best use a
> recent Mellanox InfiniBand network?
> I guess
On Jun 20, 2016, at 4:27 PM, Alex A. Granovsky wrote:
> Would the use of mlockall be helpful for this approach?
That's an interesting idea; I didn't know about the existence of mlockall.
It has a few drawbacks, of course (e.g., processes can't shrink).
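For illustration, here is a minimal sketch of the mlockall approach (standard
POSIX calls; the comments spell out the trade-off discussed above):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>

    int main(void)
    {
        /* Pin every page currently mapped and every page mapped in the
           future.  Pinned pages are never swapped out, but the process
           also never shrinks -- exactly the drawback noted above. */
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
            /* Typically fails with ENOMEM/EPERM if RLIMIT_MEMLOCK
               is too low for the process. */
            perror("mlockall");
            return EXIT_FAILURE;
        }

        /* ... the application (e.g., the MPI job) would run here ... */

        munlockall();   /* undo the pinning before exiting */
        return EXIT_SUCCESS;
    }

Note that mlockall only pins pages against swapping; it does not by itself
register memory with the network stack.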
You sent me your INI file in another email thread.
In general, you need to run all 5 phases. During the MPI Install phase, for
example, even if you have an "already installed" MPI (i.e., using the MPI Get
module "AlreadyInstalled"), you still have to run that phase so that MTT
records the installation for the later phases to use.