Hi Gilles,
Thanks for the quick response. I tried your suggestion of setting
the --enable-script-wrapper-compilers option to configure and ran
into the following bug (as I am on version 1.10.2):
https://www.mail-archive.com/users@lists.open-mpi.org/msg30166.html
I applied the patch found in you
Hello Joseph,
I'm still unable to reproduce this issue on my SLES12 x86_64 node.
Are you building with CFLAGS=-O3?
If so, could you build without CFLAGS set and see if you still see the
failure?
Howard
2017-03-02 2:34 GMT-07:00 Joseph Schuchart:
> Hi Howard,
>
> Thanks for trying to reprod
Done. I added 2.0.3 as the milestone, since I am not sure what the
timeline for 2.1.0 is. I will try to get the fix in over the weekend,
and we can go from there.
Thanks
Edgar
On 3/3/2017 9:00 AM, Howard Pritchard wrote:
Hi Edgar
Please open an issue too so we can track the fix.
Howard
Edgar Gabriel wrote on Fri., 3 March 2017 at 07:45:
Nicolas,
thank you for the bug report; I can confirm the behavior. I will work on
a patch and will try to get it into the next release; it should hopefully
not be too complicated.
Thanks
Edgar
On 3/3/2017 7:36 AM, Nicolas Joly wrote:
Hi,
We just got hit by a problem with the sharedfp/lockedfile component under
v2.0.1 (it should be identical with v2.0.2). We had two instances of an MPI
program running concurrently on the same input file and using the
MPI_File_read_shared() function ...
If the shared file pointer is maintained with the lo
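For illustration, here is a minimal sketch of what each of the two
independently launched instances does; the file name, element count, and
datatype are placeholder assumptions, not taken from the original program.
As I understand it, the sharedfp/lockedfile component coordinates the
shared file pointer across processes through a lock-protected auxiliary
file on disk.

/* Minimal sketch: each instance reads from the same input file through
 * the shared file pointer. File name and count are placeholder values. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "input.dat",
                  MPI_MODE_RDONLY, MPI_INFO_NULL, &fh);

    int buf[128];
    MPI_Status status;
    /* Every call advances the shared file pointer maintained for this file. */
    MPI_File_read_shared(fh, buf, 128, MPI_INT, &status);

    int count;
    MPI_Get_count(&status, MPI_INT, &count);
    printf("read %d ints via the shared file pointer\n", count);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}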
Hi,
On 03/03/17 12:41, Mark Dixon wrote:
Your 20% memory bandwidth performance hit on 2.x and the OPA problem are
concerning - I will look at that. Are there tickets open for them?
OPA performance issue on CP2K (15x slowdown):
https://www.mail-archive.com/users@lists.open-mpi.org//msg30593.html
On Fri, 3 Mar 2017, Paul Kapinos wrote:
...
Note that on the 1.10.x series (even on 1.10.6), enabling
MPI_THREAD_MULTIPLE leads to a (silent) shutdown of the InfiniBand
fabric for that application => SLOW!
The 2.x versions (tested: 2.0.1) handle MPI_THREAD_MULTIPLE on InfiniBand
correctly,
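For reference, a minimal sketch of how an application requests
MPI_THREAD_MULTIPLE at startup and checks the level the library actually
granted; the bail-out below is only illustrative.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;
    /* Ask for full multi-threading support; the library reports what it grants. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

    if (provided < MPI_THREAD_MULTIPLE) {
        fprintf(stderr, "MPI_THREAD_MULTIPLE not available (got level %d)\n",
                provided);
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    /* ... threaded MPI work would go here ... */

    MPI_Finalize();
    return 0;
}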
Hi Mark,
On 02/18/17 09:14, Mark Dixon wrote:
On Fri, 17 Feb 2017, r...@open-mpi.org wrote:
Depends on the version, but if you are using something in the v2.x range, you
should be okay with just one installed version
How good is MPI_THREAD_MULTIPLE support these days and how far up the wi