Creating nightly hwloc snapshot git tarball was a success.
Snapshot: hwloc dev-520-g72f6fca
Start time: Tue Apr 21 21:01:02 EDT 2015
End time: Tue Apr 21 21:02:49 EDT 2015
Your friendly daemon,
Cyrador
On Tue, Apr 21, 2015 at 5:33 PM, Jeff Squyres (jsquyres) wrote:
> What happens with master tarballs?
>
Master is fine building dl:dlopen:
--- MCA component dl:dlopen (m4 configuration macro, priority 80)
checking for MCA component dl:dlopen compile mode... static
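For reference, one way to compare what different tarballs do here is to capture
the configure output and grep for the component lines quoted above (the log
file name is just an example):

    ./configure 2>&1 | tee configure.log
    grep 'MCA component dl:' configure.log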
What happens with master tarballs?
Sent from my phone. No type good.
On Apr 21, 2015, at 7:38 PM, Paul Hargrove wrote:
Sorry the output in the previous email left out some relevant detail.
See here that BOTH dl components were unable to compile with the 1.8.5rc2
tarball:
+++ Configuring MCA framework dl
checking for no configure components in framework dl...
checking for m4 configure components in framework dl...
Is the following configure-fails-by-default behavior really the desired one
in 1.8.5?
I thought this was more of a 1.9 change than a mid-series change.
-Paul
--- MCA component dl:libltdl (m4 configuration macro, priority 50)
checking for MCA component dl:libltdl compile mode... static
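If it is the dlopen-based component that fails to build on a given system, one
possible workaround to try (assuming the long-standing --disable-dlopen
configure option also covers the new dl framework; the install prefix below is
just a placeholder) would be something like:

    ./configure --disable-dlopen --prefix=/opt/openmpi-1.8.5rc2
    make all install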
In the usual location:
http://www.open-mpi.org/software/ompi/v1.8/
The NEWS changed completely between rc1 and rc2, so it's hard to tell exactly
what is different between the two. Here's the full 1.8.5 NEWS:
- Fixed configure problems in some cases when using an external hwloc
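"Using an external hwloc" here means pointing Open MPI's configure at an hwloc
installation outside its own tree rather than at the bundled copy, e.g.
something like (the install path is just a placeholder):

    ./configure --with-hwloc=/opt/hwloc-1.10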
I agree.
-----Original Message-----
From: devel [mailto:devel-boun...@open-mpi.org] On Behalf Of Jeff Squyres
(jsquyres)
Sent: Tuesday, April 21, 2015 7:17 AM
To: Open MPI Developers List
Subject: Re: [OMPI devel] binding output error
+1
Devendar, you seem to be reporting a different issue
Thanks, Jeff. I think Devendar and I are observing the same issue; we're
talking about the same cluster. And I agree with Ralph that it must just be a
printout error, since the latency test shows that the actual binding seems to
be correct.
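A more direct cross-check than the latency test, at least on Linux, is to
compare what mpirun reports with what the kernel says each rank is actually
bound to (the process count is just an example):

    mpirun --report-bindings -np 2 grep Cpus_allowed_list /proc/self/status

If the Cpus_allowed_list values match the intended binding while the
--report-bindings lines do not, that would confirm it is only the printout
that is wrong.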
Best regards,
Elena
On Tue, Apr 21, 2015 at 6:17 PM, Jeff Squyres wrote:
+1
Devendar, you seem to be reporting a different issue than Elena...? FWIW: Open
MPI has always used logical CPU numbering. As far as I can tell from your
output, it looks like Open MPI did the Right Thing with your examples.
Elena's example seemed to show conflicting cpu numbering -- where
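(Side note on the numbering: hwloc tools print both indexes, logical as L# and
OS/physical as P#, so something like

    lstopo --of console | grep 'PU '

shows lines such as "PU L#0 (P#0)" and makes it easy to see on a given node
where the two numbering schemes diverge.)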