Creating the nightly hwloc SVN snapshot tarball was a success.
Snapshot: hwloc 1.5a1r4417
Start time: Wed Mar 21 21:01:01 EDT 2012
End time: Wed Mar 21 21:04:16 EDT 2012
Your friendly daemon,
Cyrador
This is fixed by setting HWLOC_TOPOLOGY_FLAG_WHOLE_SYSTEM.
One of the topology pruning functions disabled by this flag
must have been causing the problem.
Again, this only happens when MPI calls are made alongside hwloc calls.
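For reference, a minimal sketch of applying the flag (hwloc 1.x API;
the flag must be set between init and load):

    #include <hwloc.h>

    int main(void)
    {
        hwloc_topology_t topology;

        hwloc_topology_init(&topology);
        /* Set before hwloc_topology_load(): keep the whole system
         * visible instead of pruning disallowed/offline objects. */
        hwloc_topology_set_flags(topology,
                                 HWLOC_TOPOLOGY_FLAG_WHOLE_SYSTEM);
        hwloc_topology_load(topology);

        /* ... use the topology ... */

        hwloc_topology_destroy(topology);
        return 0;
    }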
On Wed, Mar 21, 2012 at 10:17 AM, Daniel Ibanez
I see your point about setting MPI_ERR_PENDING on the internal status
versus the status returned by MPI_Waitall. As I mentioned, the reason
I chose to do that is to support the ompi_errhandler_request_invoke()
function. I could not think of a better way to fix this, so I'm open
to ideas.
My point was that MPI_ERR_PENDING should never be set on a specific request.
MPI_ERR_PENDING should only be returned in the array of statuses attached to
MPI_Waitall. Thus, there is no need to remove it from any request.
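To illustrate the semantics argued for above, a hedged sketch of what
a caller sees (the request array and count are placeholders, and the
communicator's error handler is assumed to be MPI_ERRORS_RETURN):

    #include <mpi.h>

    void wait_and_check(int n, MPI_Request *reqs, MPI_Status *stats)
    {
        int rc = MPI_Waitall(n, reqs, stats);
        if (rc == MPI_ERR_IN_STATUS) {
            for (int i = 0; i < n; i++) {
                if (stats[i].MPI_ERROR == MPI_ERR_PENDING) {
                    /* Operation i neither completed nor failed; it
                     * is still pending, so its request must be left
                     * intact rather than freed. */
                }
            }
        }
    }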
In addition, there is another reason why this is unnecessary (and I was
In the patch for errhandler_invoke.c, you can see that we need to
check for MPI_ERR_PENDING to make sure that we do not free the request
when we are trying to decide if we should invoke the error handler. So
setting the internal req->req_status.MPI_ERROR to MPI_ERR_PENDING made
it possible to
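Schematically, the check amounts to something like this (a sketch of
the idea, not the literal errhandler_invoke.c code; only the
req->req_status.MPI_ERROR field name is taken from the discussion):

    /* Skip any request whose internal status was marked
     * MPI_ERR_PENDING: it has not completed, so neither invoke
     * the error handler for it nor free it here. */
    if (req->req_status.MPI_ERROR == MPI_ERR_PENDING) {
        continue;
    }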
Josh,
I don't agree that these changes are required. In the current standard (2.2),
MPI_ERR_PENDING is only allowed to be returned by MPI_WAITALL, in some very
specific conditions. Here is the snippet from the MPI standard clarifying this
behavior.
> When one or more of the communications
What: Change coll tuned default to pairwise exchange
Why: The linear algorithm does not scale to any reasonable number of PEs
When: Timeout in 2 days (Fri)
Is there any reason the default should not be changed?
-Nathan
HPC-3, LANL
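For comparison, the pairwise algorithm can already be forced by hand
(assuming the RFC concerns the tuned alltoall default; MCA parameter
names as in the coll tuned component, where value 2 selects pairwise):

    mpirun --mca coll_tuned_use_dynamic_rules 1 \
           --mca coll_tuned_alltoall_algorithm 2 \
           -np 64 ./a.out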
Sorry, the cc line below is for the Solaris Studio compilers; if you
have gcc, replace "-G" with "-shared".
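As a generic illustration of the swap (made-up file names):

    # Solaris Studio: -G builds a shared object, -Kpic makes it PIC
    cc -G -Kpic -o libtest.so test.c
    # gcc equivalent
    gcc -shared -fPIC -o libtest.so test.c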
thanks,
--td
On 3/21/2012 11:32 AM, TERRY DONTJE wrote:
I ran into a problem on a Suse 10.1 system and was wondering if anyone
has a version of Suse newer than 10.1 that can try the following test
and send me the results.
-testpci
cat
Chris: the compute nodes run CNK
On Wed, Mar 21, 2012 at 10:08 AM, Daniel Ibanez wrote:
Attached is the stderr and stdout from lstopo compiled as you said.
I can't run hwloc-gather-topology.sh on the compute nodes
since its a script, but I can run it on the front end node.
On Wed, Mar 21, 2012 at 3:36 AM, Samuel Thibault wrote:
> Daniel Ibanez, on Wed 21
Hello,
I have a problem using Open MPI on my Linux system (a Pandaboard
running Ubuntu precise). A call to MPI_Init_thread with the following
parameters hangs:
MPI_Init_thread(0, 0, MPI_THREAD_MULTIPLE, &provided);
It seems that we are stuck in this loop in the function
opal_condition_wait():
while
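A minimal sketch of the reported call, for anyone who wants to try to
reproduce it (not the actual test program; argc/argv are passed as
null, as above):

    #include <mpi.h>
    #include <stdio.h>

    int main(void)
    {
        int provided;

        /* Reported to hang when MPI_THREAD_MULTIPLE is requested. */
        MPI_Init_thread(0, 0, MPI_THREAD_MULTIPLE, &provided);
        printf("provided = %d\n", provided);
        MPI_Finalize();
        return 0;
    }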
Daniel Ibanez, on Wed 21 Mar 2012 03:37:25 +0100, wrote:
> Please let me know if there's a hint of what could be causing it,
> where to post, and what info to provide.
This is already the proper list.
Please attach the output of lstopo after having given the --enable-debug
option to
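Something along these lines, assuming --enable-debug goes to hwloc's
configure and that lstopo is built under utils/:

    ./configure --enable-debug
    make
    ./utils/lstopo > lstopo.out 2> lstopo.err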