On 21/03/12 08:07, Brice Goglin wrote:
> New patch attached, it doesn't add port numbers for non-IB
> devices.
Extract from lstopo on an SGI XE270 box with a Mellanox dual-port IB card:
PCIBridge
PCI 15b3:673c
Net L#2 "ib1"
Net
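For anyone wanting to reproduce this check programmatically rather than via
lstopo, here is a minimal sketch (mine, not part of Brice's patch) using the
hwloc 1.x API to list the discovered OS devices; it assumes hwloc was built
with I/O discovery support:

/* Minimal sketch (not from the patch): list the OS devices hwloc
 * discovers, such as the "ib1" Net object shown above.
 * Assumes the hwloc 1.x API and a build with I/O discovery support. */
#include <hwloc.h>
#include <stdio.h>

int main(void)
{
    hwloc_topology_t topo;
    hwloc_obj_t obj = NULL;

    hwloc_topology_init(&topo);
    /* Ask for I/O objects (PCI devices and OS devices like "ib1"). */
    hwloc_topology_set_flags(topo, HWLOC_TOPOLOGY_FLAG_IO_DEVICES);
    hwloc_topology_load(topo);

    /* Iterate over all OS devices in the topology. */
    while ((obj = hwloc_get_next_osdev(topo, obj)) != NULL)
        printf("OS device: %s\n", obj->name);

    hwloc_topology_destroy(topo);
    return 0;
}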
From the same machine that Dan is using:
{hargrove@cetuslac1 ~}$ mpicc -v
mpicc for MPICH2 version 1.4.1p1
[...hairy details omitted...]
gcc version 4.4.6 (BGQ-dev-120305)
-Paul
On 3/22/2012 7:43 PM, Christopher Samuel wrote:
On 22/03/12 20:58, Brice Goglin wrote:
> So there's something strange going on when MPI is added. Which MPI
> are you using? Is this a derivative of MPICH that embeds hwloc? (MPICH
> >= 1.2.1 if I remember correctly)
Not sure about BG/Q, but BG/P uses
On 22/03/12 01:08, Daniel Ibanez wrote:
> Attached is the stderr and stdout from lstopo compiled as you
> said.
Interesting, so it's not correctly detecting the topology, as BG/Q has
16 compute cores, each with 4 hardware threads. Instead it's
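A quick way to see what the library actually detected is to count cores and
PUs directly; below is a minimal sketch (my own, not Daniel's program) using
the standard hwloc API, which on a correctly detected BG/Q node should report
16 cores and 64 PUs:

/* Minimal sketch: count cores and hardware threads (PUs).
 * On a correctly detected BG/Q node this should print
 * "16 cores, 64 PUs". */
#include <hwloc.h>
#include <stdio.h>

int main(void)
{
    hwloc_topology_t topo;

    hwloc_topology_init(&topo);
    hwloc_topology_load(topo);

    int ncores = hwloc_get_nbobjs_by_type(topo, HWLOC_OBJ_CORE);
    int npus   = hwloc_get_nbobjs_by_type(topo, HWLOC_OBJ_PU);
    printf("%d cores, %d PUs\n", ncores, npus);

    hwloc_topology_destroy(topo);
    return 0;
}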
Creating nightly hwloc snapshot SVN tarball was a success.
Snapshot: hwloc 1.4.2a1r4420
Start time: Thu Mar 22 21:04:20 EDT 2012
End time: Thu Mar 22 21:07:07 EDT 2012
Your friendly daemon,
Cyrador
Creating nightly hwloc snapshot SVN tarball was a success.
Snapshot: hwloc 1.5a1r4418
Start time: Thu Mar 22 21:01:01 EDT 2012
End time: Thu Mar 22 21:04:20 EDT 2012
Your friendly daemon,
Cyrador
Thanks!
On Mar 22, 2012, at 6:12 PM, Jeffrey Squyres wrote:
>> From the context of the code, I'm assuming it's supposed to be MPI_SOURCE.
>> I'll commit shortly.
>
>
> On Mar 22, 2012, at 7:54 PM, Ralph Castain wrote:
>
>> Yo Brian
>>
>> I believe you have an error in this commit:
>>
>>
I was reading the FAQs for the ClamAV anti-virus program (included on
Mac OS X) at http://www.clamav.net/lang/en/faq/faq-upgrade/. At the
end is a note that caught my eye about problem compilers.
ClamAV supports a wide variety of compilers, hardware and operating
systems. Our core
From the context of the code, I'm assuming it's supposed to be MPI_SOURCE.
I'll commit shortly.
On Mar 22, 2012, at 7:54 PM, Ralph Castain wrote:
> Yo Brian
>
> I believe you have an error in this commit:
>
> pml_ob1_iprobe.c:113: error: 'ompi_status_public_t' has no member named
> 'MPI_STATUS'
Yo Brian
I believe you have an error in this commit:
pml_ob1_iprobe.c:113: error: 'ompi_status_public_t' has no member named
'MPI_STATUS'
I checked the definition of that struct, and the error is correct - there is no
such member. What should it be?
Ralph
On Mar 22, 2012, at 4:55 PM,
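For reference, the MPI standard defines exactly three public fields on
MPI_Status: MPI_SOURCE, MPI_TAG and MPI_ERROR; there is no MPI_STATUS member,
which is why the compiler rejects it. A minimal sketch (mine, not the pml/ob1
code) of the intended MPI_SOURCE usage:

/* Minimal sketch (not the pml/ob1 code): MPI_Status exposes
 * MPI_SOURCE, MPI_TAG and MPI_ERROR -- there is no MPI_STATUS member. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int flag;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Iprobe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &flag, &status);
    if (flag)
        printf("pending message from rank %d, tag %d\n",
               status.MPI_SOURCE, status.MPI_TAG);
    MPI_Finalize();
    return 0;
}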
On 22/03/2012 23:33, Daniel Ibanez wrote:
> I've run this test before (didn't keep the results but can run it again).
> I got debug output and compared it with the output from a hwloc test
> executable,
> and I noticed that my program did not show that any PU objects were discovered.
> In my program
I've run this test before (didn't keep the results but can run it again).
I got debug output and compared it with the output from a hwloc test
executable,
and I noticed that my program did not show that any PU objects were discovered.
In my program the first discovered topology is just a Machine object,
On 22/03/2012 23:07, Daniel Ibanez wrote:
>
> I suspected this might be the reason, so I called "nm"
> on the static versions of the libraries that their compiler wrappers
> link against, and I could not find the term "hwloc" in the output.
> Is this a valid test?
If your hwloc is still compiled
Daniel Ibanez, on Thu 22 Mar 2012 23:07:01 +0100, wrote:
> I suspected this might be the reason, so I called "nm"
> on the static versions of the libraries that their compiler wrappers
> link against, and I could not find the term "hwloc" in the output.
> Is this a valid test?
Ah, right, embedded
On Thu, 22 Mar 2012, Shamis, Pavel wrote:
What: Change coll tuned default to pairwise exchange
Why: The linear algorithm does not scale to any reasonable number of PEs
When: Timeout in 2 days (Fri)
Is there any reason the default should not be changed?
Nathan,
I can see why people
>
>> What: Change coll tuned default to pairwise exchange
>>
>> Why: The linear algorithm does not scale to any reasonable number of PEs
>>
>> When: Timeout in 2 days (Fri)
>>
>> Is there any reason the default should not be changed?
>
> Nathan,
>
> I can see why people think the linear
On Mar 21, 2012, at 12:14 , Nathan Hjelm wrote:
> What: Change coll tuned default to pairwise exchange
>
> Why: The linear algorithm does not scale to any reasonable number of PEs
>
> When: Timeout in 2 days (Fri)
>
> Is there any reason the default should not be changed?
Nathan,
I can see
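For readers unfamiliar with the two algorithms: the linear variant posts
sends/receives to all peers at once, while pairwise exchange pairs each rank
with exactly one partner per step. A rough sketch of the pairwise pattern
(my own illustration, not the coll/tuned code):

/* Rough sketch of a pairwise-exchange alltoall (illustration only,
 * not the coll/tuned implementation). Each rank talks to exactly one
 * partner per step instead of flooding all peers at once. */
#include <mpi.h>
#include <string.h>

static int alltoall_pairwise(int *sendbuf, int *recvbuf,
                             int count, MPI_Comm comm)
{
    int rank, size;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);

    /* Copy our own block locally. */
    memcpy(recvbuf + rank * count, sendbuf + rank * count,
           count * sizeof(int));

    for (int step = 1; step < size; step++) {
        int sendto   = (rank + step) % size;        /* partner we send to */
        int recvfrom = (rank - step + size) % size; /* partner we recv from */
        MPI_Sendrecv(sendbuf + sendto * count, count, MPI_INT, sendto, 0,
                     recvbuf + recvfrom * count, count, MPI_INT, recvfrom, 0,
                     comm, MPI_STATUS_IGNORE);
    }
    return MPI_SUCCESS;
}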
We did not support ARM until Open MPI 1.5.x.
On Mar 21, 2012, at 7:07 AM, Juan Solano wrote:
>
> Hello,
>
> I have a problem using Open MPI on my linux system (pandaboard running
> Ubuntu precise). A call to MPI_Init_thread with the following parameters
> hangs:
>
> MPI_Init_thread(0, 0,
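(The quoted call is truncated above. For context, a complete minimal
reproducer along those lines might look like the sketch below;
MPI_THREAD_MULTIPLE is my assumption, as the requested level is not visible
in the truncated report.)

/* Hypothetical reproducer: the thread level requested here is an
 * assumption, since the original call is truncated above. */
#include <mpi.h>
#include <stdio.h>

int main(void)
{
    int provided;
    /* MPI-2 allows NULL for argc/argv; the report passed 0, 0. */
    MPI_Init_thread(NULL, NULL, MPI_THREAD_MULTIPLE, &provided);
    printf("requested MPI_THREAD_MULTIPLE, got level %d\n", provided);
    MPI_Finalize();
    return 0;
}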
Thanks Josh.
george.
On Mar 22, 2012, at 10:09 , Josh Hursey wrote:
> Should be fixed in r26177.
>
> On Thu, Mar 22, 2012 at 7:51 AM, Josh Hursey wrote:
>> Fair enough. Upon further inspection of the request_invoke() handler,
>> you are correct that it is not required
Should be fixed in r26177.
On Thu, Mar 22, 2012 at 7:51 AM, Josh Hursey wrote:
> Fair enough. Upon further inspection of the request_invoke() handler,
> you are correct that it is not required here if we do not modify the
> default value for req_status.MPI_ERROR.
>
> I'll
Fair enough. Upon further inspection of the request_invoke() handler,
you are correct that it is not required here if we do not modify the
default value for req_status.MPI_ERROR.
I'll work on a revised patch this morning and commit. One that does
not use this field.
Per your comment from your
Brice Goglin, on Thu 22 Mar 2012 10:58:46 +0100, wrote:
> I don't see anything bad in your outputs.
> So there's something strange going on when MPI is added. Which MPI are you using?
> Is this a derivative of MPICH that embeds hwloc? (MPICH >= 1.2.1 if I remember
> correctly)
There might be
I don't see anything bad in your outputs.
So there's something strange going on when MPI is added. Which MPI are
you using? Is this a derivative of MPICH that embeds hwloc? (MPICH >= 1.2.1
if I remember correctly)
Brice
On 21/03/2012 15:08, Daniel Ibanez wrote:
> Attached is the stderr and
On Mar 21, 2012, at 15:13 , Josh Hursey wrote:
> I see your point about setting MPI_ERR_PENDING on the internal status
> versus the status returned by MPI_Waitall. As I mentioned, the reason
> I chose to do that is to support the ompi_errhandler_request_invoke()
> function. I could not think of
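To make the semantics under discussion concrete, here is a sketch (mine, not
Open MPI internals) of how MPI_ERR_PENDING reaches the user through
MPI_Waitall when the error handler is set to return:

/* Sketch of the user-visible semantics: with MPI_ERRORS_RETURN,
 * a failed MPI_Waitall returns MPI_ERR_IN_STATUS, and requests that
 * neither completed nor failed carry MPI_ERR_PENDING in MPI_ERROR. */
#include <mpi.h>
#include <stdio.h>

void check_requests(int n, MPI_Request *reqs, MPI_Status *statuses)
{
    MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);
    if (MPI_Waitall(n, reqs, statuses) == MPI_ERR_IN_STATUS) {
        for (int i = 0; i < n; i++) {
            if (statuses[i].MPI_ERROR == MPI_ERR_PENDING)
                printf("request %d neither completed nor failed\n", i);
            else if (statuses[i].MPI_ERROR != MPI_SUCCESS)
                printf("request %d failed: error %d\n",
                       i, statuses[i].MPI_ERROR);
        }
    }
}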