On 12/5/07 7:58 AM, "rolf.vandeva...@sun.com" wrote:
> Ralph H Castain wrote:
>
>> I. Support for non-MPI jobs
>> Considerable complexity currently exists in ORTE because of the stipulation
>> in our first requirements document that users be able to mpirun non-MPI
Well, I think it is pretty obvious that I am a fan of an attribute system :)
For completeness, I will point out that we also exchange architecture
and hostname info in the modex.
Do we really need a complete node map? As far as I can tell, it looks
like the MPI layer only needs a list of local
On 12/5/07 8:48 AM, "Tim Prins" wrote:
> Well, I think it is pretty obvious that I am a fan of an attribute system :)
>
> For completeness, I will point out that we also exchange architecture
> and hostname info in the modex.
True - except we should note that hostname
To me, (a) is dumb and (c) isn't a non-starter.
The whole point of the component system is to separate concerns. Routing
topology and collective operations are two different concerns. While
there's some overlap (a topology-aware collective doesn't make sense when
using the unity routing
Hi...
Karol Mroz wrote:
> Removal of .ompi_ignore should not create build problems for anyone who
> is running without some form of SCTP support. To test this claim, we
> built Open MPI with .ompi_ignore removed and no SCTP support on both an
> ubuntu linux and an OSX machine. Both builds
Hello,
It appears that sometime after r16777, and by r16799, that something
was broken on the trunk's openib support for 32-bit builds.
The 64-bit tests all seem normal, as well as the 32-bit & 64-bit tests on
the 1.2 branch on the same machine (odin).
See this MTT results page permalink showing
I'm not sure I would call (a) "dumb", but I would agree it isn't a desirable
option. ;-)
The issue isn't with the current two routed components. The issue arose
because additional routed components are about to be committed to the
system. None of those added components are fully connected - i.e.,
Hi,
Last night we had one of our threaded builds on the trunk hang when
running make check on the test opal_condition in test/threads/
After running the test about 30-40 times, I was only able to get it to
hang once. Looking at it in gdb, we get:
(gdb) info threads
3 Thread 1084229984
There is a double call to ompi_btl_openib_connect_base_open in
mca_btl_openib_mca_setup_qps(). It looks like someone just forgot to
clean up the previous call when they added the check for the return
code.
I ran a quick IMB test over IB to verify everything is still working.
Thanks,
Jon
One question: there is a mention of a new PML that is essentially CM plus matching.
Why is this not just another instance of CM?
Rich
On 11/26/07 7:54 PM, "Jeff Squyres" wrote:
> OMPI OF Pow Wow Notes
> 26 Nov 2007
>
>