Sorry, Brian and Jeff - I sent you chasing after something of a red herring...
After much more testing and banging my head on the desk trying to figure this
one out, it turns out that '--mca mpi_yield_when_idle 1' on the command line
does actually work properly for me... The one or two times I had problems ...
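For reference, a sketch of the invocation under discussion ('-np 4' and
'./a.out' are placeholders, not from the original message):

    # Ask idle processes to yield the CPU instead of hard-spinning,
    # which is what you want when a node is oversubscribed.
    mpirun --mca mpi_yield_when_idle 1 -np 4 ./a.out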
No worries!
This is actually an intended feature -- it allows specific configuration
on a per-node basis (especially for heterogeneous situations, perhaps
not as heterogeneous as different architectures, but one can easily
imagine scenarios where different resources exist within the same
cluster, ...)
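As a sketch of what that per-node configuration looks like in practice (this
is Open MPI's standard per-user MCA params file; the parameter shown is just
an example value):

    # $HOME/.openmpi/mca-params.conf -- read only by processes that run
    # on this node; mpirun does not ship it to remote nodes.
    mpi_yield_when_idle = 1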
> You make a good point about the values in that file, though -- I'll add
> some information to the FAQ that such config files are only valid on the
> nodes where they can be seen (i.e., that mpirun does not bundle up all
> these files and send them to remote nodes during mpirun). Sorry for the ...
>
> I also noticed another bug in the scheduler:
> hostfile:
> A slots=2 max-slots=2
> B slots=2 max-slots=2
> 'mpirun -np 5' quits with an over-subscription error
> 'mpirun -np 3 --host B' hangs and just chews up CPU cycles forever
>
Just as a quick follow-up on the 'hang' seen above: this was ...
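For anyone trying to reproduce the report quoted above, the setup boils down
to this (hostnames A and B and the application name are placeholders):

    # hostfile: 4 slots total, hard-capped by max-slots
    A slots=2 max-slots=2
    B slots=2 max-slots=2

    # with the hostfile above in effect:
    mpirun -np 5 ./a.out           # 5 > 4 total max slots: quits with an
                                   # over-subscription error, as reported
    mpirun -np 3 --host B ./a.out  # 3 > B's 2 max slots: should error out
                                   # too, but hung and spun CPU as reported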
Good suggestion; done.
Thanks!
> -----Original Message-----
> From: devel-boun...@open-mpi.org
> [mailto:devel-boun...@open-mpi.org] On Behalf Of Paul Donohue
> Sent: Monday, June 05, 2006 9:31 AM
> To: Open MPI Developers
> Subject: Re: [OMPI devel] Oversubscription/Scheduling Bug
>
> You make a good point about the values in that file, though ...

On Jun 2, 2006, at 5:55 PM, Jonathan Day wrote:
Hi,
I'm working on developing some components for Open MPI,
but am a little unclear as to how to implement
efficient sends and receives. I want to do
zero-copy two-sided MPI, but as far as I can see, this
is not going to be easy. As best as I can tell, ...

So far, on every system I have compiled Open MPI on, I have hit this same
non-obvious configure failure. In each case I have added
--with-openib= and --with-openib-libs=. configure runs
just fine until it starts looking for OpenIB, and then reports that it can't
find most of the header files and so on ...
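For context, the invocation was along these lines ('/opt/ofed' is a made-up
stand-in; the real prefixes were trimmed from the message):

    # point configure at the OpenIB (verbs) headers and libraries
    ./configure --with-openib=/opt/ofed \
                --with-openib-libs=/opt/ofed/lib64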
On 4/5/06, Jeff Squyres (jsquyres) wrote:
This is going to be influenced by how many processes bproc tells Open
MPI it can launch on each node.
Check out the FAQ for the -bynode and -byslot arguments to mpirun for
more details:
I have tried these arguments several times (up through 1.0.2a4) ...
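For reference, the two modes schedule like this (hostfile from the earlier
report; './a.out' is a placeholder):

    # hostfile: A slots=2, B slots=2
    mpirun -np 4 -byslot ./a.out  # fill A's slots first: ranks 0,1 on A; 2,3 on B
    mpirun -np 4 -bynode ./a.out  # round-robin by node:  ranks 0,2 on A; 1,3 on B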
Excellent suggestion; thanks.
Done!
> -----Original Message-----
> From: devel-boun...@open-mpi.org
> [mailto:devel-boun...@open-mpi.org] On Behalf Of Josh Aune
> Sent: Monday, June 05, 2006 4:55 PM
> To: Open MPI Developers
> Subject: [OMPI devel] Please add explicit test for sysfs/libsysfs.h
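A sketch of the kind of explicit test the subject line asks for, using a
stock Autoconf macro (the error text here is illustrative, not what was
actually committed):

    # configure.ac fragment: fail early with a clear message instead of
    # blowing up later when the OpenIB support code hits the missing header
    AC_CHECK_HEADER([sysfs/libsysfs.h], [],
                    [AC_MSG_ERROR([cannot find sysfs/libsysfs.h; install the libsysfs development headers])])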
On Jun 5, 2006, at 2:58 PM, Josh Aune wrote:
I have tried the ...