For those of you in this situation, you can apply the attached patch
to your OMPI 1.3.1 source code and rebuild it. The patch has been tested
by the original reporter and it solved this particular problem.
Ralph
changeset_r20856.diff
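For anyone unsure how to apply the attached diff, something like the following should work (a sketch only; the source directory, the -p strip level, and the install prefix are assumptions to adjust for your setup):

```shell
# Assumed paths; adjust the -p strip level to match the diff headers.
cd openmpi-1.3.1
patch -p1 < ~/changeset_r20856.diff
./configure --prefix=$HOME/openmpi-1.3.1-patched
make -j4 && make install
```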
On Mar 24, 2009, at 5:44 PM, Divya N wrote:
Hi,
I ran into the same problem yesterday and was wondering what was wrong.
I couldn't get it fixed even after reading the users archives, so thanks
for this response. Is there really no way, as of now, to get Open MPI
running on a laptop with Ubuntu?
Thanks,
Divya
On Tue, Mar 24, 2009 at 7:33 PM, Ralph Cast
Yeah, there is something funny about the way Ubuntu defines its
Ethernet interfaces that is causing a problem. I have a patch that
will be in 1.3.2 that fixes the problem.
On Mar 24, 2009, at 5:24 PM, Simone Pellegrini wrote:
Hello everyone,
I have the same problem when I try to install openmpi 1.3.1 on my laptop
(Ubuntu 8.10 running on a dual core machine).
I did the same installation on Ubuntu 8.04 and everything worked, but
here, no matter what I do, every time I type mpirun the system prompts
for a password.
Hi!
We're having problems with code that uses BLACS and Open MPI 1.3.x.
When compiled with the memory manager turned on (base only), code using
BLACS either starts leaking memory or gets into some kind of deadlock.
The first case can be worked around by setting
mpi_leave_pinned_pipeline to 0, but the sec
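The workaround mentioned can be passed on the command line or set persistently (a sketch; the application name and process count are placeholders):

```shell
# One-off run with the leave-pinned pipeline protocol disabled:
mpirun --mca mpi_leave_pinned_pipeline 0 -np 4 ./blacs_app
# Or persistently, in ~/.openmpi/mca-params.conf:
#   mpi_leave_pinned_pipeline = 0
```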
Hello Again !
Manuel Prinz wrote:
Hi Jerome!
On Tuesday, 24.03.2009 at 16:27 +0800, Jerome BENOIT wrote:
With LAM, some configuration files must be set up; I guess it is the same here.
But as SLURM is also involved, it is not clear to me right now how I must
configure both SLURM and OpenMPI
Hi Manuel !
I read what you said on the web before I sent my email.
But it does not work with my sample. It is an old LAM-era C source.
Anyway, thanks a lot for your reply.
Jerome
Ashley Pittman wrote:
On 23 Mar 2009, at 23:36, Shaun Jackman wrote:
loop {
MPI_Ibsend (for every edge of every leaf node)
MPI_barrier
MPI_Iprobe/MPI_Recv (until no messages pending)
MPI_Allreduce (number of nodes removed)
} until (no nodes removed by any node)
Previously, I attempted to use a single MPI_Allreduce w
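For readers skimming the archive, the loop quoted above might look roughly like this in MPI C. This is a hedged sketch, not code from the thread: the buffer attached via MPI_Buffer_attach, the `dests`/`ndests` edge list, and the tag are all assumptions. Note also that MPI_Barrier does not guarantee that in-flight point-to-point messages have arrived, so the Iprobe drain can miss late messages, which may be related to the problem being discussed:

```c
#include <mpi.h>

/* One pruning round; caller loops until this returns 0.
 * Assumes a buffer was attached with MPI_Buffer_attach and
 * that ndests <= 64 for this sketch. */
int prune_round(MPI_Comm comm, int removed,
                const int *dests, int ndests, int tag)
{
    MPI_Request reqs[64];
    int payload = 1;

    /* MPI_Ibsend for every edge of every leaf node */
    for (int i = 0; i < ndests; i++)
        MPI_Ibsend(&payload, 1, MPI_INT, dests[i], tag, comm, &reqs[i]);
    MPI_Waitall(ndests, reqs, MPI_STATUSES_IGNORE);

    /* MPI_Barrier: everyone has posted their sends */
    MPI_Barrier(comm);

    /* MPI_Iprobe/MPI_Recv until no messages pending */
    for (;;) {
        int flag, msg;
        MPI_Status st;
        MPI_Iprobe(MPI_ANY_SOURCE, tag, comm, &flag, &st);
        if (!flag)
            break;
        MPI_Recv(&msg, 1, MPI_INT, st.MPI_SOURCE, tag, comm,
                 MPI_STATUS_IGNORE);
        /* ... mark the corresponding local node ... */
    }

    /* MPI_Allreduce (number of nodes removed by any rank) */
    int total = 0;
    MPI_Allreduce(&removed, &total, 1, MPI_INT, MPI_SUM, comm);
    return total; /* 0 means no node removed anything: terminate */
}
```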
Hi Jerome!
On Tuesday, 24.03.2009 at 16:27 +0800, Jerome BENOIT wrote:
> With LAM some configuration files must be set up, I guess it is the same here.
> But as SLURM is also involved, it is not clear to me right now how I must
> configure both SLURM and OpenMPI to make them work together. Any
Hello List,
I have just installed OpenMPI on a Lenny cluster where SLURM works fine.
I can compile a sample C source of mine (I used LAM a few years ago),
but I cannot run it on the cluster.
With LAM some configuration files must be set up, I guess it is the same here.
But as SLURM is also invo
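For what it's worth, when Open MPI is built with SLURM support (detected automatically at configure time), no hostfile or boot step is needed, unlike LAM's lamboot. A sketch, with node and process counts as placeholders:

```shell
# Get an allocation and let mpirun read it from SLURM:
salloc -N 2 -n 8 mpirun ./my_sample
# Or inside a batch script submitted with sbatch:
#   mpirun ./my_sample
```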