------------------ Original ------------------
From: "Ralph Castain" <r...@open-mpi.org>
Date: December 9, 2013 (Monday) 11:18
To: "Open MPI Users" <us...@open-mpi.org>
Subject: Re: [OMPI users] can you help me please? thanks

Forgive me, but I have no idea what that output means. Why do you think only 3
processors are being used?

On Dec 9, 2013, at 5:05 AM, <781578...@qq.com> wrote:

> I have a server ...
------------------ Original ------------------
From: <...@dcc.ufmg.br>
Date: December 6, 2013 (Friday) 11:14
To: "Open MPI Users" <us...@open-mpi.org>
Subject: Re: [OMPI users] can you help me please? thanks

Probably it was the change from eager to rendezvous protocols, as Jeff said.
If you don't know w...
------------------ Original ------------------
From: "Ralph Castain" <r...@open-mpi.org>
Date: December 5, 2013 (Thursday) 6:52
To: "Open MPI Users" <us...@open-mpi.org>
Subject: Re: [OMPI users] can you help me please? thanks

You are running 15000 ranks on two nodes?? My best guess is that you are
swapping like crazy as your ...
On Dec 5, 2013, at 4:04 AM, <781578...@qq.com> wrote:

> My ROCKS cluster includes one frontend and two compute nodes. In my program
> I use the Open MPI API, such as MPI_Send and MPI_Recv. But when I run the
> program with 3 processors, one processor sends a message, the other
> ... of the speed between MPI_Send and MPI_Recv, or other problems?
> ------------------ Original ------------------
> From: "Ralph Castain" <r...@open-mpi.org>
> Date: December 3, 2013 (Tuesday) 1:39 PM
> To: "Open MPI Users" <us...@open-mpi.org>
> Subject: Re: [OMPI users] can you help me please? thanks
thanks ...

------------------ Original ------------------
From: "Ralph Castain" <r...@open-mpi.org>
Date: December 3, 2013 (Tuesday) 9:03
To: "Open MPI Users" <us...@open-mpi.org>
Subject: Re: [OMPI users] can you help me please?
------------------ Original ------------------
From: "Ralph Castain" <r...@open-mpi.org>
Date: December 3, 2013 (Tuesday) 1:39 PM
To: "Open MPI Users" <us...@open-mpi.org>
Subject: Re: [OMPI users] can you help me please? thanks
Check that your LD_LIBRARY_PATH is getting set properly on your remote node
- it likely is missing the path to this libgdal. You might need to add the
path to your default shell profile (e.g., .bashrc)
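A minimal sketch of that fix, assuming libgdal lives in /usr/local/lib (substitute the directory that actually holds libgdal.so on your nodes); the export line is what you would append to ~/.bashrc:

```shell
# Assumed location of libgdal.so -- adjust to the real directory.
export LD_LIBRARY_PATH=/usr/local/lib:${LD_LIBRARY_PATH}
echo "$LD_LIBRARY_PATH"
```

Putting the export in ~/.bashrc (rather than ~/.bash_profile) matters here because mpirun launches remote processes over non-interactive ssh sessions, which on most Linux distributions source ~/.bashrc but not the login profile.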
On Mon, Dec 2, 2013 at 9:23 PM, 胡杨 <781578...@qq.com> wrote:

> A simple program at my 4-node ROCKS cluster runs fine with the command:
>
>     /opt/openmpi/bin/mpirun -np 4 -machinefile machines ./sort_mpi6
>
> Another, bigger program runs fine on the head node only, with the command:
>
>     cd ./sphere; /opt/openmpi/bin/mpirun -np 4 ../bin/sort_mpi6
>
> But with the command:
>
>     cd ...
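As an aside on launching across nodes: instead of editing the shell profile on every node, mpirun's -x flag can forward an environment variable from the launching shell to the remote processes. A hedged sketch, where the machinefile contents and node names are assumptions (ROCKS compute nodes are typically named compute-0-0, compute-0-1, ...):

```shell
# Hypothetical machinefile listing the compute nodes and slots per node:
#   compute-0-0 slots=2
#   compute-0-1 slots=2
#
# -x forwards LD_LIBRARY_PATH from this shell to every remote rank,
# so the remote loader can find shared libraries such as libgdal.so.
/opt/openmpi/bin/mpirun -np 4 -machinefile machines \
    -x LD_LIBRARY_PATH ../bin/sort_mpi6
```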