Here is my code:
/* sending rank: allocate the buffer and send it to rank 1 with tag 1 */
int *a = (int *)malloc(sizeof(int) * number);
MPI_Send(a, number, MPI_INT, 1, 1, MPI_COMM_WORLD);
/* receiving rank: allocate the buffer and receive from rank 0 (status must be declared as MPI_Status) */
int *b = (int *)malloc(sizeof(int) * number);
MPI_Recv(b, number, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
Here, number is the size of my arrays (e.g., a or b).
I have tried it on my local computer and on my ROCKS cluster. On the ROCKS cluster, one process on the frontend node uses MPI_Send to send a message, and the other processes on the compute nodes use MPI_Recv to receive it.
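For context, here is a minimal self-contained sketch of the pattern (my reconstruction, not the full program: it assumes rank 0 sends to rank 1, takes number from the command line, and adds MPI_Wtime timing so the slowdown can be measured):

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank;
    int number = (argc > 1) ? atoi(argv[1]) : 15000;   /* assumed default */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* sender: allocate, fill, and time the send */
        int *a = (int *)malloc(sizeof(int) * number);
        for (int i = 0; i < number; i++) a[i] = i;
        double t0 = MPI_Wtime();
        MPI_Send(a, number, MPI_INT, 1, 1, MPI_COMM_WORLD);
        printf("rank 0: send of %d ints took %f s\n", number, MPI_Wtime() - t0);
        free(a);
    } else if (rank == 1) {
        /* receiver: allocate and time the receive */
        MPI_Status status;
        int *b = (int *)malloc(sizeof(int) * number);
        double t0 = MPI_Wtime();
        MPI_Recv(b, number, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
        printf("rank 1: recv of %d ints took %f s\n", number, MPI_Wtime() - t0);
        free(b);
    }

    MPI_Finalize();
    return 0;
}

It can be launched the same way as the real program, e.g. /opt/openmpi/bin/mpirun -np 3 -machinefile machines ./send_recv 15000 (send_recv being whatever the compiled sketch is named), and rerun with different values of number to compare timings.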
When number is less than 10000, the other processes receive the message quickly; but when number is more than 15000, they receive it slowly.
Why? Is it because of the Open MPI API, or some other problem?
This has cost me a few days, so I would appreciate your help. Thanks to all readers, and good luck to you.
------------------ Original Message ------------------
From: "Ralph Castain" <[email protected]>
Date: Thursday, December 5, 2013, 6:52
To: "Open MPI Users" <[email protected]>
Subject: Re: [OMPI users] can you help me please? thanks
You are running 15000 ranks on two nodes?? My best guess is that you are swapping like crazy because your memory footprint exceeds the available physical memory.
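As a rough back-of-the-envelope illustration of that guess (the per-process footprint below is an assumed figure for illustration, not a measurement):

#include <stdio.h>

int main(void)
{
    /* assumed figures, for illustration only: 15000 ranks split across two
       nodes, and a guessed ~10 MB of runtime footprint per MPI process */
    const int ranks = 15000, nodes = 2;
    const double mb_per_proc = 10.0;
    printf("approx. %.0f GB needed per node\n",
           (ranks / nodes) * mb_per_proc / 1024.0);
    return 0;
}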
On Thu, Dec 5, 2013 at 1:04 AM, ???? <[email protected]> wrote:
My ROCKS cluster includes one frontend and two compute nodes. In my program I use Open MPI calls such as MPI_Send and MPI_Recv, and I run the program with 3 processes: one process sends a message and the others receive it. Here is some of the code:
int *a = (int *)malloc(sizeof(int) * number);
MPI_Send(a, number, MPI_INT, 1, 1, MPI_COMM_WORLD);
int *b = (int *)malloc(sizeof(int) * number);
MPI_Recv(b, number, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
When number is less than 10000, it runs fast; but when number is more than 15000, it runs slowly.
Why? Is it because of the Open MPI API, or some other problem?
------------------ Original Message ------------------
From: "Ralph Castain" <[email protected]>
Date: Tuesday, December 3, 2013, 1:39
To: "Open MPI Users" <[email protected]>
Subject: Re: [OMPI users] can you help me please? thanks
On Mon, Dec 2, 2013 at 9:23 PM, ???? <[email protected]> wrote:
A simple program on my 4-node ROCKS cluster runs fine with this command:
/opt/openmpi/bin/mpirun -np 4 -machinefile machines ./sort_mpi6
Another, bigger program runs fine on the head node only, with this command:
cd ./sphere; /opt/openmpi/bin/mpirun -np 4 ../bin/sort_mpi6
But with the command:
cd /sphere; /opt/openmpi/bin/mpirun -np 4 -machinefile ../machines ../bin/sort_mpi6
It gives this output:
../bin/sort_mpi6: error while loading shared libraries: libgdal.so.1: cannot open shared object file: No such file or directory
../bin/sort_mpi6: error while loading shared libraries: libgdal.so.1: cannot open shared object file: No such file or directory
../bin/sort_mpi6: error while loading shared libraries: libgdal.so.1: cannot open shared object file: No such file or directory
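One thing worth checking (my suggestion, not something confirmed in this thread) is whether libgdal.so.1 is actually installed on the compute nodes, and whether LD_LIBRARY_PATH is being exported to the remote processes; Open MPI's mpirun can forward it with the -x option, e.g.:

cd /sphere; /opt/openmpi/bin/mpirun -np 4 -machinefile ../machines -x LD_LIBRARY_PATH ../bin/sort_mpi6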