Here is how you can do this without having to redescribe the datatype every
time.  Allocating the matrix as one contiguous block also keeps your data
layout together and improves cache locality.


#include <stdlib.h>
#include <iostream>
#include <mpi.h>
using namespace std;
int main(int argc, char **argv)
{
  MPI_Init(&argc, &argv);

  int N=2, M=3;
  //Allocate the matrix
  double **A=(double**)malloc(sizeof(double*)*N);
  double *A_data=(double*)malloc(sizeof(double)*N*M);

  //point each row pointer at its row inside the contiguous block
  for(int i=0;i<N;i++)
    A[i]=&A_data[i*M];

  //assign some values to the matrix
  int j=0;
  for(int n=0;n<N;n++)
    for(int m=0;m<M;m++)
      A[n][m]=j++;

  //print the matrix
  cout << "Matrix:\n";
  for(int n=0;n<N;n++)
  {
    for(int m=0;m<M;m++)
    {
      cout << A[n][m] << " ";
    }
    cout << endl;
  }

  //the whole matrix can now be sent over MPI with a single call, e.g.
  //(dest and tag to be defined by the application):
  //MPI_Send(A_data,M*N,MPI_DOUBLE,dest,tag,MPI_COMM_WORLD);

  //delete the matrix (one free per malloc)
  free(A);
  free(A_data);

  MPI_Finalize();
  return 0;
}
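
The receiving side can allocate its copy the same way and accept the whole
matrix in a single call.  A minimal sketch (src and tag are placeholders for
the sender's rank and the message tag):

  double **B=(double**)malloc(sizeof(double*)*N);
  double *B_data=(double*)malloc(sizeof(double)*N*M);
  for(int i=0;i<N;i++)
    B[i]=&B_data[i*M];

  MPI_Status status;
  //one receive fills the entire matrix via the contiguous block
  MPI_Recv(B_data,M*N,MPI_DOUBLE,src,tag,MPI_COMM_WORLD,&status);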


On Sat, Oct 31, 2009 at 11:32 AM, George Bosilca <bosi...@eecs.utk.edu> wrote:

> Eugene is right, every time you create a new matrix you will have to
> describe it with a new datatype (even when using MPI_BOTTOM).
>
> george.
>
>
> On Oct 30, 2009, at 18:11, Natarajan CS wrote:
>
>> Thanks for the replies, guys! Those are definitely two suggestions worth
>> trying; I hadn't considered a derived datatype. I wasn't really sure that the
>> MPI_Send call overhead was significant enough that increasing the buffer
>> size and decreasing the number of calls would cause any speedup. I will
>> change the code over the weekend and see what happens! Also, if one passes
>> the absolute address, maybe there is no need for creating multiple
>> definitions of the datatype? I haven't gone through the man pages yet, so
>> apologies for my ignorance!
>>
>> On Fri, Oct 30, 2009 at 2:44 PM, Eugene Loh <eugene....@sun.com> wrote:
>> Wouldn't you need to create a different datatype for each matrix instance?
>> E.g., let's say you create twelve 5x5 matrices. Wouldn't you need twelve
>> different derived datatypes? I would think so, because each time you create
>> a matrix, the footprint of that matrix in memory will depend on the whims of
>> malloc().
>>
>> George Bosilca wrote:
>>
>> Even with the original way to create the matrices, one can use
>> MPI_Type_create_struct to create an MPI datatype (
>> http://web.mit.edu/course/13/13.715/OldFiles/build/mpich2-1.0.6p1/www/www3/MPI_Type_create_struct.html
>> )
>> using MPI_BOTTOM as the origin for the displacements.
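>>
>> For concreteness, an untested sketch of that approach for a matrix with N
>> rows of M doubles each (dest and tag are placeholders, as in the other
>> examples):
>>
>> int *blocklens=(int*)malloc(N*sizeof(int));
>> MPI_Aint *displs=(MPI_Aint*)malloc(N*sizeof(MPI_Aint));
>> MPI_Datatype *types=(MPI_Datatype*)malloc(N*sizeof(MPI_Datatype));
>> MPI_Datatype mattype;
>>
>> for(int i=0;i<N;i++)
>> {
>>   blocklens[i]=M;                    //each row holds M doubles
>>   MPI_Get_address(A[i],&displs[i]);  //absolute address of row i
>>   types[i]=MPI_DOUBLE;
>> }
>> MPI_Type_create_struct(N,blocklens,displs,types,&mattype);
>> MPI_Type_commit(&mattype);
>>
>> //the displacements are absolute, so the buffer argument is MPI_BOTTOM
>> MPI_Send(MPI_BOTTOM,1,mattype,dest,tag,MPI_COMM_WORLD);
>> MPI_Type_free(&mattype);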
>>
>> On Oct 29, 2009, at 15:31, Justin Luitjens wrote:
>>
>> Why not do something like this:
>>
>> double **A=new double*[N];
>> double *A_data=new double[N*N];
>>
>> for(int i=0;i<N;i++)
>>   A[i]=&A_data[i*N];
>>
>> This way you have contiguous data (in A_data) but can access it as a 2D
>> array using A[i][j].
>>
>> (I haven't compiled this, but I know we represent our matrices this way.)
>>
>> On Thu, Oct 29, 2009 at 12:30 PM, Natarajan CS <csnata...@gmail.com> wrote:
>> Hi,
>> thanks for the quick response. Yes, that is what I meant. I thought there
>> was no other way around what I am doing, but it is always good to ask an
>> expert rather than assume!
>>
>> Cheers,
>>
>> C.S.N
>>
>>
>> On Thu, Oct 29, 2009 at 11:25 AM, Eugene Loh <eugene....@sun.com> wrote:
>> Natarajan CS wrote:
>>
>> Hello all,
>>    First, my apologies for a duplicate post on the LAM/MPI list. I have the
>> following simple MPI code. I was wondering if there was a workaround for
>> sending a dynamically allocated 2-D matrix? Currently I can send the matrix
>> row by row; however, since the rows are not contiguous, I cannot send the
>> entire matrix at once. I realize one option is to change the malloc to act
>> as one contiguous block, but can I keep the matrix definition as below and
>> still send the entire matrix in one go?
>>
>> You mean with one standard MPI call?  I don't think so.
>>
>> In MPI, there is a notion of derived datatypes, but I'm not convinced
>> this is what you want. A derived datatype is basically a static template
>> of data and holes in memory. E.g., 3 bytes, then skip 7 bytes, then
>> another 2 bytes, then skip 500 bytes, then 1 last byte. Something like
>> that. Your 2d matrices differ in two respects. One is that the pattern in
>> memory is different for each matrix you allocate. The other is that your
>> matrix definition includes pointer information that won't be the same in
>> every process's address space. I guess you could overcome the first
>> problem by changing alloc_matrix() to some fixed pattern in memory for
>> some r and c, but you'd still have pointer information in there that you
>> couldn't blindly copy from one process address space to another.
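>>
>> To make the "static template" idea concrete: if alloc_matrix() guaranteed,
>> say, r blocks of c doubles at a fixed stride, one reusable datatype could
>> describe that pattern. An untested sketch (r, c, and stride hypothetical):
>>
>> MPI_Datatype rowpat;
>> //r blocks of c doubles, each block starting stride doubles after the
>> //previous one; the data-and-holes layout is frozen into the type
>> MPI_Type_vector(r,c,stride,MPI_DOUBLE,&rowpat);
>> MPI_Type_commit(&rowpat);
>> //usable for any buffer with exactly this layout, e.g.
>> //MPI_Send(first_element,1,rowpat,dest,tag,MPI_COMM_WORLD);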
>>