I'm trying to use 1D DMDAs to manage distributed Monte Carlo computations, and 
I've run into the following problem.  If I know in advance how many 
floating-point values each realization needs, everything works fine.  But if I 
want to be able to set this size at run time, I can't get it to work.  What I have is:

typedef struct {
  PetscScalar id;   /* realization id, stored as a scalar */
  PetscScalar *x;   /* per-realization values; length chosen at run time */
} Realization;
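
For comparison, the fixed-size version that does work looks roughly like this 
(NVALS and RealizationFixed are just illustrative names for the known, 
compile-time size):

#define NVALS 10
typedef struct {
  PetscScalar id;
  PetscScalar x[NVALS];   /* inline array: each node is exactly NVALS+1 contiguous scalars */
} RealizationFixed;

with the DMDA created with dof = NVALS + 1, so the struct lines up with the 
scalars stored per node.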

Then, in the main part of the code (for the run-time-sized version):

  Realization *realization_array;
  DM           sim_data_da;
  Vec          sim_data_vec;

  PetscInt xsize = 10, batch_size, realization_dof, i;

  /* number of values per realization, settable at run time */
  PetscOptionsGetInt(NULL, "-xsize", &xsize, NULL);

  /* one slot for the id plus xsize values per realization */
  realization_dof = xsize + 1;

  /* batch_size (the number of realizations) is set elsewhere and not shown here */
  DMDACreate1d(PETSC_COMM_WORLD, DMDA_BOUNDARY_NONE, batch_size,
               realization_dof, 0, NULL, &sim_data_da);
  DMCreateGlobalVector(sim_data_da, &sim_data_vec);

  DMDAVecGetArray(sim_data_da, sim_data_vec, &realization_array);

Up to this point I have no problem, but when I try to access 
realization_array[i].x[j], I get memory errors.  Is this fundamentally 
unworkable, or is there a fix?
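
In case it helps, the loop I'm running when the errors appear is roughly this 
(xs and xm come from DMDAGetCorners, and the j < xsize bound is just how I 
intend each realization to be filled):

  PetscInt xs, xm, j;
  DMDAGetCorners(sim_data_da, &xs, NULL, NULL, &xm, NULL, NULL);
  for (i = xs; i < xs + xm; i++) {
    realization_array[i].id = i;
    for (j = 0; j < xsize; j++) {
      realization_array[i].x[j] = 0.0;   /* memory errors show up on this access */
    }
  }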

-gideon
