Hello,

I am currently modifying a model so that it uses Parallel HDF5, with all 
processes writing to a single shared file. I am having trouble correctly 
organizing the data written from the subdomains into that file, and I am 
hoping that some of you may be able to assist me. 

First, here is some information about the model architecture. The number of 
compute processes is always one less than the total number initialized by MPI. 
The master process (rank 0) is in charge of setting up the subdomains and the 
time scheduling. Because of this, I would like to write the subdomains using 
only the compute processes. For this example, I am using five (5) processes, 
four (4) of which are compute processes. 
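
In code, the split amounts to just this (the variable names are mine):

   #include <mpi.h>
   #include <stdio.h>

   int main(int argc, char **argv)
   {
       int rank, nprocs;
       MPI_Init(&argc, &argv);
       MPI_Comm_rank(MPI_COMM_WORLD, &rank);
       MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

       /* Rank 0 only sets up subdomains and schedules; it holds no data. */
       if (rank == 0)
           printf("master: scheduling, no subdomain\n");
       else
           printf("compute rank %d of %d: owns one subdomain\n",
                  rank, nprocs - 1);

       MPI_Finalize();
       return 0;
   }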

I have a 3D array that is originally (10,10,2). The four compute processes 
break it down into (5,5,2) subdomains. I have been battling with whether to 
use chunking. I have tried chunks, but I cannot get them to work properly 
because the master process does not contain a subdomain. It is my 
understanding that H5Pset_chunk MUST be given the same values on EVERY 
process (since dataset creation is collective). If that is true, then chunking 
will not work, especially when the subdomains are of unequal size. Thus, I 
have been using a contiguous layout with hyperslab selections, but I cannot 
seem to organize the data properly; a stripped-down sketch of my current 
attempt follows the dump below. I am trying to match the organization of a 
non-parallel file. Using h5dump, the array from the "correct" file prints: 

   DATASET "LEAF_CLASS" {
      DATATYPE  H5T_IEEE_F32LE
      DATASPACE  SIMPLE { ( 10, 10, 2 ) / ( 10, 10, 2 ) }
      DATA {
      (0,0,0): 0, 0,
      (0,1,0): 0, 0,
      (0,2,0): 0, 0,
      (0,3,0): 0, 0,
      (0,4,0): 0, 0,
      (0,5,0): 0, 0,
      (0,6,0): 0, 0,
      (0,7,0): 0, 0,
      (0,8,0): 0, 0,
      (0,9,0): 0, 0,
      (1,0,0): 0, 0,
      (1,1,0): 0, 0,
      (1,2,0): 0, 0,
      (1,3,0): 0, 0,
      (1,4,0): 0, 0,
      (1,5,0): 0, 0,
      (1,6,0): 0, 0,
      (1,7,0): 0, 0,
      (1,8,0): 0, 0,
      (1,9,0): 0, 0,
      (2,0,0): 0, 0,
      (2,1,0): 0, 0,
      (2,2,0): 0, 0,
      (2,3,0): 0, 0,
      (2,4,0): 0, 0,
      (2,5,0): 0, 0,
      (2,6,0): 0, 0,
      (2,7,0): 0, 0,
      (2,8,0): 0, 0,
      (2,9,0): 0, 0,
      (3,0,0): 0, 0,
      (3,1,0): 0, 0,
      (3,2,0): 0, 0,
      (3,3,0): 0, 0,
      (3,4,0): 0, 0,
      (3,5,0): 0, 0,
      (3,6,0): 0, 0,
      (3,7,0): 0, 0,
      (3,8,0): 0, 0,
      (3,9,0): 0, 0,
      (4,0,0): 0, 0,
      (4,1,0): 0, 0,
      (4,2,0): 0, 0,
      (4,3,0): 0, 0,
      (4,4,0): 0, 0,
      (4,5,0): 0, 0,
      (4,6,0): 0, 0,
      (4,7,0): 0, 0,
      (4,8,0): 0, 0,
      (4,9,0): 0, 0,
      (5,0,0): 3, 3,
      (5,1,0): 3, 3,
      (5,2,0): 3, 3,
      (5,3,0): 3, 3,
      (5,4,0): 3, 3,
      (5,5,0): 3, 3,
      (5,6,0): 3, 3,
      (5,7,0): 3, 3,
      (5,8,0): 3, 3,
      (5,9,0): 3, 3,
      (6,0,0): 3, 3,
      (6,1,0): 3, 3,
      (6,2,0): 3, 3,
      (6,3,0): 3, 3,
      (6,4,0): 3, 3,
      (6,5,0): 3, 3,
      (6,6,0): 3, 3,
      (6,7,0): 3, 3,
      (6,8,0): 3, 3,
      (6,9,0): 3, 3,
      (7,0,0): 3, 3,
      (7,1,0): 3, 3,
      (7,2,0): 3, 3,
      (7,3,0): 3, 3,
      (7,4,0): 3, 3,
      (7,5,0): 3, 3,
      (7,6,0): 3, 3,
      (7,7,0): 3, 3,
      (7,8,0): 3, 3,
      (7,9,0): 3, 3,
      (8,0,0): 3, 3,
      (8,1,0): 3, 3,
      (8,2,0): 3, 3,
      (8,3,0): 3, 3,
      (8,4,0): 3, 3,
      (8,5,0): 3, 3,
      (8,6,0): 3, 3,
      (8,7,0): 3, 3,
      (8,8,0): 3, 3,
      (8,9,0): 3, 3,
      (9,0,0): 3, 3,
      (9,1,0): 3, 3,
      (9,2,0): 3, 3,
      (9,3,0): 3, 3,
      (9,4,0): 3, 3,
      (9,5,0): 3, 3,
      (9,6,0): 3, 3,
      (9,7,0): 3, 3,
      (9,8,0): 3, 3,
      (9,9,0): 3, 3
      }
   }

In this dump, each line shows the two third-dimension values for one 
(row, column) pair; the 0's fill the first five rows of the first dimension 
and the 3's fill the last five. Any help would be MUCH appreciated. 
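
In case it helps, here is a stripped-down sketch of what I am attempting. The 
file name, the data values, and the 2x2 layout of the compute ranks are 
placeholders of mine, and I am assuming the master rank should make an empty 
selection (H5Sselect_none) so that the collective calls still match up across 
all processes:

   #include <mpi.h>
   #include <hdf5.h>

   int main(int argc, char **argv)
   {
       int rank;
       MPI_Init(&argc, &argv);
       MPI_Comm_rank(MPI_COMM_WORLD, &rank);

       /* All ranks, master included, open the file collectively. */
       hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
       H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
       hid_t file = H5Fcreate("leaf.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

       /* Full (10,10,2) dataset; creation is collective as well. */
       hsize_t dims[3] = {10, 10, 2};
       hid_t filespace = H5Screate_simple(3, dims, NULL);
       hid_t dset = H5Dcreate(file, "LEAF_CLASS", H5T_IEEE_F32LE, filespace,
                              H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

       /* Each compute rank owns one (5,5,2) block, laid out 2x2 in (i,j). */
       hsize_t count[3] = {5, 5, 2};
       hid_t memspace = H5Screate_simple(3, count, NULL);

       float sub[5][5][2];
       for (int i = 0; i < 5; i++)
           for (int j = 0; j < 5; j++)
               for (int k = 0; k < 2; k++)
                   sub[i][j][k] = (float)rank;   /* placeholder values */

       if (rank == 0) {
           /* Master has no subdomain: select nothing, but still take
              part in the collective H5Dwrite below.                   */
           H5Sselect_none(filespace);
           H5Sselect_none(memspace);
       } else {
           int c = rank - 1;                     /* compute index 0..3 */
           hsize_t start[3] = { (hsize_t)(c / 2) * 5,
                                (hsize_t)(c % 2) * 5, 0 };
           H5Sselect_hyperslab(filespace, H5S_SELECT_SET, start, NULL,
                               count, NULL);
       }

       hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
       H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);
       H5Dwrite(dset, H5T_NATIVE_FLOAT, memspace, filespace, dxpl, sub);

       H5Pclose(dxpl);
       H5Dclose(dset);
       H5Sclose(memspace);
       H5Sclose(filespace);
       H5Pclose(fapl);
       H5Fclose(file);
       MPI_Finalize();
       return 0;
   }

If the empty selection on the master is the wrong way to handle a rank with 
no subdomain, that may well be where I am going wrong. 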

Thank you!
Rob


____________________________
Robert Seigel, Ph.D.
Colorado State University
Department of Atmospheric Science
1371 Campus Delivery
Fort Collins, CO 80523
(970) 491-8331
[email protected]


