Oh wow, thanks a lot for your answer! It is very clear and complete, and it will be very useful for my app.

I'm programming using OOP techniques, and I think it would be nice to overload operator() (based on your macros) to access the elements of a dynamic array inside my class.
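Roughly what I have in mind is the sketch below (the Array3D name and its members are just placeholders of mine, nothing from HDF5): the class owns one contiguous buffer, operator() does the same row-major arithmetic as your ARY_3D_OFFSET_B0 macro, and the raw pointer can be handed straight to the read()/write() calls.

#include <cstddef>
#include <vector>

// Sketch only: a 3D array stored as one contiguous 1D buffer, indexed
// with operator() using the same row-major offsets as the macros.
class Array3D
{
public:
    Array3D(std::size_t nz, std::size_t ny, std::size_t nx)
        : n_z(nz), n_y(ny), n_x(nx), buf(nz * ny * nx, 0.0) {}

    // element access: a(z, y, x)
    double& operator()(std::size_t z, std::size_t y, std::size_t x)
    {
        return buf[(z * n_y * n_x) + (y * n_x) + x];
    }
    double operator()(std::size_t z, std::size_t y, std::size_t x) const
    {
        return buf[(z * n_y * n_x) + (y * n_x) + x];
    }

    // contiguous buffer to hand to the HDF5 read()/write() calls
    double*       data()       { return &buf[0]; }
    const double* data() const { return &buf[0]; }

private:
    std::size_t n_z, n_y, n_x;
    std::vector<double> buf;
};

With that, a read (using the n, m, l dimensions from my original post) would look something like:

    Array3D a(n, m, l);
    dataSet.read(a.data(), PredType::NATIVE_DOUBLE, dataSpace);

and afterwards a(i, j, k) accesses the individual elements.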
Best regards,
Daniel.

2011/4/14 J Glassy <[email protected]>:
> Daniel,
>
> There are lots of granular ways this can be done (using sophisticated
> rank, offset, and stride parameters), but user-defined data in memory
> passed to HDF5 (and HDF4) I/O functions generally needs to occupy
> contiguous, adjacent addresses. The common stack-memory style examples
> accomplish this by default, but they suffer from not being extensible
> to real-world problems, due to the severe size restrictions one can
> expect when using stack memory.
>
> So -- most of the intro examples in the HDF5 docs illustrate the use
> of HDF5 I/O functions with fixed-size arrays built off stack memory,
> not heap memory. The lack of a range of good HDF5 I/O examples
> highlighting the use of dynamic (heap) memory to implement
> multidimensional arrays is (IMHO) persistent and quite unfortunate,
> because almost all real-world applications will tend to rely on
> dynamic memory off the heap; consequently, a disproportionate fraction
> of the problems encountered by new users tend to arise from confusion
> about this exact issue.
>
> One simple way around this, which guarantees that array slots are
> contiguous and adjacent in memory, is to declare a given
> multidimensional array as a simple dynamic 1D array, pass it to/from
> the HDF5 I/O functions as a simple 1D pointer, but reference its
> elements in your own logic using appropriate 2D and 3D (or higher)
> macros.
>
> As an example, I've included several macros below (one for a 2D case,
> one for a 3D case) that show how this is typically done. In this case
> we assume normal base-0 offsets, where:
>   "x"   refers to a "column",
>   "y"   refers to a "row",
>   "z"   refers to a 3rd dimension,
>   "n_y" refers to the number of rows in the array,
>   "n_x" refers to the number of columns, and
>   "n_z" refers to the size of the 3rd dimension,
> all assuming row-major-order (RMO) indexing.
>
> #define ARY_2D_OFFSET_B0(y,x,n_x)        (((y)*(n_x))+(x))
> #define ARY_3D_OFFSET_B0(z,y,x,n_y,n_x)  (((z)*((n_y)*(n_x)))+((y)*(n_x))+(x))
>
> Example ANSI C snippet:
>
> double *myArray = NULL; /* model the multidimensional array as a ptr
>                            to a 1D array */
> long n_x = 1024;
> long n_y = 768;
> long n_z = 3;
> long nElem = (n_y * n_x * n_z);
> long thisRow, thisCol, thisDepth;
> long offset = 0;
> herr_t h5_status;
> hid_t datasetID = 0; /* assign to an appropriate dataset ID somewhere */
>
> datasetID = fetchDatasetID(....);
>
> /* prefer calloc() over malloc(), since calloc() returns a pointer to
>    zero-initialized memory */
> myArray = (double *)calloc(nElem, sizeof(double));
>
> /* populate it with some values */
> for (thisRow = 0; thisRow < n_y; thisRow++)
>   for (thisCol = 0; thisCol < n_x; thisCol++)
>     for (thisDepth = 0; thisDepth < n_z; thisDepth++)
>     {
>       offset = ARY_3D_OFFSET_B0(thisDepth, thisRow, thisCol, n_y, n_x);
>       myArray[offset] = someAssignmentFunction();
>     }
>
> /* later, write this 3D array to an HDF5 dataset, or read from it, etc. */
> h5_status = H5Dwrite(datasetID, H5T_NATIVE_DOUBLE, H5S_ALL, H5S_ALL,
>                      H5P_DEFAULT, myArray);
>
> /* later, release heap memory. It's nice not to have to partition out
>    the free() calls individually by rank, since we simply free the
>    single ptr to the head of the allocated contiguous memory */
> free(myArray);
>
> OK, this is just a very rough snippet, and I encourage you NOT to
> "cut-and-paste" these sorts of snippets directly into your code, but
> instead to use them to think about how you would like to do this,
> fitted to your own use context.
>
> These should, however, give you the idea: serialized 1D dynamic arrays
> may be generalized to represent arrays of higher dimensionality, using
> the same design pattern shown for the 2D and 3D macros. While
> cumbersome, here are the 4D and 5D equivalents of these macros, as
> further examples. Of course, you could also implement any of these
> macros as real C functions instead, which can help debugging and make
> them more transparent in how they show up in static-analysis tools
> like cxref, ctags, etc.
>
> #define ARY_4D_OFFSET_B0(a,b,c,d,n_b,n_c,n_d) \
>   (((a)*((n_b)*(n_c)*(n_d)))+((b)*((n_c)*(n_d)))+((c)*(n_d))+(d))
>
> #define ARY_5D_OFFSET_B0(a,b,c,d,e,n_b,n_c,n_d,n_e) \
>   (((a)*((n_b)*(n_c)*(n_d)*(n_e)))+((b)*((n_c)*(n_d)*(n_e)))+((c)*((n_d)*(n_e)))+((d)*(n_e))+(e))
>
> There may be other, simpler ways to do all of this, as others may
> submit... HTH.
>
> Joe
>
> On Thu, Apr 14, 2011 at 10:38 AM, Daniel Cervantes <[email protected]> wrote:
> > Hello everybody. I am new to HDF5 and I have a problem reading a
> > dataset into a dynamically allocated array.
> >
> > When I read my dataset using static arrays (as in all the examples
> > in the documentation), everything works:
> >
> > double data_out[n][m][l];
> > dataSet.read(data_out, PredType::NATIVE_DOUBLE, dataSpace);
> >
> > The array data_out then holds all the numbers in the dataset
> > correctly.
> >
> > However, if I define data_out using dynamic memory, i.e.
> >
> > double ***data_out;
> > ....
> >
> > data_out = new double**[n];
> > for (int i = 0; i < n; ++i)
> >   data_out[i] = new double*[m];
> >
> > for (int i = 0; i < n; ++i)
> >   for (int j = 0; j < m; ++j)
> >     data_out[i][j] = new double[l];
> >
> > dataSet.read(data_out[0][0], PredType::NATIVE_DOUBLE, dataSpace);
> >
> > only the first row (data_out[0][0][0..l-1]) has the correct numbers;
> > the other elements are in the wrong order or are 0.
> >
> > Does anyone know what the problem is?
> > Does anyone have a simple example using dynamically allocated arrays?
> >
> > Best regards,
> > Daniel.
>
> --
> Joseph Glassy
> Lead Software Engineer (contractor)
> NASA Measures (Freeze/Thaw), Rm CFC 424
> College of Forestry and Conservation
> Univ. Montana, Missoula, MT 59812
>
> Lupine Logic Inc.
> www.lupinelogic.com
> Scientific and Technical Programming
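P.S. Regarding your note that the macros could be implemented as real functions: for my own code I will probably use a small inline helper like the sketch below. It is just my own transcription of your 3D macro (the function name is mine), but a named function is easier to step through in a debugger and shows up by name in tools like ctags.

/* inline-function equivalent of ARY_3D_OFFSET_B0: row-major offset of
   element (z, y, x) in an n_z x n_y x n_x block stored in one
   contiguous 1D buffer */
inline long ary3dOffsetB0(long z, long y, long x, long n_y, long n_x)
{
    return (z * n_y * n_x) + (y * n_x) + x;
}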
_______________________________________________ Hdf-forum is for HDF software users discussion. [email protected] http://mail.hdfgroup.org/mailman/listinfo/hdf-forum_hdfgroup.org
