I'm implementing an HDF5 image reader for the ITK library. ITK has a set
of classes for image I/O that hide the implementation details of reading
and writing images. One of the itk::ImageIO features is streamed reading
and writing.

Streaming is meant to allow reading subsets of large images to reduce the
memory footprint. With a properly implemented ImageIO class, you can set
up a processing pipeline that reads parts of the image, processes them,
and writes them out, requiring only enough memory for the current part of
the image.

This is a big win when processing, for example, large time series.
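
Concretely, the kind of pipeline I mean looks roughly like this (the
median filter, the MetaImage file names, and the count of 10 stream
divisions are placeholders I made up for illustration):

// A streamed ITK pipeline: the writer drives the pipeline in pieces, so
// only one region at a time needs to be resident in memory (assuming
// every filter in the chain, and the ImageIO, support streaming).
#include "itkImage.h"
#include "itkImageFileReader.h"
#include "itkImageFileWriter.h"
#include "itkMedianImageFilter.h"

int main()
{
  typedef itk::Image<float, 3>                          ImageType;
  typedef itk::ImageFileReader<ImageType>               ReaderType;
  typedef itk::MedianImageFilter<ImageType, ImageType>  FilterType;
  typedef itk::ImageFileWriter<ImageType>               WriterType;

  ReaderType::Pointer reader = ReaderType::New();
  reader->SetFileName("large_input.mha");   // placeholder file name

  FilterType::Pointer median = FilterType::New();
  FilterType::InputSizeType radius;
  radius.Fill(1);
  median->SetRadius(radius);
  median->SetInput(reader->GetOutput());

  WriterType::Pointer writer = WriterType::New();
  writer->SetInput(median->GetOutput());
  writer->SetFileName("large_output.mha");  // placeholder file name

  // Ask the writer to pull the image through in 10 pieces; each piece
  // is read, filtered, and written before the next is requested.
  writer->SetNumberOfStreamDivisions(10);
  writer->Update();
  return 0;
}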

It looks as though, to read a subset of an image, you specify the desired
hyperslab, and likewise on writing. But according to the documentation,
the hyperslab-based partitioning of datasets is a 'scatter/gather' process
that occurs in memory. This leads me to believe that when you create or
open a dataset, memory for the entire dataset is allocated.
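
For reference, this is the kind of partial read I have in mind (the
dataset name "/volume", the file name, and the slab dimensions are made
up for the example); what I can't tell is whether the memory footprint
stays bounded by the slab, or whether opening the dataset pulls the whole
thing into memory:

// Read one 16 x 512 x 512 slab out of a 512 x 512 x 512 float dataset.
#include "H5Cpp.h"
#include <vector>

int main()
{
  H5::H5File file("big_volume.h5", H5F_ACC_RDONLY);
  H5::DataSet dataset = file.openDataSet("/volume");

  hsize_t offset[3] = { 0, 0, 0 };       // where the slab starts in the file
  hsize_t count[3]  = { 16, 512, 512 };  // extent of the slab

  // Select the slab in the file dataspace...
  H5::DataSpace filespace = dataset.getSpace();
  filespace.selectHyperslab(H5S_SELECT_SET, count, offset);

  // ...and describe an in-memory buffer just big enough to hold it.
  H5::DataSpace memspace(3, count);
  std::vector<float> slab(16 * 512 * 512);

  dataset.read(&slab[0], H5::PredType::NATIVE_FLOAT, memspace, filespace);
  return 0;
}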

So my question is this: What is the 'HDF5 Way' to implement streaming of
smaller chunks of a dataset?

--
Kent Williams [email protected]



