Are there guidelines for optimizing the speed of dataset writes?

In my case I have several threads, each writing to its own file. Each thread 
writes to a few packet tables, each with an associated packet-table data scale 
(the time axis), with data arriving from a real-time streaming instrument. 
There are a handful of different instrument types involved, and I'm having 
trouble keeping up with the fastest of them: I can't write the data as fast as 
it comes in.
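For concreteness, each writer thread follows roughly this pattern (a 
simplified sketch; the record struct, names, batch size, and chunk size are 
placeholders, not my actual code):

#include <stdint.h>
#include <hdf5.h>
#include <hdf5_hl.h>

/* Placeholder record; my real records are compound structures with
   small fixed-size array fields (see below). */
typedef struct {
    uint64_t timestamp;
    double   value;
} sample_t;

/* One writer thread owns one file and its packet tables. */
static void write_stream(const char *path, size_t nbatches)
{
    hid_t file = H5Fcreate(path, H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);

    hid_t rec = H5Tcreate(H5T_COMPOUND, sizeof(sample_t));
    H5Tinsert(rec, "timestamp", HOFFSET(sample_t, timestamp), H5T_NATIVE_UINT64);
    H5Tinsert(rec, "value",     HOFFSET(sample_t, value),     H5T_NATIVE_DOUBLE);

    /* chunk_size is in records; -1 disables compression */
    hid_t table = H5PTcreate_fl(file, "stream", rec, 4096, -1);

    sample_t batch[512] = {0};
    for (size_t i = 0; i < nbatches; i++) {
        /* ... fill batch[] from the instrument ... */
        H5PTappend(table, 512, batch);   /* one call per batch, not per record */
    }

    H5PTclose(table);
    H5Tclose(rec);
    H5Fclose(file);
}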

The datatypes are primarily compound structures. A few of the fields are 
arrays, but they aren't terribly big (< 250 elements), and all of them are 1D 
and dense.
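For example, one of the record types looks something like this (field names 
and sizes invented for illustration):

#include <stdint.h>
#include <hdf5.h>

#define NSAMP 250   /* upper bound; the real arrays are smaller */

typedef struct {
    uint64_t timestamp;
    float    samples[NSAMP];   /* 1D, dense */
} record_t;

static hid_t make_record_type(void)
{
    hsize_t dims[1] = { NSAMP };
    hid_t arr = H5Tarray_create2(H5T_NATIVE_FLOAT, 1, dims);

    hid_t rec = H5Tcreate(H5T_COMPOUND, sizeof(record_t));
    H5Tinsert(rec, "timestamp", HOFFSET(record_t, timestamp), H5T_NATIVE_UINT64);
    H5Tinsert(rec, "samples",   HOFFSET(record_t, samples),   arr);

    H5Tclose(arr);   /* H5Tinsert copies the member type */
    return rec;
}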

Does the Packet Table library set chunking or other parameters on the 
underlying dataset beyond what the caller specifies?
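My assumption is that H5PTcreate_fl just applies its chunk_size argument via 
H5Pset_chunk and creates a 1D unlimited dataset; if it sets anything else I'd 
like to know. For comparison, the manual equivalent I've been considering, 
with an explicit chunk cache added so I can experiment with cache sizing, 
would be something like:

#include <hdf5.h>

static hid_t create_table_manually(hid_t file, const char *name, hid_t rec,
                                   hsize_t chunk_records)
{
    hsize_t dims[1]    = { 0 };
    hsize_t maxdims[1] = { H5S_UNLIMITED };
    hid_t space = H5Screate_simple(1, dims, maxdims);

    hid_t dcpl = H5Pcreate(H5P_DATASET_CREATE);
    H5Pset_chunk(dcpl, 1, &chunk_records);   /* the only knob PT exposes? */

    hid_t dapl = H5Pcreate(H5P_DATASET_ACCESS);
    H5Pset_chunk_cache(dapl, 12421, 16 * 1024 * 1024, 0.75);  /* 16 MiB cache */

    hid_t dset = H5Dcreate2(file, name, rec, space,
                            H5P_DEFAULT, dcpl, dapl);

    H5Pclose(dapl);
    H5Pclose(dcpl);
    H5Sclose(space);
    return dset;
}

(I don't know yet whether the chunk cache settings matter for pure appends; 
that's part of what I'm trying to figure out.)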


Scott
