>From [email protected] Fri Jan 13 05:26:44 2012
>From: Balint Takacs <[email protected]>
>To: [email protected]
>
>Hi all,
>
>Does the set_cache function have any effect on contiguously stored
>datasets, or does it work only for chunked ones?
>
>If the latter, can the same caching functionality be achieved with
>set_sieve_buffer_size when using the default sec2 driver?
>
>I am planning to use a large cache, like 0.5GB.
>
>OS is Linux and the HDF version is 1.6.5. I know it is old, but I have no
>option to upgrade.
>
>Thank you for your answers!
>
>Balint
>_______________________________________________
>Hdf-forum is for HDF software users discussion.
>[email protected]
>http://mail.hdfgroup.org/mailman/listinfo/hdf-forum_hdfgroup.org

Hi Balint,

   Your question is a bit outside my area of expertise, but since
no one else has responded, I'll take a crack at it.

   In 1.6.5, H5Pset_cache() allows you to configure the chunk 
cache only (the metadata cache was redesigned and re-implemented in 
1.6.4, and API calls to configure it were not added until 1.8).
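For what it's worth, a minimal sketch of configuring the chunk cache in the 1.6.x API might look like the following. The file name and the particular numbers (521 slots, a 0.5 GB cache as mentioned in the original question, a 0.75 preemption weight) are illustrative assumptions, not recommendations:

```c
#include <hdf5.h>

int main(void)
{
    /* The chunk cache is configured on the file access property
     * list before the file is opened. */
    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_cache(fapl,
                 0,                  /* mdc_nelmts: per the note above,
                                        no longer effective after the
                                        1.6.4 metadata cache redesign */
                 521,                /* rdcc_nelmts: cache slots        */
                 512 * 1024 * 1024,  /* rdcc_nbytes: ~0.5 GB            */
                 0.75);              /* rdcc_w0: preemption policy      */
    hid_t file = H5Fopen("data.h5", H5F_ACC_RDONLY, fapl);
    /* ... H5Dopen / H5Dread on chunked datasets ... */
    H5Fclose(file);
    H5Pclose(fapl);
    return 0;
}
```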

   While I haven't had occasion to work on the chunk cache code, it
is my understanding that the configuration of the chunk cache only
affects I/O for chunked datasets.

   As to chunk cache size: while I am not sure exactly what
happens in 1.6.5, at least some versions of HDF5 created a separate
chunk cache for each open dataset.  Thus, if you use a large chunk
cache, you will want to watch your memory footprint when you open
multiple chunked datasets simultaneously.

   I'm afraid I don't know enough about H5Pset_sieve_buffer_size()
to comment without digging into the code.
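For reference only, the call itself is also made on the file access property list; the sieve buffer is used to combine small raw-data accesses for contiguous datasets. The 0.5 GB value below simply mirrors the size in the original question and is an assumption, not a recommendation:

```c
#include <hdf5.h>

int main(void)
{
    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
    /* Enlarge the data sieve buffer (default is 64 KB). */
    H5Pset_sieve_buffer_size(fapl, 512 * 1024 * 1024);
    hid_t file = H5Fopen("data.h5", H5F_ACC_RDONLY, fapl);
    /* ... I/O on contiguous datasets ... */
    H5Fclose(file);
    H5Pclose(fapl);
    return 0;
}
```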

   I hope this helps.

                                         Best regards,

                                         John Mainzer
