/makes me believe that the write method always writes the
data directly to disk. So I was wondering whether it is possible to collect
data in an internal buffer of the dataset that is written to disk only
after several seconds, in a separate thread (so that new data can still be
collected while the batch of previously collected data is being written to
disk). Or would another solution fit better?/

I'm in the same boat and I've found a few things:

1) Set the allocation time: H5Pset_alloc_time(cparms, H5D_ALLOC_TIME_INCR);
(cparms is the identifier of the dataset creation property list)

2) Set the chunk cache size (VERY IMPORTANT), as the default cache is only 1 MB.
My off-the-cuff recommendation is to start with a cache size of at least
50% of your usable RAM (I use about 80%).

Example:
accessparms = H5Pcreate(H5P_DATASET_ACCESS); // access property list for the dataset
status = H5Pset_chunk_cache(accessparms, 58757, 6000000000, 1);

The second parameter of H5Pset_chunk_cache is the number of cache slots.
Per the API, I recommend determining the number of chunks that can fit in
the cache (in this case 6 GB), multiplying by 100, and using the nearest
prime number as the slot count. Obscure, I know, but check out the API for
more info.

Hopefully this will get you underway. Using this method, you will pretty
much do all your writing to RAM until it starts to fill up.

Thanks,
C


--
View this message in context: 
http://hdf-forum.184993.n3.nabble.com/Tuning-HDF5-for-big-transfer-rates-tp739280p3636085.html
Sent from the hdf-forum mailing list archive at Nabble.com.

_______________________________________________
Hdf-forum is for HDF software users discussion.
[email protected]
http://mail.hdfgroup.org/mailman/listinfo/hdf-forum_hdfgroup.org
