Hi all, I'm working on a project that reads data from multiple HDF5 files for analysis. Each file contains 5 floating-point datasets (each 2000x2200 in size), and there are between 100 and 120 files to read. At the moment my code reads all the data from all the files into memory at once, which is nice and simple, but because of memory constraints I end up using a lot of virtual memory, which is rather slow.
I tried reading a hyperslab of each dataset (corresponding to 2000 elements) from each file, but that turned out to be even slower than reading all the data at once. So, do you have any suggestions for the best way to read this data? Aside from getting more memory for the computer!

All the best,
Simon

_______________________________________________
Hdf-forum is for HDF software users discussion.
[email protected]
http://mail.hdfgroup.org/mailman/listinfo/hdf-forum_hdfgroup.org
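For what it's worth, here is a minimal sketch of the block-at-a-time pattern under discussion: read one slab of a dataset, reduce it, and let it go before reading the next, so peak memory is one block rather than the whole 100-plus-file collection. This assumes h5py as the binding (the original post doesn't say which API is in use), and the file name "data_0.h5", dataset name "dset", and the sum reduction are all made up for illustration; the demo file stands in for one of the real 2000x2200 datasets.

```python
import os
import tempfile

import h5py
import numpy as np

# Build a small demo file standing in for one of the real HDF5 files.
path = os.path.join(tempfile.mkdtemp(), "data_0.h5")
with h5py.File(path, "w") as f:
    f.create_dataset("dset", data=np.arange(20.0).reshape(4, 5))

# Read the dataset in row blocks: slicing an h5py Dataset performs a
# hyperslab selection, so each iteration only pulls `block` rows into RAM.
with h5py.File(path, "r") as f:
    dset = f["dset"]
    totals = np.zeros(dset.shape[1])
    block = 2                                  # rows per read; tune to fit memory
    for start in range(0, dset.shape[0], block):
        chunk = dset[start:start + block, :]   # hyperslab read
        totals += chunk.sum(axis=0)            # example reduction; chunk is then freed

print(totals)
```

Whether this is faster than a single bulk read depends on the block shape: reads aligned with whole rows (or with the dataset's chunk layout, if it is chunked) stay contiguous on disk, whereas many small, scattered hyperslabs can easily be slower than one large read, which may explain the slowdown described above.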
