Hello Thorben,

So, I am not sure I completely understood the part of this thread
dealing with /dev/shm. However, did you in fact try using HDF5 1.8.4,
reading the file with the CORE vfd (H5Pset_fapl_core), setting your
allocation increment to, say, 32*(1<<30) (that's 32 GiB, just a little
bigger than your file), and then opening the file with that fapl? I'd
expect that to help a lot.

Mark

On Wed, 2010-03-03 at 06:13, Thorben Kröger wrote:
> On Wednesday 03 March 2010 14:25:17 Thorben Kröger wrote:
> > > > Though the 30 GB file I was talking about was written using HDF5 1.8.4;
> > > > if I understand correctly, it will not make use of these new features
> > > > because it maintains backward compatibility with 1.6. Correct?
> > > 
> > > Yes, you should enable the new link storage by calling
> > > H5Pset_libver_bounds() explicitly.  IMO, this would be the best path to
> > > improving performance for your scenario.
> > 
> > So can I convert my existing file to the 1.8-only file version without
> > having to rerun the program that generated it? That would take a week I
> > think :-(
> 
> So, I've now found the "h5repack" command-line tool, which accepts a
> "--latest" option, and am running it on my file like this:
> 
> h5repack --latest oldfile.h5 newfile.h5
> 
> Will report back about possible performance improvements...
> 
> Cheers,
> Thorben
> 
> _______________________________________________
> Hdf-forum is for HDF software users discussion.
> [email protected]
> http://mail.hdfgroup.org/mailman/listinfo/hdf-forum_hdfgroup.org
-- 
Mark C. Miller, Lawrence Livermore National Laboratory
================!!LLNL BUSINESS ONLY!!================
[email protected]      urgent: [email protected]
T:8-6 (925)-423-5901     M/W/Th:7-12,2-7 (530)-753-851

