Hi Quincey,

On Tue, Oct 12, 2010 at 02:28:41PM -0500, Quincey Koziol wrote:
>       Hmm, sorry, I missed that you wanted to check for a soft link.  You
> need to use H5Lget_info() for that.

Entirely my fault, since I now realize that I didn't say explicitly
in my original post that it's about a symbolic link. And thanks a
lot, H5Lget_info() looks like exactly what I was looking for!

> >>> Finally, there's another thing perhaps someone can help me
> >>> with: I tried to create some 120,000 1D data sets, each about
> >>> 200 bytes large and each in its own group. This resulted
> >>> in a huge overhead in the file: instead of the expected file
> >>> size of around 24 MB (of course plus a bit for overhead) the
> >>> files were about 10 times larger than expected. Using a number
> >>> (30) of 2D data sets (with 4000 rows) took care of this but I
> >>> am curious why this makes such a big difference.
> >> 
> >>    Did you create them as chunked datasets?  And, what were the dimensions
> >> of the chunk sizes you used?
> > 
> > No, those were simple 1-dimensional data sets, written out in a
> > single call immediately after creation and then closed. Perhaps
> > having them all in their own group makes a difference? What I
> > noticed was that h5dump on the resulting file told me under
> > Storage information/Groups that for B-tree/List about 140 MB
> > were used...
> 
>       This is very weird, can you send a sample program that shows this 
> result?

As usual this happened within a larger program ;-( I will try to
cobble something together that does the same (I hope I still have
the version that exhibited the problem somewhere in my version
control system and can just strip it down enough). Please give me
a bit of time, it may take a day or even a bit more...

                Thank you very much and best regards, Jens
-- 
  \   Jens Thoms Toerring  ________      [email protected]
   \_______________________________      http://toerring.de

_______________________________________________
Hdf-forum is for HDF software users discussion.
[email protected]
http://mail.hdfgroup.org/mailman/listinfo/hdf-forum_hdfgroup.org