Hi Jens,

On Oct 10, 2010, at 5:12 PM, Jens Thoms Toerring wrote:

> Hi,
> 
>   I am using the C++ API and trying to figure out how to
> find out what an object that could be either a group or a
> symbolic link really is. Unfortunately it seems that all
> the functions that do that (like H5::CommonFG::getObjinfo())
> are deprecated in 1.8 (and I would prefer not to use any
> deprecated features). At least I haven't yet been able to
> find any function/method that tells me whether an object
> I have a name for is a link. Perhaps I overlooked it,
> so I would be grateful for some hints.
> 
> For the moment I have thus resorted to just calling getLinkval()
> and catching the error one gets if the name passed to it isn't
> a link. That works, but I noticed something strange: since I
> didn't have a good idea what the second 'size' argument is
> supposed to be good for, I simply left it out, relying on the
> default value of 0 (which I guess is supposed to indicate:
> give me the whole name of what the link is pointing to). But
> then I observed that the amount of memory consumed by the
> program jumps up with each call of the function, with
> increments on the order of 800 kB to about 7.5 MB, and that
> memory never seems to get deallocated. This doesn't happen
> when, instead of leaving the argument out (or passing 0), I
> use a fixed value - in that case memory consumption doesn't
> change at all. That, of course, has the disadvantage that
> things will go badly wrong if I don't have a good guess at
> the upper bound of the length of the name linked to...

        You probably want to use H5Lexists(), along with H5Oget_info(), to check 
this sort of thing.
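
        As an illustration (this sketch is mine, not part of the original 
reply), those C API calls can be used directly from a C++ program under 
HDF5 1.8 roughly as below; H5Lget_info() and H5Oget_info_by_name() are my 
additions here, and the helper function name is made up:

    #include <hdf5.h>
    #include <iostream>

    /* Hypothetical helper: report whether 'name' below 'loc' is a soft
       link and, if it resolves, what kind of object it points to. */
    void inspect(hid_t loc, const char *name)
    {
        /* H5Lexists() only checks that the link itself is present;
           it does not follow the link. */
        if (H5Lexists(loc, name, H5P_DEFAULT) <= 0) {
            std::cout << name << ": no such link" << std::endl;
            return;
        }

        /* H5Lget_info() reveals the link type without dereferencing it. */
        H5L_info_t linfo;
        if (H5Lget_info(loc, name, &linfo, H5P_DEFAULT) >= 0
            && linfo.type == H5L_TYPE_SOFT)
            std::cout << name << ": is a soft link" << std::endl;

        /* H5Oget_info_by_name() follows the link and reports what the
           target object actually is. */
        H5O_info_t oinfo;
        if (H5Oget_info_by_name(loc, name, &oinfo, H5P_DEFAULT) >= 0) {
            if (oinfo.type == H5O_TYPE_GROUP)
                std::cout << name << ": resolves to a group" << std::endl;
            else if (oinfo.type == H5O_TYPE_DATASET)
                std::cout << name << ": resolves to a dataset" << std::endl;
        }
    }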

> Finally, there's another thing perhaps someone can help me
> with: I tried to create some 120,000 1D data sets, each about
> 200 bytes large and each in its own group. This resulted
> in a huge overhead in the file: instead of the expected file
> size of around 24 MB (plus a bit for overhead, of course) the
> files were about 10 times larger than expected. Using a small
> number (30) of 2D data sets (with 4000 rows each) took care of
> this, but I am curious why this makes such a big difference.

        Did you create them as chunked datasets?  And, if so, what chunk 
dimensions did you use?
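
        In case it helps, here is a minimal sketch (again mine, with 
placeholder file/group names and sizes) of creating one such small 1D 
dataset with an explicit chunk size through the C++ API; leaving out the 
setChunk() call gives a contiguous layout instead:

    #include "H5Cpp.h"

    int main()
    {
        H5::H5File file("example.h5", H5F_ACC_TRUNC);

        hsize_t dims[1]  = { 25 };   /* ~200 bytes of doubles per dataset */
        hsize_t chunk[1] = { 25 };   /* chunk no larger than the data itself */

        H5::DSetCreatPropList plist;
        plist.setChunk(1, chunk);    /* omit this for a contiguous layout */

        H5::DataSpace space(1, dims);
        H5::Group grp = file.createGroup("/g0");
        grp.createDataSet("d", H5::PredType::NATIVE_DOUBLE, space, plist);

        return 0;
    }

An oversized chunk is allocated in full for each dataset as soon as any of 
it is written, which can inflate the file considerably when the actual data 
are much smaller than the chunk.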

        Quincey

