Another option is to create a Python object - dict, list, or whatever works
- containing the metadata and then store a pickled version of it in a
PyTables array.  It's nice for this sort of thing because you have the full
flexibility of Python's data containers.

For example, if the Python object is called 'fit', then
numpy.frombuffer(pickle.dumps(fit), 'u1') will pickle it and convert the
result to a NumPy array of unsigned bytes.  It can be stored in a PyTables
array using a UInt8Atom.  To retrieve the Python object, just use
pickle.loads(hdf5_file.root.data_1.fit[:]).
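
For concreteness, here is a minimal sketch of that round trip.  The file
name, group name, and the contents of 'fit' are just placeholders, and I'm
using the PyTables 3.x names (open_file, create_array):

import pickle

import numpy as np
import tables

# Placeholder metadata; any picklable Python object works.
fit = {"seed": 12345, "p0": [1.0, 0.5], "fit_range": (2, 10)}

with tables.open_file("example.h5", mode="w") as h5:
    grp = h5.create_group("/", "data_1")
    raw = np.frombuffer(pickle.dumps(fit), dtype="u1")
    # A uint8 NumPy array is stored with a UInt8Atom automatically.
    h5.create_array(grp, "fit", raw)

with tables.open_file("example.h5", mode="r") as h5:
    restored = pickle.loads(h5.root.data_1.fit[:].tobytes())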

It gets a little more complicated if you want to be able to modify the
Python object, because the length of the pickle will change.  In that case,
use an EArray, which can be extended when the pickle grows, and store the
number of valid bytes as an attribute; the stored byte count covers the
case where the pickle shrinks and no longer fills the on-disk array.  To
load it, use pickle.loads(hdf5_file.root.data_1.fit[:num_bytes]), where
num_bytes is the previously stored attribute.  To modify it, overwrite the
array with the new pickle, extending the array first if necessary, then
update the num_bytes attribute.
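
A rough sketch of that bookkeeping (file and node names are placeholders
again, and the 'extra_option' change is just for illustration):

import pickle

import numpy as np
import tables

fit = {"seed": 12345, "p0": [1.0, 0.5]}    # placeholder metadata

with tables.open_file("example_earray.h5", mode="w") as h5:
    grp = h5.create_group("/", "data_1")
    raw = np.frombuffer(pickle.dumps(fit), dtype="u1")
    earr = h5.create_earray(grp, "fit", atom=tables.UInt8Atom(), shape=(0,))
    earr.append(raw)
    earr.attrs.num_bytes = len(raw)        # how many bytes are valid

# Later: load, modify, and write back a pickle of a different length.
with tables.open_file("example_earray.h5", mode="a") as h5:
    earr = h5.root.data_1.fit
    fit = pickle.loads(earr[:earr.attrs.num_bytes].tobytes())
    fit["extra_option"] = True             # some hypothetical change
    raw = np.frombuffer(pickle.dumps(fit), dtype="u1")
    if len(raw) > earr.nrows:              # pickle grew: extend the EArray
        earr.append(np.zeros(len(raw) - earr.nrows, dtype="u1"))
    earr[:len(raw)] = raw                  # overwrite the old pickle
    earr.attrs.num_bytes = len(raw)        # stale bytes past this are ignored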

A PyTables VLArray with an ObjectAtom uses a similar pickling technique
under the hood, so that may be easier.  It doesn't allow resizing the
stored object, though.
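
In code, that looks something like this (again with placeholder names):

import tables

fit = {"seed": 12345, "p0": [1.0, 0.5]}    # placeholder metadata

with tables.open_file("example_vlarray.h5", mode="w") as h5:
    grp = h5.create_group("/", "data_1")
    vl = h5.create_vlarray(grp, "fit", atom=tables.ObjectAtom())
    vl.append(fit)                         # pickled transparently on write

with tables.open_file("example_vlarray.h5", mode="r") as h5:
    fit = h5.root.data_1.fit[0]            # unpickled on read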

Hope that helps,
Josh



On Tue, Jun 25, 2013 at 1:33 AM, Andreas Hilboll <li...@hilboll.de> wrote:

> On 25.06.2013 10:26, Andre' Walker-Loud wrote:
> > Dear PyTables users,
> >
> > I am trying to figure out the best way to write some metadata into some
> files I have.
> >
> > The hdf5 file looks like
> >
> > /root/data_1/stat
> > /root/data_1/sys
> >
> > where "stat" and "sys" are Arrays containing statistical and systematic
> fluctuations of numerical fits to some data I have.  What I would like to
> do is add another object
> >
> > /root/data_1/fit
> >
> > where "fit" is just a metadata key that describes all the choices I made
> in performing the fit, such as seed for the random number generator, and
> many choices for fitting options, like initial guess values of parameters,
> fitting range, etc.
> >
> > I began to follow the example in the PyTables manual, in Section 1.2
> "The Object Tree", where first a class is defined
> >
> > class Particle(tables.IsDescription):
> >       identity = tables.StringCol(itemsize=22, dflt=" ", pos=0)
> >       ...
> >
> > and then this class is used to populate a table.
> >
> > In my case, I won't have a table, but really just want a single object
> containing my metadata.  I am wondering if there is a recommended way to do
> this?  The "Table" does not seem optimal, but I don't see what else I would
> use.
>
> For complex information I'd probably indeed use a table object. It
> doesn't matter if the table only has one row, but still you have all the
> information there nicely structured.
>
> -- Andreas.