I hear you guys need funding regularly, but I thought DOE/Whamcloud/Cray
gave you a healthy chunk of change to fix a lot of long-standing issues
that would be very beneficial to get done.

Things I've been noting during my usage of HDF5:

   - No support for filters on vlen types, in particular compression
   - No atomic transactions for writing new data (i.e. if you crash
   mid-write, your file can be corrupted because of inconsistent metadata)
   - Use of a global mutex around nearly every routine, which can cause
   severe performance degradation if multiple threads are doing concurrent
   I/O... (I don't want to hear about studies claiming I/O itself is the
   bigger bottleneck: RAM filesystems, SSDs, and multiple storage locations
   invalidate those claims, and all are easy to come by in HPC)
   - Mediocre examples that don't really show you how to get things done or
   clarify all that much. Reading the source is often the only way to get
   an answer, I've found. It makes the learning curve look huge to the
   newcomers I've introduced to HDF.
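To make the first point concrete, here's a minimal sketch using h5py (the file name `vlen_demo.h5` is just for illustration). HDF5 accepts a gzip filter on a vlen dataset without complaint, but the filter pipeline only touches the chunk data — which for vlen types is just the heap IDs — while the variable-length payloads themselves sit uncompressed in the file's global heap:

```python
import numpy as np
import h5py

# Variable-length dtype: each element is a ragged int32 array.
vlen_int = h5py.vlen_dtype(np.int32)

with h5py.File("vlen_demo.h5", "w") as f:
    # gzip is accepted here, but it compresses only the heap IDs stored in
    # the dataset's chunks; the vlen payloads are written to the file's
    # global heap, which the filter pipeline never sees.
    dset = f.create_dataset("ragged", shape=(3,), dtype=vlen_int,
                            compression="gzip")
    dset[0] = np.arange(10, dtype=np.int32)
    dset[1] = np.arange(100, dtype=np.int32)
    dset[2] = np.arange(1000, dtype=np.int32)

with h5py.File("vlen_demo.h5", "r") as f:
    lengths = [len(f["ragged"][i]) for i in range(3)]
print(lengths)  # [10, 100, 1000]
```

(`h5py.vlen_dtype` requires h5py 2.9 or later; on older versions the equivalent is `h5py.special_dtype(vlen=np.int32)`.) The data round-trips fine, but comparing file sizes with and without `compression="gzip"` shows almost no difference, because the bulk of the bytes never pass through the filter.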


I love HDF as it really delivers on write once, use anywhere, and gives me
something to handle long-term storage of custom binary data formats. I
hope it continues to spread throughout the world as the de facto data
storage format for pretty much anything from embedded systems to HPC in
all its spaces. I also hope that in the future more things move off the
PowerPoint slides and out of R&D to become production ready. I hope you
guys get everything you need to improve the areas where HDF is still lacking.

-Jason


On Fri, Jan 17, 2014 at 8:46 AM, Elena Pourmal <[email protected]> wrote:

> Andrea,
>
> On Jan 16, 2014, at 3:37 PM, Andrea Bedini <[email protected]>
> wrote:
>
> Hi,
>
> In a post a few years ago [1], Quincey Koziol explained that VL data is
> stored in a "global heap" in the file, which is not compressed. He also
> mentioned that a new "fractal heap" code was being developed (which, I
> assume, would allow compression of VL data).
>
> Is there any news on this front? Is there a way to compress VL data?
>
> No news. We need funding to implement compression of VL data. If any
> organization is willing to sponsor the feature, please contact us at
> [email protected]
>
> Thank you!
>
> Elena
>
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> Elena Pourmal
> Director of Technical Services and Operations
> The HDF Group
> 1800 So. Oak St., Suite 203,
> Champaign, IL 61820
> www.hdfgroup.org
> (217)531-6112 (office)
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>
> Thanks,
> Andrea
>
> [1]
> http://hdf-forum.184993.n3.nabble.com/hdf-forum-Compression-in-variable-length-datasets-not-working-td194091.html
>
> --
> Andrea Bedini <[email protected]>
> _______________________________________________
> Hdf-forum is for HDF software users discussion.
> [email protected]
>
> http://mail.lists.hdfgroup.org/mailman/listinfo/hdf-forum_lists.hdfgroup.org
>
>
>