Are you using a packed struct? We are aware of an issue where packing a struct
alters the layout of the data, and we are working on a fix. We believe it
should work if the struct is *not* packed.
If that is not the cause, can you send us a sample application that we can use
to reproduce the problem? You can send it
From: Hdf-forum [mailto:hdf-forum-boun...@lists.hdfgroup.org] On Behalf Of
Sent: Monday, September 19, 2016 9:07 AM
Subject: [Hdf-forum] incorrect endianness when writing big-endian data on
I believe that we've encountered a bug in HDF5.
Our application receives data from a socket and writes it to a file using
packet tables. The incoming data is in network byte order (big-endian) and all
of the data types we specify for the packet tables are also the big-endian data
types. We do not do any byte swapping before writing the buffer data, to reduce
overhead.
When we were using HDF 1.8.14, this produced correct files when running the
application on a little-endian system. We've updated to 1.8.16 and now the
files are incorrect. Specifying big-endian data types causes the data to get
byte-swapped (even though it's already big-endian) and specifying little-endian
data types does not do any byte-swapping. I have also reproduced this problem
using 1.8.17 and 1.10.0 (patch 1), on both Windows and Linux.
I can't find any information in the release notes about this change. We can
revert to using 1.8.14 for now, but we've moved to Visual Studio 2015 for
building on Windows, which means we have to patch the HDF source before we can
build it.
Is there any way to indicate that the buffer being passed to AppendPackets
(we're using the C++ API; the corresponding C function is H5PTappend) is
already big-endian? We cannot allow the overhead of two byte-swap operations
when the incoming data is already in the correct byte order.
5425 Warner Rd. | Suite 13 | Valley View, OH 44125 |
P. +1.216.447.8950 x2011 | F: +1.216.447.8951 |