That might be an interesting test to run.

If you're comparing modeling data as a bunch of 4-byte integers vs a bunch of 4-byte hexBinary elements, I imagine there wouldn't be much of a difference. Integers are implemented by reading a 4-byte array and then converting that array to an int using some bit operations. HexBinary is implemented the same way, but without the bit operations. So there's maybe a little overhead with integers, but bit operations are pretty fast, so it's probably fairly small. I also imagine any difference disappears once you factor in the cost of converting those values to text for an infoset, which is relatively slow.
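To make that concrete, here's a rough sketch (in Python, not Daffodil's actual implementation) of the two paths from the same 4 raw bytes. The input bytes and variable names are made up for illustration:

```python
# Hypothetical 4 bytes of input data
raw = bytes([0x00, 0x01, 0xE2, 0x40])

# unsignedInt path: read the 4 bytes, then combine them into an
# integer with bit operations (shift and OR)
value = (raw[0] << 24) | (raw[1] << 16) | (raw[2] << 8) | raw[3]

# hexBinary path: the same read, but the bytes are just kept as-is
blob = raw

# The bigger cost comes later, converting either one to text
# for the infoset
infoset_int = str(value)         # decimal text
infoset_hex = blob.hex().upper() # hex text
```

The only per-element difference between the two parses is the shift/OR step, which is why I'd expect the gap to be small.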

However, if you're talking about reading a bunch of 4-byte integers vs a single really big hexBinary element, the hexBinary is going to win hands down--a single big read/write should be much faster and the infoset is going to be much less complex and faster to create.
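As a schema-level sketch of what I mean (the element names, occurrence count, and lengths here are made up, and the DFDL properties shown are not a complete working schema):

```xml
<!-- Many small elements: one read and one infoset event per value -->
<xs:element name="value" type="xs:unsignedInt" maxOccurs="256"
            dfdl:lengthKind="implicit"/>

<!-- One big blob: a single read and a single infoset event -->
<xs:element name="blob" type="xs:hexBinary"
            dfdl:lengthKind="explicit" dfdl:length="1024"
            dfdl:lengthUnits="bytes"/>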

That said, I probably wouldn't pick hexBinary vs integer for performance reasons. You should pick the right type for the data. If a piece of data is an unsigned integer but you parse it as hexBinary, it makes inspection/transformation/validation much more difficult, if not impossible, which is one of the main reasons to convert data to an infoset in the first place.

hexBinary should really only be used if you don't know what the underlying data is, such that the only option is to treat it as an opaque blob of bytes.

On 2024-02-27 04:15 PM, Larry Barber wrote:
Is there a significant processing time difference in the parse/unparse of elements of type hexBinary vs unsignedInt?

