Vishwanatha-HD commented on PR #48195:
URL: https://github.com/apache/arrow/pull/48195#issuecomment-3570730990

   > The patch tackles the same corner of the bit-stream utilities, but the
   > handling on big-endian ends up taking a pretty different route. In this
   > PR, the generic unpack paths stay active on BE, with the cached-word
   > machinery still steering most reads. That works fine on little-endian,
   > but those helpers lean on assumptions about word layout that are
   > trickier to uphold on BE even with the surrounding byte-swaps.
   >
   > In my version, the BE code path steps around those assumptions entirely:
   > VLQ parsing pulls straight from the underlying buffer, and the bulk bit
   > extraction uses the simpler, portable reader rather than the wide
   > 32/64-bit fast paths. It costs a few cycles, but it keeps the behavior
   > identical across hosts without depending on how the cached words line up.
   >
   > Nothing here looks wildly off, but you can see the philosophical split:
   > this PR keeps the optimized hot paths alive everywhere, while the
   > alternative narrows the surface area on BE so the byte order never has a
   > chance to get involved.
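
   For anyone skimming the thread, here is a rough, illustrative sketch of the
   byte-at-a-time style of decoding described above. The names are hypothetical
   and not the actual BitReader / bit-stream-utils API touched by this PR; the
   point is only to show why reading bytes directly keeps host byte order out
   of the result.

   ```cpp
   #include <cstddef>
   #include <cstdint>
   #include <cstdio>
   #include <optional>

   // Decode an unsigned LEB128 / VLQ value straight from the byte buffer,
   // one byte at a time, so host endianness never enters the picture.
   std::optional<uint32_t> DecodeVlqFromBuffer(const uint8_t* buf, size_t len,
                                               size_t* pos) {
     uint32_t result = 0;
     int shift = 0;
     while (*pos < len && shift <= 28) {
       const uint8_t byte = buf[(*pos)++];
       result |= static_cast<uint32_t>(byte & 0x7F) << shift;
       if ((byte & 0x80) == 0) return result;
       shift += 7;
     }
     return std::nullopt;  // truncated or overlong encoding
   }

   // Unpack `bit_width`-bit values by walking the buffer bit by bit.
   // Slower than loading cached 32/64-bit words, but byte-order neutral.
   int UnpackBitsPortable(const uint8_t* buf, size_t len, int bit_width,
                          uint32_t* out, int num_values) {
     size_t bit_offset = 0;
     const size_t total_bits = len * 8;
     int produced = 0;
     for (; produced < num_values; ++produced) {
       if (bit_offset + bit_width > total_bits) break;
       uint32_t value = 0;
       for (int b = 0; b < bit_width; ++b, ++bit_offset) {
         // Bits are taken LSB-first within each byte (Parquet-style packing).
         const uint32_t bit = (buf[bit_offset / 8] >> (bit_offset % 8)) & 1u;
         value |= bit << b;
       }
       out[produced] = value;
     }
     return produced;
   }

   int main() {
     // Parquet's bit-packing example: values 0..7 at width 3 -> 0x88 0xC6 0xFA.
     const uint8_t packed[] = {0x88, 0xC6, 0xFA};
     uint32_t out[8];
     const int n = UnpackBitsPortable(packed, sizeof(packed), 3, out, 8);
     for (int i = 0; i < n; ++i) std::printf("%u ", static_cast<unsigned>(out[i]));
     std::printf("\n");

     // VLQ example: {0xAC, 0x02} encodes 300.
     const uint8_t vlq[] = {0xAC, 0x02};
     size_t pos = 0;
     if (auto v = DecodeVlqFromBuffer(vlq, sizeof(vlq), &pos)) {
       std::printf("vlq = %u\n", static_cast<unsigned>(*v));
     }
     return 0;
   }
   ```

   The trade-off is the one described above: the per-bit and per-byte loops give
   up the wide cached-word loads, but their results do not depend on how those
   words sit in memory, so LE and BE hosts agree by construction.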
   
   @k8ika0s, I appreciate all your review comments and the time you have spent
   reviewing my code changes. Thanks.

