GitHub user nongli commented on the pull request:

    https://github.com/apache/spark/pull/10628#issuecomment-204503727
  
    This is not an architectural limitation; we simply have not implemented support for it yet. Only a few methods would need to be implemented to support big endian. It would be great if someone in the community working on big-endian hardware could do this.
    
    We don't rely on the byte ordering of the platform. The function you are referring to, putIntLittleEndian, converts its input, which is little endian, to whatever the machine's endianness is; it doesn't assume a particular host endianness. It is used where the data is encoded on disk in a canonical binary format. On a big-endian host this would have to byte swap, but I think that's inevitable, since the on-disk data had to pick an endianness (this is the code that's not implemented right now).
    
    If you find places where Spark requires a particular endianness, I'd consider that a bug.

