Github user a-roberts commented on the pull request:
https://github.com/apache/spark/pull/10628#issuecomment-204397652
@nongli @davies
Hi, by implementing the feature this way we've prevented every big-endian
platform from using it with Spark - z Systems, for example, or big-endian
Power machines. That matters if we want Spark to be adopted by the large
industries that IBM et al. support.
Is there any intention to make this independent of the platform's byte
ordering? putLittleEndian is not going to cut it when we want to use Spark to
handle large volumes of customer data that traditionally sits on big-endian
systems.
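
For illustration, here's a minimal sketch of what I mean (not the code in
this PR; the class and method names are made up): a write path can produce
the same little-endian layout on any host either by fixing the order
explicitly on a ByteBuffer, or by swapping the value with Long.reverseBytes
before a native-order write.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Hypothetical helper, not the actual Spark API: keeps the column layout
// little-endian regardless of the platform's native byte order.
public final class EndianSafeWriter {

  /** Portable write: the byte order is fixed explicitly on the buffer. */
  public static void putLongLittleEndian(byte[] buf, int offset, long value) {
    ByteBuffer.wrap(buf).order(ByteOrder.LITTLE_ENDIAN).putLong(offset, value);
  }

  /**
   * If the write itself uses the platform's native order (as a raw memory
   * write would), the value only needs swapping on big-endian hosts.
   */
  public static long toLittleEndianBits(long value) {
    return ByteOrder.nativeOrder() == ByteOrder.BIG_ENDIAN
        ? Long.reverseBytes(value)
        : value;
  }

  public static void main(String[] args) {
    byte[] buf = new byte[8];
    putLongLittleEndian(buf, 0, 0x0102030405060708L);
    // Least significant byte first on every platform: 08 07 06 05 04 03 02 01
    for (byte b : buf) System.out.printf("%02x ", b & 0xff);
    System.out.println();
  }
}
```

Since the raw memory writes used here go through native byte order, something
like the conditional swap above (or an explicit-order write) is what a
big-endian port would need, if I understand the layout correctly.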
Also, are there any design docs for this? I found the documentation for
Tungsten (UnsafeRows) to be very useful for understanding and debugging
purposes.