[ https://issues.apache.org/jira/browse/ARROW-245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15408231#comment-15408231 ]
Julien Le Dem commented on ARROW-245:
-------------------------------------
Should we add an endianness=LITTLE|BIG field to the metadata? That way the data
is labelled correctly, and we can add a check in the RPC layer to make sure we
never read data with the wrong endianness.
As mentioned in the thread, this would let systems with the same endianness
communicate at no cost, without the risk of silent data corruption.
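A minimal sketch of what such a label and RPC-layer check could look like, in C++.
The Endianness enum, HostEndianness, and CheckEndianness names are illustrative
assumptions for this comment, not part of the Arrow metadata as it stands:

    #include <cstdint>
    #include <stdexcept>

    // Hypothetical endianness label carried in the metadata.
    enum class Endianness : int8_t { LITTLE = 0, BIG = 1 };

    // Detect the endianness of the host at runtime.
    inline Endianness HostEndianness() {
      const uint16_t probe = 0x0001;
      return *reinterpret_cast<const uint8_t*>(&probe) == 0x01
                 ? Endianness::LITTLE
                 : Endianness::BIG;
    }

    // RPC-layer check: refuse to interpret buffers whose declared endianness
    // does not match the host, so a mismatch fails loudly instead of silently
    // corrupting data.
    void CheckEndianness(Endianness declared) {
      if (declared != HostEndianness()) {
        throw std::runtime_error("received data has the wrong endianness for this host");
      }
    }

Two systems with the same endianness pass this check and exchange buffers with
no conversion cost, which is the case the comment above is aiming to preserve.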
> [Format] Clarify Arrow's relationship with big endian platforms
> ---------------------------------------------------------------
>
> Key: ARROW-245
> URL: https://issues.apache.org/jira/browse/ARROW-245
> Project: Apache Arrow
> Issue Type: Improvement
> Components: Format
> Reporter: Wes McKinney
>
> Per an August 2016 mailing list question re: big endian platforms, the format
> document says:
> https://github.com/apache/arrow/blob/master/format/Layout.md#byte-order-endianness
> We should clarify that this does not mean Arrow cannot be used on big endian
> platforms, but rather that the canonical or "in-flight" memory representation
> (for IPC or memory sharing of any kind) is little-endian, so big endian
> systems would need to byte swap their big endian integers if they intend to
> expose memory to any other system using Arrow.
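To illustrate the byte swap a big endian producer would perform before exposing
buffers in the little-endian wire layout, here is a minimal C++ sketch. The
function names and the int32 case are assumptions made for the example; a real
implementation would swap whole buffers according to the column types:

    #include <cstddef>
    #include <cstdint>

    // Swap the byte order of a single 32-bit value.
    inline int32_t ByteSwap32(int32_t v) {
      uint32_t u = static_cast<uint32_t>(v);
      u = ((u & 0x000000FFu) << 24) | ((u & 0x0000FF00u) << 8) |
          ((u & 0x00FF0000u) >> 8)  | ((u & 0xFF000000u) >> 24);
      return static_cast<int32_t>(u);
    }

    // On a big endian host, convert an int32 buffer to little endian in place
    // before handing it to another Arrow consumer; on a little endian host
    // this step is unnecessary and the buffer can be shared as-is.
    void SwapBufferToLittleEndian(int32_t* values, std::size_t length) {
      for (std::size_t i = 0; i < length; ++i) {
        values[i] = ByteSwap32(values[i]);
      }
    }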