rdblue commented on a change in pull request #1334:
URL: https://github.com/apache/iceberg/pull/1334#discussion_r471712739
##########
File path: data/src/main/java/org/apache/iceberg/data/orc/GenericOrcWriters.java
##########
@@ -231,7 +231,24 @@ public void nonNullWrite(int rowId, String data, ColumnVector output) {
     @Override
     public void nonNullWrite(int rowId, ByteBuffer data, ColumnVector output) {
-      ((BytesColumnVector) output).setRef(rowId, data.array(), 0, data.array().length);
+      // We technically can't be sure if the ByteBuffer coming in is on or off
+      // heap, so we cannot safely call `.array()` on it without first checking
+      // via ByteBuffer.hasArray().
+      // See: https://errorprone.info/bugpattern/ByteBufferBackingArray
+      //
+      // When there is a backing heap-based byte array, we avoid the overhead
+      // of copying, which is especially important for small byte buffers.
+      //
+      // TODO - This copy slows it down, perhaps unnecessarily. Is there any other way to tell, or no?
+      //        My guess is no, if I consider things like VectorizedOrcReaders on Spark.
+      if (data.hasArray()) {
+        ((BytesColumnVector) output).setRef(rowId, data.array(), 0, data.array().length);
Review comment:
If we are accessing the backing array, there is no need to worry about
the state of the `ByteBuffer`. But the actual starting offset in the array is
`data.arrayOffset() + data.position()` and the length is `data.remaining()`.
Have a look at the copy methods in our `ByteBuffers` class to see examples.
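A minimal sketch of the point being made, using a hypothetical `toBytes` helper (not Iceberg's actual `ByteBuffers` class): for a heap-backed buffer the valid bytes start at `arrayOffset() + position()`, not 0, and span `remaining()` bytes; a direct (off-heap) buffer has to be copied out instead.

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

public class ByteBufferArrayAccess {
  // Hypothetical helper illustrating the review's point: never assume the
  // backing array starts at index 0 or spans the whole buffer.
  static byte[] toBytes(ByteBuffer buffer) {
    if (buffer.hasArray()) {
      // Heap-backed: valid region is [arrayOffset() + position(), + remaining()).
      int start = buffer.arrayOffset() + buffer.position();
      return Arrays.copyOfRange(buffer.array(), start, start + buffer.remaining());
    }
    // Direct (off-heap) buffer: bulk-get into a fresh array, reading through
    // a duplicate so the original buffer's position is left untouched.
    byte[] copy = new byte[buffer.remaining()];
    buffer.duplicate().get(copy);
    return copy;
  }

  public static void main(String[] args) {
    // A sliced buffer whose backing array has a nonzero arrayOffset().
    ByteBuffer whole = ByteBuffer.wrap(new byte[] {1, 2, 3, 4, 5});
    whole.position(2);
    ByteBuffer slice = whole.slice(); // contains {3, 4, 5}, arrayOffset() == 2
    System.out.println(Arrays.toString(toBytes(slice))); // prints [3, 4, 5]
  }
}
```

Reading `data.array()` from index 0 with `data.array().length` bytes, as in the pre-patch code, would have returned all five bytes here instead of the three the slice actually covers.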
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]