rdblue commented on a change in pull request #1334:
URL: https://github.com/apache/iceberg/pull/1334#discussion_r470758731



##########
File path: data/src/main/java/org/apache/iceberg/data/orc/GenericOrcWriters.java
##########
@@ -231,7 +231,24 @@ public void nonNullWrite(int rowId, String data, ColumnVector output) {

     @Override
     public void nonNullWrite(int rowId, ByteBuffer data, ColumnVector output) {
-      ((BytesColumnVector) output).setRef(rowId, data.array(), 0, data.array().length);
+      // We technically can't be sure whether the incoming ByteBuffer is on- or
+      // off-heap, so we cannot safely call `.array()` on it without first
+      // checking via ByteBuffer.hasArray().
+      // See: https://errorprone.info/bugpattern/ByteBufferBackingArray
+      //
+      // When there is a backing heap-based byte array, we avoid the overhead
+      // of copying, which is especially important for small byte buffers.
+      //
+      // TODO - This copy slows it down, perhaps unnecessarily. Is there any
+      //        other way to tell, or no? My guess is no, if I consider things
+      //        like VectorizedOrcReaders on Spark.
+      if (data.hasArray()) {
+        ((BytesColumnVector) output).setRef(rowId, data.array(), 0, data.array().length);

Review comment:
       This is not a correct use of `ByteBuffer` because it doesn't use `arrayOffset` or `remaining`. I think the current version must work because Spark returns new arrays that we wrap in `ByteBuffer`, but if the goal here is to make this accept any `ByteBuffer` then we should account for cases where the buffer is not simply the entire backing array.
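A minimal sketch of the point the reviewer is making: when a heap array backs the buffer, the valid region starts at `arrayOffset() + position()` and spans `remaining()` bytes, not the whole backing array; off-heap (direct) buffers have to be copied. The `remainingBytes` helper below is hypothetical, not part of the PR, and it returns a copy purely to keep the example self-contained (the real `setRef` path would pass the array, offset, and length through without copying):

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

public class ByteBufferRegionExample {

  // Hypothetical helper: extract the bytes between position() and limit()
  // without disturbing the caller's buffer state.
  static byte[] remainingBytes(ByteBuffer data) {
    if (data.hasArray()) {
      // Heap buffer: the content may be a slice of a larger backing array,
      // so honor arrayOffset() and position() rather than using index 0.
      int start = data.arrayOffset() + data.position();
      return Arrays.copyOfRange(data.array(), start, start + data.remaining());
    } else {
      // Direct (off-heap) buffer: no backing array, so a copy is unavoidable.
      byte[] copy = new byte[data.remaining()];
      // duplicate() so the read does not advance the caller's position
      data.duplicate().get(copy);
      return copy;
    }
  }

  public static void main(String[] args) {
    // A slice whose backing array is larger than the buffer's content:
    // arrayOffset() is 1 and remaining() is 3, so naive array() use would
    // read the wrong five bytes.
    ByteBuffer whole = ByteBuffer.wrap(new byte[] {1, 2, 3, 4, 5});
    whole.position(1).limit(4);
    ByteBuffer slice = whole.slice();
    System.out.println(Arrays.toString(remainingBytes(slice)));

    // An off-heap buffer takes the copying branch.
    ByteBuffer direct = ByteBuffer.allocateDirect(3);
    direct.put(new byte[] {7, 8, 9}).flip();
    System.out.println(Arrays.toString(remainingBytes(direct)));
  }
}
```

With this shape, the fast `setRef` branch in the PR would pass `data.array()`, `data.arrayOffset() + data.position()`, and `data.remaining()` instead of `data.array(), 0, data.array().length`.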




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]
