RussellSpitzer commented on a change in pull request #3557:
URL: https://github.com/apache/iceberg/pull/3557#discussion_r765984452



##########
File path: 
spark/v3.2/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/ColumnarBatchReader.java
##########
@@ -78,63 +76,13 @@ public final ColumnarBatch read(ColumnarBatch reuse, int numRowsToRead) {
          "Number of rows in the vector %s didn't match expected %s ", numRowsInVector,
          numRowsToRead);
 
-      if (rowIdMapping == null) {
-        arrowColumnVectors[i] = IcebergArrowColumnVector.forHolder(vectorHolders[i], numRowsInVector);
-      } else {
-        int[] rowIdMap = rowIdMapping.first();
-        Integer numRows = rowIdMapping.second();
-        arrowColumnVectors[i] = ColumnVectorWithFilter.forHolder(vectorHolders[i], rowIdMap, numRows);
-      }
+      arrowColumnVectors[i] = batch.hasDeletes() ?

Review comment:
       I think I would pass the vectors directly into the BatchWithFilter so that everything like this check can be done inside that class as well. Then this class would just create the BatchWithFilter with the proper vectors and call .batch(), which would return the final ColumnarBatch from inside the class.
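A minimal sketch of what that refactor could look like. `BatchWithFilter` and its `batch()` method are the reviewer's suggested names, but everything below is hypothetical: the real Iceberg/Spark types (`VectorHolder`, `ColumnarBatch`, `ColumnVectorWithFilter`) are stood in by plain arrays so the sketch stays self-contained. The point is only the shape: the caller hands over the vectors and the row-id mapping, and the row-count validation plus the deletes-vs-no-deletes branching both live inside the class.

```java
// Hypothetical sketch of the suggested BatchWithFilter refactor.
// All types are simplified stand-ins, not the real Iceberg/Spark classes:
// an int[] plays the role of a column vector, and int[][] the columnar batch.
final class BatchWithFilter {
  private final int[][] vectors;     // stand-in for the VectorHolder[] passed in
  private final int[] rowIdMapping;  // null when the batch has no deletes
  private final int numRowsToRead;

  BatchWithFilter(int[][] vectors, int[] rowIdMapping, int numRowsToRead) {
    this.vectors = vectors;
    this.rowIdMapping = rowIdMapping;
    this.numRowsToRead = numRowsToRead;
  }

  // Returns the final "columnar batch". The row-count check and the
  // filtering decision happen here, not in the calling reader class.
  int[][] batch() {
    int[][] out = new int[vectors.length][];
    for (int i = 0; i < vectors.length; i++) {
      if (vectors[i].length != numRowsToRead) {
        throw new IllegalStateException(
            "Number of rows in the vector " + vectors[i].length
                + " didn't match expected " + numRowsToRead);
      }
      out[i] = rowIdMapping == null ? vectors[i] : applyMapping(vectors[i]);
    }
    return out;
  }

  // Keeps only the rows that survive deletes, in rowIdMapping order.
  private int[] applyMapping(int[] column) {
    int[] filtered = new int[rowIdMapping.length];
    for (int j = 0; j < rowIdMapping.length; j++) {
      filtered[j] = column[rowIdMapping[j]];
    }
    return filtered;
  }
}
```

With this shape, the reader's loop body collapses to something like `new BatchWithFilter(vectorHolders, rowIdMap, numRowsToRead).batch()`, keeping the validation and filtering logic in one place.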




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


