wypoon commented on code in PR #4588:
URL: https://github.com/apache/iceberg/pull/4588#discussion_r889385568
##########
spark/v3.2/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/ColumnarBatchReader.java:
##########
@@ -195,20 +199,8 @@ int[] initEqDeleteRowIdMapping(int numRowsToRead) {
*/
void applyEqDelete() {
Iterator<InternalRow> it = columnarBatch.rowIterator();
- int rowId = 0;
- int currentRowId = 0;
- while (it.hasNext()) {
- InternalRow row = it.next();
- if (deletes.eqDeletedRowFilter().test(row)) {
- // the row is NOT deleted
- // skip deleted rows by pointing to the next undeleted row Id
- rowIdMapping[currentRowId] = rowIdMapping[rowId];
- currentRowId++;
- }
-
- rowId++;
- }
-
+ LOG.debug("Applying equality deletes to row id mapping");
+ int currentRowId = deletes.applyEqDeletes(it, rowIdMapping);
Review Comment:
Yes and no. There doesn't seem to be a natural way to pass the
`DeleteCounter` into `ColumnarBatchReader`. In this method, the `rowIdMapping`
int array is updated by applying the equality deletes from the `DeleteFilter`
(`deletes`). As each row is tested, if it is deleted, we need to increment the
`DeleteCounter`. The `DeleteFilter` owns the `DeleteCounter`, so I moved the
logic over to `DeleteFilter` entirely. The alternative is to expose
`DeleteFilter`'s `DeleteCounter` so that it can be incremented here, but I
think it is better to keep that encapsulated in `DeleteFilter`.
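
For illustration, here is a minimal sketch of what the moved logic might look
like inside `DeleteFilter`. The method name
`applyEqDeletes(Iterator<InternalRow>, int[])` and the returned row count come
from the diff above; the `counter` field (a `DeleteCounter` with an
`increment()` method) and the non-generic `InternalRow` signature are
assumptions for illustration only.

```java
// Hypothetical sketch: equality-delete application moved into DeleteFilter,
// where the DeleteCounter can be incremented without exposing it.
// Assumed: `counter` is a DeleteCounter field; eqDeletedRowFilter() returns
// a Predicate that tests true for rows that are NOT deleted.
public int applyEqDeletes(Iterator<InternalRow> rowIterator, int[] rowIdMapping) {
  int rowId = 0;
  int currentRowId = 0;
  while (rowIterator.hasNext()) {
    InternalRow row = rowIterator.next();
    if (eqDeletedRowFilter().test(row)) {
      // the row is NOT deleted: compact the mapping to skip deleted rows
      rowIdMapping[currentRowId] = rowIdMapping[rowId];
      currentRowId++;
    } else {
      // the row IS deleted: count it here, where the counter lives
      counter.increment();
    }
    rowId++;
  }
  // the number of live rows remaining after applying equality deletes
  return currentRowId;
}
```

Keeping the counting inside `DeleteFilter` means the caller only sees the
updated mapping and the live-row count, which is the encapsulation argument
above.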
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]