flyrain commented on code in PR #4588:
URL: https://github.com/apache/iceberg/pull/4588#discussion_r905348730
##########
spark/v3.2/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/ColumnarBatchReader.java:
##########
@@ -195,20 +199,8 @@ int[] initEqDeleteRowIdMapping(int numRowsToRead) {
*/
void applyEqDelete() {
Iterator<InternalRow> it = columnarBatch.rowIterator();
- int rowId = 0;
- int currentRowId = 0;
- while (it.hasNext()) {
- InternalRow row = it.next();
- if (deletes.eqDeletedRowFilter().test(row)) {
- // the row is NOT deleted
- // skip deleted rows by pointing to the next undeleted row Id
- rowIdMapping[currentRowId] = rowIdMapping[rowId];
- currentRowId++;
- }
-
- rowId++;
- }
-
+ LOG.debug("Applying equality deletes to row id mapping");
+ int currentRowId = deletes.applyEqDeletes(it, rowIdMapping);
Review Comment:
I'd prefer not to move this logic into `DeleteFilter`, since it is more
relevant here. We could expose `DeleteFilter`'s `DeleteCounter` as you suggested.
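For context, the loop removed in this diff performs an in-place compaction of `rowIdMapping`: each surviving row's mapped id is pulled forward past the slots of equality-deleted rows. A minimal standalone sketch of that technique, outside Spark (the class name `EqDeleteCompaction` and the `IntPredicate` stand-in for `deletes.eqDeletedRowFilter().test(row)` are illustrative, not Iceberg APIs):

```java
import java.util.function.IntPredicate;

class EqDeleteCompaction {

  // Compact rowIdMapping in place: for every surviving (not equality-deleted)
  // row, pull its mapped id forward to the next free slot. Returns the number
  // of live rows, i.e. the new logical size of the mapping.
  static int applyEqDelete(int[] rowIdMapping, int numRows, IntPredicate isLive) {
    int currentRowId = 0;
    for (int rowId = 0; rowId < numRows; rowId++) {
      if (isLive.test(rowId)) {
        // the row is NOT deleted: keep its mapped id
        rowIdMapping[currentRowId++] = rowIdMapping[rowId];
      }
    }
    return currentRowId;
  }

  public static void main(String[] args) {
    int[] mapping = {0, 1, 2, 3, 4};
    // pretend odd rows were equality-deleted
    int live = applyEqDelete(mapping, 5, rowId -> rowId % 2 == 0);
    System.out.println(live); // prints 3; mapping now begins 0, 2, 4
  }
}
```

The mapping is compacted in place with no extra allocation, which is why keeping the loop next to the `ColumnarBatch` (rather than inside `DeleteFilter`) is attractive: the mapping array is owned by the reader.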
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]