flyrain commented on code in PR #4588:
URL: https://github.com/apache/iceberg/pull/4588#discussion_r950762704


##########
spark/v3.3/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/ColumnarBatchReader.java:
##########
@@ -183,6 +187,8 @@ Pair<int[], Integer> buildPosDelRowIdMapping(PositionDeleteIndex deletedRowPosit
           currentRowId++;
         } else if (hasIsDeletedColumn) {
           isDeleted[originalRowId] = true;
+        } else {
+          deletes.incrementDeleteCount();
         }

Review Comment:
   It is triggered by either projection or filtering. Here are two examples:
   ```
   select name, _deleted from student
   ```
   It returns both deleted and undeleted rows.
   ```
   select name from student where _deleted=true
   ```
   It returns only the deleted rows.
   
   The way I see it, the delete count metric is orthogonal to the metadata column `_deleted`. It reports how many row-level deletes were applied, as its label indicates: `public static final String DISPLAY_STRING = "number of row deletes applied"`. I don't have a strong opinion on it though. What do you think? BTW, the non-vectorized read also doesn't report the metric when the `_deleted` column is present; see line 255 in class `DeleteFilter`.
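   
   For illustration only, here is a minimal, self-contained sketch of the branch in the diff above. The class and parameter names (`PosDeleteMappingSketch`, `positionDeleted`, `deleteCount`) are hypothetical stand-ins rather than the actual `ColumnarBatchReader` fields: a position-deleted row is either surfaced through the `_deleted` vector when that metadata column is requested, or dropped from the row-id mapping and counted toward the "number of row deletes applied" metric.
   ```
   // Hypothetical, simplified sketch; not the actual Iceberg implementation.
   final class PosDeleteMappingSketch {
   
     static int[] buildRowIdMapping(
         boolean[] positionDeleted,   // per-row flags derived from the position-delete index
         boolean hasIsDeletedColumn,  // true when `_deleted` is projected or used in a filter
         boolean[] isDeleted,         // backing vector for the `_deleted` metadata column
         long[] deleteCount) {        // single-element counter standing in for the metric
   
       int[] rowIdMapping = new int[positionDeleted.length];
       int currentRowId = 0;
   
       for (int originalRowId = 0; originalRowId < positionDeleted.length; originalRowId++) {
         if (!positionDeleted[originalRowId]) {
           // live row: keep it in the row-id mapping
           rowIdMapping[currentRowId++] = originalRowId;
         } else if (hasIsDeletedColumn) {
           // `_deleted` is requested: expose the flag instead of counting the row
           isDeleted[originalRowId] = true;
         } else {
           // the row is dropped from the batch, so it counts as an applied row-level delete
           deleteCount[0]++;
         }
       }
   
       return java.util.Arrays.copyOf(rowIdMapping, currentRowId);
     }
   }
   ```
   In this reading the two branches are mutually exclusive, which is why the metric stays untouched when `_deleted` is requested; whether that is the desired behavior is the open question here.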


