RjLi13 commented on code in PR #15059:
URL: https://github.com/apache/iceberg/pull/15059#discussion_r2714512364
##########
spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/source/SparkMicroBatchStream.java:
##########
@@ -316,182 +243,43 @@ private static StreamingOffset determineStartingOffset(Table table, Long fromTim
}
}
-  private static int getMaxFiles(ReadLimit readLimit) {
-    if (readLimit instanceof ReadMaxFiles) {
-      return ((ReadMaxFiles) readLimit).maxFiles();
-    }
-
-    if (readLimit instanceof CompositeReadLimit) {
-      // We do not expect a CompositeReadLimit to contain a nested CompositeReadLimit.
-      // In fact, it should only be a composite of two or more of ReadMinRows, ReadMaxRows and
-      // ReadMaxFiles, with no more than one of each.
-      ReadLimit[] limits = ((CompositeReadLimit) readLimit).getReadLimits();
-      for (ReadLimit limit : limits) {
-        if (limit instanceof ReadMaxFiles) {
-          return ((ReadMaxFiles) limit).maxFiles();
-        }
+  @Override
+  public Map<String, String> metrics(Optional<Offset> latestConsumedOffset) {
Review Comment:
Any recommendations on how to isolate it to the Async Planner? Otherwise I will
remove it and users can rely on logging instead, which would minimize the need
to implement the ReportsSourceMetrics interface.
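
For readers following along, below is a minimal, self-contained sketch of what a
ReportsSourceMetrics-based metrics override can look like for a Spark 3.5 streaming
source. Only the metrics(Optional<Offset>) signature comes from the Spark interface
shown in the diff above; the class name, counter fields, and metric keys are
illustrative assumptions and are not taken from this PR.

// Illustrative sketch only; names and metric keys are assumptions, not the PR's code.
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.atomic.AtomicLong;
import org.apache.iceberg.relocated.com.google.common.collect.ImmutableMap;
import org.apache.spark.sql.connector.read.streaming.Offset;
import org.apache.spark.sql.connector.read.streaming.ReportsSourceMetrics;

public class MetricsReportingStreamSketch implements ReportsSourceMetrics {

  // Counters that the (omitted) micro-batch planning code would update (assumed fields).
  private final AtomicLong scannedFiles = new AtomicLong();
  private final AtomicLong scannedRecords = new AtomicLong();

  @Override
  public Map<String, String> metrics(Optional<Offset> latestConsumedOffset) {
    // Spark surfaces this map per batch under sources[i].metrics in StreamingQueryProgress.
    return ImmutableMap.of(
        "scanned-files", String.valueOf(scannedFiles.get()),
        "scanned-records", String.valueOf(scannedRecords.get()));
  }

  // SparkDataStream plumbing, stubbed out to keep the sketch self-contained.
  @Override
  public Offset initialOffset() {
    throw new UnsupportedOperationException("sketch only");
  }

  @Override
  public Offset deserializeOffset(String json) {
    throw new UnsupportedOperationException("sketch only");
  }

  @Override
  public void commit(Offset end) {}

  @Override
  public void stop() {}
}

The practical difference from plain logging is that anything returned here shows up in
the StreamingQueryProgress source metrics, so it can be consumed programmatically
rather than scraped from driver logs.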
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]