HeartSaVioR commented on a change in pull request #27664: [SPARK-30915][SS] FileStreamSink: Avoid reading the metadata log file when finding the latest batch ID
URL: https://github.com/apache/spark/pull/27664#discussion_r382451768
##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/CompactibleFileStreamLog.scala
##########
@@ -162,6 +162,26 @@ abstract class CompactibleFileStreamLog[T <: AnyRef : ClassTag](
batchAdded
}
+ /**
+ * Return the latest batch Id.
+ *
+ * This method is a complement of getLatest() - while the per-batch metadata log file
+ * tends to be small, the same does not hold for the compacted log file. This method only
+ * checks for the existence of the files, avoiding the high cost of reading and
+ * deserializing the compacted log file.
+ */
+ def getLatestBatchId(): Option[Long] = {
+ val batchIds = fileManager.list(metadataPath, batchFilesFilter)
+ .map(f => pathToBatchId(f.getPath))
+ .sorted(Ordering.Long.reverse)
+ for (batchId <- batchIds) {
Review comment:
I simply removed the file read here, but since we already get the batch IDs from listing
the files, the existence check may not even be necessary. It wouldn't be a noticeable
source of latency, though.
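
Not the author's final code, but as an illustration of the simplification hinted at above,
here is a minimal sketch that assumes the members visible in the diff (fileManager,
metadataPath, batchFilesFilter, pathToBatchId) are in scope: since the batch IDs come
straight from listing the metadata directory, the method can simply take the largest
listed ID without re-checking each file.

  /**
   * Return the latest batch Id without reading (deserializing) any log file.
   * Sketch only - relies purely on the file listing, so no per-batch existence check.
   */
  def getLatestBatchId(): Option[Long] = {
    fileManager.list(metadataPath, batchFilesFilter)
      .map(f => pathToBatchId(f.getPath))
      .sorted(Ordering.Long.reverse)
      .headOption
  }

The trade-off raised in the comment is whether the listing alone can be trusted without a
follow-up existence check; the sketch assumes it can.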