HeartSaVioR commented on a change in pull request #31495:
URL: https://github.com/apache/spark/pull/31495#discussion_r571850667
##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/HDFSMetadataLog.scala
##########
@@ -239,18 +239,35 @@ class HDFSMetadataLog[T <: AnyRef : ClassTag](sparkSession: SparkSession, path:
.reverse
}
+ private var lastPurgedBatchId: Long = -1L
+
/**
* Removes all log entries earlier than thresholdBatchId (exclusive).
*/
override def purge(thresholdBatchId: Long): Unit = {
- val batchIds = fileManager.list(metadataPath, batchFilesFilter)
- .map(f => pathToBatchId(f.getPath))
-
- for (batchId <- batchIds if batchId < thresholdBatchId) {
- val path = batchIdToPath(batchId)
- fileManager.delete(path)
- logTrace(s"Removed metadata log file: $path")
+ val possibleTargetBatchIds = (lastPurgedBatchId + 1 until thresholdBatchId)
+ if (possibleTargetBatchIds.length <= 3) {
+ // avoid listing the directory if we only need to purge at most 3 entries
+ possibleTargetBatchIds.foreach { batchId =>
+ val path = batchIdToPath(batchId)
+ if (fileManager.exists(path)) {
Review comment:
I thought about this a bit more, and I admit I was wrong about the
assumption: the log file may not exist. That can happen for some batches
right after restarting from a checkpoint, but once the query has run for
more than the retention threshold (100 batches by default), the log file
"should" exist.
Thanks for raising this. I will make the change and run a simple test
before pushing.
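
Roughly the shape I have in mind, as a minimal self-contained sketch only:
the FileManager trait below is a hypothetical stand-in for
CheckpointFileManager plus batchIdToPath (keyed by batch id rather than
Path to keep it runnable on its own), not the real API, and the test
deliberately leaves batch 1 missing to mimic the restart-from-checkpoint
case:

import scala.collection.mutable

// Hypothetical stand-in for CheckpointFileManager + batchIdToPath; keyed
// by batch id instead of Path so the sketch is self-contained.
trait FileManager {
  def exists(batchId: Long): Boolean
  def delete(batchId: Long): Unit
  def list(): Seq[Long]
}

class PurgeSketch(fileManager: FileManager) {
  // Everything at or below this batch id has already been purged.
  private var lastPurgedBatchId: Long = -1L

  // Removes all log entries earlier than thresholdBatchId (exclusive).
  def purge(thresholdBatchId: Long): Unit = {
    val possibleTargetBatchIds = (lastPurgedBatchId + 1) until thresholdBatchId
    if (possibleTargetBatchIds.length <= 3) {
      // Fast path: probe candidate ids directly instead of listing the
      // directory. exists() tolerates batch files that were never written,
      // e.g. right after restarting from a checkpoint.
      possibleTargetBatchIds.foreach { batchId =>
        if (fileManager.exists(batchId)) fileManager.delete(batchId)
      }
    } else {
      // Slow path: too many candidates, so list once and delete matches.
      fileManager.list().filter(_ < thresholdBatchId).foreach(fileManager.delete)
    }
    lastPurgedBatchId = math.max(lastPurgedBatchId, thresholdBatchId - 1)
  }
}

object PurgeSketchTest extends App {
  // Batch 1 is intentionally missing to mimic the restart case above.
  val files = mutable.Set[Long](0L, 2L)
  val fm = new FileManager {
    def exists(batchId: Long): Boolean = files.contains(batchId)
    def delete(batchId: Long): Unit = files -= batchId
    def list(): Seq[Long] = files.toSeq.sorted
  }
  new PurgeSketch(fm).purge(3L)   // fast path; must not fail on batch 1
  assert(files.isEmpty, s"expected all purged, got $files")
  println("purge ok")
}

The fast path caps the number of exists() probes at 3 per purge, so it
should stay cheaper than re-listing the whole metadata directory on every
purge once the log grows.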
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]