steveloughran commented on a change in pull request #31495:
URL: https://github.com/apache/spark/pull/31495#discussion_r571948537
##########
File path:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/HDFSMetadataLog.scala
##########
@@ -239,18 +239,35 @@ class HDFSMetadataLog[T <: AnyRef : ClassTag](sparkSession: SparkSession, path:
       .reverse
   }
}
+ private var lastPurgedBatchId: Long = -1L
+
/**
* Removes all log entries earlier than thresholdBatchId (exclusive).
*/
override def purge(thresholdBatchId: Long): Unit = {
- val batchIds = fileManager.list(metadataPath, batchFilesFilter)
- .map(f => pathToBatchId(f.getPath))
-
- for (batchId <- batchIds if batchId < thresholdBatchId) {
- val path = batchIdToPath(batchId)
- fileManager.delete(path)
- logTrace(s"Removed metadata log file: $path")
+ val possibleTargetBatchIds = (lastPurgedBatchId + 1 until thresholdBatchId)
+ if (possibleTargetBatchIds.length <= 3) {
+ // avoid using list if we only need to purge at most 3 elements
+ possibleTargetBatchIds.foreach { batchId =>
+ val path = batchIdToPath(batchId)
+ if (fileManager.exists(path)) {
Review comment:
delete() is a no-op if the file doesn't exist; it always has to check whether
the path is there (and whether it's a file or a directory), so wrapping it in
an exists() call is superfluous. Azure abfs will suffer here, HDFS less so,
but it is still two RPCs instead of one: one needing NN read access, one
needing NN read/write access. I don't know enough about HDFS NN locking to
comment further there.
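To illustrate the point above, here is a hedged, self-contained sketch using `java.nio.file.Files.deleteIfExists` as a stand-in for the checkpoint file manager's delete (which, per the comment, is likewise a no-op on a missing file). A single delete call tolerates absence on its own, so the extra `exists()` guard only doubles the round trips:

```scala
import java.nio.file.Files

object IdempotentDeleteSketch {
  def main(args: Array[String]): Unit = {
    val path = Files.createTempFile("batch-", ".log")

    // One call: removes the file if present, silently returns false if not.
    // An exists() + delete() pair would do the same work with two RPCs
    // against a remote store (e.g. Azure abfs) instead of one.
    val removedFirst  = Files.deleteIfExists(path) // file existed, removed
    val removedSecond = Files.deleteIfExists(path) // already gone, no-op

    assert(removedFirst)
    assert(!removedSecond)
    println(s"first=$removedFirst second=$removedSecond")
  }
}
```

Note `deleteIfExists` and its return value are the JDK's API, not Spark's `CheckpointFileManager.delete` (which returns Unit); the sketch only demonstrates that an idempotent delete needs no existence pre-check.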
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]