Stove-hust commented on code in PR #40412:
URL: https://github.com/apache/spark/pull/40412#discussion_r1199893005


##########
core/src/main/scala/org/apache/spark/storage/DiskBlockManager.scala:
##########
@@ -273,7 +273,7 @@ private[spark] class DiskBlockManager(
       Utils.getConfiguredLocalDirs(conf).foreach { rootDir =>
         try {
           val mergeDir = new File(rootDir, mergeDirName)
-          if (!mergeDir.exists()) {
+          if (!mergeDir.exists() || mergeDir.listFiles().length < subDirsPerLocalDir) {

Review Comment:
   I understand what you mean, but I still don't think it's necessary to check whether each subdirectory matches, for the following two reasons:
   1. When creating a subdirectory, the file system ensures that it will not be created repeatedly (`if (!subDir.exists())`).
   2. In theory, mergeDir is managed directly by each app itself, so it will not contain subdirectories that don't match our expectations.
   Therefore, checking the number of subdirectories is equivalent to checking each subdirectory individually.
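   To illustrate the equivalence claim, here is a minimal standalone sketch (hypothetical helper names, not Spark's actual API), assuming mergeDir only ever contains the deterministically named subdirectories the app itself creates:
   ```scala
   import java.io.File

   // Count-based check, as in the diff above: recreate the layout if the
   // merge directory is missing or has fewer entries than expected.
   def needsCreation(mergeDir: File, subDirsPerLocalDir: Int): Boolean =
     !mergeDir.exists() || mergeDir.listFiles().length < subDirsPerLocalDir

   // The exhaustive alternative it stands in for: verify that each expected
   // subdirectory (two-digit hex names, "00", "01", ...) exists individually.
   def needsCreationExhaustive(mergeDir: File, subDirsPerLocalDir: Int): Boolean = {
     val expected = (0 until subDirsPerLocalDir).map(i => f"$i%02x")
     !mergeDir.exists() || expected.exists(n => !new File(mergeDir, n).exists())
   }
   ```
   Under the assumption in point 2 (no foreign entries in mergeDir), the two predicates always agree, so the cheaper count-based check suffices.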



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
