VenuReddy2103 commented on a change in pull request #3842:
URL: https://github.com/apache/carbondata/pull/3842#discussion_r457648464



##########
File path: integration/spark/src/main/scala/org/apache/spark/rdd/CarbonMergeFilesRDD.scala
##########
@@ -157,21 +157,21 @@ object CarbonMergeFilesRDD {
     if (carbonTable.isHivePartitionTable && !StringUtils.isEmpty(tempFolderPath)) {
       // remove all tmp folder of index files
       val startDelete = System.currentTimeMillis()
-      val numThreads = Math.min(Math.max(partitionInfo.size(), 1), 10)
-      val executorService = Executors.newFixedThreadPool(numThreads)
-      val carbonSessionInfo = ThreadLocalSessionInfo.getCarbonSessionInfo
-      partitionInfo
-        .asScala
-        .map { partitionPath =>
-          executorService.submit(new Runnable {
-            override def run(): Unit = {
-              ThreadLocalSessionInfo.setCarbonSessionInfo(carbonSessionInfo)
-              FileFactory.deleteAllCarbonFilesOfDir(
-                FileFactory.getCarbonFile(partitionPath + "/" + tempFolderPath))
-            }
-          })
+      val allTmpDirs = partitionInfo
+        .asScala.map { partitionPath =>
+          partitionPath + CarbonCommonConstants.FILE_SEPARATOR + tempFolderPath
         }
-        .map(_.get())
+      val allTmpFiles = allTmpDirs.map { partitionDir =>
+          FileFactory.getCarbonFile(partitionDir).listFiles()

Review comment:
       Instead, how about removing these .carbonindex files and the .tmp directory in `CarbonMergeFilesRDD.internalCompute` itself, upon successful generation of the particular carbonindexmerge file? That would distribute this cleanup to the partition-level tasks. Also, `listFiles()` would already have been called there before reading the index files to generate the carbonindexmerge file.
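       A minimal sketch of the suggested per-partition cleanup, as a hypothetical helper. It is not the actual CarbonData implementation: plain `java.io.File` stands in for `FileFactory`, and the method name and parameters are illustrative only. The idea is that each partition task calls this after its carbonindexmerge file has been written successfully, so no centralized thread pool is needed afterwards.

```scala
import java.io.File

object PartitionCleanupSketch {
  // Hypothetical helper: once a partition task has successfully written its
  // carbonindexmerge file, delete that partition's now-stale .carbonindex
  // files from the tmp folder, then remove the tmp folder itself if empty.
  def cleanupTmpIndexDir(partitionPath: String, tempFolderName: String): Unit = {
    val tmpDir = new File(partitionPath, tempFolderName)
    if (tmpDir.isDirectory) {
      // the .carbonindex files have already been merged, so drop them
      tmpDir.listFiles()
        .filter(_.getName.endsWith(".carbonindex"))
        .foreach(_.delete())
      // remove the tmp directory once nothing is left inside it
      if (tmpDir.listFiles().isEmpty) {
        tmpDir.delete()
      }
    }
  }
}
```

       Running this at the end of each partition's task distributes the deletion work across executors, instead of a single driver-side thread pool iterating over `partitionInfo`.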




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]