ajantha-bhat commented on a change in pull request #3854:
URL: https://github.com/apache/carbondata/pull/3854#discussion_r459532662



##########
File path: integration/spark/src/main/scala/org/apache/spark/sql/secondaryindex/events/CleanFilesPostEventListener.scala
##########
@@ -54,7 +60,70 @@ class CleanFilesPostEventListener extends OperationEventListener with Logging {
           SegmentStatusManager.deleteLoadsAndUpdateMetadata(
             indexTable, true, partitions.map(_.asJava).orNull)
           CarbonUpdateUtil.cleanUpDeltaFiles(indexTable, true)
+          cleanUpUnwantedSegmentsOfSIAndUpdateMetadata(indexTable, carbonTable)
         }
     }
   }
+
+  /**
+   * This method is added to clean segments that are successful in SI but may be compacted or
+   * marked for delete in the main table, which can happen in concurrent scenarios.
+   */
+  def cleanUpUnwantedSegmentsOfSIAndUpdateMetadata(indexTable: CarbonTable,
+      mainTable: CarbonTable): Unit = {
+    val mainTableStatusLock: ICarbonLock = CarbonLockFactory
+      .getCarbonLockObj(mainTable.getAbsoluteTableIdentifier, LockUsage.TABLE_STATUS_LOCK)
+    val indexTableStatusLock: ICarbonLock = CarbonLockFactory
+      .getCarbonLockObj(indexTable.getAbsoluteTableIdentifier, LockUsage.TABLE_STATUS_LOCK)
+    var mainTableLocked = false
+    var indexTableLocked = false
+    try {
+      mainTableLocked = mainTableStatusLock.lockWithRetries()

Review comment:
       At least add an error log when the lock cannot be acquired, so that the user knows something went wrong and needs to retry.

   A user who retries multiple times in a concurrent scenario will keep failing to clean due to the lock issue, and will never know why the segments were not cleaned.
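
   A minimal sketch of the kind of check being requested (illustrative only, not the actual patch; it assumes logError from the mixed-in Logging trait and CarbonTable.getTableUniqueName, and the message wording is hypothetical):

       mainTableLocked = mainTableStatusLock.lockWithRetries()
       if (!mainTableLocked) {
         // Surface the lock failure instead of silently skipping the SI
         // cleanup, so the user knows the operation must be retried.
         logError(s"Clean files did not clean stale SI segments of " +
           s"${indexTable.getTableUniqueName}: could not acquire the table " +
           s"status lock on ${mainTable.getTableUniqueName}. Please retry.")
         return
       }

   The same check would apply to the indexTableStatusLock acquisition below it.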



