vikramahuja1001 commented on a change in pull request #4072:
URL: https://github.com/apache/carbondata/pull/4072#discussion_r578993995



##########
File path: integration/spark/src/main/scala/org/apache/carbondata/trash/DataTrashManager.scala
##########
@@ -72,11 +78,26 @@ object DataTrashManager {
       carbonDeleteSegmentLock = CarbonLockUtil.getLockObject(carbonTable
         .getAbsoluteTableIdentifier, LockUsage.DELETE_SEGMENT_LOCK, deleteSegmentErrorMsg)
       // step 1: check and clean trash folder
-      checkAndCleanTrashFolder(carbonTable, isForceDelete)
+      // trashFolderSizeStats(0) contains the size that is freed/or can be freed and
+      // trashFolderSizeStats(1) contains the size of remaining data in the trash folder
+      val trashFolderSizeStats = checkAndCleanTrashFolder(carbonTable, isForceDelete,
+          isDryRun = false)
       // step 2: move stale segments which are not exists in metadata into .Trash
       moveStaleSegmentsToTrash(carbonTable)

Review comment:
       No, it won't matter. We only need to show how much space can be cleared from the trash folder. When segments are moved from the segment folder into the trash, they will only be cleaned by the next clean files command, so we don't need to include those stats here.
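
       For context, here is a minimal, hypothetical sketch of the semantics described in the hunk above: a clean-trash routine returning a pair whose first element is the size that is freed (or, on a dry run, could be freed) and whose second element is the size of the data still left in the trash folder. The names `cleanTrashFolder` and `isExpired`, and the one-directory-per-trash-entry layout, are illustrative assumptions only, not the actual `DataTrashManager` code.

```scala
import java.io.File

object TrashSizeStatsSketch {

  // Recursively sum the sizes of all files under a file or directory.
  private def dirSize(f: File): Long =
    if (f.isDirectory) {
      Option(f.listFiles()).getOrElse(Array.empty[File]).map(dirSize).sum
    } else {
      f.length()
    }

  // Recursively delete a file or directory.
  private def deleteRecursively(f: File): Unit = {
    if (f.isDirectory) {
      Option(f.listFiles()).getOrElse(Array.empty[File]).foreach(deleteRecursively)
    }
    f.delete()
  }

  /**
   * Returns (sizeFreed, sizeRemaining) for a trash folder.
   * sizeFreed is the space that is freed (or, on a dry run, could be freed);
   * sizeRemaining is the space still held by entries kept in the trash.
   */
  def cleanTrashFolder(
      trashDir: File,
      isForceDelete: Boolean,
      isDryRun: Boolean,
      isExpired: File => Boolean): (Long, Long) = {
    val entries = Option(trashDir.listFiles()).getOrElse(Array.empty[File])
    // An entry is freeable if force-delete is requested or its retention has expired.
    val (freeable, retained) = entries.partition(e => isForceDelete || isExpired(e))
    val sizeFreed = freeable.map(dirSize).sum
    val sizeRemaining = retained.map(dirSize).sum
    // A dry run only reports the sizes; a real run also deletes the freeable entries.
    if (!isDryRun) {
      freeable.foreach(deleteRecursively)
    }
    (sizeFreed, sizeRemaining)
  }
}
```

       For example, `cleanTrashFolder(new File(tablePath, ".Trash"), isForceDelete = false, isDryRun = true, isExpired)` would only report the two sizes without deleting anything. Moving stale segments into the trash only grows the "remaining" side; their space counts as freed once a later clean files run removes them, which matches the comment above.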





