This is an automated email from the ASF dual-hosted git repository.

jackylk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/carbondata.git


The following commit(s) were added to refs/heads/master by this push:
     new 7eea3b3  [CARBONDATA-3669] Delete Physical Partition When Drop Partition
7eea3b3 is described below

commit 7eea3b3d874d14435970aa664d7f556ff3bad87a
Author: h00424960 <haoxing...@huawei.com>
AuthorDate: Wed Jan 22 14:27:23 2020 +0800

    [CARBONDATA-3669] Delete Physical Partition When Drop Partition
    
    Why is this PR needed?
    When a partition is dropped, Hive cleans up the partition directory and its data, but CarbonData does not, which confuses users. CarbonData should behave the same as Hive here.
    
    What changes were proposed in this PR?
    When dropping a partition, force deletion of the physical partition data by setting the force-delete flag of SegmentFileStore.cleanSegments to true (the intent is illustrated by the sketch after the commit message).
    
    Does this PR introduce any user interface change?
    No
    
    Is any new testcase added?
    No
    
    This closes #3590
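
    For context, the sketch below illustrates the intent of that force-delete flag with a hypothetical, self-contained helper. It is not CarbonData's SegmentFileStore implementation; it is only a minimal Scala example of the difference between physically removing a dropped partition directory (Hive-like behaviour) and merely marking it for a later cleanup pass.

import java.nio.file.{Files, Path}
import scala.collection.mutable.ArrayBuffer

object PartitionCleanupSketch {

  // Hypothetical helper mirroring the intent of the flag changed in this commit:
  //   forceDelete = true  -> physically remove the partition's files and directory,
  //   forceDelete = false -> only leave a marker; files stay until a later cleanup.
  def cleanPartition(partitionDir: Path, forceDelete: Boolean): Unit = {
    if (forceDelete) {
      // Collect the entries first, then delete them and the directory itself.
      val stream = Files.list(partitionDir)
      val entries = ArrayBuffer.empty[Path]
      try {
        val it = stream.iterator()
        while (it.hasNext) entries += it.next()
      } finally stream.close()
      entries.foreach(p => Files.deleteIfExists(p))
      Files.deleteIfExists(partitionDir)
    } else {
      // Soft delete: record the drop without touching the data files.
      Files.createFile(partitionDir.resolve(".markedForDelete"))
    }
  }

  def main(args: Array[String]): Unit = {
    val dir = Files.createTempDirectory("part_country_US")
    Files.createFile(dir.resolve("part-0000.carbondata"))
    cleanPartition(dir, forceDelete = true)
    // With forceDelete = true the directory is gone, matching Hive's drop-partition behaviour.
    println(s"partition dir still exists: ${Files.exists(dir)}")
  }
}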
---
 .../command/partition/CarbonAlterTableDropHivePartitionCommand.scala    | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/partition/CarbonAlterTableDropHivePartitionCommand.scala b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/partition/CarbonAlterTableDropHivePartitionCommand.scala
index c5a320d..501b861 100644
--- a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/partition/CarbonAlterTableDropHivePartitionCommand.scala
+++ b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/partition/CarbonAlterTableDropHivePartitionCommand.scala
@@ -190,7 +190,7 @@ case class CarbonAlterTableDropHivePartitionCommand(
       DataMapStoreManager.getInstance().clearDataMaps(table.getAbsoluteTableIdentifier)
     } finally {
       AlterTableUtil.releaseLocks(locks)
-      SegmentFileStore.cleanSegments(table, null, false)
+      SegmentFileStore.cleanSegments(table, null, true)
     }
     Seq.empty[Row]
   }
