maheshk114 commented on a change in pull request #578: HIVE-21471: Replicating conversion of managed to external table leaks HDFS files at target.
URL: https://github.com/apache/hive/pull/578#discussion_r268132700
 
 

 ##########
 File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveAlterHandler.java
 ##########
 @@ -192,12 +197,12 @@ public void alterTable(RawStore msdb, Warehouse wh, String catName, String dbnam
       // 2) the table is not an external table, and
       // 3) the user didn't change the default location (or new location is empty), and
       // 4) the table was not initially created with a specified location
-      if (rename
-          && !oldt.getTableType().equals(TableType.VIRTUAL_VIEW.toString())
 -          && (oldt.getSd().getLocation().compareTo(newt.getSd().getLocation()) == 0
-            || StringUtils.isEmpty(newt.getSd().getLocation()))
-          && !MetaStoreUtils.isExternalTable(oldt)) {
-        Database olddb = msdb.getDatabase(catName, dbname);
+      if (replDataLocationChanged
+              || (rename
 
 Review comment:
   I think that, for a non-transactional table, if the location has changed then the data should be renamed (moved); for a transactional table, the directory should be deleted in the replication flow. In the normal (non-replication) flow, control should never reach this point for a transactional table; such an alter should fail in HiveServer itself.
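   To make the suggestion concrete, here is a minimal sketch of the branching being proposed, not the actual patch; isTransactionalTable(), deleteData() and moveData() are hypothetical helpers standing in for the TxnUtils/Warehouse calls used in HiveAlterHandler:

       // Sketch only: replDataLocationChanged flags the replication-flow case
       // where the data location changed at the source (e.g. managed -> external).
       private void handleReplDataLocation(Table oldt, Table newt,
           boolean replDataLocationChanged, boolean rename) {
         if (replDataLocationChanged) {
           if (isTransactionalTable(oldt)) {
             // Transactional table: remove the stale managed directory so its
             // files are not leaked at the target.
             deleteData(oldt.getSd().getLocation());
           } else {
             // Non-transactional table: move the data to the new location.
             moveData(oldt.getSd().getLocation(), newt.getSd().getLocation());
           }
         } else if (rename) {
           // Normal (non-replication) rename flow, still guarded by the existing
           // checks (not a view, not external, location unchanged or empty).
           // A location-changing alter of a transactional table is expected to be
           // rejected earlier, in HiveServer2, and should not reach this point.
         }
       }

   The distinction is exactly the leak in the JIRA title: leaving the old managed directory behind after a transactional table's conversion would leak those HDFS files at the target, whereas a non-transactional table's data can simply be moved along with the rename.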
