sankarh commented on a change in pull request #578: HIVE-21471: Replicating conversion of managed to external table leaks HDFS files at target.
URL: https://github.com/apache/hive/pull/578#discussion_r268381147
 
 

 ##########
 File path: itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationScenariosExternalTables.java
 ##########
 @@ -673,6 +674,59 @@ public void retryIncBootstrapExternalTablesFromDifferentDumpWithoutCleanTablesCo
             ErrorMsg.REPL_BOOTSTRAP_LOAD_PATH_NOT_VALID.getErrorCode());
   }
 
+  @Test
+  public void dynamicallyConvertManagedToExternalTable() throws Throwable {
+    List<String> dumpWithClause = Collections.singletonList(
+            "'" + HiveConf.ConfVars.REPL_INCLUDE_EXTERNAL_TABLES.varname + "'='true'"
+    );
+    List<String> loadWithClause = externalTableBasePathWithClause();
+
+    WarehouseInstance.Tuple tupleBootstrapManagedTable = primary.run("use " + primaryDbName)
+            .run("create table t1 (id int)")
+            .run("insert into table t1 values (1)")
+            .run("create table t2 (id int) partitioned by (key int)")
 
 Review comment:
   Nope. This is not a migration case. It is specific to external table replication: the table is converted to external at the source, not at the target. So the table location is changed only at the target, not at the source.
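
   A minimal sketch of the flow described above, assuming the WarehouseInstance
   fluent helpers used in the diff (the dump/load/verifyResult signatures and the
   Tuple.lastReplicationId field are taken from similar tests in this file and may
   differ; the assertion is purely illustrative):

     // Convert t1 to external at the SOURCE, then replicate the change.
     // Only the target relocates the data under the external tables base path
     // supplied via loadWithClause; the source location is untouched.
     WarehouseInstance.Tuple incrementalDump = primary.run("use " + primaryDbName)
             .run("alter table t1 set tblproperties('EXTERNAL'='true')")
             .dump(primaryDbName, tupleBootstrapManagedTable.lastReplicationId, dumpWithClause);

     replica.load(replicatedDbName, incrementalDump.dumpLocation, loadWithClause)
             .run("use " + replicatedDbName)
             .run("show tables like 't1'")
             .verifyResult("t1");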
