[ https://issues.apache.org/jira/browse/HIVE-21706?focusedWorklogId=240810&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-240810 ]

ASF GitHub Bot logged work on HIVE-21706:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 13/May/19 02:17
            Start Date: 13/May/19 02:17
    Worklog Time Spent: 10m 
      Work Description: maheshk114 commented on pull request #620: HIVE-21706: REPL Dump with concurrent drop of external table fails with InvalidTableException.
URL: https://github.com/apache/hive/pull/620#discussion_r283170843
 
 

 ##########
 File path: itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationScenariosExternalTables.java
 ##########
 @@ -712,6 +712,64 @@ public void testExternalTableDataPath() throws Exception {
     assertTrue(dataPath.toUri().getPath().equalsIgnoreCase("/tmp/tmp1/abc/xyz"));
   }
 
+  @Test
+  public void testExternalTablesIncReplicationWithConcurrentDropTable() throws Throwable {
+    List<String> dumpWithClause = Collections.singletonList(
+            "'" + HiveConf.ConfVars.REPL_INCLUDE_EXTERNAL_TABLES.varname + "'='true'"
+    );
+    List<String> loadWithClause = externalTableBasePathWithClause();
+    WarehouseInstance.Tuple tupleBootstrap = primary.run("use " + primaryDbName)
+            .run("create external table t1 (id int)")
+            .run("insert into table t1 values (1)")
+            .dump(primaryDbName, null, dumpWithClause);
+
+    replica.load(replicatedDbName, tupleBootstrap.dumpLocation, loadWithClause);
+
+    // Insert a row into "t1" and create another external table using data from "t1".
+    primary.run("use " + primaryDbName)
+            .run("insert into table t1 values (2)")
+            .run("create external table t2 as select * from t1");
+
+    // Inject a behavior so that getTable returns null for table "t1". This ensures the table is
+    // skipped for data files listing.
+    BehaviourInjection<Table, Table> ptnedTableNuller = new BehaviourInjection<Table, Table>() {
 
 Review comment:
   The variable name ptnedTableNuller is not appropriate here; please rename it.
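The diff excerpt is cut off at the BehaviourInjection declaration. As a rough, self-contained sketch of the null-injection idea being discussed (hypothetical names throughout; this is not Hive's actual InjectableBehaviourObjectStore API), the hook and the skip-on-null handling could look like:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.function.UnaryOperator;

// Hypothetical stand-in for the test's BehaviourInjection: a hook that nulls
// out the metastore lookup for one table, simulating a concurrent DROP TABLE
// happening while REPL DUMP lists data files.
public class TableNullerSketch {

    // The injected behaviour: pretend "t1" was dropped by returning null.
    static final UnaryOperator<String> tableNuller =
            name -> "t1".equals(name) ? null : name;

    // A dump loop that tolerates a null lookup by skipping the table, which
    // is the outcome the test asserts instead of an InvalidTableException.
    static List<String> dumpTables(List<String> tables) {
        List<String> dumped = new ArrayList<>();
        for (String t : tables) {
            String resolved = tableNuller.apply(t);
            if (resolved == null) {
                continue; // table vanished mid-dump: skip it, don't fail
            }
            dumped.add(resolved);
        }
        return dumped;
    }

    public static void main(String[] args) {
        System.out.println(dumpTables(Arrays.asList("t1", "t2"))); // prints [t2]
    }
}
```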
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


Issue Time Tracking
-------------------

    Worklog Id:     (was: 240810)
    Time Spent: 20m  (was: 10m)

> REPL Dump with concurrent drop of external table fails with InvalidTableException.
> ----------------------------------------------------------------------------------
>
>                 Key: HIVE-21706
>                 URL: https://issues.apache.org/jira/browse/HIVE-21706
>             Project: Hive
>          Issue Type: Bug
>          Components: HiveServer2, repl
>    Affects Versions: 4.0.0
>            Reporter: Sankar Hariappan
>            Assignee: Sankar Hariappan
>            Priority: Major
>              Labels: DR, pull-request-available, replication
>         Attachments: HIVE-21706.01.patch, HIVE-21706.02.patch
>
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> During a REPL DUMP of a DB having external tables, if any of the external
> tables is dropped concurrently, the REPL DUMP fails with the exception below.
> {code}
> 2019-05-10T06:29:52,092 ERROR [HiveServer2-Background-Pool: Thread-745399]: 
> repl.ReplDumpTask (:()) - failed
> org.apache.hadoop.hive.ql.metadata.InvalidTableException: Table not found 
> catalog_sales_new
>         at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:1383) 
> ~[hive-exec-3.1.0.3.1.0.31-12.jar:3.1.0.3.1.0.31-12]
>         at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:1336) 
> ~[hive-exec-3.1.0.3.1.0.31-12.jar:3.1.0.3.1.0.31-12]
>         at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:1316) 
> ~[hive-exec-3.1.0.3.1.0.31-12.jar:3.1.0.3.1.0.31-12]
>         at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:1298) 
> ~[hive-exec-3.1.0.3.1.0.31-12.jar:3.1.0.3.1.0.31-12]
>         at 
> org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask.incrementalDump(ReplDumpTask.java:259)
>  ~[hive-exec-3.1.0.3.1.0.31-12.jar:3.1.0.3.1.0.31-12]
>         at 
> org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask.execute(ReplDumpTask.java:121)
>  ~[hive-exec-3.1.0.3.1.0.31-12.jar:3.1.0.3.1.0.31-12]
>         at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:212) 
> ~[hive-exec-3.1.0.3.1.0.31-12.jar:3.1.0.3.1.0.31-12]
>         at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97) 
> ~[hive-exec-3.1.0.3.1.0.31-12.jar:3.1.0.3.1.0.31-12]
>         at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2711) 
> ~[hive-exec-3.1.0.3.1.0.31-12.jar:3.1.0.3.1.0.31-12]
>         at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2382) 
> ~[hive-exec-3.1.0.3.1.0.31-12.jar:3.1.0.3.1.0.31-12]
>         at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:2054) 
> ~[hive-exec-3.1.0.3.1.0.31-12.jar:3.1.0.3.1.0.31-12]
>         at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1752) 
> ~[hive-exec-3.1.0.3.1.0.31-12.jar:3.1.0.3.1.0.31-12]
>         at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1746) 
> ~[hive-exec-3.1.0.3.1.0.31-12.jar:3.1.0.3.1.0.31-12]
>         at 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:157) 
> ~[hive-exec-3.1.0.3.1.0.31-12.jar:3.1.0.3.1.0.31-12]
>         at 
> org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:226)
>  ~[hive-service-3.1.0.3.1.0.31-12.jar:3.1.0.3.1.0.31-12]
>         at 
> org.apache.hive.service.cli.operation.SQLOperation.access$700(SQLOperation.java:87)
>  ~[hive-service-3.1.0.3.1.0.31-12.jar:3.1.0.3.1.0.31-12]
>         at 
> org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:324)
>  ~[hive-service-3.1.0.3.1.0.31-12.jar:3.1.0.3.1.0.31-12]
>         at java.security.AccessController.doPrivileged(Native Method) 
> ~[?:1.8.0_181]
>         at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_181]
>         at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>  ~[hadoop-common-3.1.1.3.1.0.31-12.jar:?]
>         at 
> org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:342)
>  ~[hive-service-3.1.0.3.1.0.31-12.jar:3.1.0.3.1.0.31-12]
>         at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[?:1.8.0_181]
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[?:1.8.0_181]
>         at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[?:1.8.0_181]
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[?:1.8.0_181]
>         at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  ~[?:1.8.0_181]
>         at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  ~[?:1.8.0_181]
>         at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
> {code}
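The trace above shows the incremental dump calling Hive.getTable() on a table name obtained earlier, so a table dropped in between surfaces as InvalidTableException. A minimal sketch of the skip-on-drop handling the fix aims for (simplified stand-ins, not the actual ReplDumpTask code from the patch):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Simplified stand-ins for Hive classes; not the actual HIVE-21706 patch.
public class SkipDroppedTableSketch {

    static class InvalidTableException extends RuntimeException {
        InvalidTableException(String table) { super("Table not found " + table); }
    }

    // Simulated metastore where "catalog_sales_new" was dropped concurrently.
    static String getTable(String name) {
        if ("catalog_sales_new".equals(name)) {
            throw new InvalidTableException(name);
        }
        return name;
    }

    // An incremental dump that skips a table dropped between listing and
    // lookup instead of failing the whole REPL DUMP.
    static List<String> incrementalDump(List<String> tableNames) {
        List<String> dumped = new ArrayList<>();
        for (String name : tableNames) {
            try {
                dumped.add(getTable(name));
            } catch (InvalidTableException e) {
                continue; // dropped concurrently: skip rather than abort
            }
        }
        return dumped;
    }

    public static void main(String[] args) {
        // prints [t2]
        System.out.println(incrementalDump(Arrays.asList("catalog_sales_new", "t2")));
    }
}
```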



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
