[ 
https://issues.apache.org/jira/browse/HIVE-25602?focusedWorklogId=663760&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-663760
 ]

ASF GitHub Bot logged work on HIVE-25602:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 11/Oct/21 20:40
            Start Date: 11/Oct/21 20:40
    Worklog Time Spent: 10m 
      Work Description: pkumarsinha commented on a change in pull request #2707:
URL: https://github.com/apache/hive/pull/2707#discussion_r726553964



##########
File path: 
itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestScheduledReplicationScenarios.java
##########
@@ -251,6 +253,97 @@ public void testExternalTablesReplLoadBootstrapIncr() throws Throwable {
     }
   }
 
+  @Test
+  public void testCompleteFailoverWithReverseBootstrap() throws Throwable {
+    String withClause =
+            "'" + HiveConf.ConfVars.HIVE_IN_TEST + "' = 'true'" + ",'"
+                    + HiveConf.ConfVars.REPL_SOURCE_CLUSTER_NAME + "' = 'cluster0'"

Review comment:
       Why is the cluster name required in the with clause? Is it used during the fail-over process?
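
For context on the question above: the withClause is plain string concatenation of HiveConf keys into the dump policy's WITH clause. A minimal sketch of what the scheduled statement expands to at runtime, assuming REPL_SOURCE_CLUSTER_NAME and REPL_TARGET_CLUSTER_NAME resolve to the property names shown in the comments (an assumption; verify the exact keys in HiveConf):

{code:java}
// Sketch only: reproduces the string the test builds. The two repl.*
// property names are assumptions about what the ConfVars resolve to.
public class WithClauseSketch {
  public static void main(String[] args) {
    String primaryDbName = "source"; // hypothetical db name
    String withClause = "'hive.in.test' = 'true'"          // HIVE_IN_TEST
        + ",'hive.repl.source.cluster.name' = 'cluster0'"  // REPL_SOURCE_CLUSTER_NAME (assumed key)
        + ",'hive.repl.target.cluster.name' = 'cluster1'"; // REPL_TARGET_CLUSTER_NAME (assumed key)
    System.out.println("repl dump " + primaryDbName + " WITH(" + withClause + ')');
  }
}
{code}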

##########
File path: 
itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestScheduledReplicationScenarios.java
##########
@@ -251,6 +253,97 @@ public void testExternalTablesReplLoadBootstrapIncr() throws Throwable {
     }
   }
 
+  @Test
+  public void testCompleteFailoverWithReverseBootstrap() throws Throwable {
+    String withClause =
+            "'" + HiveConf.ConfVars.HIVE_IN_TEST + "' = 'true'" + ",'"
+                    + HiveConf.ConfVars.REPL_SOURCE_CLUSTER_NAME + "' = 'cluster0'"
+                    + ",'" + HiveConf.ConfVars.REPL_TARGET_CLUSTER_NAME
+                    + "' = 'cluster1'";
+
+    // Create a table with some data at source DB.
+    primary.run("use " + primaryDbName).run("create table t2 (id int)")
+            .run("insert into t2 values(1)").run("insert into t2 values(2)");
+
+    // Schedule Dump & Load and verify the data is replicated properly.
+    try (ScheduledQueryExecutionService schqS = ScheduledQueryExecutionService
+            .startScheduledQueryExecutorService(primary.hiveConf)) {
+      int next = -1;
+      ReplDumpWork.injectNextDumpDirForTest(String.valueOf(next), true);
+      primary.run("create scheduled query repl_dump_p1 every 5 seconds as repl dump "
+              + primaryDbName +  " WITH(" + withClause + ')');

Review comment:
       What dump directory is used here for both set of policies, p1 & p2? We 
should have tests for both these cases. Also, failback ideally should also be 
covered as a part of these test as that would help ascertain the full 
functioning. 





Issue Time Tracking
-------------------

    Worklog Id:     (was: 663760)
    Time Spent: 20m  (was: 10m)

> Fix failover metadata file path in repl load execution.
> -------------------------------------------------------
>
>                 Key: HIVE-25602
>                 URL: https://issues.apache.org/jira/browse/HIVE-25602
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Haymant Mangla
>            Assignee: Haymant Mangla
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> When executed through scheduled queries, repl load fails with the following error:
>  
> {code:java}
> Reading failover metadata from file:
> 2021-10-08 02:02:51,824 ERROR org.apache.hadoop.hive.ql.Driver: [Scheduled Query Executor(schedule:repl_load_p1, execution_id:43)]: FAILED: SemanticException java.io.FileNotFoundException: File does not exist: /user/hive/repl/c291cmNl/36d04dfd-ee5d-4faf-bc0a-ae8d665f95f9/_failovermetadata
>  at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:87)
>  at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:77)
>  at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getBlockLocations(FSDirStatAndListingOp.java:159)
>  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:2035)
>  at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:737)
>  at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:454)
>  at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>  at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:533)
> {code}
>  
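
The fix implied by the summary ("Fix failover metadata file path in repl load execution") is to resolve _failovermetadata against the correct dump directory. A hedged sketch of that resolution using the standard Hadoop FileSystem API; the "hive" subdirectory below is an assumption drawn from Hive's usual dump layout, not a confirmed detail of this patch:

{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch only: resolve _failovermetadata defensively. The "hive" subdirectory
// is hypothetical; the actual corrected path is whatever the PR settles on.
public class FailoverMetadataPathSketch {
  static Path resolveFailoverMetadata(Path dumpRoot, Configuration conf)
      throws IOException {
    FileSystem fs = dumpRoot.getFileSystem(conf);
    Path nested = new Path(dumpRoot, "hive/_failovermetadata"); // assumed layout
    if (fs.exists(nested)) {
      return nested;
    }
    // Fall back to the flat layout shown in the stack trace above.
    return new Path(dumpRoot, "_failovermetadata");
  }
}
{code}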


