[ 
https://issues.apache.org/jira/browse/HIVE-24187?focusedWorklogId=487986&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-487986
 ]

ASF GitHub Bot logged work on HIVE-24187:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 22/Sep/20 06:47
            Start Date: 22/Sep/20 06:47
    Worklog Time Spent: 10m 
      Work Description: aasha commented on a change in pull request #1515:
URL: https://github.com/apache/hive/pull/1515#discussion_r492506493



##########
File path: itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationScenariosAcrossInstances.java
##########
@@ -1604,6 +1605,122 @@ public void testRangerReplication() throws Throwable {
         .verifyResults(new String[] {"1", "2"});
   }
 
+  @Test
+  public void testHdfsNamespaceLazyCopy() throws Throwable {
+    List<String> clause = getHdfsNameserviceClause();
+    clause.add("'" + HiveConf.ConfVars.REPL_DUMP_METADATA_ONLY_FOR_EXTERNAL_TABLE.varname + "'='true'");
+    primary.run("use " + primaryDbName)
+            .run("create table  acid_table (key int, value int) partitioned by 
(load_date date) " +
+                    "clustered by(key) into 2 buckets stored as orc 
tblproperties ('transactional'='true')")
+            .run("create table table1 (i int)")
+            .run("insert into table1 values (1)")
+            .run("insert into table1 values (2)")
+            .run("create external table ext_table1 (id int)")
+            .run("insert into ext_table1 values (3)")
+            .run("insert into ext_table1 values (4)")
+            .dump(primaryDbName, clause);
+
+    try {
+      replica.load(replicatedDbName, primaryDbName, clause);
+      Assert.fail("Expected the UnknownHostException to be thrown.");
+    } catch (IllegalArgumentException ex) {
+      assertTrue(ex.getMessage().contains("java.net.UnknownHostException: nsRemote"));
+    }
+  }
+
+  @Test
+  public void testHdfsNamespaceLazyCopyIncr() throws Throwable {
+    List<String> clause = new ArrayList<>();
+    clause.add("'" + HiveConf.ConfVars.REPL_DUMP_METADATA_ONLY_FOR_EXTERNAL_TABLE.varname + "'='true'");
+    primary.run("use " + primaryDbName)
+            .run("create table  acid_table (key int, value int) partitioned by 
(load_date date) " +
+                    "clustered by(key) into 2 buckets stored as orc 
tblproperties ('transactional'='true')")
+            .run("create table table1 (i String)")
+            .run("insert into table1 values (1)")
+            .run("insert into table1 values (2)")
+            .run("create external table ext_table1 (id int)")
+            .run("insert into ext_table1 values (3)")
+            .run("insert into ext_table1 values (4)")
+            .dump(primaryDbName);
+
+    replica.load(replicatedDbName, primaryDbName, clause)
+            .run("use " + replicatedDbName)
+            .run("show tables")
+            .verifyResults(new String[] {"acid_table", "table1", "ext_table1"})
+            .run("select * from table1")
+            .verifyResults(new String[] {"1", "2"})
+            .run("select * from ext_table1")
+            .verifyResults(new String[] {"3", "4"});
+
+    clause.addAll(getHdfsNameserviceClause());
+    primary.run("use " + primaryDbName)
+            .run("insert into table1 values (5)")
+            .run("insert into ext_table1 values (6)")
+            .dump(primaryDbName, clause);
+    try {
+      replica.load(replicatedDbName, primaryDbName, clause);
+      Assert.fail("Expected the UnknownHostException to be thrown.");
+    } catch (IllegalArgumentException ex) {
+      assertTrue(ex.getMessage().contains("java.net.UnknownHostException: nsRemote"));
+    }
+  }
+
+  @Test
+  public void testHdfsNamespaceWithDataCopy() throws Throwable {

Review comment:
       nameservice






Issue Time Tracking
-------------------

    Worklog Id:     (was: 487986)
    Time Spent: 50m  (was: 40m)

> Handle _files creation for HA config with same nameservice name on source and 
> destination
> -----------------------------------------------------------------------------------------
>
>                 Key: HIVE-24187
>                 URL: https://issues.apache.org/jira/browse/HIVE-24187
>             Project: Hive
>          Issue Type: Improvement
>            Reporter: Pravin Sinha
>            Assignee: Pravin Sinha
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: HIVE-24187.01.patch
>
>          Time Spent: 50m
>  Remaining Estimate: 0h
>
> Currently, HA is supported only when the source and destination use 
> different nameservices. We need to add support for using the same 
> nameservice name on both source and destination.
> The local nameservice will be passed to the repl command as-is.
> The remote nameservice will be denoted by a placeholder name, with the 
> corresponding configs supplied for it.
> Example:
> Clusters originally configured with the same HDFS nameservice:
> src: ns1
> target: ns1
> We can denote the remote nameservice with a placeholder name, for example 
> nsRemote. This is how each command sees the nameservices with respect to 
> source and target:
> Repl Dump: src: ns1, target: nsRemote
> Repl Load: src: nsRemote, target: ns1
> Entries in the _files (for managed table data locations) will be written 
> with nsRemote instead of ns1 (the source's nameservice).
> Example: 
> hdfs://nsRemote/whLoc/dbName.db/table1:checksum:subDir:hdfs://nsRemote/cmroot
> Likewise, the list of external table data locations will also be rewritten 
> using nsRemote instead of ns1 (for the source).
> New configs can control the behavior:
> *hive.repl.ha.datapath.replace.remote.nameservice = <boolean>*
> *hive.repl.ha.datapath.replace.remote.nameservice.name = <string>*
> Based on the above configs, the nameservice replacement can be performed 
> (see the sketch below this description).
> This will also require that 'hive.repl.rootdir' is passed accordingly during 
> dump and load:
> For example:
> ||Repl Operation||Repl Command||
> |*Staging on source cluster*|
> |Repl Dump|repl dump dbName with('hive.repl.rootdir'='hdfs://ns1/stagingLoc')|
> |Repl Load|repl load dbName into dbName with('hive.repl.rootdir'='hdfs://nsRemote/stagingLoc')|
> |*Staging on target cluster*|
> |Repl Dump|repl dump dbName with('hive.repl.rootdir'='hdfs://nsRemote/stagingLoc')|
> |Repl Load|repl load dbName into dbName with('hive.repl.rootdir'='hdfs://ns1/stagingLoc')|
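
A minimal sketch (not the actual Hive implementation) of the nameservice
replacement described above, assuming the two proposed configs are read into
a boolean flag and a replacement name; the NameserviceReplacer class and its
replace method are hypothetical names used only for illustration:

    import java.net.URI;
    import java.net.URISyntaxException;

    public class NameserviceReplacer {
      // Mirrors hive.repl.ha.datapath.replace.remote.nameservice (assumed boolean)
      private final boolean replaceEnabled;
      // Mirrors hive.repl.ha.datapath.replace.remote.nameservice.name (assumed string)
      private final String remoteNameservice;

      public NameserviceReplacer(boolean replaceEnabled, String remoteNameservice) {
        this.replaceEnabled = replaceEnabled;
        this.remoteNameservice = remoteNameservice;
      }

      // Rewrites the authority (nameservice) of an HDFS data location, e.g.
      // hdfs://ns1/whLoc/dbName.db/table1 -> hdfs://nsRemote/whLoc/dbName.db/table1
      public String replace(String dataLocation) throws URISyntaxException {
        if (!replaceEnabled) {
          return dataLocation;
        }
        URI uri = new URI(dataLocation);
        return new URI(uri.getScheme(), remoteNameservice, uri.getPath(),
            uri.getQuery(), uri.getFragment()).toString();
      }
    }

With replaceEnabled=true and remoteNameservice="nsRemote", replacing
"hdfs://ns1/whLoc/dbName.db/table1" yields
"hdfs://nsRemote/whLoc/dbName.db/table1", matching the _files entry format
shown in the description.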


