aasha commented on a change in pull request #1898:
URL: https://github.com/apache/hive/pull/1898#discussion_r563482079



##########
File path: 
itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationScenariosAcrossInstances.java
##########
@@ -1707,6 +1738,50 @@ public void testHdfsNameserviceLazyCopyIncr() throws Throwable {
     }
   }
 
+  @Test
+  public void testHdfsNSLazyCopyIncrExtTbls() throws Throwable {

Review comment:
       Can both tests be combined?
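One way to act on this suggestion, sketched below with entirely hypothetical names (`CombinedLazyCopy`, `runLazyCopyScenario`, and the returned strings are illustrative stand-ins, not code from this PR): fold the shared dump/load steps into one helper driven by a flag, so a single test covers both the bootstrap and the incremental external-table cases.

```java
// Hypothetical sketch only -- not code from this PR. The idea is to replace
// two near-duplicate @Test methods with one helper parameterized by phase.
public class CombinedLazyCopy {

    // Stand-in for the shared dump-and-verify steps; the flag selects the phase.
    static String runLazyCopyScenario(boolean incremental) {
        String phase = incremental ? "incremental" : "bootstrap";
        // ... the real test would run REPL DUMP / REPL LOAD for this phase
        // and assert on the replicated external tables ...
        return phase + "-external-tables-verified";
    }

    public static void main(String[] args) {
        // One loop covers what the two separate @Test methods did.
        for (boolean incremental : new boolean[] {false, true}) {
            System.out.println(runLazyCopyScenario(incremental));
        }
    }
}
```

In JUnit this would typically become a parameterized test rather than a loop in `main`; the sketch only shows the shared-helper structure.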

##########
File path: 
itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationScenariosAcrossInstances.java
##########
@@ -2122,4 +2221,21 @@ private void setupUDFJarOnHDFS(Path identityUdfLocalPath, Path identityUdfHdfsPa
             + NS_REMOTE + "'");
     return withClause;
   }
+
+  /*
+   * Method used from TestReplicationScenariosExclusiveReplica
+   */
+  private void assertExternalFileInfo(List<String> expected, String dumplocation, boolean isIncremental,
+                                      WarehouseInstance warehouseInstance)
+          throws IOException {
+    Path hivePath = new Path(dumplocation, ReplUtils.REPL_HIVE_BASE_DIR);
+    Path metadataPath = new Path(hivePath, EximUtil.METADATA_PATH_NAME);
+    Path externalTableInfoFile;
+    if (isIncremental) {
+      externalTableInfoFile = new Path(hivePath, FILE_NAME);

Review comment:
       Is this the actual file used by the code, or the deprecated one?

##########
File path: 
itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationScenariosAcrossInstances.java
##########
@@ -1668,6 +1671,34 @@ public void testHdfsNameserviceLazyCopy() throws Throwable {
     }
   }
 
+  @Test
+  public void testHdfsNSLazyCopyBootStrapExtTbls() throws Throwable {
+    List<String> clause = getHdfsNameserviceClause();
+    clause.add("'" + HiveConf.ConfVars.REPL_DUMP_METADATA_ONLY_FOR_EXTERNAL_TABLE.varname + "'='false'");
+    clause.add("'" + HiveConf.ConfVars.REPL_INCLUDE_EXTERNAL_TABLES.varname + "'='true'");

Review comment:
       This is true by default.

##########
File path: 
ql/src/java/org/apache/hadoop/hive/ql/exec/repl/ReplExternalTables.java
##########
@@ -183,6 +184,22 @@ private void dirLocationToCopy(FileList fileList, Path sourcePath, HiveConf conf
             throws HiveException {
       Path basePath = getExternalTableBaseDir(conf);
       Path targetPath = externalTableDataPath(conf, basePath, sourcePath);
+      //Here, when src and target are HA clusters with same NS, then sourcePath would have the correct host
+      //whereas the targetPath would have an host that refers to the target cluster. This is fine for
+      //data-copy running during dump as the correct logical locations would be used. But if data-copy runs during
+      //load, then the remote location needs to point to the src cluster from where the data would be copied and
+      //the common original NS would suffice for targetPath.
+      if (hiveConf.getBoolVar(HiveConf.ConfVars.REPL_HA_DATAPATH_REPLACE_REMOTE_NAMESERVICE) &&
+              hiveConf.getBoolVar(HiveConf.ConfVars.REPL_RUN_DATA_COPY_TASKS_ON_TARGET)) {
+        String remoteNS = hiveConf.get(HiveConf.ConfVars.REPL_HA_DATAPATH_REPLACE_REMOTE_NAMESERVICE_NAME.varname);
+        if (StringUtils.isEmpty(remoteNS)) {
+          throw new SemanticException(ErrorMsg.REPL_INVALID_CONFIG_FOR_SERVICE

Review comment:
       Should this be a non-recoverable error?
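To make the nameservice replacement described in the quoted comment concrete, here is a rough sketch using plain `java.net.URI` instead of Hadoop's `Path` (the class name `NameserviceRewrite`, the method `replaceNameservice`, and the example paths are hypothetical, not the PR's code): when data-copy tasks run on the target cluster, the target path's nameservice has to be swapped for the remote, source-side nameservice so the copy reads from the source cluster.

```java
import java.net.URI;
import java.net.URISyntaxException;

// Hypothetical sketch, not Hive's implementation: swap the URI authority
// (the HDFS nameservice) of a path for the configured remote nameservice.
public class NameserviceRewrite {

    static String replaceNameservice(String path, String remoteNs) throws URISyntaxException {
        URI uri = new URI(path);
        // Rebuild the URI with the remote nameservice as the authority;
        // scheme and path components are kept as-is.
        return new URI(uri.getScheme(), remoteNs, uri.getPath(), null, null).toString();
    }

    public static void main(String[] args) throws URISyntaxException {
        System.out.println(replaceNameservice("hdfs://targetNs/warehouse/ext/t1", "remoteNs"));
        // prints hdfs://remoteNs/warehouse/ext/t1
    }
}
```

The guarded config lookup in the diff then makes sense: without a configured remote nameservice name there is nothing valid to rewrite the authority to, hence the `SemanticException`.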




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]



---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
