steveloughran commented on code in PR #5308:
URL: https://github.com/apache/hadoop/pull/5308#discussion_r1090843134


##########
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpConstants.java:
##########
@@ -142,6 +142,13 @@ private DistCpConstants() {
       "distcp.blocks.per.chunk";
 
   public static final String CONF_LABEL_USE_ITERATOR = "distcp.use.iterator";
+
+  /** Distcp -update to use modification time of source and target file to
+   * check while skipping.
+   */
+  public static final String CONF_LABEL_UPDATE_MOD_TIME =

Review Comment:
   going to need docs.
   



##########
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/contract/AbstractContractDistCpTest.java:
##########
@@ -857,4 +862,83 @@ public void testDistCpWithUpdateExistFile() throws Exception {
     verifyFileContents(localFS, dest, block);
   }
 
+  @Test
+  public void testDistCpUpdateCheckFileSkip() throws Exception {

Review Comment:
   I'm thinking of a way to test that 0-byte files don't get copied.
   
   The testUpdateDeepDirectoryStructureNoChange() test shows how the counters are used for validation. The new test should validate that the files are skipped, as well as checking their contents.
   
   Counter validation should also verify that 0-byte files are always skipped, which is something we can't do with content validation alone.
   
   One thing to be aware of: this test suite isn't implemented for HDFS, because the way it creates a new filesystem on every call is too slow. There should also be a specific HDFS-to-local test covering this.
   
   



##########
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/contract/AbstractContractDistCpTest.java:
##########
@@ -857,4 +862,83 @@ public void testDistCpWithUpdateExistFile() throws Exception {
     verifyFileContents(localFS, dest, block);
   }
 
+  @Test
+  public void testDistCpUpdateCheckFileSkip() throws Exception {
+    describe("Distcp update to check file skips.");
+
+    Path source = new Path(remoteDir, "file");
+    Path dest = new Path(localDir, "file");
+    dest = localFS.makeQualified(dest);
+
+    // Creating a source file with certain dataset.
+    byte[] sourceBlock = dataset(10, 'a', 'z');
+
+    // Write the dataset and as well create the target path.
+    try (FSDataOutputStream out = remoteFS.create(source)) {

Review Comment:
   If you use `ContractTestUtils.writeDataset()` here, the write is followed by a check that the file is of the correct length; L882 just looks for existence.
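   
   A hedged sketch of the suggested change, assuming the test's existing `dataset()` helper and the standard `ContractTestUtils.writeDataset(fs, path, src, len, buffersize, overwrite)` overload:
   
   ```java
   // writeDataset() writes the bytes and then verifies the created file has
   // the expected length, unlike a bare remoteFS.create() + write.
   byte[] sourceBlock = dataset(10, 'a', 'z');
   ContractTestUtils.writeDataset(remoteFS, source, sourceBlock,
       sourceBlock.length, 1024, true);  // buffersize=1024, overwrite=true
   ```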



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
