[GitHub] [hadoop] mukund-thakur opened a new pull request #2187: HADOOP-17167 Skipping ITestS3AEncryptionWithDefaultS3Settings.testEncryptionOverRename

2020-08-03 Thread GitBox


mukund-thakur opened a new pull request #2187:
URL: https://github.com/apache/hadoop/pull/2187


   Tested by running this test only against an ap-south-1 bucket configured 
with all three encryption algorithms.
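
   For context, here is a minimal sketch (not necessarily the actual patch) of 
how an S3A integration test case is usually skipped: the subclass overrides the 
inherited test and calls ContractTestUtils.skip(), which raises an assumption 
violation so the runner reports the case as skipped rather than failed.

   ```java
   // Sketch only; the real HADOOP-17167 change may differ in message and scope.
   import static org.apache.hadoop.fs.contract.ContractTestUtils.skip;

   public class ITestS3AEncryptionWithDefaultS3Settings extends ITestS3AEncryption {
     @Override
     public void testEncryptionOverRename() throws Throwable {
       // skip() throws AssumptionViolatedException -> reported as skipped
       skip("skipping; see HADOOP-17167");
     }
   }
   ```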






[GitHub] [hadoop] mukund-thakur commented on pull request #2187: HADOOP-17167 Skipping ITestS3AEncryptionWithDefaultS3Settings.testEncryptionOverRename

2020-08-03 Thread GitBox


mukund-thakur commented on pull request #2187:
URL: https://github.com/apache/hadoop/pull/2187#issuecomment-668124897


   CC @steveloughran 






[GitHub] [hadoop] szetszwo commented on a change in pull request #2175: HDFS-15497. Make snapshot limit on global as well per snapshot root directory configurable

2020-08-03 Thread GitBox


szetszwo commented on a change in pull request #2175:
URL: https://github.com/apache/hadoop/pull/2175#discussion_r464571396



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotManager.java
##
@@ -99,13 +99,15 @@
 
   private boolean allowNestedSnapshots = false;
   private int snapshotCounter = 0;
+  private final int maxSnapshotLimitPerDirectory;

Review comment:
   Let's keep using maxSnapshotLimit for the per-directory limit and add a new 
variable for the new filesystem limit, so that it stays consistent with the conf.

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
##
@@ -498,9 +498,14 @@
   public static final int
   DFS_NAMENODE_SNAPSHOT_DIFF_LISTING_LIMIT_DEFAULT = 1000;
 
-  public static final String DFS_NAMENODE_SNAPSHOT_MAX_LIMIT =
-  "dfs.namenode.snapshot.max.limit";
+  public static final String
+  DFS_NAMENODE_SNAPSHOT_MAX_LIMIT = "dfs.namenode.snapshot.max.limit";

Review comment:
   Yes, we cannot change config names.
   
   BTW, please revert the whitespace change so that it will be easier to 
backport.

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
##
@@ -498,9 +498,14 @@
   public static final int
   DFS_NAMENODE_SNAPSHOT_DIFF_LISTING_LIMIT_DEFAULT = 1000;
 
-  public static final String DFS_NAMENODE_SNAPSHOT_MAX_LIMIT =
-  "dfs.namenode.snapshot.max.limit";
+  public static final String
+  DFS_NAMENODE_SNAPSHOT_MAX_LIMIT = "dfs.namenode.snapshot.max.limit";
   public static final int DFS_NAMENODE_SNAPSHOT_MAX_LIMIT_DEFAULT = 65536;
+  public static final String
+  DFS_NAMENODE_SNAPSHOT_GLOBAL_LIMIT = 
"dfs.namenode.snapshot.global.limit";

Review comment:
   Let's call this filesystem.limit? Since there are deployments spanning 
multiple datacenters, "global" may be confusing.








[GitHub] [hadoop] bshashikant commented on a change in pull request #2175: HDFS-15497. Make snapshot limit on global as well per snapshot root directory configurable

2020-08-03 Thread GitBox


bshashikant commented on a change in pull request #2175:
URL: https://github.com/apache/hadoop/pull/2175#discussion_r464581693



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
##
@@ -498,9 +498,14 @@
   public static final int
   DFS_NAMENODE_SNAPSHOT_DIFF_LISTING_LIMIT_DEFAULT = 1000;
 
-  public static final String DFS_NAMENODE_SNAPSHOT_MAX_LIMIT =
-  "dfs.namenode.snapshot.max.limit";
+  public static final String
+  DFS_NAMENODE_SNAPSHOT_MAX_LIMIT = "dfs.namenode.snapshot.max.limit";

Review comment:
   sure

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotManager.java
##
@@ -99,13 +99,15 @@
 
   private boolean allowNestedSnapshots = false;
   private int snapshotCounter = 0;
+  private final int maxSnapshotLimitPerDirectory;

Review comment:
   sure








[GitHub] [hadoop] aajisaka commented on pull request #2177: YARN-10366. Fix Yarn rmadmin Markdown document

2020-08-03 Thread GitBox


aajisaka commented on pull request #2177:
URL: https://github.com/apache/hadoop/pull/2177#issuecomment-667922145


   Hi @kevinzhao1661, would you file a new issue in the ASF JIRA?
   
   > Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute






[GitHub] [hadoop] swamirishi commented on a change in pull request #2133: HADOOP-17122: Preserving Directory Attributes in DistCp with Atomic Copy

2020-08-03 Thread GitBox


swamirishi commented on a change in pull request #2133:
URL: https://github.com/apache/hadoop/pull/2133#discussion_r464329525



##
File path: 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/mapred/TestCopyCommitter.java
##
@@ -160,10 +160,10 @@ public void testPreserveStatus() throws IOException {
   context.setTargetPathExists(false);
 
   CopyListing listing = new GlobbedCopyListing(conf, CREDENTIALS);
-  Path listingFile = new Path("/tmp1/" + String.valueOf(rand.nextLong()));
+  Path listingFile = new Path("/tmp1/" + rand.nextLong());
   listing.buildListing(listingFile, context);
 
-  conf.set(DistCpConstants.CONF_LABEL_TARGET_WORK_PATH, targetBase);
+  conf.set(DistCpConstants.CONF_LABEL_TARGET_FINAL_PATH, targetBase);

Review comment:
   This test case covers neither the atomic nor the non-atomic copy path. You 
are right that the final path and the work path would be the same for a 
non-atomic copy. But, as you can see above, we are setting up the environment 
to test the preserve-status functionality, not performing a copy, and that 
depends solely on the configured final path, not the work path. This is the 
bug I raised, and I fixed the affected test case along with it.
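
   To make the distinction concrete, a minimal sketch (the DistCpConstants keys 
are the real ones quoted above; the helper itself is illustrative):

   ```java
   import org.apache.hadoop.conf.Configuration;
   import org.apache.hadoop.fs.Path;
   import org.apache.hadoop.tools.DistCpConstants;

   // With -atomic, DistCp copies into a temporary work path and renames it to
   // the final path on commit; preserving directory attributes runs after that
   // commit, so it must resolve against the final path. Without -atomic the two
   // keys hold the same value, which is why the old test still passed.
   class PreserveTargetSketch {
     static Path attributeRoot(Configuration conf) {
       return new Path(conf.get(DistCpConstants.CONF_LABEL_TARGET_FINAL_PATH));
     }
   }
   ```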








[jira] [Commented] (HADOOP-17122) Bug in preserving Directory Attributes in DistCp with Atomic Copy

2020-08-03 Thread Swaminathan Balachandran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17169892#comment-17169892
 ] 

Swaminathan Balachandran commented on HADOOP-17122:
---

[~mukund-thakur] I have replied on the git thread. Kindly look into this.

> Bug in preserving Directory Attributes in DistCp with Atomic Copy
> -
>
> Key: HADOOP-17122
> URL: https://issues.apache.org/jira/browse/HADOOP-17122
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.1.2, 3.2.1
>Reporter: Swaminathan Balachandran
>Priority: Major
> Attachments: HADOOP-17122.001.patch, Screenshot 2020-07-11 at 
> 10.26.30 AM.png
>
>
> Description:
> In the case of an atomic copy, the copied data is committed, and after that 
> the preserve-directory-attributes step runs. Preserving directory attributes 
> is done over the work path and not the final path. I have fixed the base 
> directory to point to the final path.






[GitHub] [hadoop] swamirishi commented on a change in pull request #2133: HADOOP-17122: Preserving Directory Attributes in DistCp with Atomic Copy

2020-08-03 Thread GitBox


swamirishi commented on a change in pull request #2133:
URL: https://github.com/apache/hadoop/pull/2133#discussion_r464388213



##
File path: 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/mapred/TestCopyCommitter.java
##
@@ -160,10 +160,10 @@ public void testPreserveStatus() throws IOException {
   context.setTargetPathExists(false);
 
   CopyListing listing = new GlobbedCopyListing(conf, CREDENTIALS);
-  Path listingFile = new Path("/tmp1/" + String.valueOf(rand.nextLong()));
+  Path listingFile = new Path("/tmp1/" + rand.nextLong());
   listing.buildListing(listingFile, context);
 
-  conf.set(DistCpConstants.CONF_LABEL_TARGET_WORK_PATH, targetBase);
+  conf.set(DistCpConstants.CONF_LABEL_TARGET_FINAL_PATH, targetBase);

Review comment:
   @mukund-thakur  Can this code be merged? Or is there anything I have to 
do from my end?








[GitHub] [hadoop] aajisaka commented on a change in pull request #2177: YARN-10366. Fix Yarn rmadmin Markdown document

2020-08-03 Thread GitBox


aajisaka commented on a change in pull request #2177:
URL: https://github.com/apache/hadoop/pull/2177#discussion_r464302520



##
File path: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/YarnCommands.md
##
@@ -238,7 +238,7 @@ Usage:
 | -getGroups [username] | Get groups the specified user belongs to. |
 | -addToClusterNodeLabels 
<"label1(exclusive=true),label2(exclusive=false),label3"> | Add to cluster node 
labels. Default exclusivity is true. |
 | -removeFromClusterNodeLabels  (label splitted by ",") 
| Remove from cluster node labels. |
-| -replaceLabelsOnNode <"node1[:port]=label1,label2 
node2[:port]=label1,label2"> [-failOnUnknownNodes]| Replace labels on nodes 
(please note that we do not support specifying multiple labels on a single host 
for now.) -failOnUnknownNodes is optional, when we set this option, it will 
fail if specified nodes are unknown.|
+| -replaceLabelsOnNode <"node1[:port]=label1 node2[:port]=label2"> 
[-failOnUnknownNodes]| Replace labels on nodes (please note that we do not 
support specifying multiple labels on a single host for now.) 
-failOnUnknownNodes is optional, when we set this option, it will fail if 
specified nodes are unknown.|

Review comment:
   Would you remove `(please note that we do not support specifying 
multiple labels on a single host for now.)` ?








[GitHub] [hadoop] mukund-thakur commented on a change in pull request #2133: HADOOP-17122: Preserving Directory Attributes in DistCp with Atomic Copy

2020-08-03 Thread GitBox


mukund-thakur commented on a change in pull request #2133:
URL: https://github.com/apache/hadoop/pull/2133#discussion_r464386889



##
File path: 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/mapred/TestCopyCommitter.java
##
@@ -160,10 +160,10 @@ public void testPreserveStatus() throws IOException {
   context.setTargetPathExists(false);
 
   CopyListing listing = new GlobbedCopyListing(conf, CREDENTIALS);
-  Path listingFile = new Path("/tmp1/" + String.valueOf(rand.nextLong()));
+  Path listingFile = new Path("/tmp1/" + rand.nextLong());
   listing.buildListing(listingFile, context);
 
-  conf.set(DistCpConstants.CONF_LABEL_TARGET_WORK_PATH, targetBase);
+  conf.set(DistCpConstants.CONF_LABEL_TARGET_FINAL_PATH, targetBase);

Review comment:
   Thanks, got it. The code was wrong, and so was the test case. Thanks 
for fixing.








[GitHub] [hadoop] mukund-thakur commented on a change in pull request #2133: HADOOP-17122: Preserving Directory Attributes in DistCp with Atomic Copy

2020-08-03 Thread GitBox


mukund-thakur commented on a change in pull request #2133:
URL: https://github.com/apache/hadoop/pull/2133#discussion_r464288595



##
File path: 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/mapred/TestCopyCommitter.java
##
@@ -160,10 +160,10 @@ public void testPreserveStatus() throws IOException {
   context.setTargetPathExists(false);
 
   CopyListing listing = new GlobbedCopyListing(conf, CREDENTIALS);
-  Path listingFile = new Path("/tmp1/" + String.valueOf(rand.nextLong()));
+  Path listingFile = new Path("/tmp1/" + rand.nextLong());
   listing.buildListing(listingFile, context);
 
-  conf.set(DistCpConstants.CONF_LABEL_TARGET_WORK_PATH, targetBase);
+  conf.set(DistCpConstants.CONF_LABEL_TARGET_FINAL_PATH, targetBase);

Review comment:
   Why did we change the path here? Assuming this test is for a non-atomic 
copy, the values of CONF_LABEL_TARGET_FINAL_PATH and CONF_LABEL_TARGET_WORK_PATH 
will be the same.








[GitHub] [hadoop] bshashikant commented on a change in pull request #2175: HDFS-15497. Make snapshot limit on global as well per snapshot root directory configurable

2020-08-03 Thread GitBox


bshashikant commented on a change in pull request #2175:
URL: https://github.com/apache/hadoop/pull/2175#discussion_r464420469



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotManager.java
##
@@ -117,23 +119,37 @@ public SnapshotManager(final Configuration conf, final 
FSDirectory fsdir) {
 DFSConfigKeys.DFS_NAMENODE_SNAPSHOT_DIFF_ALLOW_SNAP_ROOT_DESCENDANT,
 DFSConfigKeys.
 DFS_NAMENODE_SNAPSHOT_DIFF_ALLOW_SNAP_ROOT_DESCENDANT_DEFAULT);
+this.maxSnapshotLimitPerDirectory = conf.getInt(
+DFSConfigKeys.
+DFS_NAMENODE_SNAPSHOT_MAX_LIMIT,
+DFSConfigKeys.
+DFS_NAMENODE_SNAPSHOT_MAX_LIMIT_DEFAULT);
 this.maxSnapshotLimit = conf.getInt(
-DFSConfigKeys.DFS_NAMENODE_SNAPSHOT_MAX_LIMIT,
-DFSConfigKeys.DFS_NAMENODE_SNAPSHOT_MAX_LIMIT_DEFAULT);
+DFSConfigKeys.DFS_NAMENODE_SNAPSHOT_GLOBAL_LIMIT,
+DFSConfigKeys.DFS_NAMENODE_SNAPSHOT_GLOBAL_LIMIT_DEFAULT);
 LOG.info("Loaded config captureOpenFiles: " + captureOpenFiles
 + ", skipCaptureAccessTimeOnlyChange: "
 + skipCaptureAccessTimeOnlyChange
 + ", snapshotDiffAllowSnapRootDescendant: "
 + snapshotDiffAllowSnapRootDescendant
 + ", maxSnapshotLimit: "
-+ maxSnapshotLimit);
++ maxSnapshotLimit

Review comment:
   maxSnapshotLimitPerDirectory is already printed in the line below. I 
hope this works.
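
   That is, the resulting log statement covers both limits, roughly as follows 
(the quoted diff is truncated here, so this is a sketch rather than the 
verbatim patch):

   ```java
   LOG.info("Loaded config captureOpenFiles: " + captureOpenFiles
       + ", skipCaptureAccessTimeOnlyChange: " + skipCaptureAccessTimeOnlyChange
       + ", snapshotDiffAllowSnapRootDescendant: "
       + snapshotDiffAllowSnapRootDescendant
       + ", maxSnapshotLimit: " + maxSnapshotLimit
       + ", maxSnapshotLimitPerDirectory: " + maxSnapshotLimitPerDirectory);
   ```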








[GitHub] [hadoop] hadoop-yetus commented on pull request #2185: HADOOP-15891. provide Regex Based Mount Point In Inode Tree

2020-08-03 Thread GitBox


hadoop-yetus commented on pull request #2185:
URL: https://github.com/apache/hadoop/pull/2185#issuecomment-667912969


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 31s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
5 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m  2s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  19m 17s |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 27s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  16m 39s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   2m 45s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 57s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m  3s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 14s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m  9s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   3m  8s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   5m 18s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 58s |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 40s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  18m 40s |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 37s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |  16m 37s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 45s |  root: The patch generated 4 new 
+ 92 unchanged - 1 fixed = 96 total (was 93)  |
   | +1 :green_heart: |  mvnsite  |   2m 56s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 30s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  9s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m  5s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  findbugs  |   5m 55s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 26s |  hadoop-common in the patch passed. 
 |
   | -1 :x: |  unit  |  95m 25s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m  1s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 266m 55s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics |
   |   | hadoop.fs.viewfs.TestViewFileSystemLinkRegex |
   |   | hadoop.hdfs.TestMultipleNNPortQOP |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
   |   | hadoop.fs.contract.hdfs.TestHDFSContractMultipartUploader |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2185/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2185 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux f1c0e7c98309 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / c40cbc57fa2 |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 
|
   | checkstyle | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2185/1/artifact/out/diff-checkstyle-root.txt
 |
   | unit | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2185/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2185/1/testReport/ |
   | 




[GitHub] [hadoop] kevinzhao1661 commented on pull request #2177: YARN-10383. Fix Yarn rmadmin Markdown document

2020-08-03 Thread GitBox


kevinzhao1661 commented on pull request #2177:
URL: https://github.com/apache/hadoop/pull/2177#issuecomment-667949459


   @aajisaka 
   A new issue for this PR: YARN-10383.






[GitHub] [hadoop] zhouyuan commented on pull request #1947: HDFS-14950. fix missing libhdfspp lib in dist-package

2020-08-03 Thread GitBox


zhouyuan commented on pull request #1947:
URL: https://github.com/apache/hadoop/pull/1947#issuecomment-667989252


   @aajisaka thanks for the help!






[GitHub] [hadoop] mukund-thakur commented on pull request #2133: HADOOP-17122: Preserving Directory Attributes in DistCp with Atomic Copy

2020-08-03 Thread GitBox


mukund-thakur commented on pull request #2133:
URL: https://github.com/apache/hadoop/pull/2133#issuecomment-668043087


   Looks good from my side, though there is one checkstyle issue; please 
address that.
   @steveloughran will be able to merge, as he is the only one with merge 
access.






[GitHub] [hadoop] hangc0276 commented on a change in pull request #2182: HDFS-15505. Fix NullPointerException when call getAdditionalDatanode method with null extendedBlock parameter

2020-08-03 Thread GitBox


hangc0276 commented on a change in pull request #2182:
URL: https://github.com/apache/hadoop/pull/2182#discussion_r464437130



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
##
@@ -545,16 +545,22 @@ public LocatedBlock getAdditionalDatanode(String src, 
long fileId,
 .newBuilder()
 .setSrc(src)
 .setFileId(fileId)
-.setBlk(PBHelperClient.convert(blk))

Review comment:
   Thanks for your feedback, I will add a test soon.
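
   The gist of the change under review, sketched (the quoted diff is truncated; 
the guard below shows the general idea, not the verbatim patch):

   ```java
   // blk (the ExtendedBlock) may legitimately be null here; unconditionally
   // calling PBHelperClient.convert(blk) is what triggered the
   // NullPointerException, so the optional proto field is set only when a
   // block is actually present.
   GetAdditionalDatanodeRequestProto.Builder builder =
       GetAdditionalDatanodeRequestProto.newBuilder()
           .setSrc(src)
           .setFileId(fileId);
   if (blk != null) {
     builder.setBlk(PBHelperClient.convert(blk));
   }
   ```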








[jira] [Commented] (HADOOP-17122) Bug in preserving Directory Attributes in DistCp with Atomic Copy

2020-08-03 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17169869#comment-17169869
 ] 

Hadoop QA commented on HADOOP-17122:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
45s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 34s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  0m 
47s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 17s{color} | {color:orange} hadoop-tools/hadoop-distcp: The patch generated 
1 new + 42 unchanged - 1 fixed = 43 total (was 43) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 21s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m 
13s{color} | {color:green} hadoop-distcp in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | 

[GitHub] [hadoop] aajisaka commented on pull request #2177: YARN-10366. Fix Yarn rmadmin Markdown document

2020-08-03 Thread GitBox


aajisaka commented on pull request #2177:
URL: https://github.com/apache/hadoop/pull/2177#issuecomment-667930221


   >  do I need to mention a new jira?
   Yes






[GitHub] [hadoop] aajisaka edited a comment on pull request #2177: YARN-10366. Fix Yarn rmadmin Markdown document

2020-08-03 Thread GitBox


aajisaka edited a comment on pull request #2177:
URL: https://github.com/apache/hadoop/pull/2177#issuecomment-667930221


   >  do I need to mention a new jira?
   
   Yes






[GitHub] [hadoop] kevinzhao1661 commented on pull request #2177: YARN-10366. Fix Yarn rmadmin Markdown document

2020-08-03 Thread GitBox


kevinzhao1661 commented on pull request #2177:
URL: https://github.com/apache/hadoop/pull/2177#issuecomment-667929028


   @aajisaka 
   This change is a supplement to YARN-10366; do I need to file a new JIRA?






[jira] [Commented] (HADOOP-17181) ITestS3AContractUnbuffer failure -stream.read didn't return all data

2020-08-03 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17170351#comment-17170351
 ] 

Steve Loughran commented on HADOOP-17181:
-

{code}

[ERROR] 
testUnbufferOnClosedFile(org.apache.hadoop.fs.contract.s3a.ITestS3AContractUnbuffer)
  Time elapsed: 1.23 s  <<< FAILURE!
java.lang.AssertionError: failed to read expected number of bytes from stream 
expected:<1024> but was:<433>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at 
org.apache.hadoop.fs.contract.AbstractContractUnbufferTest.validateFileContents(AbstractContractUnbufferTest.java:139)
at 
org.apache.hadoop.fs.contract.AbstractContractUnbufferTest.validateFullFileContents(AbstractContractUnbufferTest.java:132)
at 
org.apache.hadoop.fs.contract.AbstractContractUnbufferTest.testUnbufferOnClosedFile(AbstractContractUnbufferTest.java:83)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)

{code}


> ITestS3AContractUnbuffer failure -stream.read didn't return all data
> 
>
> Key: HADOOP-17181
> URL: https://issues.apache.org/jira/browse/HADOOP-17181
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Priority: Minor
>
> Seen 2x recently, failure in ITestS3AContractUnbuffer as not enough data came 
> back in the read. 
> The contract test assumes that stream.read() will return everything, but it 
> could be some buffering problem. Proposed: switch to ReadFully to see if it 
> is a quirk of the read/get or is something actually wrong with the production 
> code.
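
Sketched, the proposal amounts to the following (a hedged illustration of the 
read-vs-readFully distinction, not the exact patch):

{code}
// InputStream.read() may return fewer bytes than requested (here 433 of 1024)
// without that being an error, so asserting on its return value is brittle.
// IOUtils.readFully loops until the buffer is filled and throws EOFException
// if the stream really is short of data.
byte[] buffer = new byte[1024];
// brittle: int read = stream.read(buffer); assertEquals(1024, read);
org.apache.hadoop.io.IOUtils.readFully(stream, buffer, 0, buffer.length);
{code}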






[jira] [Created] (HADOOP-17181) ITestS3AContractUnbuffer failure -stream.read didn't return all data

2020-08-03 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-17181:
---

 Summary: ITestS3AContractUnbuffer failure -stream.read didn't 
return all data
 Key: HADOOP-17181
 URL: https://issues.apache.org/jira/browse/HADOOP-17181
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.3.0
Reporter: Steve Loughran


Seen 2x recently, failure in ITestS3AContractUnbuffer as not enough data came 
back in the read. 

The contract test assumes that stream.read() will return everything, but it 
could be some buffering problem. Proposed: switch to ReadFully to see if it is 
a quirk of the read/get or is something actually wrong with the production code.






[jira] [Updated] (HADOOP-15891) Provide Regex Based Mount Point In Inode Tree

2020-08-03 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-15891:
---
Component/s: viewfs  (was: fs)

> Provide Regex Based Mount Point In Inode Tree
> -
>
> Key: HADOOP-15891
> URL: https://issues.apache.org/jira/browse/HADOOP-15891
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: viewfs
>Reporter: zhenzhao wang
>Assignee: zhenzhao wang
>Priority: Major
> Attachments: HADOOP-15891.015.patch, HDFS-13948.001.patch, 
> HDFS-13948.002.patch, HDFS-13948.003.patch, HDFS-13948.004.patch, 
> HDFS-13948.005.patch, HDFS-13948.006.patch, HDFS-13948.007.patch, 
> HDFS-13948.008.patch, HDFS-13948.009.patch, HDFS-13948.011.patch, 
> HDFS-13948.012.patch, HDFS-13948.013.patch, HDFS-13948.014.patch, HDFS-13948_ 
> Regex Link Type In Mont Table-V0.pdf, HDFS-13948_ Regex Link Type In Mount 
> Table-v1.pdf
>
>
> This jira is created to support regex-based mount points in the Inode Tree. 
> We noticed that mount points only support fixed target paths. However, we 
> might have use cases where the target needs to reference fields from the 
> source. E.g. we might want a mapping of /cluster1/user1 => 
> /cluster1-dc1/user-nn-user1, i.e. reference the `cluster` and `user` fields 
> in the source to construct the target. It's impossible to achieve this with 
> the current link types. Though we could set up one-to-one mappings, the mount 
> table would become bloated if we have thousands of users. Besides, a regex 
> mapping gives us more flexibility. So we are going to build a regex-based 
> mount point whose target can reference capture groups from the source regex 
> mapping.
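
As a purely illustrative sketch of the mapping idea above (plain 
java.util.regex; this is not the feature's actual mount-table syntax):

{code}
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexMountSketch {
  public static void main(String[] args) {
    // capture `cluster` and `user` from the source path, substitute into target
    Pattern src = Pattern.compile("^/(?<cluster>\\w+)/(?<user>\\w+)$");
    Matcher m = src.matcher("/cluster1/user1");
    if (m.matches()) {
      // prints /cluster1-dc1/user-nn-user1
      System.out.println(m.replaceFirst("/${cluster}-dc1/user-nn-${user}"));
    }
  }
}
{code}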






[GitHub] [hadoop] steveloughran commented on pull request #2148: HADOOP-17131 Moving listing to use operation callback

2020-08-03 Thread GitBox


steveloughran commented on pull request #2148:
URL: https://github.com/apache/hadoop/pull/2148#issuecomment-668216731


   Yetus doesn't seem to be live right now, so we don't have a final check. But 
some of its earlier comments are still there by the look of things, especially
   
   ```
   Lines that start with ? in the ASF License  report indicate files that 
do not have an Apache license header:
!? 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2148/src/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/ListingOperationCallbacks.java
   ```
   Can you fix that? All the checkstyles seem to have been dealt with.
   
   +1 pending the license
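
   For reference, the missing header is the standard ASF license header that 
every Java source file in the repo carries:

   ```java
   /*
    * Licensed to the Apache Software Foundation (ASF) under one
    * or more contributor license agreements.  See the NOTICE file
    * distributed with this work for additional information
    * regarding copyright ownership.  The ASF licenses this file
    * to you under the Apache License, Version 2.0 (the
    * "License"); you may not use this file except in compliance
    * with the License.  You may obtain a copy of the License at
    *
    *     http://www.apache.org/licenses/LICENSE-2.0
    *
    * Unless required by applicable law or agreed to in writing, software
    * distributed under the License is distributed on an "AS IS" BASIS,
    * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
    * implied.  See the License for the specific language governing
    * permissions and limitations under the License.
    */
   ```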






[GitHub] [hadoop] liuml07 commented on pull request #2185: HADOOP-15891. provide Regex Based Mount Point In Inode Tree

2020-08-03 Thread GitBox


liuml07 commented on pull request #2185:
URL: https://github.com/apache/hadoop/pull/2185#issuecomment-668274867


   This seems an interesting feature. Do we need to update the `ViewFs.md` user 
guide?






[GitHub] [hadoop] bshashikant commented on a change in pull request #2175: HDFS-15497. Make snapshot limit on global as well per snapshot root directory configurable

2020-08-03 Thread GitBox


bshashikant commented on a change in pull request #2175:
URL: https://github.com/apache/hadoop/pull/2175#discussion_r464587720



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotManager.java
##
@@ -99,13 +99,15 @@
 
   private boolean allowNestedSnapshots = false;
   private int snapshotCounter = 0;
+  private final int maxSnapshotLimitPerDirectory;

Review comment:
   sure

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
##
@@ -498,9 +498,14 @@
   public static final int
   DFS_NAMENODE_SNAPSHOT_DIFF_LISTING_LIMIT_DEFAULT = 1000;
 
-  public static final String DFS_NAMENODE_SNAPSHOT_MAX_LIMIT =
-  "dfs.namenode.snapshot.max.limit";
+  public static final String
+  DFS_NAMENODE_SNAPSHOT_MAX_LIMIT = "dfs.namenode.snapshot.max.limit";
   public static final int DFS_NAMENODE_SNAPSHOT_MAX_LIMIT_DEFAULT = 65536;
+  public static final String
+  DFS_NAMENODE_SNAPSHOT_GLOBAL_LIMIT = 
"dfs.namenode.snapshot.global.limit";

Review comment:
   done








[GitHub] [hadoop] JohnZZGithub commented on pull request #2185: HADOOP-15891. provide Regex Based Mount Point In Inode Tree

2020-08-03 Thread GitBox


JohnZZGithub commented on pull request #2185:
URL: https://github.com/apache/hadoop/pull/2185#issuecomment-668288316


   @liuml07  Thanks, this is a feature adopted inside our company for almost 
two years. The code is almost the same as our internal branch, except that I 
removed some refactored code to make it easier to review. It seems the rebase 
caused some UT failures. Let me fix the UTs and update the user guide.






[GitHub] [hadoop] JohnZZGithub commented on pull request #2185: HADOOP-15891. provide Regex Based Mount Point In Inode Tree

2020-08-03 Thread GitBox


JohnZZGithub commented on pull request #2185:
URL: https://github.com/apache/hadoop/pull/2185#issuecomment-668314560


   @templedf  It would be great if you could help with the review, thanks.






[GitHub] [hadoop] huangtianhua opened a new pull request #2189: HDFS-15025. Applying NVDIMM storage media to HDFS

2020-08-03 Thread GitBox


huangtianhua opened a new pull request #2189:
URL: https://github.com/apache/hadoop/pull/2189


   Non-volatile NVDIMM memory is faster than SSD and
   can be used alongside RAM, DISK, and SSD.
   Storing HDFS data directly on NVDIMM yields a
   better response rate and reliability.
   






[GitHub] [hadoop] liuml07 opened a new pull request #2188: HDFS-15499. Clean up httpfs/pom.xml to remove aws-java-sdk-s3 exclusion

2020-08-03 Thread GitBox


liuml07 opened a new pull request #2188:
URL: https://github.com/apache/hadoop/pull/2188


   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   






[GitHub] [hadoop] aajisaka merged pull request #2177: YARN-10383. Fix Yarn rmadmin Markdown document

2020-08-03 Thread GitBox


aajisaka merged pull request #2177:
URL: https://github.com/apache/hadoop/pull/2177


   






[jira] [Updated] (HADOOP-14056) Update maven-javadoc-plugin to 2.10.4

2020-08-03 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-14056:
---

Cherry-picked to branch-2.10 to support the javadoc-no-fork goal, which is used 
in precommit jobs after HADOOP-17091.

> Update maven-javadoc-plugin to 2.10.4
> -
>
> Key: HADOOP-14056
> URL: https://issues.apache.org/jira/browse/HADOOP-14056
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.0.0-alpha4
>
> Attachments: HADOOP-14056.01.patch
>
>
> I'm seeing the following warning in OpenJDK 9.
> {noformat}
> [INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-minikdc 
> ---
> [WARNING] Unable to find the javadoc version: Unrecognized version of 
> Javadoc: 'java version "9-ea"
> Java(TM) SE Runtime Environment (build 9-ea+154)
> Java HotSpot(TM) 64-Bit Server VM (build 9-ea+154, mixed mode)
> ' near index 37
> (?s).*?([0-9]+\.[0-9]+)(\.([0-9]+))?.*
>  ^
> [WARNING] Using the Java the version instead of, i.e. 0.0
> {noformat}
> Need to update this to 2.10.4. (MJAVADOC-441)






[jira] [Updated] (HADOOP-14056) Update maven-javadoc-plugin to 2.10.4

2020-08-03 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-14056:
---
Fix Version/s: 2.10.1

> Update maven-javadoc-plugin to 2.10.4
> -
>
> Key: HADOOP-14056
> URL: https://issues.apache.org/jira/browse/HADOOP-14056
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.0.0-alpha4, 2.10.1
>
> Attachments: HADOOP-14056.01.patch
>
>
> I'm seeing the following warning in OpenJDK 9.
> {noformat}
> [INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-minikdc 
> ---
> [WARNING] Unable to find the javadoc version: Unrecognized version of 
> Javadoc: 'java version "9-ea"
> Java(TM) SE Runtime Environment (build 9-ea+154)
> Java HotSpot(TM) 64-Bit Server VM (build 9-ea+154, mixed mode)
> ' near index 37
> (?s).*?([0-9]+\.[0-9]+)(\.([0-9]+))?.*
>  ^
> [WARNING] Using the Java the version instead of, i.e. 0.0
> {noformat}
> Need to update this to 2.10.4. (MJAVADOC-441)






[GitHub] [hadoop] huangtianhua closed pull request #2109: HDFS-15025. Applying NVDIMM storage media to HDFS

2020-08-03 Thread GitBox


huangtianhua closed pull request #2109:
URL: https://github.com/apache/hadoop/pull/2109


   






[GitHub] [hadoop] hadoop-yetus commented on pull request #2175: HDFS-15497. Make snapshot limit on global as well per snapshot root directory configurable

2020-08-03 Thread GitBox


hadoop-yetus commented on pull request #2175:
URL: https://github.com/apache/hadoop/pull/2175#issuecomment-668390729


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m  0s |  Docker mode activated.  |
   | -1 :x: |  patch  |   0m  5s |  https://github.com/apache/hadoop/pull/2175 
does not apply to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help.  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/2175 |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2175/6/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[jira] [Created] (HADOOP-17179) [JDK 11] Fix javadoc error while detecting Java API link

2020-08-03 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HADOOP-17179:
--

 Summary: [JDK 11] Fix javadoc error while detecting Java API link
 Key: HADOOP-17179
 URL: https://issues.apache.org/jira/browse/HADOOP-17179
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Reporter: Akira Ajisaka
Assignee: Akira Ajisaka


{noformat}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:javadoc-no-fork 
(default-cli) on project hadoop-hdfs-rbf: An error has occurred in Javadoc 
report generation: 
[ERROR] Exit code: 1 - javadoc: warning - You have specified the HTML version 
as HTML 4.01 by using the -html4 option.
[ERROR] The default is currently HTML5 and the support for HTML 4.01 will be 
removed
[ERROR] in a future release. To suppress this warning, please ensure that any 
HTML constructs
[ERROR] in your comments are valid in HTML5, and remove the -html4 option.
[ERROR] javadoc: error - The code being documented uses modules but the 
packages defined in https://docs.oracle.com/javase/8/docs/api/ are in the 
unnamed module.
{noformat}






[jira] [Updated] (HADOOP-17179) [JDK 11] Fix javadoc error in Java API link detection

2020-08-03 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17179:
---
Summary: [JDK 11] Fix javadoc error in Java API link detection  (was: [JDK 
11] Fix javadoc error while detecting Java API link)

> [JDK 11] Fix javadoc error  in Java API link detection
> --
>
> Key: HADOOP-17179
> URL: https://issues.apache.org/jira/browse/HADOOP-17179
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>
> {noformat}
> [ERROR] Failed to execute goal org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:javadoc-no-fork (default-cli) on project hadoop-hdfs-rbf: An error has occurred in Javadoc report generation:
> [ERROR] Exit code: 1 - javadoc: warning - You have specified the HTML version as HTML 4.01 by using the -html4 option.
> [ERROR] The default is currently HTML5 and the support for HTML 4.01 will be removed
> [ERROR] in a future release. To suppress this warning, please ensure that any HTML constructs
> [ERROR] in your comments are valid in HTML5, and remove the -html4 option.
> [ERROR] javadoc: error - The code being documented uses modules but the packages defined in https://docs.oracle.com/javase/8/docs/api/ are in the unnamed module.
> {noformat}






[GitHub] [hadoop] swamirishi commented on a change in pull request #2133: HADOOP-17122: Preserving Directory Attributes in DistCp with Atomic Copy

2020-08-03 Thread GitBox


swamirishi commented on a change in pull request #2133:
URL: https://github.com/apache/hadoop/pull/2133#discussion_r464256899



##
File path: hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/mapred/TestCopyCommitter.java
##
@@ -177,6 +177,44 @@ public void testPreserveStatus() throws IOException {
       conf.unset(DistCpConstants.CONF_LABEL_PRESERVE_STATUS);
     }
 
+  }
+  @Test
+  public void testPreserveStatusWithAtomicCommit() throws IOException {
+    TaskAttemptContext taskAttemptContext = getTaskAttemptContext(config);
+    JobContext jobContext = new JobContextImpl(taskAttemptContext.getConfiguration(),
+        taskAttemptContext.getTaskAttemptID().getJobID());
+    Configuration conf = jobContext.getConfiguration();
+    String sourceBase;
+    String workBase;
+    String targetBase;
+    FileSystem fs = null;
+    try {
+      OutputCommitter committer = new CopyCommitter(null, taskAttemptContext);
+      fs = FileSystem.get(conf);

Review comment:
   @steveloughran Can this code be merged?








[jira] [Updated] (HADOOP-17179) [JDK 11] Fix javadoc error while detecting Java API link

2020-08-03 Thread Akira Ajisaka (Jira)


 [ https://issues.apache.org/jira/browse/HADOOP-17179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akira Ajisaka updated HADOOP-17179:
---
Parent: HADOOP-16795
Issue Type: Sub-task  (was: Bug)

> [JDK 11] Fix javadoc error while detecting Java API link
> 
>
> Key: HADOOP-17179
> URL: https://issues.apache.org/jira/browse/HADOOP-17179
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>
> {noformat}
> [ERROR] Failed to execute goal org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:javadoc-no-fork (default-cli) on project hadoop-hdfs-rbf: An error has occurred in Javadoc report generation:
> [ERROR] Exit code: 1 - javadoc: warning - You have specified the HTML version as HTML 4.01 by using the -html4 option.
> [ERROR] The default is currently HTML5 and the support for HTML 4.01 will be removed
> [ERROR] in a future release. To suppress this warning, please ensure that any HTML constructs
> [ERROR] in your comments are valid in HTML5, and remove the -html4 option.
> [ERROR] javadoc: error - The code being documented uses modules but the packages defined in https://docs.oracle.com/javase/8/docs/api/ are in the unnamed module.
> {noformat}






[jira] [Commented] (HADOOP-17179) [JDK 11] Fix javadoc error while detecting Java API link

2020-08-03 Thread Akira Ajisaka (Jira)


[ https://issues.apache.org/jira/browse/HADOOP-17179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17169775#comment-17169775 ]

Akira Ajisaka commented on HADOOP-17179:


In HADOOP-17091, I set the source version to 8 to fix this error; however, that fix is not correct.
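
For reference, a minimal sketch of one possible workaround (an assumption on my part, not necessarily the fix that will land here): maven-javadoc-plugin's detectJavaApiLink parameter drives the Java API link detection named in the title, and turning it off stops javadoc on JDK 11 from linking the module-less JDK 8 API docs.

{noformat}
<!-- Sketch only: disable automatic Java API link detection so that javadoc
     run on JDK 11 no longer links the unnamed-module JDK 8 API docs. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-javadoc-plugin</artifactId>
  <configuration>
    <detectJavaApiLink>false</detectJavaApiLink>
  </configuration>
</plugin>
{noformat}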

> [JDK 11] Fix javadoc error while detecting Java API link
> 
>
> Key: HADOOP-17179
> URL: https://issues.apache.org/jira/browse/HADOOP-17179
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>
> {noformat}
> [ERROR] Failed to execute goal org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:javadoc-no-fork (default-cli) on project hadoop-hdfs-rbf: An error has occurred in Javadoc report generation:
> [ERROR] Exit code: 1 - javadoc: warning - You have specified the HTML version as HTML 4.01 by using the -html4 option.
> [ERROR] The default is currently HTML5 and the support for HTML 4.01 will be removed
> [ERROR] in a future release. To suppress this warning, please ensure that any HTML constructs
> [ERROR] in your comments are valid in HTML5, and remove the -html4 option.
> [ERROR] javadoc: error - The code being documented uses modules but the packages defined in https://docs.oracle.com/javase/8/docs/api/ are in the unnamed module.
> {noformat}






[jira] [Updated] (HADOOP-17179) [JDK 11] Fix javadoc error in Java API link detection

2020-08-03 Thread Akira Ajisaka (Jira)


 [ https://issues.apache.org/jira/browse/HADOOP-17179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akira Ajisaka updated HADOOP-17179:
---
Status: Patch Available  (was: Open)

> [JDK 11] Fix javadoc error in Java API link detection
> --
>
> Key: HADOOP-17179
> URL: https://issues.apache.org/jira/browse/HADOOP-17179
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>
> {noformat}
> [ERROR] Failed to execute goal org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:javadoc-no-fork (default-cli) on project hadoop-hdfs-rbf: An error has occurred in Javadoc report generation:
> [ERROR] Exit code: 1 - javadoc: warning - You have specified the HTML version as HTML 4.01 by using the -html4 option.
> [ERROR] The default is currently HTML5 and the support for HTML 4.01 will be removed
> [ERROR] in a future release. To suppress this warning, please ensure that any HTML constructs
> [ERROR] in your comments are valid in HTML5, and remove the -html4 option.
> [ERROR] javadoc: error - The code being documented uses modules but the packages defined in https://docs.oracle.com/javase/8/docs/api/ are in the unnamed module.
> {noformat}






[GitHub] [hadoop] aajisaka opened a new pull request #2186: HADOOP-17179. [JDK 11] Fix javadoc error while detecting Java API link

2020-08-03 Thread GitBox


aajisaka opened a new pull request #2186:
URL: https://github.com/apache/hadoop/pull/2186


   JIRA: https://issues.apache.org/jira/browse/HADOOP-17179






[GitHub] [hadoop] swamirishi commented on a change in pull request #2133: HADOOP-17122: Preserving Directory Attributes in DistCp with Atomic Copy

2020-08-03 Thread GitBox


swamirishi commented on a change in pull request #2133:
URL: https://github.com/apache/hadoop/pull/2133#discussion_r464243407



##
File path: hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/mapred/TestCopyCommitter.java
##
@@ -177,6 +177,44 @@ public void testPreserveStatus() throws IOException {
       conf.unset(DistCpConstants.CONF_LABEL_PRESERVE_STATUS);
     }
 
+  }
+  @Test
+  public void testPreserveStatusWithAtomicCommit() throws IOException {
+    TaskAttemptContext taskAttemptContext = getTaskAttemptContext(config);
+    JobContext jobContext = new JobContextImpl(taskAttemptContext.getConfiguration(),
+        taskAttemptContext.getTaskAttemptID().getJobID());
+    Configuration conf = jobContext.getConfiguration();
+    String sourceBase;
+    String workBase;
+    String targetBase;
+    FileSystem fs = null;
+    try {
+      OutputCommitter committer = new CopyCommitter(null, taskAttemptContext);
+      fs = FileSystem.get(conf);

Review comment:
   @steveloughran Can you take a look at this PR? The test case seems sufficient for testing this use case.








[jira] [Created] (HADOOP-17180) S3Guard: Include 500 DynamoDB system errors in exponential backoff retries

2020-08-03 Thread David Kats (Jira)
David Kats created HADOOP-17180:
---

 Summary: S3Guard: Include 500 DynamoDB system errors in exponential backoff retries
 Key: HADOOP-17180
 URL: https://issues.apache.org/jira/browse/HADOOP-17180
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.1.3
Reporter: David Kats
 Attachments: image-2020-08-03-09-58-54-102.png

We get fatal failures from S3Guard (which in turn fail our Spark jobs) because of internal DynamoDB system errors:

com.amazonaws.services.dynamodbv2.model.InternalServerErrorException: Internal server error (Service: AmazonDynamoDBv2; Status Code: 500; Error Code: InternalServerError; Request ID: 00EBRE6J6V8UGD7040C9DUP2MNVV4KQNSO5AEMVJF66Q9ASUAAJG): Internal server error (Service: AmazonDynamoDBv2; Status Code: 500; Error Code: InternalServerError; Request ID: 00EBRE6J6V8UGD7040C9DUP2MNVV4KQNSO5AEMVJF66Q9ASUAAJG)

DynamoDB keeps a separate statistic for system errors:

!image-2020-08-03-09-58-54-102.png!

I contacted AWS Support and got an explanation that these 500 errors are returned to the client once DynamoDB gets overwhelmed with client requests.

So essentially the traffic should have been throttled, but it was not, and the requests failed with 500 system errors instead.

My point is that the client should handle these errors just like throttling exceptions: with exponential backoff retries.
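
For illustration, a minimal standalone sketch of the requested handling (a hypothetical helper, not S3Guard's actual Invoker/RetryPolicy wiring; the real fix would presumably add InternalServerErrorException to the set of throttle-like exceptions S3Guard already retries):

{noformat}
import java.util.concurrent.Callable;
import java.util.concurrent.ThreadLocalRandom;

import com.amazonaws.services.dynamodbv2.model.InternalServerErrorException;
import com.amazonaws.services.dynamodbv2.model.ProvisionedThroughputExceededException;

public final class DynamoDbBackoff {

  /**
   * Run a DynamoDB call, retrying 500 InternalServerError responses the same
   * way throttling exceptions are retried: exponential backoff with jitter,
   * up to maxAttempts attempts.
   */
  public static <T> T withBackoff(Callable<T> call, int maxAttempts,
      long baseDelayMs) throws Exception {
    for (int attempt = 1; ; attempt++) {
      try {
        return call.call();
      } catch (InternalServerErrorException
          | ProvisionedThroughputExceededException e) {
        if (attempt >= maxAttempts) {
          throw e;  // retries exhausted; surface the failure
        }
        // Exponential backoff: baseDelayMs * 2^(attempt-1), plus jitter.
        long delay = baseDelayMs << (attempt - 1);
        Thread.sleep(delay + ThreadLocalRandom.current().nextLong(delay + 1));
      }
    }
  }
}
{noformat}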

 

Here is a more complete exception stack trace:

 

org.apache.hadoop.fs.s3a.AWSServiceIOException: get on s3a://rem-spark/persisted_step_data/15/0afb1ccb73854f1fa55517a77ec7cc5e__b67e2221-f0e3-4c89-90ab-f49618ea4557__SDTopology/parquet.all_ranges/topo_id=321: com.amazonaws.services.dynamodbv2.model.InternalServerErrorException: Internal server error (Service: AmazonDynamoDBv2; Status Code: 500; Error Code: InternalServerError; Request ID: 00EBRE6J6V8UGD7040C9DUP2MNVV4KQNSO5AEMVJF66Q9ASUAAJG): Internal server error (Service: AmazonDynamoDBv2; Status Code: 500; Error Code: InternalServerError; Request ID: 00EBRE6J6V8UGD7040C9DUP2MNVV4KQNSO5AEMVJF66Q9ASUAAJG)
at org.apache.hadoop.fs.s3a.S3AUtils.translateDynamoDBException(S3AUtils.java:389)
at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:181)
at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:111)
at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.get(DynamoDBMetadataStore.java:438)
at org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:2110)
at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:2088)
at org.apache.hadoop.fs.s3a.S3AFileSystem.innerListStatus(S3AFileSystem.java:1889)
at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$listStatus$9(S3AFileSystem.java:1868)
at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
at org.apache.hadoop.fs.s3a.S3AFileSystem.listStatus(S3AFileSystem.java:1868)
at org.apache.spark.sql.execution.datasources.InMemoryFileIndex$.org$apache$spark$sql$execution$datasources$InMemoryFileIndex$$listLeafFiles(InMemoryFileIndex.scala:277)
at org.apache.spark.sql.execution.datasources.InMemoryFileIndex$$anonfun$3$$anonfun$apply$2.apply(InMemoryFileIndex.scala:207)
at org.apache.spark.sql.execution.datasources.InMemoryFileIndex$$anonfun$3$$anonfun$apply$2.apply(InMemoryFileIndex.scala:206)
at scala.collection.immutable.Stream.map(Stream.scala:418)
at org.apache.spark.sql.execution.datasources.InMemoryFileIndex$$anonfun$3.apply(InMemoryFileIndex.scala:206)
at org.apache.spark.sql.execution.datasources.InMemoryFileIndex$$anonfun$3.apply(InMemoryFileIndex.scala:204)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:801)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:801)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:123)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at

[jira] [Comment Edited] (HADOOP-17122) Bug in preserving Directory Attributes in DistCp with Atomic Copy

2020-08-03 Thread Swaminathan Balachandran (Jira)


[ https://issues.apache.org/jira/browse/HADOOP-17122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17169819#comment-17169819 ]

Swaminathan Balachandran edited comment on HADOOP-17122 at 8/3/20, 8:37 AM:


[~ste...@apache.org] I see I received an LGTM. So what is the expected process here to get the code merged?


was (Author: swamirishi):
[~ste...@apache.org] I see I received an LGTM. So what is the expected process here?

> Bug in preserving Directory Attributes in DistCp with Atomic Copy
> -
>
> Key: HADOOP-17122
> URL: https://issues.apache.org/jira/browse/HADOOP-17122
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.1.2, 3.2.1
>Reporter: Swaminathan Balachandran
>Priority: Major
> Attachments: HADOOP-17122.001.patch, Screenshot 2020-07-11 at 
> 10.26.30 AM.png
>
>
> Description:
> In the case of an atomic copy, the copied data is committed first, and the preserve-directory-attributes pass runs after that. Directory attributes are preserved against the work path rather than the final path. I have fixed the base directory to point to the final path.
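
For illustration, a hedged sketch of the idea behind the fix (the helper below is hypothetical and not the actual patch; the DistCpConstants labels are the existing work/final target path settings):

{noformat}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.tools.DistCpConstants;

final class PreserveRootChooser {
  /**
   * With -atomic, the work directory has already been renamed to the final
   * path by the time directory attributes are preserved, so the preserve
   * pass must walk the final path rather than the stale work path.
   */
  static Path attributePreservationRoot(Configuration conf, boolean atomic) {
    String root = atomic
        ? conf.get(DistCpConstants.CONF_LABEL_TARGET_FINAL_PATH)
        : conf.get(DistCpConstants.CONF_LABEL_TARGET_WORK_PATH);
    return new Path(root);
  }
}
{noformat}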






[jira] [Commented] (HADOOP-17122) Bug in preserving Directory Attributes in DistCp with Atomic Copy

2020-08-03 Thread Swaminathan Balachandran (Jira)


[ https://issues.apache.org/jira/browse/HADOOP-17122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17169819#comment-17169819 ]

Swaminathan Balachandran commented on HADOOP-17122:
---

[~ste...@apache.org] I see I received an LGTM. So what is the expected process here?

> Bug in preserving Directory Attributes in DistCp with Atomic Copy
> -
>
> Key: HADOOP-17122
> URL: https://issues.apache.org/jira/browse/HADOOP-17122
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.1.2, 3.2.1
>Reporter: Swaminathan Balachandran
>Priority: Major
> Attachments: HADOOP-17122.001.patch, Screenshot 2020-07-11 at 
> 10.26.30 AM.png
>
>
> Description:
> In the case of an atomic copy, the copied data is committed first, and the preserve-directory-attributes pass runs after that. Directory attributes are preserved against the work path rather than the final path. I have fixed the base directory to point to the final path.


