[GitHub] [hadoop] hadoop-yetus commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...

2020-02-10 Thread GitBox
hadoop-yetus commented on issue #1829: HDFS-14743. Enhance 
INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support 
Authorization of mkdir, rm, rmdir, copy, move etc...
URL: https://github.com/apache/hadoop/pull/1829#issuecomment-584480385
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   2m 23s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
2 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  27m 58s |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 27s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   1m  1s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 41s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 20s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   3m 40s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 36s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 25s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 15s |  the patch passed  |
   | -1 :x: |  javac  |   1m 15s |  hadoop-hdfs-project_hadoop-hdfs generated 6 
new + 580 unchanged - 0 fixed = 586 total (was 580)  |
   | -0 :warning: |  checkstyle  |   0m 52s |  hadoop-hdfs-project/hadoop-hdfs: 
The patch generated 55 new + 245 unchanged - 0 fixed = 300 total (was 245)  |
   | +1 :green_heart: |  mvnsite  |   1m 24s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  17m 51s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 52s |  hadoop-hdfs-project_hadoop-hdfs generated 
1 new + 100 unchanged - 0 fixed = 101 total (was 100)  |
   | -1 :x: |  findbugs  |   4m 21s |  hadoop-hdfs-project/hadoop-hdfs 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 126m 12s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 38s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 216m 38s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
   |  |  
org.apache.hadoop.hdfs.server.namenode.INodeAttributeProvider$AuthorizationContext
 defines equals and uses Object.hashCode()  At 
INodeAttributeProvider.java:Object.hashCode()  At 
INodeAttributeProvider.java:[lines 234-237] |
   | Failed junit tests | 
hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.TestDeadNodeDetection |
   |   | hadoop.hdfs.TestReconstructStripedFile |
   |   | hadoop.hdfs.TestWriteReadStripedFile |
   |   | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA |
   |   | hadoop.hdfs.TestPersistBlocks |
   |   | hadoop.hdfs.TestErasureCodingPolicies |
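   
   The FindBugs warning above is the standard equals/hashCode contract violation: a class that overrides equals() must also override hashCode(), or objects that compare equal may land in different hash buckets. A minimal sketch of the usual fix (hypothetical fields, not the actual AuthorizationContext code):
   
   ```java
   import java.util.Objects;
   
   class Context {
     private final String fsOwner;     // hypothetical fields for illustration
     private final String supergroup;
   
     Context(String fsOwner, String supergroup) {
       this.fsOwner = fsOwner;
       this.supergroup = supergroup;
     }
   
     @Override
     public boolean equals(Object o) {
       if (this == o) {
         return true;
       }
       if (!(o instanceof Context)) {
         return false;
       }
       Context other = (Context) o;
       return Objects.equals(fsOwner, other.fsOwner)
           && Objects.equals(supergroup, other.supergroup);
     }
   
     @Override
     public int hashCode() {           // the part FindBugs reports as missing
       return Objects.hash(fsOwner, supergroup);
     }
   }
   ```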
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1829 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 20fcfe1c7a85 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d5467d2 |
   | Default Java | 1.8.0_242 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/4/artifact/out/diff-compile-javac-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/4/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/4/artifact/out/diff-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/4/artifact/out/new-findbugs-hadoop-hdfs-project_hadoop-hdfs.html
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   |  Test Results | 

[jira] [Created] (HADOOP-16850) Support getting thread info from thread group for JvmMetrics to improve the performance

2020-02-10 Thread Tao Yang (Jira)
Tao Yang created HADOOP-16850:
-

 Summary: Support getting thread info from thread group for 
JvmMetrics to improve the performance
 Key: HADOOP-16850
 URL: https://issues.apache.org/jira/browse/HADOOP-16850
 Project: Hadoop Common
  Issue Type: Improvement
  Components: metrics
Affects Versions: 2.8.6, 2.9.3, 3.1.4, 3.2.2, 2.10.1, 3.3.1
Reporter: Tao Yang


Recently we found that a jmx request took almost 5 seconds to complete when there were 10,000+ threads in a stressed DataNode process; meanwhile, other HTTP requests were blocked and some disk operations were affected (we could see many "Slow manageWriterOsCache" messages in the DN log, and these messages were rarely seen again after we stopped sending jmx requests).

The excessive time is spent getting thread info via ThreadMXBean, which internally calls the native method ThreadImpl#getThreadInfo. According to JDK-8185005, ThreadImpl#getThreadInfo has O(n^2) time complexity, and it may hold the global thread lock (preventing creation or termination of threads) for a long time.

To improve this, I propose getting thread info from the thread group by default, which is much faster, while still supporting the original approach when "-Dhadoop.metrics.jvm.use-thread-mxbean=true" is configured in the startup command.
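
A rough illustration of the two approaches (a sketch only, not the actual patch; JvmMetrics also gathers per-state thread counts, which is omitted here):
{noformat}
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadCountSketch {
  // Slow path: ThreadMXBean#getThreadInfo calls into the JVM and is
  // O(n^2) over the thread count per JDK-8185005.
  static int countViaMxBean() {
    ThreadMXBean bean = ManagementFactory.getThreadMXBean();
    ThreadInfo[] infos = bean.getThreadInfo(bean.getAllThreadIds(), 0);
    int count = 0;
    for (ThreadInfo info : infos) {
      if (info != null) {   // a thread may have died since the id snapshot
        count++;
      }
    }
    return count;
  }

  // Fast path: walk up to the root ThreadGroup and enumerate live threads.
  static int countViaThreadGroup() {
    ThreadGroup group = Thread.currentThread().getThreadGroup();
    while (group.getParent() != null) {
      group = group.getParent();
    }
    Thread[] threads = new Thread[group.activeCount() * 2]; // headroom
    return group.enumerate(threads, true);
  }

  public static void main(String[] args) {
    System.out.println("mxbean=" + countViaMxBean()
        + ", group=" + countViaThreadGroup());
  }
}
{noformat}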

An example performance comparison between the two approaches:
{noformat}
#Threads=100, ThreadMXBean=382372 ns, ThreadGroup=72046 ns, ratio: 5
#Threads=200, ThreadMXBean=776619 ns, ThreadGroup=83875 ns, ratio: 9
#Threads=500, ThreadMXBean=3392954 ns, ThreadGroup=216269 ns, ratio: 15
#Threads=1000, ThreadMXBean=9475768 ns, ThreadGroup=220447 ns, ratio: 42
#Threads=2000, ThreadMXBean=53833729 ns, ThreadGroup=579608 ns, ratio: 92
#Threads=3000, ThreadMXBean=196829971 ns, ThreadGroup=1157670 ns, ratio: 170
{noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc...

2020-02-10 Thread GitBox
hadoop-yetus commented on issue #1829: HDFS-14743. Enhance 
INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support 
Authorization of mkdir, rm, rmdir, copy, move etc...
URL: https://github.com/apache/hadoop/pull/1829#issuecomment-584453114
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   0m 42s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
2 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  22m 37s |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 13s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 53s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 11s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 24s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   3m  6s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m  3s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  7s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  2s |  the patch passed  |
   | -1 :x: |  javac  |   1m  2s |  hadoop-hdfs-project_hadoop-hdfs generated 8 
new + 580 unchanged - 0 fixed = 588 total (was 580)  |
   | -0 :warning: |  checkstyle  |   0m 45s |  hadoop-hdfs-project/hadoop-hdfs: 
The patch generated 60 new + 245 unchanged - 0 fixed = 305 total (was 245)  |
   | +1 :green_heart: |  mvnsite  |   1m  7s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 41s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 43s |  hadoop-hdfs-project_hadoop-hdfs generated 
1 new + 100 unchanged - 0 fixed = 101 total (was 100)  |
   | -1 :x: |  findbugs  |   3m 32s |  hadoop-hdfs-project/hadoop-hdfs 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |  89m  6s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 34s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 162m 51s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
   |  |  
org.apache.hadoop.hdfs.server.namenode.INodeAttributeProvider$AuthorizationContext
 defines equals and uses Object.hashCode()  At 
INodeAttributeProvider.java:Object.hashCode()  At 
INodeAttributeProvider.java:[lines 234-237] |
   | Failed junit tests | 
hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA |
   |   | hadoop.hdfs.TestErasureCodingPolicyWithSnapshot |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.4 Server=19.03.4 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1829 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 6a227c3f2b1c 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d5467d2 |
   | Default Java | 1.8.0_242 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/3/artifact/out/diff-compile-javac-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/3/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/3/artifact/out/diff-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/3/artifact/out/new-findbugs-hadoop-hdfs-project_hadoop-hdfs.html
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/3/testReport/ |
   | Max. process+thread count | 3232 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |

[GitHub] [hadoop] hadoop-yetus commented on issue #1826: HADOOP-16823. Large DeleteObject requests are their own Thundering Herd

2020-02-10 Thread GitBox
hadoop-yetus commented on issue #1826: HADOOP-16823. Large DeleteObject 
requests are their own Thundering Herd
URL: https://github.com/apache/hadoop/pull/1826#issuecomment-584323280
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   1m 24s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
8 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m 13s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  23m 17s |  trunk passed  |
   | +1 :green_heart: |  compile  |  18m 51s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   2m 48s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 13s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 55s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 30s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   1m  8s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 28s |  trunk passed  |
   | -0 :warning: |  patch  |   1m 29s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 26s |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 12s |  the patch passed  |
   | +1 :green_heart: |  javac  |  18m 12s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 52s |  root: The patch generated 12 new 
+ 75 unchanged - 2 fixed = 87 total (was 77)  |
   | +1 :green_heart: |  mvnsite  |   2m  8s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 1 line(s) that end in 
whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  xml  |   0m  2s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  15m 53s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 27s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   3m 29s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 12s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   1m 28s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 47s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 133m 57s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1826/9/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1826 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle markdownlint |
   | uname | Linux c016059eec4a 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d5467d2 |
   | Default Java | 1.8.0_242 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1826/9/artifact/out/diff-checkstyle-root.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1826/9/artifact/out/whitespace-eol.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1826/9/testReport/ |
   | Max. process+thread count | 1383 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1826/9/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org

[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1823: HADOOP-16794 S3 Encryption keys not propagating correctly during copy operation

2020-02-10 Thread GitBox
hadoop-yetus removed a comment on issue #1823: HADOOP-16794 S3 Encryption keys 
not propagating correctly during copy operation
URL: https://github.com/apache/hadoop/pull/1823#issuecomment-581467919
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   1m  5s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
2 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  18m 13s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 35s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 27s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 39s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  13m 29s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   0m 58s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 57s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 33s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 27s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 19s |  hadoop-tools/hadoop-aws: The 
patch generated 0 new + 13 unchanged - 1 fixed = 13 total (was 14)  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  13m 20s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   1m  2s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 25s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 33s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  55m 56s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1823/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1823 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 48283c2c2842 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 1e3a0b0 |
   | Default Java | 1.8.0_242 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1823/4/testReport/ |
   | Max. process+thread count | 454 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1823/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1823: HADOOP-16794 S3 Encryption keys not propagating correctly during copy operation

2020-02-10 Thread GitBox
hadoop-yetus removed a comment on issue #1823: HADOOP-16794 S3 Encryption keys 
not propagating correctly during copy operation
URL: https://github.com/apache/hadoop/pull/1823#issuecomment-581439423
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |  30m 35s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
2 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  18m 14s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 35s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 28s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 40s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  13m 30s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   0m 58s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 56s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 32s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 26s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 18s |  hadoop-tools/hadoop-aws: The 
patch generated 1 new + 13 unchanged - 1 fixed = 14 total (was 14)  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  13m 20s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   1m  2s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 25s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 31s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  85m 28s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1823/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1823 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 69568f12de59 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 1e3a0b0 |
   | Default Java | 1.8.0_242 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1823/3/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1823/3/testReport/ |
   | Max. process+thread count | 411 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1823/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1823: HADOOP-16794 S3 Encryption keys not propagating correctly during copy operation

2020-02-10 Thread GitBox
hadoop-yetus removed a comment on issue #1823: HADOOP-16794 S3 Encryption keys 
not propagating correctly during copy operation
URL: https://github.com/apache/hadoop/pull/1823#issuecomment-580732492
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   1m  5s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
2 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  18m 16s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 27s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 39s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  13m 48s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   1m  1s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 58s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 33s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 27s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 19s |  hadoop-tools/hadoop-aws: The 
patch generated 1 new + 13 unchanged - 1 fixed = 14 total (was 14)  |
   | +1 :green_heart: |  mvnsite  |   0m 33s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  13m 11s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   1m  2s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 22s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 32s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  56m  7s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1823/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1823 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 48dc3bcd243b 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / bf8686f |
   | Default Java | 1.8.0_232 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1823/2/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1823/2/testReport/ |
   | Max. process+thread count | 450 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1823/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus commented on issue #1826: HADOOP-16823. Large DeleteObject requests are their own Thundering Herd

2020-02-10 Thread GitBox
hadoop-yetus commented on issue #1826: HADOOP-16823. Large DeleteObject 
requests are their own Thundering Herd
URL: https://github.com/apache/hadoop/pull/1826#issuecomment-584280267
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |  24m 57s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
8 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m 14s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  19m 31s |  trunk passed  |
   | +1 :green_heart: |  compile  |  16m 55s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   2m 42s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 20s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 38s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 43s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   1m 17s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 39s |  trunk passed  |
   | -0 :warning: |  patch  |   1m 44s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 39s |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 23s |  the patch passed  |
   | +1 :green_heart: |  javac  |  19m 23s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 43s |  root: The patch generated 12 new 
+ 75 unchanged - 2 fixed = 87 total (was 77)  |
   | +1 :green_heart: |  mvnsite  |   2m 18s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 1 line(s) that end in 
whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  xml  |   0m  2s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  14m 14s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 44s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   3m 28s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 23s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   1m 35s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 52s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 153m 56s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1826/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1826 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle markdownlint |
   | uname | Linux dc89e8d171fe 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d5467d2 |
   | Default Java | 1.8.0_242 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1826/8/artifact/out/diff-checkstyle-root.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1826/8/artifact/out/whitespace-eol.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1826/8/testReport/ |
   | Max. process+thread count | 1597 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1826/8/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1836: HADOOP-16646. Backport S3A enhancements and fixes from trunk to branch-3.2

2020-02-10 Thread GitBox
hadoop-yetus removed a comment on issue #1836: HADOOP-16646. Backport S3A 
enhancements and fixes from trunk to branch-3.2
URL: https://github.com/apache/hadoop/pull/1836#issuecomment-583596821
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   0m 44s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
9 new or modified test files.  |
   ||| _ branch-3.2 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 19s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m 56s |  branch-3.2 passed  |
   | +1 :green_heart: |  compile  |  17m 22s |  branch-3.2 passed  |
   | +1 :green_heart: |  checkstyle  |   2m 25s |  branch-3.2 passed  |
   | +1 :green_heart: |  mvnsite  |   2m  2s |  branch-3.2 passed  |
   | +1 :green_heart: |  shadedclient  |  17m 53s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 43s |  branch-3.2 passed  |
   | +0 :ok: |  spotbugs  |   1m  1s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m  2s |  branch-3.2 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 29s |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 52s |  the patch passed  |
   | +1 :green_heart: |  javac  |  16m 52s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 58s |  root: The patch generated 2 new 
+ 32 unchanged - 0 fixed = 34 total (was 32)  |
   | +1 :green_heart: |  mvnsite  |   2m 16s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  15m  4s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 41s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   3m 50s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  10m  2s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   4m 45s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 51s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 127m  4s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1836/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1836 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle markdownlint |
   | uname | Linux ec3eb7e260b8 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | branch-3.2 / aca9304 |
   | Default Java | 1.8.0_242 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1836/1/artifact/out/diff-checkstyle-root.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1836/1/testReport/ |
   | Max. process+thread count | 1598 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1836/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] goiri commented on a change in pull request #1832: HDFS-13989. RBF: Add FSCK to the Router

2020-02-10 Thread GitBox
goiri commented on a change in pull request #1832: HDFS-13989. RBF: Add FSCK to 
the Router
URL: https://github.com/apache/hadoop/pull/1832#discussion_r377216849
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterFsckServlet.java
 ##
 @@ -0,0 +1,80 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.federation.router;
+
+import java.io.IOException;
+import java.io.PrintWriter;
+import java.net.InetAddress;
+import java.security.PrivilegedExceptionAction;
+import java.util.Map;
+
+import javax.servlet.ServletContext;
+import javax.servlet.http.HttpServlet;
+import javax.servlet.http.HttpServletRequest;
+import javax.servlet.http.HttpServletResponse;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdfs.server.common.JspHelper;
+import org.apache.hadoop.security.UserGroupInformation;
+
+/**
+ * This class is used in Namesystem's web server to do fsck on namenode.
+ */
+@InterfaceAudience.Private
+public class RouterFsckServlet extends HttpServlet {
+  /** for java.io.Serializable. */
+  private static final long serialVersionUID = 1L;
+
+  public static final String SERVLET_NAME = "fsck";
+  public static final String PATH_SPEC = "/fsck";
+
+  /** Handle fsck request. */
+  @Override
+  public void doGet(HttpServletRequest request, HttpServletResponse response)
+      throws IOException {
+    final Map<String, String[]> pmap = request.getParameterMap();
+    final PrintWriter out = response.getWriter();
+    final InetAddress remoteAddress =
+        InetAddress.getByName(request.getRemoteAddr());
+    final ServletContext context = getServletContext();
+    final Configuration conf = RouterHttpServer.getConfFromContext(context);
+    final UserGroupInformation ugi = getUGI(request, conf);
+    try {
+      ugi.doAs((PrivilegedExceptionAction<Object>) () -> {
+        Router router = RouterHttpServer.getRouterFromContext(context);
+        new RouterFsck(router, pmap, out, remoteAddress).fsck();
+        return null;
+      });
+    } catch (InterruptedException e) {
+      response.sendError(400, e.getMessage());
 
 Review comment:
   HttpURLConnection.HTTP_BAD_REQUEST in java.net.
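   
   For illustration, the suggested replacement for the bare 400 (HttpURLConnection.HTTP_BAD_REQUEST is the java.net constant for status 400):
   
   ```java
   response.sendError(HttpURLConnection.HTTP_BAD_REQUEST, e.getMessage());
   ```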





[GitHub] [hadoop] goiri commented on a change in pull request #1832: HDFS-13989. RBF: Add FSCK to the Router

2020-02-10 Thread GitBox
goiri commented on a change in pull request #1832: HDFS-13989. RBF: Add FSCK to 
the Router
URL: https://github.com/apache/hadoop/pull/1832#discussion_r377217468
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterFsckServlet.java
 ##
 @@ -0,0 +1,80 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.federation.router;
+
+import java.io.IOException;
+import java.io.PrintWriter;
+import java.net.InetAddress;
+import java.security.PrivilegedExceptionAction;
+import java.util.Map;
+
+import javax.servlet.ServletContext;
+import javax.servlet.http.HttpServlet;
+import javax.servlet.http.HttpServletRequest;
+import javax.servlet.http.HttpServletResponse;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdfs.server.common.JspHelper;
+import org.apache.hadoop.security.UserGroupInformation;
+
+/**
+ * This class is used in Namesystem's web server to do fsck on namenode.
+ */
+@InterfaceAudience.Private
+public class RouterFsckServlet extends HttpServlet {
+  /** for java.io.Serializable. */
+  private static final long serialVersionUID = 1L;
+
+  public static final String SERVLET_NAME = "fsck";
+  public static final String PATH_SPEC = "/fsck";
+
+  /** Handle fsck request. */
+  @Override
+  public void doGet(HttpServletRequest request, HttpServletResponse response)
+      throws IOException {
+    final Map<String, String[]> pmap = request.getParameterMap();
+    final PrintWriter out = response.getWriter();
+    final InetAddress remoteAddress =
+        InetAddress.getByName(request.getRemoteAddr());
+    final ServletContext context = getServletContext();
+    final Configuration conf = RouterHttpServer.getConfFromContext(context);
+    final UserGroupInformation ugi = getUGI(request, conf);
+    try {
+      ugi.doAs((PrivilegedExceptionAction<Object>) () -> {
+        Router router = RouterHttpServer.getRouterFromContext(context);
+        new RouterFsck(router, pmap, out, remoteAddress).fsck();
+        return null;
+      });
+    } catch (InterruptedException e) {
+      response.sendError(400, e.getMessage());
 
 Review comment:
   Or HttpStatus.SC_BAD_REQUEST if you want to stay consistent with the unit test.
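   
   For illustration, the equivalent using the Apache HttpComponents constant (assuming org.apache.http.HttpStatus is already on the classpath):
   
   ```java
   response.sendError(HttpStatus.SC_BAD_REQUEST, e.getMessage());
   ```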





[jira] [Updated] (HADOOP-16732) S3Guard to support encrypted DynamoDB table

2020-02-10 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-16732:
---
Release Note: 
Support server-side encrypted DynamoDB tables for S3Guard. Users who do not want server-side encryption need not do anything: no configuration or application code changes are required, and existing tables and the default configuration values keep the existing behavior, which is encryption with an Amazon-owned customer master key (CMK).

To enable server-side encryption, set "fs.s3a.s3guard.ddb.table.sse.enabled" to true. This uses the Amazon-managed CMK "alias/aws/dynamodb". When it is enabled, a user can also specify a custom KMS CMK with the config "fs.s3a.s3guard.ddb.table.sse.cmk".
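
For illustration, the corresponding core-site.xml entries might look like this (the KMS key value is a placeholder alias, not a real key):
{noformat}
<property>
  <name>fs.s3a.s3guard.ddb.table.sse.enabled</name>
  <value>true</value>
</property>
<property>
  <name>fs.s3a.s3guard.ddb.table.sse.cmk</name>
  <value>alias/your-kms-cmk-alias</value>
</property>
{noformat}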

Adding release notes.

> S3Guard to support encrypted DynamoDB table
> ---
>
> Key: HADOOP-16732
> URL: https://issues.apache.org/jira/browse/HADOOP-16732
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Major
> Fix For: 3.3.0
>
>
> S3Guard is not yet supporting [encrypted DynamoDB 
> table|https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/encryption.tutorial.html].
>  We can provide an option to enable encrypted DynamoDB table so data at rest 
> could be encrypted. S3Guard data in DynamoDB usually is not sensitive since 
> it's the S3 namespace mirroring, but some times even this is a concern. By 
> default it's not enabled.






[jira] [Updated] (HADOOP-16823) Large DeleteObject requests are their own Thundering Herd

2020-02-10 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16823:

Release Note: 
The page size for bulk delete operations has been reduced from 1000 to 250 to 
reduce the likelihood of overloading an S3 partition, especially because the 
retry policy on throttling is simply to try again.

The page size can be set in  "fs.s3a.bulk.delete.page.size"

There is also an option to control whether or not the AWS client retries 
requests, or whether it is handled exclusively in the S3A code. This option 
"fs.s3a.experimental.aws.s3.throttling" is true by default. If set to false: 
everything is handled in the S3A client. While this means that metrics may be 
more accurate, it may mean that throttling failures in helper threads of the 
AWS SDK (especially those used in copy/rename) may not be handled properly. 
This is experimental, and should be left at "true" except when seeking more 
detail about throttling rates.
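
For illustration, a sketch of how the two options could be set explicitly in core-site.xml (the values shown are the defaults described above):
{noformat}
<property>
  <name>fs.s3a.bulk.delete.page.size</name>
  <value>250</value>
</property>
<property>
  <name>fs.s3a.experimental.aws.s3.throttling</name>
  <value>true</value>
</property>
{noformat}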



  was:
The AWS SDK client no longer handles 503/slow-down messages from S3 with its own internal retry mechanism; these throttling messages are handled purely in the S3A client, which updates its counters/metrics before performing its own backoff/retry strategy.

The values of "fs.s3a.retry.throttle.interval" and "fs.s3a.retry.throttle.limit" have been set to compensate for the fact that the SDK will no longer retry internally: the values are 500ms and 20 respectively.

If you have explicitly set these values, make them larger. The default values for the AWS SDK are defined in com.amazonaws.retry.PredefinedRetryPolicies; currently a 500ms base delay plus exponential/jittered backoff up to 20 seconds, which is about 4-5 attempts. The S3A throttle limit has been increased from 10 to 20 to (over)compensate. The S3A retry policy's jitter is slightly randomised so that multiple threads encountering throttling will not all sleep for exactly the same time; the AWS jitter seems a bit more deterministic.

You can now inspect the metrics/statistics for a filesystem to see how many retries took place.
All other connections to AWS services (especially DynamoDB) are still retried within the AWS clients, with the S3A code wrapping these.
If you are curious, consult PredefinedRetryPolicies to see what the internal default backoff/retry policies are for S3, DynamoDB (S3Guard), etc.



> Large DeleteObject requests are their own Thundering Herd
> -
>
> Key: HADOOP-16823
> URL: https://issues.apache.org/jira/browse/HADOOP-16823
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>
> Currently AWS S3 throttling is initially handled in the AWS SDK, only 
> reaching the S3 client code after it has given up.
> This means we don't always directly observe when throttling is taking place.
> Proposed:
> * disable throttling retries in the AWS client library
> * add a quantile for the S3 throttle events, as DDB has
> * isolate counters of s3 and DDB throttle events to classify issues better
> Because we are taking over the AWS retries, we will need to expand the 
> initial delay en retries and the number of retries we should support before 
> giving up.
> Also: should we log throttling events? It could be useful but there is a risk 
> of logs overloading especially if many threads in the same process were 
> triggering the problem.
> Proposed: log at debug.






[jira] [Updated] (HADOOP-16823) Large DeleteObject requests are their own Thundering Herd

2020-02-10 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16823:

Summary: Large DeleteObject requests are their own Thundering Herd  (was: 
Manage S3 Throttling exclusively in S3A client)

> Large DeleteObject requests are their own Thundering Herd
> -
>
> Key: HADOOP-16823
> URL: https://issues.apache.org/jira/browse/HADOOP-16823
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>
> Currently AWS S3 throttling is initially handled in the AWS SDK, only 
> reaching the S3 client code after it has given up.
> This means we don't always directly observe when throttling is taking place.
> Proposed:
> * disable throttling retries in the AWS client library
> * add a quantile for the S3 throttle events, as DDB has
> * isolate counters of s3 and DDB throttle events to classify issues better
> Because we are taking over the AWS retries, we will need to expand the 
> initial delay en retries and the number of retries we should support before 
> giving up.
> Also: should we log throttling events? It could be useful but there is a risk 
> of logs overloading especially if many threads in the same process were 
> triggering the problem.
> Proposed: log at debug.






[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1826: HADOOP-16823. Improve S3A Throttling in S3Guard and S3 bulk delete operations

2020-02-10 Thread GitBox
hadoop-yetus removed a comment on issue #1826: HADOOP-16823. Improve S3A 
Throttling in S3Guard and S3 bulk delete operations
URL: https://github.com/apache/hadoop/pull/1826#issuecomment-582521724
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   1m 11s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
4 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m  4s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  18m  6s |  trunk passed  |
   | +1 :green_heart: |  compile  |  16m 37s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   2m 42s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 16s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m  3s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 13s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   1m 12s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 15s |  trunk passed  |
   | -0 :warning: |  patch  |   1m 35s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 18s |  the patch passed  |
   | +1 :green_heart: |  compile  |  15m 51s |  the patch passed  |
   | +1 :green_heart: |  javac  |  15m 51s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 37s |  root: The patch generated 16 new 
+ 63 unchanged - 2 fixed = 79 total (was 65)  |
   | +1 :green_heart: |  mvnsite  |   2m 15s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 1 line(s) that end in 
whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  12m 40s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 15s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   3m 31s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 15s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   1m 43s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 53s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 119m 21s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1826/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1826 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle markdownlint |
   | uname | Linux 4aa192819c23 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / ce7b8b5 |
   | Default Java | 1.8.0_242 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1826/6/artifact/out/diff-checkstyle-root.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1826/6/artifact/out/whitespace-eol.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1826/6/testReport/ |
   | Max. process+thread count | 1366 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1826/6/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org

[jira] [Commented] (HADOOP-13811) s3a: getFileStatus fails with com.amazonaws.AmazonClientException: Failed to sanitize XML document destined for handler class

2020-02-10 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-13811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17033751#comment-17033751
 ] 

Steve Loughran commented on HADOOP-13811:
-

HADOOP-16823 shows we get this under the load of large DeleteObjects requests; 
we are treating it as a throttle event and retrying.

> s3a: getFileStatus fails with com.amazonaws.AmazonClientException: Failed to 
> sanitize XML document destined for handler class
> -
>
> Key: HADOOP-13811
> URL: https://issues.apache.org/jira/browse/HADOOP-13811
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Steve Loughran
>Priority: Major
>
> Sometimes, occasionally, getFileStatus() fails with a stack trace starting 
> with {{com.amazonaws.AmazonClientException: Failed to sanitize XML document 
> destined for handler class}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #1826: HADOOP-16823. Manage S3 Throttling exclusively in S3A client.

2020-02-10 Thread GitBox
steveloughran commented on issue #1826: HADOOP-16823. Manage S3 Throttling 
exclusively in S3A client. 
URL: https://github.com/apache/hadoop/pull/1826#issuecomment-584220746
 
 
   (retested against S3 Ireland; got the failure in 
testListingDelete[auth=true](org.apache.hadoop.fs.s3a.ITestS3GuardOutOfBandOperations),
 which is from my auth mode patch against versioned buckets - will do a quick 
followup for that patch)
   
   -Dparallel-tests -DtestsThreadCount=8 -Ds3guard -Ddynamo -Dauth
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #1826: HADOOP-16823. Manage S3 Throttling exclusively in S3A client.

2020-02-10 Thread GitBox
steveloughran commented on issue #1826: HADOOP-16823. Manage S3 Throttling 
exclusively in S3A client. 
URL: https://github.com/apache/hadoop/pull/1826#issuecomment-584219900
 
 
   Gabor, thanks for the review.
   
   yeah, you are right. Improvement.
   
   Before I merge, do you want to look at `BulkDeleteRetryHandler` and see if 
you agree with what I'm doing there? 
   
   XML parser errors are being treated as retryable throttle failures, as that 
is what I'm seeing during the load tests (i.e. not 503/slow down). 
https://issues.apache.org/jira/browse/HADOOP-13811 shows the history there (and 
yes, my test bucket is versioned for 24h).
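   
   A minimal sketch of that classification, assuming the SDK v1 
`AmazonClientException` and the message text reported in HADOOP-13811; the 
class and method names are illustrative, not the actual 
`BulkDeleteRetryHandler` code:
   
   ```java
   import com.amazonaws.AmazonClientException;
   
   // Treat the SDK's "Failed to sanitize XML document" failure as a
   // throttle-equivalent, retryable event: under DeleteObjects load it is
   // S3 shedding load, not a corrupt response.
   final class XmlParseFailureSketch {
     private static final String XML_PARSE_ERROR =
         "Failed to sanitize XML document";
   
     static boolean isThrottleLike(AmazonClientException e) {
       return e.getMessage() != null
           && e.getMessage().contains(XML_PARSE_ERROR);
     }
   }
   ```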
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #1826: HADOOP-16823. Manage S3 Throttling exclusively in S3A client.

2020-02-10 Thread GitBox
steveloughran commented on a change in pull request #1826: HADOOP-16823. Manage 
S3 Throttling exclusively in S3A client. 
URL: https://github.com/apache/hadoop/pull/1826#discussion_r377178960
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
 ##
 @@ -174,10 +174,34 @@ private Constants() {
   public static final String PROXY_DOMAIN = "fs.s3a.proxy.domain";
   public static final String PROXY_WORKSTATION = "fs.s3a.proxy.workstation";
 
-  // number of times we should retry errors
+  /**
+   * Number of times the AWS client library should retry errors before
+   * escalating to the S3A code: {@value}.
+   */
   public static final String MAX_ERROR_RETRIES = "fs.s3a.attempts.maximum";
+
+  /**
+   * Default number of times the AWS client library should retry errors before
+   * escalating to the S3A code: {@value}.
+   */
   public static final int DEFAULT_MAX_ERROR_RETRIES = 10;
 
+  /**
+   * Experimental/Unstable feature: should the AWS client library retry
+   * throttle responses before escalating to the S3A code: {@value}.
+   *
+   * When set to false, the S3A connector sees all S3 throttle events,
+   * and so can update its counters and metrics, and use its own retry
+   * policy.
+   * However, this may have adverse effects on some operations where the S3A
+   * code cannot retry as efficiently as the AWS client library.
+   *
+   * This only applies to S3 operations, not to DynamoDB or other services.
+   */
+  @InterfaceStability.Unstable
+  public static final String EXPERIMENTAL_AWS_INTERNAL_THROTTLING =
+  "fs.s3a.experimental.aws.internal.throttling";
 
 Review comment:
   it's true, but yes


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #1826: HADOOP-16823. Manage S3 Throttling exclusively in S3A client.

2020-02-10 Thread GitBox
steveloughran commented on a change in pull request #1826: HADOOP-16823. Manage 
S3 Throttling exclusively in S3A client. 
URL: https://github.com/apache/hadoop/pull/1826#discussion_r377181855
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java
 ##
 @@ -56,7 +57,13 @@ public AmazonS3 createS3Client(URI name,
   final String userAgentSuffix) throws IOException {
 Configuration conf = getConf();
 final ClientConfiguration awsConf = S3AUtils
-.createAwsConf(getConf(), bucket, Constants.AWS_SERVICE_IDENTIFIER_S3);
+.createAwsConf(conf, bucket, Constants.AWS_SERVICE_IDENTIFIER_S3);
+
+// throttling is explicitly disabled on the S3 client so that
+// all failures are collected
+awsConf.setUseThrottleRetries(
+conf.getBoolean(EXPERIMENTAL_AWS_INTERNAL_THROTTLING, true));
 
 Review comment:
   done


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #1826: HADOOP-16823. Manage S3 Throttling exclusively in S3A client.

2020-02-10 Thread GitBox
steveloughran commented on a change in pull request #1826: HADOOP-16823. Manage 
S3 Throttling exclusively in S3A client. 
URL: https://github.com/apache/hadoop/pull/1826#discussion_r377181603
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
 ##
 @@ -174,10 +174,34 @@ private Constants() {
   public static final String PROXY_DOMAIN = "fs.s3a.proxy.domain";
   public static final String PROXY_WORKSTATION = "fs.s3a.proxy.workstation";
 
-  // number of times we should retry errors
+  /**
+   * Number of times the AWS client library should retry errors before
+   * escalating to the S3A code: {@value}.
+   */
   public static final String MAX_ERROR_RETRIES = "fs.s3a.attempts.maximum";
+
+  /**
+   * Default number of times the AWS client library should retry errors before
+   * escalating to the S3A code: {@value}.
+   */
   public static final int DEFAULT_MAX_ERROR_RETRIES = 10;
 
+  /**
+   * Experimental/Unstable feature: should the AWS client library retry
+   * throttle responses before escalating to the S3A code: {@value}.
+   *
+   * When set to false, the S3A connector sees all S3 throttle events,
+   * and so can update its counters and metrics, and use its own retry
+   * policy.
+   * However, this may have adverse effects on some operations where the S3A
+   * code cannot retry as efficiently as the AWS client library.
+   *
+   * This only applies to S3 operations, not to DynamoDB or other services.
+   */
+  @InterfaceStability.Unstable
+  public static final String EXPERIMENTAL_AWS_INTERNAL_THROTTLING =
+  "fs.s3a.experimental.aws.internal.throttling";
 
 Review comment:
   Updating the value, also changing the name to 
"fs.s3a.experimental.aws.s3.throttling" to make clear it's S3 only


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #1826: HADOOP-16823. Manage S3 Throttling exclusively in S3A client.

2020-02-10 Thread GitBox
steveloughran commented on a change in pull request #1826: HADOOP-16823. Manage 
S3 Throttling exclusively in S3A client. 
URL: https://github.com/apache/hadoop/pull/1826#discussion_r377178960
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
 ##
 @@ -174,10 +174,34 @@ private Constants() {
   public static final String PROXY_DOMAIN = "fs.s3a.proxy.domain";
   public static final String PROXY_WORKSTATION = "fs.s3a.proxy.workstation";
 
-  // number of times we should retry errors
+  /**
+   * Number of times the AWS client library should retry errors before
+   * escalating to the S3A code: {@value}.
+   */
   public static final String MAX_ERROR_RETRIES = "fs.s3a.attempts.maximum";
+
+  /**
+   * Default number of times the AWS client library should retry errors before
+   * escalating to the S3A code: {@value}.
+   */
   public static final int DEFAULT_MAX_ERROR_RETRIES = 10;
 
+  /**
+   * Experimental/Unstable feature: should the AWS client library retry
+   * throttle responses before escalating to the S3A code: {@value}.
+   *
+   * When set to false, the S3A connector sees all S3 throttle events,
+   * and so can update its counters and metrics, and use its own retry
+   * policy.
+   * However, this may have adverse effects on some operations where the S3A
+   * code cannot retry as efficiently as the AWS client library.
+   *
+   * This only applies to S3 operations, not to DynamoDB or other services.
+   */
+  @InterfaceStability.Unstable
+  public static final String EXPERIMENTAL_AWS_INTERNAL_THROTTLING =
+  "fs.s3a.experimental.aws.internal.throttling";
 
 Review comment:
   it's false, but yes


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #1838: HADOOP-16711 Add way to skip verifyBuckets check in S3A fs init()

2020-02-10 Thread GitBox
steveloughran commented on a change in pull request #1838: HADOOP-16711 Add way 
to skip verifyBuckets check in S3A fs init()
URL: https://github.com/apache/hadoop/pull/1838#discussion_r377166880
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
 ##
 @@ -452,6 +450,33 @@ public void initialize(URI name, Configuration 
originalConf)
 
   }
 
+  /**
+   * Test bucket existence in S3.
+   * When the value of {@link Constants#S3A_BUCKET_PROBE} is set to 0 by the
+   * client, the bucket existence check is skipped to improve the performance
+   * of S3AFileSystem initialisation. When set to 1 or 2, the bucket existence
+   * check will be performed, which is potentially slow.
+   * @throws IOException
+   */
+  private void doBucketProbing() throws IOException {
+int bucketProbe = this.getConf()
+.getInt(S3A_BUCKET_PROBE, S3A_BUCKET_PROBE_DEFAULT);
+Preconditions.checkArgument(bucketProbe >= 0 && bucketProbe <= 2,
+"Value of " + S3A_BUCKET_PROBE + " should be between 0 to 2");
+switch (bucketProbe) {
+case 0:
+  break;
+case 1:
+  verifyBucketExists();
+  break;
+case 2:
+  verifyBucketExistsV2();
+  break;
+default:
+  break;
 
 Review comment:
   Add a comment here saying this won't be reached because of the checks above
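   
   A sketch of what that would look like, reusing the patch's own switch; the 
comment wording is illustrative:
   
   ```java
   switch (bucketProbe) {
   case 0:
     // no probe: failure is postponed until the first S3 operation
     break;
   case 1:
     verifyBucketExists();
     break;
   case 2:
     verifyBucketExistsV2();
     break;
   default:
     // never reached: the Preconditions check above restricts the value
     // to the range [0, 2]
     break;
   }
   ```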


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #1838: HADOOP-16711 Add way to skip verifyBuckets check in S3A fs init()

2020-02-10 Thread GitBox
steveloughran commented on a change in pull request #1838: HADOOP-16711 Add way 
to skip verifyBuckets check in S3A fs init()
URL: https://github.com/apache/hadoop/pull/1838#discussion_r377170722
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ABucketExistence.java
 ##
 @@ -0,0 +1,103 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.net.URI;
+import java.util.UUID;
+
+import org.junit.Test;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+
+import static org.apache.hadoop.fs.contract.ContractTestUtils.dataset;
+import static org.apache.hadoop.fs.contract.ContractTestUtils.writeDataset;
+import static org.apache.hadoop.fs.s3a.Constants.FS_S3A;
+import static org.apache.hadoop.fs.s3a.Constants.S3A_BUCKET_PROBE;
+import static org.apache.hadoop.test.LambdaTestUtils.intercept;
+
+/**
+ * Class to test bucket existence api.
+ * See {@link S3AFileSystem#doBucketProbing()}.
+ */
+public class ITestS3ABucketExistence extends AbstractS3ATestBase {
+
+  private FileSystem fs;
+
+  private final String randomBucket =
+  "random-bucket-" + UUID.randomUUID().toString();
+
+  private final URI uri = URI.create(FS_S3A + "://" + randomBucket);
+
+  @Test
+  public void testNoBucketProbing() throws Exception {
+Configuration configuration = this.getConfiguration();
+configuration.setInt(S3A_BUCKET_PROBE, 0);
+try {
+  fs = FileSystem.get(uri, configuration);
+} catch (IOException ex) {
+  LOG.error("Exception : ", ex);
+  fail("Exception shouldn't have occurred");
+}
+assertNotNull("FileSystem should have been initialized", fs);
+
+Path path = new Path(uri);
+intercept(FileNotFoundException.class,
 
 Review comment:
   What's the full stack here? Because I don't want the FNFE from a missing 
path to be confused with a getFileStatus failure, as that could go on to 
confuse other things


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #1838: HADOOP-16711 Add way to skip verifyBuckets check in S3A fs init()

2020-02-10 Thread GitBox
steveloughran commented on a change in pull request #1838: HADOOP-16711 Add way 
to skip verifyBuckets check in S3A fs init()
URL: https://github.com/apache/hadoop/pull/1838#discussion_r377164858
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
 ##
 @@ -452,6 +450,33 @@ public void initialize(URI name, Configuration 
originalConf)
 
   }
 
+  /**
+   * Test bucket existence in S3.
+   * When the value of {@link Constants#S3A_BUCKET_PROBE} is set to 0 by the
+   * client, the bucket existence check is skipped to improve the performance
+   * of S3AFileSystem initialisation. When set to 1 or 2, the bucket existence
 
 Review comment:
   even though I support this spelling, I'm afraid we need to use the US one. 
That avoids us having field bug reports about misspelt words


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #1838: HADOOP-16711 Add way to skip verifyBuckets check in S3A fs init()

2020-02-10 Thread GitBox
steveloughran commented on a change in pull request #1838: HADOOP-16711 Add way 
to skip verifyBuckets check in S3A fs init()
URL: https://github.com/apache/hadoop/pull/1838#discussion_r377168238
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
 ##
 @@ -452,6 +450,33 @@ public void initialize(URI name, Configuration 
originalConf)
 
   }
 
+  /**
+   * Test bucket existence in S3.
+   * When the value of {@link Constants#S3A_BUCKET_PROBE} is set to 0 by the
+   * client, the bucket existence check is skipped to improve the performance
+   * of S3AFileSystem initialisation. When set to 1 or 2, the bucket existence
+   * check will be performed, which is potentially slow.
+   * @throws IOException
+   */
+  private void doBucketProbing() throws IOException {
 
 Review comment:
   add a RetryPolicy by looking at the methods it calls and seeing what they 
do.
   
   We mustn't have a retry() calling operations which retry themselves anyway, 
as it explodes the number of retries which take place. It's ok to use once() 
round either of them, as the exception-translating code is a no-op on already 
translated exceptions. See the sketch below.
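   
   A minimal sketch of the annotation side of this, assuming the 
`org.apache.hadoop.fs.s3a.Retries` annotations, with stub methods standing in 
for the real probes:
   
   ```java
   import java.io.IOException;
   
   import org.apache.hadoop.fs.s3a.Retries;
   
   class BucketProbeRetrySketch {
     // The probe methods retry (and translate exceptions) internally, so
     // the caller only documents that fact; wrapping them in another
     // retry() loop would multiply the number of attempts.
     @Retries.RetryTranslated
     void doBucketProbing() throws IOException {
       verifyBucketExists();
     }
   
     @Retries.RetryTranslated
     void verifyBucketExists() throws IOException {
       // stands in for the S3AFileSystem implementation
     }
   }
   ```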


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1838: HADOOP-16711 Add way to skip verifyBuckets check in S3A fs init()

2020-02-10 Thread GitBox
hadoop-yetus removed a comment on issue #1838: HADOOP-16711 Add way to skip 
verifyBuckets check in S3A fs init()
URL: https://github.com/apache/hadoop/pull/1838#issuecomment-583830785
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 58s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
3 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  27m  6s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 33s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 47s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m  8s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   1m 22s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   1m 15s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 39s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 34s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 26s |  hadoop-tools/hadoop-aws: The 
patch generated 22 new + 15 unchanged - 0 fixed = 37 total (was 15)  |
   | +1 :green_heart: |  mvnsite  |   0m 42s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 5 line(s) that end in 
whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  shadedclient  |  17m 52s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  the patch passed  |
   | -1 :x: |  findbugs  |   1m 22s |  hadoop-tools/hadoop-aws generated 1 new 
+ 0 unchanged - 0 fixed = 1 total (was 0)  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 34s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 36s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  78m 33s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-tools/hadoop-aws |
   |  |  Switch statement found in 
org.apache.hadoop.fs.s3a.S3AFileSystem.doBucketProbing() where default case is 
missing  At S3AFileSystem.java:where default case is missing  At 
S3AFileSystem.java:[lines 463-470] |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1838/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1838 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint |
   | uname | Linux d8622c0ac2b8 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d23317b |
   | Default Java | 1.8.0_232 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1838/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1838/1/artifact/out/whitespace-eol.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1838/1/artifact/out/new-findbugs-hadoop-tools_hadoop-aws.html
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1838/1/testReport/ |
   | Max. process+thread count | 342 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1838/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org

[GitHub] [hadoop] steveloughran commented on a change in pull request #1838: HADOOP-16711 Add way to skip verifyBuckets check in S3A fs init()

2020-02-10 Thread GitBox
steveloughran commented on a change in pull request #1838: HADOOP-16711 Add way 
to skip verifyBuckets check in S3A fs init()
URL: https://github.com/apache/hadoop/pull/1838#discussion_r377172646
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ABucketExistence.java
 ##
 @@ -0,0 +1,103 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.net.URI;
+import java.util.UUID;
+
+import org.junit.Test;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+
+import static org.apache.hadoop.fs.contract.ContractTestUtils.dataset;
+import static org.apache.hadoop.fs.contract.ContractTestUtils.writeDataset;
+import static org.apache.hadoop.fs.s3a.Constants.FS_S3A;
+import static org.apache.hadoop.fs.s3a.Constants.S3A_BUCKET_PROBE;
+import static org.apache.hadoop.test.LambdaTestUtils.intercept;
+
+/**
+ * Class to test bucket existence api.
+ * See {@link S3AFileSystem#doBucketProbing()}.
+ */
+public class ITestS3ABucketExistence extends AbstractS3ATestBase {
+
+  private FileSystem fs;
+
+  private final String randomBucket =
+  "random-bucket-" + UUID.randomUUID().toString();
+
+  private final URI uri = URI.create(FS_S3A + "://" + randomBucket);
+
+  @Test
+  public void testNoBucketProbing() throws Exception {
+Configuration configuration = this.getConfiguration();
+configuration.setInt(S3A_BUCKET_PROBE, 0);
 
 Review comment:
   You get a problem in this code because the FileSystem cache will cache on 
the URI only; if there's an FS in the cache, your settings aren't picked up - 
you will always get the previous instance. That often causes intermittent 
problems with test runs.
   
   1. Use S3ATestUtils.disableFilesystemCaching to turn off caching of the 
filesystems you get via FileSystem.get
   2. and close() them at the end of each test case. You can do this with 
try/finally or a try-with-resources clause, as in the sketch below.
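   
   A minimal sketch of that pattern, assuming the test's existing `uri` field, 
constants and static imports:
   
   ```java
   Configuration configuration = getConfiguration();
   configuration.setInt(S3A_BUCKET_PROBE, 0);
   // bypass the FileSystem cache so this setting is actually honoured
   S3ATestUtils.disableFilesystemCaching(configuration);
   // try-with-resources closes the filesystem at the end of the test case
   try (FileSystem fs = FileSystem.get(uri, configuration)) {
     intercept(FileNotFoundException.class,
         () -> fs.getFileStatus(new Path(uri)));
   }
   ```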
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #1838: HADOOP-16711 Add way to skip verifyBuckets check in S3A fs init()

2020-02-10 Thread GitBox
steveloughran commented on a change in pull request #1838: HADOOP-16711 Add way 
to skip verifyBuckets check in S3A fs init()
URL: https://github.com/apache/hadoop/pull/1838#discussion_r377170084
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ABucketExistence.java
 ##
 @@ -0,0 +1,103 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.net.URI;
+import java.util.UUID;
+
+import org.junit.Test;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+
+import static org.apache.hadoop.fs.contract.ContractTestUtils.dataset;
+import static org.apache.hadoop.fs.contract.ContractTestUtils.writeDataset;
+import static org.apache.hadoop.fs.s3a.Constants.FS_S3A;
+import static org.apache.hadoop.fs.s3a.Constants.S3A_BUCKET_PROBE;
+import static org.apache.hadoop.test.LambdaTestUtils.intercept;
+
+/**
+ * Class to test bucket existence api.
+ * See {@link S3AFileSystem#doBucketProbing()}.
+ */
+public class ITestS3ABucketExistence extends AbstractS3ATestBase {
+
+  private FileSystem fs;
+
+  private final String randomBucket =
+  "random-bucket-" + UUID.randomUUID().toString();
+
+  private final URI uri = URI.create(FS_S3A + "://" + randomBucket);
+
+  @Test
+  public void testNoBucketProbing() throws Exception {
+Configuration configuration = this.getConfiguration();
+configuration.setInt(S3A_BUCKET_PROBE, 0);
+try {
+  fs = FileSystem.get(uri, configuration);
+} catch (IOException ex) {
+  LOG.error("Exception : ", ex);
+  fail("Exception shouldn't have occurred");
+}
+assertNotNull("FileSystem should have been initialized", fs);
+
+Path path = new Path(uri);
+intercept(FileNotFoundException.class,
+() -> fs.getFileStatus(path));
+
+Path src = new Path(fs.getUri() + "/testfile");
+byte[] data = dataset(1024, 'a', 'z');
+intercept(FileNotFoundException.class,
+() -> writeDataset(fs, src, data, data.length, 1024 * 1024, true));
+  }
+
+  @Test
+  public void testBucketProbingV1() throws Exception {
+Configuration configuration = this.getConfiguration();
 
 Review comment:
   no need for `this.`


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #1838: HADOOP-16711 Add way to skip verifyBuckets check in S3A fs init()

2020-02-10 Thread GitBox
steveloughran commented on a change in pull request #1838: HADOOP-16711 Add way 
to skip verifyBuckets check in S3A fs init()
URL: https://github.com/apache/hadoop/pull/1838#discussion_r377169124
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ABucketExistence.java
 ##
 @@ -0,0 +1,103 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.net.URI;
+import java.util.UUID;
+
+import org.junit.Test;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+
+import static org.apache.hadoop.fs.contract.ContractTestUtils.dataset;
+import static org.apache.hadoop.fs.contract.ContractTestUtils.writeDataset;
+import static org.apache.hadoop.fs.s3a.Constants.FS_S3A;
+import static org.apache.hadoop.fs.s3a.Constants.S3A_BUCKET_PROBE;
+import static org.apache.hadoop.test.LambdaTestUtils.intercept;
+
+/**
+ * Class to test bucket existence api.
+ * See {@link S3AFileSystem#doBucketProbing()}.
+ */
+public class ITestS3ABucketExistence extends AbstractS3ATestBase {
+
+  private FileSystem fs;
+
+  private final String randomBucket =
+  "random-bucket-" + UUID.randomUUID().toString();
+
+  private final URI uri = URI.create(FS_S3A + "://" + randomBucket);
+
+  @Test
+  public void testNoBucketProbing() throws Exception {
+Configuration configuration = this.getConfiguration();
+configuration.setInt(S3A_BUCKET_PROBE, 0);
+try {
+  fs = FileSystem.get(uri, configuration);
+} catch (IOException ex) {
+  LOG.error("Exception : ", ex);
+  fail("Exception shouldn't have occurred");
 
 Review comment:
   don't do this...just have the exception thrown all the way up
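   
   That is, under the same fixture, the body shrinks to something like this 
(a sketch; an unexpected IOException now simply fails the test):
   
   ```java
   @Test
   public void testNoBucketProbing() throws Exception {
     Configuration configuration = getConfiguration();
     configuration.setInt(S3A_BUCKET_PROBE, 0);
     // no try/catch + fail(): any IOException surfaces as a test error
     fs = FileSystem.get(uri, configuration);
     intercept(FileNotFoundException.class,
         () -> fs.getFileStatus(new Path(uri)));
   }
   ```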


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #1838: HADOOP-16711 Add way to skip verifyBuckets check in S3A fs init()

2020-02-10 Thread GitBox
steveloughran commented on a change in pull request #1838: HADOOP-16711 Add way 
to skip verifyBuckets check in S3A fs init()
URL: https://github.com/apache/hadoop/pull/1838#discussion_r377169421
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ABucketExistence.java
 ##
 @@ -0,0 +1,103 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.net.URI;
+import java.util.UUID;
+
+import org.junit.Test;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+
+import static org.apache.hadoop.fs.contract.ContractTestUtils.dataset;
+import static org.apache.hadoop.fs.contract.ContractTestUtils.writeDataset;
+import static org.apache.hadoop.fs.s3a.Constants.FS_S3A;
+import static org.apache.hadoop.fs.s3a.Constants.S3A_BUCKET_PROBE;
+import static org.apache.hadoop.test.LambdaTestUtils.intercept;
+
+/**
+ * Class to test bucket existence api.
+ * See {@link S3AFileSystem#doBucketProbing()}.
+ */
+public class ITestS3ABucketExistence extends AbstractS3ATestBase {
+
+  private FileSystem fs;
+
+  private final String randomBucket =
+  "random-bucket-" + UUID.randomUUID().toString();
+
+  private final URI uri = URI.create(FS_S3A + "://" + randomBucket);
+
+  @Test
+  public void testNoBucketProbing() throws Exception {
+Configuration configuration = this.getConfiguration();
+configuration.setInt(S3A_BUCKET_PROBE, 0);
+try {
+  fs = FileSystem.get(uri, configuration);
+} catch (IOException ex) {
+  LOG.error("Exception : ", ex);
+  fail("Exception shouldn't have occurred");
+}
+assertNotNull("FileSystem should have been initialized", fs);
 
 Review comment:
   and no need to worry about this. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #1838: HADOOP-16711 Add way to skip verifyBuckets check in S3A fs init()

2020-02-10 Thread GitBox
steveloughran commented on a change in pull request #1838: HADOOP-16711 Add way 
to skip verifyBuckets check in S3A fs init()
URL: https://github.com/apache/hadoop/pull/1838#discussion_r377173220
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ABucketExistence.java
 ##
 @@ -0,0 +1,103 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.net.URI;
+import java.util.UUID;
+
+import org.junit.Test;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+
+import static org.apache.hadoop.fs.contract.ContractTestUtils.dataset;
+import static org.apache.hadoop.fs.contract.ContractTestUtils.writeDataset;
+import static org.apache.hadoop.fs.s3a.Constants.FS_S3A;
+import static org.apache.hadoop.fs.s3a.Constants.S3A_BUCKET_PROBE;
+import static org.apache.hadoop.test.LambdaTestUtils.intercept;
+
+/**
+ * Class to test bucket existence api.
+ * See {@link S3AFileSystem#doBucketProbing()}.
+ */
+public class ITestS3ABucketExistence extends AbstractS3ATestBase {
+
+  private FileSystem fs;
 
 Review comment:
   unless you want to share across test cases (you don't) or want to have 
cleanup in the teardown code, move this into a local variable in each test case


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bgaborg commented on issue #1826: HADOOP-16823. Manage S3 Throttling exclusively in S3A client.

2020-02-10 Thread GitBox
bgaborg commented on issue #1826: HADOOP-16823. Manage S3 Throttling 
exclusively in S3A client. 
URL: https://github.com/apache/hadoop/pull/1826#issuecomment-584100455
 
 
   (also this is not a bug imho, more like an improvement)


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bgaborg commented on a change in pull request #1826: HADOOP-16823. Manage S3 Throttling exclusively in S3A client.

2020-02-10 Thread GitBox
bgaborg commented on a change in pull request #1826: HADOOP-16823. Manage S3 
Throttling exclusively in S3A client. 
URL: https://github.com/apache/hadoop/pull/1826#discussion_r377018136
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
 ##
 @@ -174,10 +174,34 @@ private Constants() {
   public static final String PROXY_DOMAIN = "fs.s3a.proxy.domain";
   public static final String PROXY_WORKSTATION = "fs.s3a.proxy.workstation";
 
-  // number of times we should retry errors
+  /**
+   * Number of times the AWS client library should retry errors before
+   * escalating to the S3A code: {@value}.
+   */
   public static final String MAX_ERROR_RETRIES = "fs.s3a.attempts.maximum";
+
+  /**
+   * Default number of times the AWS client library should retry errors before
+   * escalating to the S3A code: {@value}.
+   */
   public static final int DEFAULT_MAX_ERROR_RETRIES = 10;
 
+  /**
+   * Experimental/Unstable feature: should the AWS client library retry
+   * throttle responses before escalating to the S3A code: {@value}.
+   *
+   * When set to false, the S3A connector sees all S3 throttle events,
+   * and so can update its counters and metrics, and use its own retry
+   * policy.
+   * However, this may have adverse effects on some operations where the S3A
+   * code cannot retry as efficiently as the AWS client library.
+   *
+   * This only applies to S3 operations, not to DynamoDB or other services.
+   */
+  @InterfaceStability.Unstable
+  public static final String EXPERIMENTAL_AWS_INTERNAL_THROTTLING =
+  "fs.s3a.experimental.aws.internal.throttling";
 
 Review comment:
   Where is the default value for this defined?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bgaborg commented on a change in pull request #1826: HADOOP-16823. Manage S3 Throttling exclusively in S3A client.

2020-02-10 Thread GitBox
bgaborg commented on a change in pull request #1826: HADOOP-16823. Manage S3 
Throttling exclusively in S3A client. 
URL: https://github.com/apache/hadoop/pull/1826#discussion_r377019280
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java
 ##
 @@ -56,7 +57,13 @@ public AmazonS3 createS3Client(URI name,
   final String userAgentSuffix) throws IOException {
 Configuration conf = getConf();
 final ClientConfiguration awsConf = S3AUtils
-.createAwsConf(getConf(), bucket, Constants.AWS_SERVICE_IDENTIFIER_S3);
+.createAwsConf(conf, bucket, Constants.AWS_SERVICE_IDENTIFIER_S3);
+
+// throttling is explicitly disabled on the S3 client so that
+// all failures are collected
+awsConf.setUseThrottleRetries(
+conf.getBoolean(EXPERIMENTAL_AWS_INTERNAL_THROTTLING, true));
 
 Review comment:
   I'd rather add a setting for it than to set it here.
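   
   A sketch of that shape, with an explicit default constant next to the key; 
the `_DEFAULT` name is illustrative, following the style of the surrounding 
constants:
   
   ```java
   // in Constants:
   public static final String EXPERIMENTAL_AWS_INTERNAL_THROTTLING =
       "fs.s3a.experimental.aws.internal.throttling";
   
   public static final boolean EXPERIMENTAL_AWS_INTERNAL_THROTTLING_DEFAULT =
       true;
   
   // in DefaultS3ClientFactory, the factory then reads the documented
   // default rather than an inline literal:
   awsConf.setUseThrottleRetries(
       conf.getBoolean(EXPERIMENTAL_AWS_INTERNAL_THROTTLING,
           EXPERIMENTAL_AWS_INTERNAL_THROTTLING_DEFAULT));
   ```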


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16849) start-build-env.sh behaves incorrectly when username is numeric only

2020-02-10 Thread Jihyun Cho (Jira)
Jihyun Cho created HADOOP-16849:
---

 Summary: start-build-env.sh behaves incorrectly when username is 
numeric only
 Key: HADOOP-16849
 URL: https://issues.apache.org/jira/browse/HADOOP-16849
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Reporter: Jihyun Cho
 Attachments: userid.patch

When the username is numeric only, the build environment does not run correctly.
Here is my case.

{noformat}
~/hadoop$ ./start-build-env.sh
...
Successfully tagged hadoop-build-1649860140:latest

[ASCII-art "Hadoop Dev." banner]

This is the standard Hadoop Developer build environment.
This has all the right tools installed required to build
Hadoop from source.

I have no name!@fceab279f8d1:~/hadoop$ whoami
whoami: cannot find name for user ID 1112533
I have no name!@fceab279f8d1:~/hadoop$ sudo ls
sudo: unknown uid 1112533: who are you?
{noformat}

I changed {{USER_NAME}} to {{USER_ID}} in the script. Then it worked correctly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1838: HADOOP-16711 Add way to skip verifyBuckets check in S3A fs init()

2020-02-10 Thread GitBox
hadoop-yetus commented on issue #1838: HADOOP-16711 Add way to skip 
verifyBuckets check in S3A fs init()
URL: https://github.com/apache/hadoop/pull/1838#issuecomment-584032513
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  24m 22s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
3 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  23m 46s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 24s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 40s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 11s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   1m  3s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   1m  1s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 34s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 29s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 18s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 42s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   1m 14s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 25s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 32s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  91m  0s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1838/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1838 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint |
   | uname | Linux 482fb6d9e33f 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d5467d2 |
   | Default Java | 1.8.0_242 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1838/2/testReport/ |
   | Max. process+thread count | 340 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1838/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1832: HDFS-13989. RBF: Add FSCK to the Router

2020-02-10 Thread GitBox
hadoop-yetus commented on issue #1832: HDFS-13989. RBF: Add FSCK to the Router
URL: https://github.com/apache/hadoop/pull/1832#issuecomment-584002175
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  30m 46s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m 14s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  22m  0s |  trunk passed  |
   | +1 :green_heart: |  compile  |   3m 32s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 56s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 48s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 25s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 12s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   1m  7s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   4m  2s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 21s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 37s |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 31s |  the patch passed  |
   | +1 :green_heart: |  javac  |   3m 31s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 51s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 40s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 24s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  7s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   4m 17s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 114m 20s |  hadoop-hdfs in the patch failed.  |
   | -1 :x: |  unit  |   9m 54s |  hadoop-hdfs-rbf in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 238m 20s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes |
   |   | hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA |
   |   | hadoop.hdfs.server.namenode.TestAddOverReplicatedStripedBlocks |
   |   | hadoop.hdfs.server.federation.router.TestRouterFaultTolerant |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1832/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1832 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 074a86392577 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d5467d2 |
   | Default Java | 1.8.0_242 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1832/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1832/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1832/3/testReport/ |
   | Max. process+thread count | 2924 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs 
hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1832/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org