[GitHub] [hadoop] bshashikant commented on issue #1596: HDDS-2233 - Remove ByteStringHelper and refactor the code to the place where it used

2019-10-08 Thread GitBox
bshashikant commented on issue #1596: HDDS-2233 - Remove ByteStringHelper and 
refactor the code to the place where it used
URL: https://github.com/apache/hadoop/pull/1596#issuecomment-539846865
 
 
   Thanks @fapifta for working on this. I have committed this.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bshashikant merged pull request #1596: HDDS-2233 - Remove ByteStringHelper and refactor the code to the place where it used

2019-10-08 Thread GitBox
bshashikant merged pull request #1596: HDDS-2233 - Remove ByteStringHelper and 
refactor the code to the place where it used
URL: https://github.com/apache/hadoop/pull/1596
 
 
   





[GitHub] [hadoop] bshashikant commented on issue #1596: HDDS-2233 - Remove ByteStringHelper and refactor the code to the place where it used

2019-10-08 Thread GitBox
bshashikant commented on issue #1596: HDDS-2233 - Remove ByteStringHelper and 
refactor the code to the place where it used
URL: https://github.com/apache/hadoop/pull/1596#issuecomment-539846644
 
 
   Thanks @fapifta for the clarification. I am +1 on the change as well. 





[GitHub] [hadoop] hadoop-yetus commented on issue #1623: HDDS-2269. Provide config for fair/non-fair for OM RW Lock.

2019-10-08 Thread GitBox
hadoop-yetus commented on issue #1623: HDDS-2269. Provide config for 
fair/non-fair for OM RW Lock.
URL: https://github.com/apache/hadoop/pull/1623#issuecomment-539846234
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:-------|
   | 0 | reexec | 42 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 29 | Maven dependency ordering for branch |
   | -1 | mvninstall | 31 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 32 | hadoop-ozone in trunk failed. |
   | -1 | compile | 23 | hadoop-hdds in trunk failed. |
   | -1 | compile | 17 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 64 | trunk passed |
   | +1 | mvnsite | 1 | trunk passed |
   | +1 | shadedclient | 858 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 23 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 18 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 960 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 34 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 21 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 31 | Maven dependency ordering for patch |
   | -1 | mvninstall | 38 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 38 | hadoop-ozone in the patch failed. |
   | -1 | compile | 27 | hadoop-hdds in the patch failed. |
   | -1 | compile | 21 | hadoop-ozone in the patch failed. |
   | -1 | javac | 27 | hadoop-hdds in the patch failed. |
   | -1 | javac | 21 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 31 | hadoop-hdds: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 723 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 20 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 17 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 34 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 20 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 29 | hadoop-hdds in the patch failed. |
   | -1 | unit | 27 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 34 | The patch does not generate ASF License warnings. |
   | | | 2444 | |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.3 Server=19.03.3 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1623/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1623 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux cd7c000a3eaa 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 87d9f36 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1623/2/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1623/2/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1623/2/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1623/2/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1623/2/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1623/2/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1623/2/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1623/2/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1623/2/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1623/2/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1623/2/artifact/out/patch-compile-hadoop-hdds.txt
 |
   | 

[GitHub] [hadoop] hadoop-yetus commented on issue #1623: HDDS-2269. Provide config for fair/non-fair for OM RW Lock.

2019-10-08 Thread GitBox
hadoop-yetus commented on issue #1623: HDDS-2269. Provide config for 
fair/non-fair for OM RW Lock.
URL: https://github.com/apache/hadoop/pull/1623#issuecomment-539846237
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:-------|
   | 0 | reexec | 56 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 70 | Maven dependency ordering for branch |
   | -1 | mvninstall | 37 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 39 | hadoop-ozone in trunk failed. |
   | -1 | compile | 22 | hadoop-hdds in trunk failed. |
   | -1 | compile | 17 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 65 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 854 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 23 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 19 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 960 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 37 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 22 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 31 | Maven dependency ordering for patch |
   | -1 | mvninstall | 36 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 40 | hadoop-ozone in the patch failed. |
   | -1 | compile | 28 | hadoop-hdds in the patch failed. |
   | -1 | compile | 21 | hadoop-ozone in the patch failed. |
   | -1 | javac | 28 | hadoop-hdds in the patch failed. |
   | -1 | javac | 21 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 30 | hadoop-hdds: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 716 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 21 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 18 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 36 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 21 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 30 | hadoop-hdds in the patch failed. |
   | -1 | unit | 28 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 36 | The patch does not generate ASF License warnings. |
   | | | 2512 | |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.3 Server=19.03.3 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1623/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1623 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 0f7b0cad7e0c 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 87d9f36 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1623/1/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1623/1/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1623/1/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1623/1/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1623/1/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1623/1/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1623/1/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1623/1/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1623/1/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1623/1/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1623/1/artifact/out/patch-compile-hadoop-hdds.txt
 |
   | 

[GitHub] [hadoop] hadoop-yetus commented on issue #1431: HDDS-1569 Support creating multiple pipelines with same datanode

2019-10-08 Thread GitBox
hadoop-yetus commented on issue #1431: HDDS-1569 Support creating multiple 
pipelines with same datanode
URL: https://github.com/apache/hadoop/pull/1431#issuecomment-539834587
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:-------|
   | 0 | reexec | 1368 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 1 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 19 new or modified test 
files. |
   ||| _ HDDS-1564 Compile Tests _ |
   | 0 | mvndep | 62 | Maven dependency ordering for branch |
   | -1 | mvninstall | 32 | hadoop-hdds in HDDS-1564 failed. |
   | -1 | mvninstall | 41 | hadoop-ozone in HDDS-1564 failed. |
   | -1 | compile | 18 | hadoop-hdds in HDDS-1564 failed. |
   | -1 | compile | 12 | hadoop-ozone in HDDS-1564 failed. |
   | +1 | checkstyle | 59 | HDDS-1564 passed |
   | +1 | mvnsite | 0 | HDDS-1564 passed |
   | +1 | shadedclient | 965 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 17 | hadoop-hdds in HDDS-1564 failed. |
   | -1 | javadoc | 16 | hadoop-ozone in HDDS-1564 failed. |
   | 0 | spotbugs | 1046 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 27 | hadoop-hdds in HDDS-1564 failed. |
   | -1 | findbugs | 16 | hadoop-ozone in HDDS-1564 failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 26 | Maven dependency ordering for patch |
   | -1 | mvninstall | 30 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 34 | hadoop-ozone in the patch failed. |
   | -1 | compile | 20 | hadoop-hdds in the patch failed. |
   | -1 | compile | 14 | hadoop-ozone in the patch failed. |
   | -1 | javac | 20 | hadoop-hdds in the patch failed. |
   | -1 | javac | 14 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 24 | hadoop-hdds: The patch generated 0 new + 0 
unchanged - 3 fixed = 0 total (was 3) |
   | +1 | checkstyle | 26 | The patch passed checkstyle in hadoop-ozone |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 758 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 20 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 17 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 30 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 17 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 27 | hadoop-hdds in the patch failed. |
   | -1 | unit | 26 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 31 | The patch does not generate ASF License warnings. |
   | | | 3838 | |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.3 Server=19.03.3 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1431/18/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1431 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux c984624cecab 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | HDDS-1564 / 7b5a5fe |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1431/18/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1431/18/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1431/18/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1431/18/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1431/18/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1431/18/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1431/18/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1431/18/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1431/18/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1431/18/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1431/18/artifact/out/patch-compile-hadoop-hdds.txt
 |
 

[GitHub] [hadoop] bharatviswa504 opened a new pull request #1623: HDDS-2269. Provide config for fair/non-fair for OM RW Lock.

2019-10-08 Thread GitBox
bharatviswa504 opened a new pull request #1623: HDDS-2269. Provide config for 
fair/non-fair for OM RW Lock.
URL: https://github.com/apache/hadoop/pull/1623
 
 
   https://issues.apache.org/jira/browse/HDDS-2269
   
   Provide config in OzoneManager Lock for fair/non-fair for OM RW Lock.
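The fairness switch described above maps directly onto the JDK's ReentrantReadWriteLock, which takes a fairness flag at construction. A minimal, self-contained sketch (the config key name below is illustrative, not necessarily the one from the patch):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class OmLockSketch {
    // Illustrative config key name; the actual key comes from the HDDS-2269 patch.
    static final String LOCK_FAIRNESS_KEY = "ozone.om.lock.fair";

    private final ReentrantReadWriteLock lock;

    public OmLockSketch(boolean fair) {
        // true  -> fair: the longest-waiting thread acquires next (no barging)
        // false -> non-fair: the JDK default, usually higher throughput
        this.lock = new ReentrantReadWriteLock(fair);
    }

    public boolean isFair() {
        return lock.isFair();
    }

    public static void main(String[] args) {
        System.out.println(new OmLockSketch(true).isFair());  // true
        System.out.println(new OmLockSketch(false).isFair()); // false
    }
}
```

Fairness avoids writer starvation under heavy read load at the cost of throughput, which is presumably why the patch makes it configurable rather than picking one mode.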
   





[GitHub] [hadoop] sidseth commented on issue #1601: HADOOP-16635. S3A innerGetFileStatus scans for directories-only still does a HEAD.

2019-10-08 Thread GitBox
sidseth commented on issue #1601: HADOOP-16635. S3A innerGetFileStatus scans 
for directories-only still does a HEAD.
URL: https://github.com/apache/hadoop/pull/1601#issuecomment-539805965
 
 
   I'm not really an expert on the S3AFS core functionality. That said, the current 
code does seem quite broken if HEAD is not included in the probe set.
   
   ```
   if (!key.endsWith("/") && probes.contains(StatusProbeEnum.DirMarker)) {
   ```
   This may need a minor change. I believe the leading key.endsWith("/") check is 
there to avoid duplicating a HEAD request when the key already ends with a "/". 
However, if the set of probes does not contain "Head" but does contain "DirMarker", 
and the key ends in a "/", there won't be any HEAD requests at all. Is that expected?
   
   Other than this, looks good to me.
   
   General question, unrelated to the patch: is a single LIST not sufficient to 
cover both the with- and without-"/" scenarios?
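The scenario raised in the review can be reproduced with a self-contained sketch of the quoted condition (the enum and method below are simplified stand-ins, not the actual S3AFileSystem internals):

```java
import java.util.EnumSet;
import java.util.Set;

public class ProbeSketch {
    // Simplified stand-in for the real StatusProbeEnum.
    enum StatusProbeEnum { Head, DirMarker, List }

    /** True if the DirMarker HEAD request (on key + "/") would be issued. */
    static boolean issuesDirMarkerHead(String key, Set<StatusProbeEnum> probes) {
        // The quoted condition: skip the extra HEAD when the key already ends with "/".
        return !key.endsWith("/") && probes.contains(StatusProbeEnum.DirMarker);
    }

    public static void main(String[] args) {
        Set<StatusProbeEnum> dirOnly = EnumSet.of(StatusProbeEnum.DirMarker);
        // Key already ends with "/": the DirMarker HEAD is skipped, and since
        // Head is absent from the probe set, no HEAD is issued at all --
        // exactly the gap the reviewer is asking about.
        System.out.println(issuesDirMarkerHead("a/b/", dirOnly)); // false
        System.out.println(issuesDirMarkerHead("a/b", dirOnly));  // true
    }
}
```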
   





[jira] [Commented] (HADOOP-15616) Incorporate Tencent Cloud COS File System Implementation

2019-10-08 Thread Sammi Chen (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16947327#comment-16947327
 ] 

Sammi Chen commented on HADOOP-15616:
-

It's on trunk.  Fix Version updated.  [~weichiu], thanks for the reminder. 

> Incorporate Tencent Cloud COS File System Implementation
> 
>
> Key: HADOOP-15616
> URL: https://issues.apache.org/jira/browse/HADOOP-15616
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/cos
>Reporter: Junping Du
>Assignee: YangY
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15616.001.patch, HADOOP-15616.002.patch, 
> HADOOP-15616.003.patch, HADOOP-15616.004.patch, HADOOP-15616.005.patch, 
> HADOOP-15616.006.patch, HADOOP-15616.007.patch, HADOOP-15616.008.patch, 
> HADOOP-15616.009.patch, HADOOP-15616.010.patch, HADOOP-15616.011.patch, 
> Tencent-COS-Integrated-v2.pdf, Tencent-COS-Integrated.pdf
>
>
> Tencent Cloud is one of the top two cloud vendors in the China market, and its 
> object store COS ([https://intl.cloud.tencent.com/product/cos]) is widely used 
> among China's cloud users, but it is currently hard for Hadoop users to access 
> data stored on COS because Hadoop has no native support for it.
> This work aims to integrate Tencent Cloud COS with Hadoop/Spark/Hive, just 
> as was done before for S3, ADL, OSS, etc. With simple configuration, 
> Hadoop applications can read/write data from COS without any code change.
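As a concrete illustration of the "simple configuration" mentioned above, a core-site.xml fragment along these lines would enable the CosN connector. The property names follow the hadoop-cos module's documentation; treat them as illustrative and verify against the committed module:

```xml
<!-- Illustrative core-site.xml fragment for the CosN file system. -->
<property>
  <name>fs.cosn.impl</name>
  <value>org.apache.hadoop.fs.cosn.CosNFileSystem</value>
</property>
<property>
  <name>fs.cosn.userinfo.secretId</name>
  <value>YOUR_SECRET_ID</value>
</property>
<property>
  <name>fs.cosn.userinfo.secretKey</name>
  <value>YOUR_SECRET_KEY</value>
</property>
<property>
  <name>fs.cosn.bucket.region</name>
  <value>ap-beijing</value>
</property>
```

With this in place, paths such as cosn://&lt;bucket&gt;/&lt;path&gt; should be usable from `hadoop fs` and from MapReduce/Spark jobs without code changes.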



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15616) Incorporate Tencent Cloud COS File System Implementation

2019-10-08 Thread Sammi Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sammi Chen updated HADOOP-15616:

Fix Version/s: 3.3.0

> Incorporate Tencent Cloud COS File System Implementation
> 
>
> Key: HADOOP-15616
> URL: https://issues.apache.org/jira/browse/HADOOP-15616
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/cos
>Reporter: Junping Du
>Assignee: YangY
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15616.001.patch, HADOOP-15616.002.patch, 
> HADOOP-15616.003.patch, HADOOP-15616.004.patch, HADOOP-15616.005.patch, 
> HADOOP-15616.006.patch, HADOOP-15616.007.patch, HADOOP-15616.008.patch, 
> HADOOP-15616.009.patch, HADOOP-15616.010.patch, HADOOP-15616.011.patch, 
> Tencent-COS-Integrated-v2.pdf, Tencent-COS-Integrated.pdf
>
>
> Tencent Cloud is one of the top two cloud vendors in the China market, and its 
> object store COS ([https://intl.cloud.tencent.com/product/cos]) is widely used 
> among China's cloud users, but it is currently hard for Hadoop users to access 
> data stored on COS because Hadoop has no native support for it.
> This work aims to integrate Tencent Cloud COS with Hadoop/Spark/Hive, just 
> as was done before for S3, ADL, OSS, etc. With simple configuration, 
> Hadoop applications can read/write data from COS without any code change.






[jira] [Commented] (HADOOP-16615) Add password check for credential provider

2019-10-08 Thread hong dongdong (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16947316#comment-16947316
 ] 

hong dongdong commented on HADOOP-16615:


[~ste...@apache.org] please review PR 
[https://github.com/apache/hadoop/pull/1614]; the test failures seem 
unrelated to this patch.

> Add password check for credential provider
> --
>
> Key: HADOOP-16615
> URL: https://issues.apache.org/jira/browse/HADOOP-16615
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: hong dongdong
>Priority: Major
> Attachments: HADOOP-16615.patch
>
>
> When we use the Hadoop credential provider to store a password, we cannot be 
> sure the password is the same as the one we remember.
> So I think we need a check tool.
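A check tool along these lines would read the stored password back (in the real patch presumably via Hadoop's CredentialProvider API, e.g. Configuration.getPassword) and compare it with what the user types. A minimal, self-contained sketch of the comparison step only; the helper below is hypothetical, not taken from the patch:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class PasswordCheckSketch {
    /**
     * Compares a user-supplied password with a stored one. In a real tool the
     * "stored" value would come from the credential provider, not a literal.
     */
    static boolean matches(char[] entered, char[] stored) {
        byte[] a = new String(entered).getBytes(StandardCharsets.UTF_8);
        byte[] b = new String(stored).getBytes(StandardCharsets.UTF_8);
        // MessageDigest.isEqual runs in constant time, avoiding timing leaks.
        return MessageDigest.isEqual(a, b);
    }

    public static void main(String[] args) {
        System.out.println(matches("s3cret".toCharArray(), "s3cret".toCharArray())); // true
        System.out.println(matches("s3cret".toCharArray(), "other".toCharArray()));  // false
    }
}
```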






[jira] [Commented] (HADOOP-16579) Upgrade to Apache Curator 4.2.0 in Hadoop

2019-10-08 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16947313#comment-16947313
 ] 

Wei-Chiu Chuang commented on HADOOP-16579:
--

Once I updated ZooKeeper to 3.5.5 as well, the Curator tests passed. I'll subject this 
patch to more tests.

> Upgrade to Apache Curator 4.2.0 in Hadoop
> -
>
> Key: HADOOP-16579
> URL: https://issues.apache.org/jira/browse/HADOOP-16579
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Mate Szalay-Beko
>Assignee: Norbert Kalmár
>Priority: Major
>
> Currently in Hadoop we are using [ZooKeeper version 
> 3.4.13|https://github.com/apache/hadoop/blob/7f9073132dcc9db157a6792635d2ed099f2ef0d2/hadoop-project/pom.xml#L90].
>  ZooKeeper 3.5.5 is the latest stable Apache ZooKeeper release. It contains 
> many new features (including SSL related improvements which can be very 
> important for production use; see [the release 
> notes|https://zookeeper.apache.org/doc/r3.5.5/releasenotes.html]).
> Apache Curator is a high level ZooKeeper client library, that makes it easier 
> to use the low level ZooKeeper API. Currently [in Hadoop we are using Curator 
> 2.13.0|https://github.com/apache/hadoop/blob/7f9073132dcc9db157a6792635d2ed099f2ef0d2/hadoop-project/pom.xml#L91]
>  and [in Ozone we use Curator 
> 2.12.0|https://github.com/apache/hadoop/blob/7f9073132dcc9db157a6792635d2ed099f2ef0d2/pom.ozone.xml#L146].
> Curator 2.x is supporting only the ZooKeeper 3.4.x releases, while Curator 
> 3.x is compatible only with the new ZooKeeper 3.5.x releases. Fortunately, 
> the latest Curator 4.x versions are compatible with both ZooKeeper 3.4.x and 
> 3.5.x. (see [the relevant Curator 
> page|https://curator.apache.org/zk-compatibility.html]). Many Apache projects 
> have already migrated to Curator 4 (like HBase, Phoenix, Druid, etc.); other 
> components are doing it right now (e.g. Hive).
> *The aims of this task are* to:
>  - change Curator version in Hadoop to the latest stable 4.x version 
> (currently 4.2.0)
>  - also make sure we don't have multiple ZooKeeper versions in the classpath 
> to avoid runtime problems (it is 
> [recommended|https://curator.apache.org/zk-compatibility.html] to exclude the 
> ZooKeeper which come with Curator, so that there will be only a single 
> ZooKeeper version used runtime in Hadoop)
> In this ticket we still don't want to change the default ZooKeeper version in 
> Hadoop, we only want to make it possible for the community to be able to 
> build / use Hadoop with the new ZooKeeper (e.g. if they need to secure the 
> ZooKeeper communication with SSL, which is only supported in the new ZooKeeper 
> version). Upgrading to Curator 4.x should keep Hadoop to be compatible with 
> both ZooKeeper 3.4 and 3.5.
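The classpath concern above is typically addressed with a Maven exclusion. A sketch of what that could look like in a pom.xml (coordinates and versions follow the ticket's discussion, but the actual patch should be checked):

```xml
<!-- Sketch only: exclude the ZooKeeper that Curator pulls in transitively,
     then pin a single ZooKeeper version explicitly, as the Curator
     zk-compatibility page recommends. -->
<dependency>
  <groupId>org.apache.curator</groupId>
  <artifactId>curator-recipes</artifactId>
  <version>4.2.0</version>
  <exclusions>
    <exclusion>
      <groupId>org.apache.zookeeper</groupId>
      <artifactId>zookeeper</artifactId>
    </exclusion>
  </exclusions>
</dependency>
<dependency>
  <groupId>org.apache.zookeeper</groupId>
  <artifactId>zookeeper</artifactId>
  <version>3.4.13</version>
</dependency>
```

This keeps exactly one ZooKeeper version on the runtime classpath, so the default can stay at 3.4.x while still allowing a build against 3.5.x.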






[GitHub] [hadoop] hddong commented on issue #1614: HADOOP-16615. Add password check for credential provider

2019-10-08 Thread GitBox
hddong commented on issue #1614: HADOOP-16615. Add password check for 
credential provider
URL: https://github.com/apache/hadoop/pull/1614#issuecomment-539782595
 
 
   @steveloughran HADOOP-16615 patch here.





[GitHub] [hadoop] hadoop-yetus commented on issue #1528: HDDS-2181. Ozone Manager should send correct ACL type in ACL requests…

2019-10-08 Thread GitBox
hadoop-yetus commented on issue #1528: HDDS-2181. Ozone Manager should send 
correct ACL type in ACL requests…
URL: https://github.com/apache/hadoop/pull/1528#issuecomment-539782414
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:-------|
   | 0 | reexec | 148 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 47 | Maven dependency ordering for branch |
   | -1 | mvninstall | 56 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 42 | hadoop-ozone in trunk failed. |
   | -1 | compile | 22 | hadoop-hdds in trunk failed. |
   | -1 | compile | 14 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 74 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1074 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 21 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 18 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 1175 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 39 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 18 | hadoop-ozone in trunk failed. |
   | -0 | patch | 1210 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 16 | Maven dependency ordering for patch |
   | -1 | mvninstall | 34 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 39 | hadoop-ozone in the patch failed. |
   | -1 | compile | 26 | hadoop-hdds in the patch failed. |
   | -1 | compile | 17 | hadoop-ozone in the patch failed. |
   | -1 | javac | 26 | hadoop-hdds in the patch failed. |
   | -1 | javac | 17 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 65 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 797 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 19 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 16 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 29 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 16 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 25 | hadoop-hdds in the patch failed. |
   | -1 | unit | 22 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 2841 | |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.3 Server=19.03.3 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/10/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1528 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 7c3bc5a49628 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 87d9f36 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/10/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/10/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/10/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/10/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/10/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/10/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/10/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/10/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/10/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/10/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/10/artifact/out/patch-compile-hadoop-hdds.txt
 |
   | compile | 

[GitHub] [hadoop] hadoop-yetus commented on issue #1591: HADOOP-16629: support copyFile in s3afilesystem

2019-10-08 Thread GitBox
hadoop-yetus commented on issue #1591: HADOOP-16629: support copyFile in 
s3afilesystem
URL: https://github.com/apache/hadoop/pull/1591#issuecomment-539773867
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 40 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 68 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1076 | trunk passed |
   | -1 | compile | 185 | root in trunk failed. |
   | +1 | checkstyle | 150 | trunk passed |
   | +1 | mvnsite | 115 | trunk passed |
   | +1 | shadedclient | 1095 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 119 | trunk passed |
   | 0 | spotbugs | 60 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 171 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 25 | Maven dependency ordering for patch |
   | +1 | mvninstall | 87 | the patch passed |
   | -1 | compile | 177 | root in the patch failed. |
   | -1 | javac | 177 | root in the patch failed. |
   | -0 | checkstyle | 143 | root: The patch generated 22 new + 106 unchanged - 
0 fixed = 128 total (was 106) |
   | +1 | mvnsite | 100 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 149 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 106 | the patch passed |
   | +1 | findbugs | 176 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 510 | hadoop-common in the patch failed. |
   | +1 | unit | 75 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 31 | The patch does not generate ASF License warnings. |
   | | | 4595 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.fs.TestHarFileSystem |
   |   | hadoop.fs.TestFilterFileSystem |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.3 Server=19.03.3 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1591/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1591 |
   | JIRA Issue | HADOOP-16629 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 90b812655317 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 87d9f36 |
   | Default Java | 1.8.0_222 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1591/2/artifact/out/branch-compile-root.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1591/2/artifact/out/patch-compile-root.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1591/2/artifact/out/patch-compile-root.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1591/2/artifact/out/diff-checkstyle-root.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1591/2/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1591/2/testReport/ |
   | Max. process+thread count | 1412 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1591/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16629) support copyFile in s3afilesystem

2019-10-08 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16947296#comment-16947296
 ] 

Hadoop QA commented on HADOOP-16629:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
40s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
8s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
56s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  3m  
5s{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 15s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m  
0s{color} | {color:blue} Used deprecated FindBugs config; considering switching 
to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  2m 
57s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  2m 57s{color} 
| {color:red} root in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 23s{color} | {color:orange} root: The patch generated 22 new + 106 unchanged 
- 0 fixed = 128 total (was 106) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
2m 29s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 30s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
15s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 76m 35s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.TestHarFileSystem |
|   | hadoop.fs.TestFilterFileSystem |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.3 Server=19.03.3 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1591/2/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hadoop/pull/1591 |
| 

[GitHub] [hadoop] vivekratnavel commented on issue #1528: HDDS-2181. Ozone Manager should send correct ACL type in ACL requests…

2019-10-08 Thread GitBox
vivekratnavel commented on issue #1528: HDDS-2181. Ozone Manager should send 
correct ACL type in ACL requests…
URL: https://github.com/apache/hadoop/pull/1528#issuecomment-539769975
 
 
   /retest





[GitHub] [hadoop] rbalamohan commented on issue #1591: HADOOP-16629: support copyFile in s3afilesystem

2019-10-08 Thread GitBox
rbalamohan commented on issue #1591: HADOOP-16629: support copyFile in 
s3afilesystem
URL: https://github.com/apache/hadoop/pull/1591#issuecomment-539752856
 
 
   Failure was not related to this patch. Made minor edit to trigger another 
build.





[jira] [Commented] (HADOOP-16643) Update netty4 to the latest 4.1.42

2019-10-08 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16947228#comment-16947228
 ] 

Wei-Chiu Chuang commented on HADOOP-16643:
--

This patch passes Cloudera CDP L0 tests. I'll run more tests.

> Update netty4 to the latest 4.1.42
> --
>
> Key: HADOOP-16643
> URL: https://issues.apache.org/jira/browse/HADOOP-16643
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HADOOP-16643.001.patch
>
>
> The latest netty is out. Let's update it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[GitHub] [hadoop] hadoop-yetus commented on issue #1622: HDDS-1228. Chunk Scanner Checkpoints

2019-10-08 Thread GitBox
hadoop-yetus commented on issue #1622: HDDS-1228. Chunk Scanner Checkpoints
URL: https://github.com/apache/hadoop/pull/1622#issuecomment-539725653
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 728 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 24 | Maven dependency ordering for branch |
   | -1 | mvninstall | 31 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 36 | hadoop-ozone in trunk failed. |
   | -1 | compile | 21 | hadoop-hdds in trunk failed. |
   | -1 | compile | 16 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 61 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 857 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 22 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 20 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 958 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 34 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 20 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 18 | Maven dependency ordering for patch |
   | -1 | mvninstall | 34 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 37 | hadoop-ozone in the patch failed. |
   | -1 | compile | 24 | hadoop-hdds in the patch failed. |
   | -1 | compile | 19 | hadoop-ozone in the patch failed. |
   | -1 | javac | 24 | hadoop-hdds in the patch failed. |
   | -1 | javac | 19 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 57 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 717 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 23 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 20 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 32 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 20 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 28 | hadoop-hdds in the patch failed. |
   | -1 | unit | 27 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 33 | The patch does not generate ASF License warnings. |
   | | | 3093 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.3 Server=19.03.3 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1622/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1622 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux a3505a42bf7e 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 87d9f36 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1622/1/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1622/1/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1622/1/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1622/1/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1622/1/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1622/1/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1622/1/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1622/1/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1622/1/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1622/1/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1622/1/artifact/out/patch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1622/1/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1622/1/artifact/out/patch-compile-hadoop-hdds.txt
 |
   | javac | 

[jira] [Commented] (HADOOP-16644) Intermittent failure of ITestS3ATerasortOnS3A: timestamp differences

2019-10-08 Thread Siddharth Seth (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16947209#comment-16947209
 ] 

Siddharth Seth commented on HADOOP-16644:
-

Looks like a PUT request gives back the modification time, while a multipart 
upload does not. Given that a multipart upload is likely a long operation 
anyway, a HEAD request following a MultiPartComplete call likely doesn't add a 
large percentage to the operation time (only if S3Guard is enabled). For a 
direct PUT we have the data anyway. Will definitely make me happy to avoid 
writing to DDB during a getStatus operation.
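
The tradeoff above can be modeled with a toy sketch (illustrative only, not 
the S3A implementation; function and parameter names are made up): a direct 
PUT can reuse the modification time carried in the upload response, while a 
multipart upload needs one extra HEAD after the CompleteMultipartUpload call.

```python
# Toy model of the comment above -- not Hadoop/S3A code.
# A direct PUT response already carries the object's modification time;
# completing a multipart upload does not, so a HEAD lookup is required.
def mtime_after_upload(upload_kind, put_response_mtime, head_lookup_mtime):
    if upload_kind == "put":
        # Single PUT: the timestamp came back with the upload response.
        return put_response_mtime
    # Multipart: pay for one extra HEAD after CompleteMultipartUpload.
    return head_lookup_mtime

# Direct PUT avoids the extra round trip entirely.
assert mtime_after_upload("put", 1570531828143, 1570531828000) == 1570531828143
# Multipart falls back to the HEAD result.
assert mtime_after_upload("multipart", 1570531828143, 1570531828000) == 1570531828000
```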

Using S3 for resource localization - that's got at least one issue which I'm 
aware of. Need to test this, and then file a YARN jira. Essentially - I suspect 
the localizer does not use the JobClient config - so any credentials there will 
not be available to YARN for localization (e.g. client sets up access_key and 
secret_key in config).

> Intermittent failure of ITestS3ATerasortOnS3A: timestamp differences
> 
>
> Key: HADOOP-16644
> URL: https://issues.apache.org/jira/browse/HADOOP-16644
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
> Environment: -Dparallel-tests -DtestsThreadCount=8 
> -Dfailsafe.runOrder=balanced -Ds3guard -Ddynamo -Dscale
> h2. Hypothesis:
> the timestamp of the source file is being picked up from S3Guard, but when 
> the NM does a getFileStatus call, a HEAD check is made -and this (due to the 
> overloaded test system) is out of sync with the listing. S3Guard is updated, 
> the corrected date returned and the localisation fails.
>Reporter: Steve Loughran
>Priority: Major
>
> Terasort of directory committer failing in resource localisaton -the 
> partitions.lst file has a different TS from that expected
> Happens under loaded integration tests (threads = 8; not standalone); 
> non-auth s3guard
> {code}
> 2019-10-08 11:50:29,774 [IPC Server handler 4 on 55983] WARN  
> localizer.ResourceLocalizationService 
> (ResourceLocalizationService.java:processHeartbeat(1150)) - { 
> s3a://hwdev-steve-ireland-new/terasort-directory/sortout/_partition.lst, 
> 1570531828143, FILE, null } failed: Resource 
> s3a://hwdev-steve-ireland-new/terasort-directory/sortout/_partition.lst 
> changed on src filesystem (expected 1570531828143, was 1570531828000
> java.io.IOException: Resource 
> s3a://hwdev-steve-ireland-new/terasort-directory/sortout/_partition.lst 
> changed on src filesystem (expected 1570531828143, was 1570531828000
> {code}
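
The millisecond discrepancy in the log above (expected 1570531828143, was 
1570531828000) is consistent with the hypothesis: a minimal sketch, assuming 
S3 reports second-granularity timestamps while the listing kept milliseconds:

```python
def to_s3_granularity(millis):
    """Truncate a millisecond epoch timestamp to whole seconds,
    as a second-granularity HEAD response would report it."""
    return (millis // 1000) * 1000

expected = 1570531828143  # millisecond value from the listing
# Truncation reproduces the "was 1570531828000" value in the error.
assert to_s3_granularity(expected) == 1570531828000
```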






[GitHub] [hadoop] hadoop-yetus commented on issue #1610: HDDS-1868. Ozone pipelines should be marked as ready only after the leader election is complete.

2019-10-08 Thread GitBox
hadoop-yetus commented on issue #1610: HDDS-1868. Ozone pipelines should be 
marked as ready only after the leader election is complete.
URL: https://github.com/apache/hadoop/pull/1610#issuecomment-539701324
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 39 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 27 | Maven dependency ordering for branch |
   | -1 | mvninstall | 30 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 34 | hadoop-ozone in trunk failed. |
   | -1 | compile | 21 | hadoop-hdds in trunk failed. |
   | -1 | compile | 15 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 49 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 849 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 23 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 20 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 949 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 31 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 21 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 29 | Maven dependency ordering for patch |
   | -1 | mvninstall | 41 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 37 | hadoop-ozone in the patch failed. |
   | -1 | compile | 26 | hadoop-hdds in the patch failed. |
   | -1 | compile | 19 | hadoop-ozone in the patch failed. |
   | -1 | cc | 26 | hadoop-hdds in the patch failed. |
   | -1 | cc | 19 | hadoop-ozone in the patch failed. |
   | -1 | javac | 26 | hadoop-hdds in the patch failed. |
   | -1 | javac | 19 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 28 | hadoop-hdds: The patch generated 7 new + 0 
unchanged - 0 fixed = 7 total (was 0) |
   | -0 | checkstyle | 30 | hadoop-ozone: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 3 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 725 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 23 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 21 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 32 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 21 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 30 | hadoop-hdds in the patch failed. |
   | -1 | unit | 26 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 34 | The patch does not generate ASF License warnings. |
   | | | 2429 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.3 Server=19.03.3 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1610/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1610 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc xml |
   | uname | Linux d0e129b095e4 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 87d9f36 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1610/2/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1610/2/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1610/2/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1610/2/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1610/2/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1610/2/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1610/2/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1610/2/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1610/2/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1610/2/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | 

[GitHub] [hadoop] adoroszlai commented on issue #1622: HDDS-1228. Chunk Scanner Checkpoints

2019-10-08 Thread GitBox
adoroszlai commented on issue #1622: HDDS-1228. Chunk Scanner Checkpoints
URL: https://github.com/apache/hadoop/pull/1622#issuecomment-539699634
 
 
   /label ozone





[GitHub] [hadoop] adoroszlai opened a new pull request #1622: HDDS-1228. Chunk Scanner Checkpoints

2019-10-08 Thread GitBox
adoroszlai opened a new pull request #1622: HDDS-1228. Chunk Scanner Checkpoints
URL: https://github.com/apache/hadoop/pull/1622
 
 
   ## What changes were proposed in this pull request?
   
   Save timestamp of last successful data scan for each container (in the 
`.container` file).  After a datanode restart, resume data scanning with the 
container that was least recently scanned.
   
   Newly closed containers have no timestamp and are thus scanned first during 
the next iteration.  This will be changed in 
[HDDS-1369](https://issues.apache.org/jira/browse/HDDS-1369), which proposes to 
scan newly closed containers immediately.
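   
   The resume order described above can be sketched as a sort (an illustrative 
sketch in Python, not the actual `ContainerDataScanner` code): containers with 
no recorded scan timestamp come first, then least recently scanned.
   
   ```python
   # Illustrative sketch of the checkpoint-resume ordering -- not the
   # actual ContainerDataScanner code.  Each container is (id, last_scan),
   # where last_scan is an ISO-8601 string or None if never scanned.
   def scan_order(containers):
       """Never-scanned containers first, then least recently scanned.
       ISO-8601 strings compare chronologically, so no parsing is needed."""
       ordered = sorted(containers, key=lambda c: (c[1] is not None, c[1] or ""))
       return [cid for cid, _ in ordered]
   
   containers = [(1, "2019-10-08T19:37:07.570Z"),
                 (2, None),
                 (3, "2019-10-08T19:36:00.000Z")]
   # Container 2 (never scanned) first, then 3 and 1 by scan age.
   assert scan_order(containers) == [2, 3, 1]
   ```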
   
   https://issues.apache.org/jira/browse/HDDS-1228
   
   ## How was this patch tested?
   
   Created and closed containers.  Restarted datanode while scanning was in 
progress.  Verified that after the restart, scanner resumed from the container 
where it was interrupted.
   
   ```
   datanode_1  | STARTUP_MSG: Starting HddsDatanodeService
   datanode_1  | 2019-10-08 19:37:07 DEBUG ContainerDataScanner:148 - Scanning 
container 1, last scanned never
   datanode_1  | 2019-10-08 19:37:07 DEBUG ContainerDataScanner:155 - Completed 
scan of container 1 at 2019-10-08T19:37:07.570Z
   datanode_1  | 2019-10-08 19:37:07 INFO  ContainerDataScanner:122 - Completed 
an iteration of container data scrubber in 0 minutes. Number of iterations 
(since the data-node restart) : 1, Number of containers scanned in this 
iteration : 1, Number of unhealthy containers found in this iteration : 0
   datanode_1  | 2019-10-08 19:37:17 DEBUG ContainerDataScanner:148 - Scanning 
container 2, last scanned never
   datanode_1  | 2019-10-08 19:38:57 DEBUG ContainerDataScanner:155 - Completed 
scan of container 2 at 2019-10-08T19:38:57.402Z
   datanode_1  | 2019-10-08 19:38:57 DEBUG ContainerDataScanner:148 - Scanning 
container 1, last scanned at 2019-10-08T19:37:07.570Z
   datanode_1  | 2019-10-08 19:38:57 DEBUG ContainerDataScanner:155 - Completed 
scan of container 1 at 2019-10-08T19:38:57.443Z
   datanode_1  | 2019-10-08 19:38:57 INFO  ContainerDataScanner:122 - Completed 
an iteration of container data scrubber in 1 minutes. Number of iterations 
(since the data-node restart) : 2, Number of containers scanned in this 
iteration : 2, Number of unhealthy containers found in this iteration : 0
   datanode_1  | 2019-10-08 19:38:57 DEBUG ContainerDataScanner:148 - Scanning 
container 3, last scanned never
   datanode_1  | 2019-10-08 19:39:02 DEBUG ContainerDataScanner:155 - Completed 
scan of container 3 at 2019-10-08T19:39:02.402Z
   datanode_1  | 2019-10-08 19:39:02 DEBUG ContainerDataScanner:148 - Scanning 
container 4, last scanned never
   datanode_1  | 2019-10-08 19:39:02 DEBUG ContainerDataScanner:155 - Completed 
scan of container 4 at 2019-10-08T19:39:02.430Z
   datanode_1  | 2019-10-08 19:39:02 DEBUG ContainerDataScanner:148 - Scanning 
container 5, last scanned never
   datanode_1  | 2019-10-08 19:39:11 ERROR HddsDatanodeService:75 - RECEIVED 
SIGNAL 15: SIGTERM
   datanode_1  | STARTUP_MSG: Starting HddsDatanodeService
   datanode_1  | 2019-10-08 19:39:22 DEBUG ContainerDataScanner:148 - Scanning 
container 5, last scanned never
   datanode_1  | 2019-10-08 19:40:18 DEBUG ContainerDataScanner:155 - Completed 
scan of container 5 at 2019-10-08T19:40:18.268Z
   datanode_1  | 2019-10-08 19:40:18 DEBUG ContainerDataScanner:148 - Scanning 
container 6, last scanned never
   datanode_1  | 2019-10-08 19:40:31 DEBUG ContainerDataScanner:155 - Completed 
scan of container 6 at 2019-10-08T19:40:31.735Z
   datanode_1  | 2019-10-08 19:40:31 DEBUG ContainerDataScanner:148 - Scanning 
container 2, last scanned at 2019-10-08T19:38:57.402Z
   datanode_1  | 2019-10-08 19:42:12 DEBUG ContainerDataScanner:155 - Completed 
scan of container 2 at 2019-10-08T19:42:12.128Z
   datanode_1  | 2019-10-08 19:42:12 DEBUG ContainerDataScanner:148 - Scanning 
container 1, last scanned at 2019-10-08T19:38:57.443Z
   datanode_1  | 2019-10-08 19:42:12 DEBUG ContainerDataScanner:155 - Completed 
scan of container 1 at 2019-10-08T19:42:12.140Z
   datanode_1  | 2019-10-08 19:42:12 DEBUG ContainerDataScanner:148 - Scanning 
container 3, last scanned at 2019-10-08T19:39:02.402Z
   datanode_1  | 2019-10-08 19:42:16 DEBUG ContainerDataScanner:155 - Completed 
scan of container 3 at 2019-10-08T19:42:16.629Z
   datanode_1  | 2019-10-08 19:42:16 DEBUG ContainerDataScanner:148 - Scanning 
container 4, last scanned at 2019-10-08T19:39:02.430Z
   datanode_1  | 2019-10-08 19:42:16 DEBUG ContainerDataScanner:155 - Completed 
scan of container 4 at 2019-10-08T19:42:16.669Z
   datanode_1  | 2019-10-08 19:42:16 INFO  ContainerDataScanner:122 - Completed 
an iteration of container data scrubber in 2 minutes. Number of iterations 
(since the data-node restart) : 1, Number of containers scanned in this 
iteration : 6, Number of unhealthy containers found in this iteration : 0
   ```
   
   Also tested upgrade from Ozone 0.4.0.  (Downgrade 

[GitHub] [hadoop] hadoop-yetus commented on issue #1621: HADOOP-16640. WASB: Override getCanonicalServiceName() to return URI

2019-10-08 Thread GitBox
hadoop-yetus commented on issue #1621: HADOOP-16640. WASB: Override 
getCanonicalServiceName() to return URI
URL: https://github.com/apache/hadoop/pull/1621#issuecomment-539694688
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 40 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1066 | trunk passed |
   | +1 | compile | 32 | trunk passed |
   | +1 | checkstyle | 26 | trunk passed |
   | +1 | mvnsite | 36 | trunk passed |
   | +1 | shadedclient | 789 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 28 | trunk passed |
   | 0 | spotbugs | 52 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 49 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 29 | the patch passed |
   | +1 | compile | 24 | the patch passed |
   | +1 | javac | 24 | the patch passed |
   | +1 | checkstyle | 18 | hadoop-tools/hadoop-azure: The patch generated 0 
new + 27 unchanged - 1 fixed = 27 total (was 28) |
   | +1 | mvnsite | 27 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 781 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 24 | the patch passed |
   | +1 | findbugs | 54 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 82 | hadoop-azure in the patch passed. |
   | +1 | asflicense | 33 | The patch does not generate ASF License warnings. |
   | | | 3230 | |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.3 Server=19.03.3 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1621/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1621 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 27a988754752 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 72ae371 |
   | Default Java | 1.8.0_222 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1621/1/testReport/ |
   | Max. process+thread count | 440 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1621/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] christeoh edited a comment on issue #1582: HDDS-2217. Removed redundant LOG4J lines from docker configurations

2019-10-08 Thread GitBox
christeoh edited a comment on issue #1582: HDDS-2217. Removed redundant LOG4J 
lines from docker configurations
URL: https://github.com/apache/hadoop/pull/1582#issuecomment-539503434
 
 
   It looks like the yarn resource manager was killed:
   
   
https://github.com/elek/ozone-ci-q4/blob/9cd3522fa8b44bf9e20fe4d22106521768b4c7a0/pr/pr-hdds-2217-cfvmz/acceptance/docker-hadoop27-hadoop27-mapreduce-rm.log#L1984
   
   rm_1| /opt/launcher/plugins/800_launch/launch.sh: line 5:70 
Killed  yarn resourcemanager
   
   





[GitHub] [hadoop] christeoh commented on issue #1582: HDDS-2217. Removed redundant LOG4J lines from docker configurations

2019-10-08 Thread GitBox
christeoh commented on issue #1582: HDDS-2217. Removed redundant LOG4J lines 
from docker configurations
URL: https://github.com/apache/hadoop/pull/1582#issuecomment-539689986
 
 
   /retest
   





[GitHub] [hadoop] gkanade commented on issue #1621: HADOOP-16640. WASB: Override getCanonicalServiceName() to return URI

2019-10-08 Thread GitBox
gkanade commented on issue #1621: HADOOP-16640. WASB: Override 
getCanonicalServiceName() to return URI
URL: https://github.com/apache/hadoop/pull/1621#issuecomment-539683810
 
 
   +1





[GitHub] [hadoop] sjrand commented on a change in pull request #416: YARN-8470. Fix a NPE in identifyContainersToPreemptOnNode()

2019-10-08 Thread GitBox
sjrand commented on a change in pull request #416: YARN-8470. Fix a NPE in 
identifyContainersToPreemptOnNode()
URL: https://github.com/apache/hadoop/pull/416#discussion_r332709330
 
 

 ##
 File path: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSPreemptionThread.java
 ##
 @@ -204,6 +204,12 @@ private PreemptableContainers 
identifyContainersToPreemptOnNode(
 for (RMContainer container : containersToCheck) {
   FSAppAttempt app =
   scheduler.getSchedulerApp(container.getApplicationAttemptId());
+  if (app == null) {
+// e.g. "INFO 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.AbstractYarnScheduler: 
Container container_1536156801471_0071_01_96 completed with event FINISHED, 
but corresponding RMContainer doesn't exist."
+LOG.warn("app == null, giving up in 
identifyContainersToPreemptOnNode()");
+return null;
 
 Review comment:
   Should we just `continue` instead of returning `null` since we might still 
be able to find preemptable containers on this node?
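   A toy version of that loop makes the difference concrete. Everything here is an illustrative stand-in, not the actual FSPreemptionThread code — `containersWithApp` and the parallel lists are invented for the sketch: with `continue`, one missing app skips only that container instead of abandoning the whole node.

   ```java
   import java.util.ArrayList;
   import java.util.List;

   public class PreemptionScanSketch {

       // Walk the containers; a null app (the RMContainer completed
       // concurrently) is skipped with `continue`, so the remaining
       // containers on the node are still checked for preemptability.
       public static List<String> containersWithApp(List<String> containers,
                                                    List<String> apps) {
           List<String> preemptable = new ArrayList<>();
           for (int i = 0; i < containers.size(); i++) {
               if (apps.get(i) == null) {
                   continue;  // returning null here would discard the rest
               }
               preemptable.add(containers.get(i));
           }
           return preemptable;
       }
   }
   ```

   With containers `[c1, c2, c3]` and apps `[a1, null, a3]`, `continue` still yields `[c1, c3]`, whereas bailing out with `null` on the missing app would report nothing for the node.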





[GitHub] [hadoop] bharatviswa504 commented on issue #1589: HDDS-2244. Use new ReadWrite lock in OzoneManager.

2019-10-08 Thread GitBox
bharatviswa504 commented on issue #1589: HDDS-2244. Use new ReadWrite lock in 
OzoneManager.
URL: https://github.com/apache/hadoop/pull/1589#issuecomment-539679361
 
 
   Thank You @anuengineer and @arp7 for the review.
   I have committed this to the trunk.
   
   For fair/non-fair I will make it configurable. I will open a Jira for this.
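   For context, fairness is just a constructor flag on `java.util.concurrent.locks.ReentrantReadWriteLock`, so wiring it to a configuration boolean is straightforward. This is a minimal illustrative sketch, not the actual OzoneManager code:

   ```java
   import java.util.concurrent.locks.ReentrantReadWriteLock;

   public class LockFairnessSketch {

       // fair=true queues waiting threads roughly FIFO (no barging, more
       // predictable latency); fair=false, the JDK default, permits barging
       // and usually gives higher throughput.
       public static ReentrantReadWriteLock newLock(boolean fairFromConfig) {
           return new ReentrantReadWriteLock(fairFromConfig);
       }
   }
   ```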





[GitHub] [hadoop] bharatviswa504 merged pull request #1589: HDDS-2244. Use new ReadWrite lock in OzoneManager.

2019-10-08 Thread GitBox
bharatviswa504 merged pull request #1589: HDDS-2244. Use new ReadWrite lock in 
OzoneManager.
URL: https://github.com/apache/hadoop/pull/1589
 
 
   





[GitHub] [hadoop] DadanielZ opened a new pull request #1621: HADOOP-16640. WASB: Override getCanonicalServiceName() to return URI

2019-10-08 Thread GitBox
DadanielZ opened a new pull request #1621: HADOOP-16640. WASB: Override 
getCanonicalServiceName() to return URI
URL: https://github.com/apache/hadoop/pull/1621
 
 
   Add a configuration to override getCanonicalServiceName() to return the URI 
of the WASB filesystem.
   





[GitHub] [hadoop] anuengineer commented on issue #1589: HDDS-2244. Use new ReadWrite lock in OzoneManager.

2019-10-08 Thread GitBox
anuengineer commented on issue #1589: HDDS-2244. Use new ReadWrite lock in 
OzoneManager.
URL: https://github.com/apache/hadoop/pull/1589#issuecomment-539665621
 
 
   +1, I am fine with this getting committed. Thanks for taking care of this 
issue.





[jira] [Updated] (HADOOP-16491) Upgrade jetty version to 9.3.27

2019-10-08 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-16491:
-
Fix Version/s: 3.2.2
   3.1.4

> Upgrade jetty version to 9.3.27
> ---
>
> Key: HADOOP-16491
> URL: https://issues.apache.org/jira/browse/HADOOP-16491
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.2.0
>Reporter: Hrishikesh Gadre
>Assignee: Hrishikesh Gadre
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HADOOP-16491-001.patch
>
>
> The current jetty version (9.3.24) has few CVEs (Ref: 
> [https://www.cvedetails.com/version/272598/Eclipse-Jetty-9.3.24.html]). It 
> would be a good idea to upgrade jetty to 9.3.27 (which is the latest version 
> as of today 08/05/2019).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[GitHub] [hadoop] swagle commented on issue #1612: HDDS-2260. Avoid evaluation of LOG.trace and LOG.debug statement in the read/write path (HDDS).

2019-10-08 Thread GitBox
swagle commented on issue #1612: HDDS-2260. Avoid evaluation of LOG.trace and 
LOG.debug statement in the read/write path (HDDS).
URL: https://github.com/apache/hadoop/pull/1612#issuecomment-539637169
 
 
   Thanks @bharatviswa504 for quick turnaround without need for rebase :-)





[GitHub] [hadoop] bharatviswa504 merged pull request #1612: HDDS-2260. Avoid evaluation of LOG.trace and LOG.debug statement in the read/write path (HDDS).

2019-10-08 Thread GitBox
bharatviswa504 merged pull request #1612: HDDS-2260. Avoid evaluation of 
LOG.trace and LOG.debug statement in the read/write path (HDDS).
URL: https://github.com/apache/hadoop/pull/1612
 
 
   





[GitHub] [hadoop] bharatviswa504 commented on issue #1612: HDDS-2260. Avoid evaluation of LOG.trace and LOG.debug statement in the read/write path (HDDS).

2019-10-08 Thread GitBox
bharatviswa504 commented on issue #1612: HDDS-2260. Avoid evaluation of 
LOG.trace and LOG.debug statement in the read/write path (HDDS).
URL: https://github.com/apache/hadoop/pull/1612#issuecomment-539636979
 
 
   I have committed this to the trunk.
   Thank You @swagle for the contribution.





[GitHub] [hadoop] bharatviswa504 commented on issue #1612: HDDS-2260. Avoid evaluation of LOG.trace and LOG.debug statement in the read/write path (HDDS).

2019-10-08 Thread GitBox
bharatviswa504 commented on issue #1612: HDDS-2260. Avoid evaluation of 
LOG.trace and LOG.debug statement in the read/write path (HDDS).
URL: https://github.com/apache/hadoop/pull/1612#issuecomment-539636310
 
 
   Thank You @swagle for the reply.
   As the existing logs already use this pattern, can we open a new Jira and 
fix this?
   





[GitHub] [hadoop] swagle edited a comment on issue #1612: HDDS-2260. Avoid evaluation of LOG.trace and LOG.debug statement in the read/write path (HDDS).

2019-10-08 Thread GitBox
swagle edited a comment on issue #1612: HDDS-2260. Avoid evaluation of 
LOG.trace and LOG.debug statement in the read/write path (HDDS).
URL: https://github.com/apache/hadoop/pull/1612#issuecomment-539631944
 
 
   @bharatviswa504 The problem is that the parameters are evaluated **fully** 
as function arguments before they are sent to slf4j; only the toString() call 
happens late. This results in an anti-pattern where a developer thinks a trace 
call with smth like LOG.trace(ExceptionUtils.stackTrace(new IOException())) 
[cooked up call] can be placed in a busy part of the codebase without an if 
statement and without any side-effects, and it ends up being a perf 
bottleneck. We already have such examples in Ozone code. Hence, decided to do a 
blanket change first and then write something to do a checkstyle verification 
later.
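   The eager-evaluation point can be shown without slf4j at all. In this toy sketch (all names are illustrative, not real logger APIs), the `trace` method receives its argument already evaluated — exactly like `Logger.trace(String, Object)` — so only a caller-side guard avoids the cost:

   ```java
   public class LogArgEvalSketch {
       static final boolean TRACE_ENABLED = false;
       static int expensiveCalls = 0;

       // Stands in for an expensive argument such as rendering a stack trace.
       static String expensive() {
           expensiveCalls++;
           return "details";
       }

       // Mimics LOG.trace(msg, arg): the level check happens inside the
       // callee, after the caller has already evaluated the argument.
       static void trace(String msg, Object arg) {
           if (TRACE_ENABLED) {
               System.out.println(msg + arg);
           }
       }

       public static void unguarded() {
           trace("state: ", expensive());   // expensive() runs, TRACE off
       }

       public static void guarded() {
           if (TRACE_ENABLED) {             // guard skips argument evaluation
               trace("state: ", expensive());
           }
       }
   }
   ```

   Calling `unguarded()` bumps `expensiveCalls` even though nothing is logged; `guarded()` leaves it untouched.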





[GitHub] [hadoop] swagle edited a comment on issue #1612: HDDS-2260. Avoid evaluation of LOG.trace and LOG.debug statement in the read/write path (HDDS).

2019-10-08 Thread GitBox
swagle edited a comment on issue #1612: HDDS-2260. Avoid evaluation of 
LOG.trace and LOG.debug statement in the read/write path (HDDS).
URL: https://github.com/apache/hadoop/pull/1612#issuecomment-539631944
 
 
   @bharatviswa504 The problem is the parameters are evaluated **fully** as 
function arguments before they are sent to slf4j. The toString() call is what 
happens late. This results in an anti-pattern where a developer thinks writing 
a trace call with smth like LOG.trace(ExceptionUtils.stackTrace(new 
IOException())) [cooked up call], let's say, can be done without an if 
statement in a busy part of the codebase without any side-effects, and ends up 
being a perf bottleneck. We already have such examples in Ozone code. Hence, 
decided to do a blanket change first and then write something to do a 
checkstyle verification later.





[GitHub] [hadoop] swagle commented on issue #1612: HDDS-2260. Avoid evaluation of LOG.trace and LOG.debug statement in the read/write path (HDDS).

2019-10-08 Thread GitBox
swagle commented on issue #1612: HDDS-2260. Avoid evaluation of LOG.trace and 
LOG.debug statement in the read/write path (HDDS).
URL: https://github.com/apache/hadoop/pull/1612#issuecomment-539633630
 
 
   From Java spec:
   
   Example 15.12.4.1-2. Evaluation Order During Method Invocation
   
   As part of an instance method invocation (§15.12), there is an expression 
that denotes the object to be invoked. This expression appears to be fully 
evaluated before any part of any argument expression to the method invocation 
is evaluated.





[GitHub] [hadoop] hadoop-yetus commented on issue #1620: HADOOP-16642. ITestDynamoDBMetadataStoreScale fails when throttled.

2019-10-08 Thread GitBox
hadoop-yetus commented on issue #1620: HADOOP-16642. 
ITestDynamoDBMetadataStoreScale fails when throttled.
URL: https://github.com/apache/hadoop/pull/1620#issuecomment-539632520
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 78 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1217 | trunk passed |
   | +1 | compile | 33 | trunk passed |
   | +1 | checkstyle | 24 | trunk passed |
   | +1 | mvnsite | 37 | trunk passed |
   | +1 | shadedclient | 857 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 25 | trunk passed |
   | 0 | spotbugs | 61 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 59 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 32 | the patch passed |
   | +1 | compile | 27 | the patch passed |
   | +1 | javac | 27 | the patch passed |
   | -0 | checkstyle | 18 | hadoop-tools/hadoop-aws: The patch generated 1 new 
+ 12 unchanged - 0 fixed = 13 total (was 12) |
   | +1 | mvnsite | 32 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 883 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 23 | the patch passed |
   | +1 | findbugs | 62 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 75 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 3607 | |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1620/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1620 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux b11807aba0de 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 91320b4 |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1620/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1620/1/testReport/ |
   | Max. process+thread count | 427 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1620/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] swagle edited a comment on issue #1612: HDDS-2260. Avoid evaluation of LOG.trace and LOG.debug statement in the read/write path (HDDS).

2019-10-08 Thread GitBox
swagle edited a comment on issue #1612: HDDS-2260. Avoid evaluation of 
LOG.trace and LOG.debug statement in the read/write path (HDDS).
URL: https://github.com/apache/hadoop/pull/1612#issuecomment-539631944
 
 
   @bharatviswa504 The problem is the parameters are evaluated **fully** as 
function arguments before they are sent to slf4j. The toString() call is what 
happens late. This results in an anti-pattern where a developer thinks writing 
a trace call with smth like LOG.trace(ExceptionUtils.stackTrace(new 
IOException())) [cooked up call], let's say, can be done without an if 
statement in a busy part of the codebase without any side-effects, and ends up 
being a perf bottleneck. We already have such examples in Ozone code. Hence, 
decided to do a blanket change first and then write something to do a 
checkstyle verification later.





[GitHub] [hadoop] swagle commented on issue #1612: HDDS-2260. Avoid evaluation of LOG.trace and LOG.debug statement in the read/write path (HDDS).

2019-10-08 Thread GitBox
swagle commented on issue #1612: HDDS-2260. Avoid evaluation of LOG.trace and 
LOG.debug statement in the read/write path (HDDS).
URL: https://github.com/apache/hadoop/pull/1612#issuecomment-539631944
 
 
   @bharatviswa504 The problem is the parameters are evaluated **fully** as 
function arguments before they are sent to slf4j. The toString() call is what 
happens late. This results in an anti-pattern where a developer thinks writing 
a trace call with smth like ExceptionUtils.stackTrace(new IOException()) 
[cooked up call], let's say, can be done without an if statement in a busy part 
of the codebase and ends up being a perf bottleneck. We already have such 
examples in Ozone code. Hence, decided to do a blanket change first and then 
write something to do a checkstyle verification later.





[GitHub] [hadoop] hadoop-yetus commented on issue #1555: HDDS-1984. Fix listBucket API.

2019-10-08 Thread GitBox
hadoop-yetus commented on issue #1555: HDDS-1984. Fix listBucket API.
URL: https://github.com/apache/hadoop/pull/1555#issuecomment-539630568
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 40 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 67 | Maven dependency ordering for branch |
   | -1 | mvninstall | 38 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 39 | hadoop-ozone in trunk failed. |
   | -1 | compile | 21 | hadoop-hdds in trunk failed. |
   | -1 | compile | 15 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 61 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 857 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 23 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 20 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 959 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 34 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 21 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 30 | Maven dependency ordering for patch |
   | -1 | mvninstall | 35 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 38 | hadoop-ozone in the patch failed. |
   | -1 | compile | 26 | hadoop-hdds in the patch failed. |
   | -1 | compile | 20 | hadoop-ozone in the patch failed. |
   | -1 | javac | 26 | hadoop-hdds in the patch failed. |
   | -1 | javac | 20 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 57 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 714 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 22 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 20 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 32 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 21 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 28 | hadoop-hdds in the patch failed. |
   | -1 | unit | 26 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 34 | The patch does not generate ASF License warnings. |
   | | | 2468 | |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1555/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1555 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux b7315d686214 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 91320b4 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1555/6/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1555/6/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1555/6/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1555/6/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1555/6/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1555/6/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1555/6/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1555/6/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1555/6/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1555/6/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1555/6/artifact/out/patch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1555/6/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1555/6/artifact/out/patch-compile-hadoop-hdds.txt
 |
   | javac | 

[GitHub] [hadoop] avijayanhwx commented on a change in pull request #1610: HDDS-1868. Ozone pipelines should be marked as ready only after the leader election is complete.

2019-10-08 Thread GitBox
avijayanhwx commented on a change in pull request #1610: HDDS-1868. Ozone 
pipelines should be marked as ready only after the leader election is complete.
URL: https://github.com/apache/hadoop/pull/1610#discussion_r332649863
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelineReportHandler.java
 ##
 @@ -102,12 +106,24 @@ private void processPipelineReport(PipelineReport 
report, DatanodeDetails dn)
   return;
 }
 
+if (report.hasLeaderID()) {
+  Map<UUID, String> ids =
+  reportedLeadersForPipeline.computeIfAbsent(pipelineID,
+  k -> new HashMap<>());
+  ids.put(dn.getUuid(), report.getLeaderID());
+}
+
 if (pipeline.getPipelineState() == Pipeline.PipelineState.ALLOCATED) {
 
 Review comment:
   This will not cover OPEN pipelines where there is a leader election. We need 
to update SCM's internal pipeline's leaderId to the new leader for those leader 
elections as well. 
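The agreement check under discussion can be sketched apart from the SCM classes. In this hedged illustration, `LeaderAgreement`, `recordReport`, and `isReady` are hypothetical simplifications of what `PipelineReportHandler` tracks, not the real API: a pipeline counts as ready only once every member datanode has reported the same leader.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.UUID;

// Hypothetical sketch of the leader-agreement check discussed above: track
// the leader ID each datanode reports per pipeline, and consider the pipeline
// ready only when all members have reported and they agree on one leader.
public class LeaderAgreement {
    private final Map<String, Map<UUID, String>> reportedLeaders = new HashMap<>();

    public void recordReport(String pipelineId, UUID datanode, String leaderId) {
        reportedLeaders
            .computeIfAbsent(pipelineId, k -> new HashMap<>())
            .put(datanode, leaderId);
    }

    public boolean isReady(String pipelineId, int memberCount) {
        Map<UUID, String> ids = reportedLeaders.get(pipelineId);
        return ids != null
            && ids.size() == memberCount                 // every member reported
            && new HashSet<>(ids.values()).size() == 1;  // and they agree
    }
}
```

A leader change shows up as a second distinct value among the reported IDs, which is exactly the OPEN-pipeline case the comment asks to cover.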


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] avijayanhwx commented on a change in pull request #1610: HDDS-1868. Ozone pipelines should be marked as ready only after the leader election is complete.

2019-10-08 Thread GitBox
avijayanhwx commented on a change in pull request #1610: HDDS-1868. Ozone 
pipelines should be marked as ready only after the leader election is complete.
URL: https://github.com/apache/hadoop/pull/1610#discussion_r332650299
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestSCMPipelineManager.java
 ##
 @@ -314,4 +318,71 @@ public void testActivateDeactivatePipeline() throws 
IOException {
 
 pipelineManager.close();
   }
+
+  @Test
+  public void testPipelineOpenOnlyWhenLeaderReported() throws Exception {
+EventQueue eventQueue = new EventQueue();
+SCMPipelineManager pipelineManager =
+new SCMPipelineManager(conf, nodeManager, eventQueue, null);
+PipelineProvider mockRatisProvider =
+new MockRatisPipelineProvider(nodeManager,
+pipelineManager.getStateManager(), conf);
+pipelineManager.setPipelineProvider(HddsProtos.ReplicationType.RATIS,
+mockRatisProvider);
+Pipeline pipeline = pipelineManager
+.createPipeline(HddsProtos.ReplicationType.RATIS,
+HddsProtos.ReplicationFactor.THREE);
+// close manager
+pipelineManager.close();
+// new pipeline manager loads the pipelines from the db in ALLOCATED state
+pipelineManager =
+new SCMPipelineManager(conf, nodeManager, eventQueue, null);
+mockRatisProvider =
+new MockRatisPipelineProvider(nodeManager,
+pipelineManager.getStateManager(), conf);
+pipelineManager.setPipelineProvider(HddsProtos.ReplicationType.RATIS,
+mockRatisProvider);
+Assert.assertEquals(Pipeline.PipelineState.ALLOCATED,
+pipelineManager.getPipeline(pipeline.getId()).getPipelineState());
+
+SCMSafeModeManager scmSafeModeManager =
+new SCMSafeModeManager(new OzoneConfiguration(),
+new ArrayList<>(), pipelineManager, eventQueue);
+PipelineReportHandler pipelineReportHandler =
+new PipelineReportHandler(scmSafeModeManager, pipelineManager, conf);
+
+// Report pipelines with leaders
+List<DatanodeDetails> nodes = pipeline.getNodes();
+Assert.assertEquals(3, nodes.size());
+// Send leader for only first 2 dns
+nodes.subList(0, 2).forEach(dn ->
+sendPipelineReport(dn, pipeline, pipelineReportHandler, true));
+sendPipelineReport(nodes.get(2), pipeline, pipelineReportHandler, false);
+
+Assert.assertEquals(Pipeline.PipelineState.ALLOCATED,
+pipelineManager.getPipeline(pipeline.getId()).getPipelineState());
+
 
 Review comment:
   Maybe we can add a unit test case where there is a leader change in an open 
pipeline as well. 





[GitHub] [hadoop] swagle commented on a change in pull request #1612: HDDS-2260. Avoid evaluation of LOG.trace and LOG.debug statement in the read/write path (HDDS).

2019-10-08 Thread GitBox
swagle commented on a change in pull request #1612: HDDS-2260. Avoid evaluation 
of LOG.trace and LOG.debug statement in the read/write path (HDDS).
URL: https://github.com/apache/hadoop/pull/1612#discussion_r332648694
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/utils/HddsVersionInfo.java
 ##
 @@ -50,7 +50,9 @@ public static void main(String[] args) {
 "Compiled with protoc " + HDDS_VERSION_INFO.getProtocVersion());
 System.out.println(
 "From source with checksum " + HDDS_VERSION_INFO.getSrcChecksum());
-LOG.debug("This command was run using " +
-ClassUtil.findContainingJar(HddsVersionInfo.class));
+if (LOG.isDebugEnabled()) {
+  LOG.debug("This command was run using " +
 
 Review comment:
   Hi @bharatviswa504, because of the **if** condition check, the expression 
evaluation is guarded, so eager vs. lazy evaluation does not matter here; hence 
I did not make these changes.
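The guard being discussed can be shown in isolation. The sketch below uses a toy logger (not SLF4J, whose parameterized `LOG.debug("... {}", arg)` form achieves the same effect without an explicit guard) to make the eager-vs-guarded evaluation difference observable:

```java
// Stand-in logger demonstrating the point under discussion: with an explicit
// isDebugEnabled()-style guard, the (possibly expensive) message construction
// is skipped entirely when debug logging is off. This is a toy, not SLF4J.
public class GuardedLogging {
    public static final boolean DEBUG_ENABLED = false; // debug logging off
    public static int expensiveCalls = 0;

    // Simulates an expensive toString()/string concatenation argument.
    public static String expensive() {
        expensiveCalls++;
        return "details";
    }

    public static void debug(String msg) {
        if (DEBUG_ENABLED) {
            System.out.println(msg);
        }
    }

    public static void unguarded() {
        // Argument is evaluated eagerly, even though debug is disabled.
        debug("state: " + expensive());
    }

    public static void guarded() {
        // The guard skips argument construction when debug is disabled.
        if (DEBUG_ENABLED) {
            debug("state: " + expensive());
        }
    }
}
```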





[jira] [Commented] (HADOOP-16641) RPC: Heavy contention on Configuration.getClassByNameOrNull

2019-10-08 Thread Gopal Vijayaraghavan (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16947076#comment-16947076
 ] 

Gopal Vijayaraghavan commented on HADOOP-16641:
---


No, I'm referring to RPC.RpcKind.RPC_PROTOCOL_BUFFER, which isn't deprecated 
AFAIK.

https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java#L1193

{code}
RpcStatusProto status = header.getStatus();
if (status == RpcStatusProto.SUCCESS) {
  Writable value = packet.newInstance(valueClass, conf);
{code}



> RPC: Heavy contention on Configuration.getClassByNameOrNull 
> 
>
> Key: HADOOP-16641
> URL: https://issues.apache.org/jira/browse/HADOOP-16641
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Reporter: Gopal Vijayaraghavan
>Priority: Major
>  Labels: performance
> Attachments: config-get-class-by-name.png, llap-rpc-locks.svg
>
>
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java#L2589
> {code}
> map = Collections.synchronizedMap(
>   new WeakHashMap<String, WeakReference<Class<?>>>());
> {code}
> This synchronizes all lookups across the same class-loader across all threads 
> & yields rpc threads.
>  !config-get-class-by-name.png! 
> When reading from HDFS with good locality, this fills up the contended lock 
> profile with almost no other contributors to the locking - see  
> [^llap-rpc-locks.svg] 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[GitHub] [hadoop] steveloughran commented on issue #1576: HADOOP-16520 dynamodb ms version race refactor.

2019-10-08 Thread GitBox
steveloughran commented on issue #1576: HADOOP-16520 dynamodb ms version race 
refactor. 
URL: https://github.com/apache/hadoop/pull/1576#issuecomment-539616795
 
 
   One change for this: can we stop telling the user all is good at INFO level? 
It is the expected state.
   
   I saw this on a bucket-info command, where it is the first message everyone 
sees. Assuming this is constant for all other commands, it's a distraction
   
   ```
   bin/hadoop s3guard bucket-info -guarded s3a://landsat-pds/
   2019-10-08 18:14:02,439 [main] INFO  
s3guard.DynamoDBMetadataStoreTableHandler 
(DynamoDBMetadataStoreTableHandler.java:verifyVersionCompatibility(423)) - 
Table s3guard-us-west-2 contains correct version marker TAG and ITEM.
   Filesystem s3a://landsat-pds
   Location: us-west-2
   Filesystem s3a://landsat-pds is using S3Guard with store 
DynamoDBMetadataStore{region=us-west-2, tableName=s3guard-us-west-2, 
tableArn=arn:aws:dynamodb:us-west-2:980678866538:table/s3guard-us-west-2}
   ```
   






[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1576: HADOOP-16520 dynamodb ms version race refactor.

2019-10-08 Thread GitBox
hadoop-yetus removed a comment on issue #1576: HADOOP-16520 dynamodb ms version 
race refactor. 
URL: https://github.com/apache/hadoop/pull/1576#issuecomment-537571813
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 35 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1177 | trunk passed |
   | +1 | compile | 31 | trunk passed |
   | +1 | checkstyle | 22 | trunk passed |
   | +1 | mvnsite | 36 | trunk passed |
   | +1 | shadedclient | 842 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 25 | trunk passed |
   | 0 | spotbugs | 58 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 56 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 36 | the patch passed |
   | +1 | compile | 30 | the patch passed |
   | +1 | javac | 30 | the patch passed |
   | -0 | checkstyle | 20 | hadoop-tools/hadoop-aws: The patch generated 12 new 
+ 34 unchanged - 0 fixed = 46 total (was 34) |
   | +1 | mvnsite | 32 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 903 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 22 | the patch passed |
   | -1 | findbugs | 64 | hadoop-tools/hadoop-aws generated 1 new + 0 unchanged 
- 0 fixed = 1 total (was 0) |
   ||| _ Other Tests _ |
   | +1 | unit | 81 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 3541 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-tools/hadoop-aws |
   |  |  Boxing/unboxing to parse a primitive 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStoreTableHandler.getVersionMarkerFromTags(Table,
 AmazonDynamoDB)  At 
DynamoDBMetadataStoreTableHandler.java:org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStoreTableHandler.getVersionMarkerFromTags(Table,
 AmazonDynamoDB)  At DynamoDBMetadataStoreTableHandler.java:[line 256] |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1576/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1576 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 96c688cdb422 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 61a8436 |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1576/3/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1576/3/artifact/out/new-findbugs-hadoop-tools_hadoop-aws.html
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1576/3/testReport/ |
   | Max. process+thread count | 355 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1576/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus commented on issue #1619: HADOOP-16478. S3Guard bucket-info fails if the caller lacks s3:GetBucketLocation

2019-10-08 Thread GitBox
hadoop-yetus commented on issue #1619: HADOOP-16478. S3Guard bucket-info fails 
if the caller lacks s3:GetBucketLocation
URL: https://github.com/apache/hadoop/pull/1619#issuecomment-539612732
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 74 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1071 | trunk passed |
   | +1 | compile | 36 | trunk passed |
   | +1 | checkstyle | 28 | trunk passed |
   | +1 | mvnsite | 40 | trunk passed |
   | +1 | shadedclient | 788 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 30 | trunk passed |
   | 0 | spotbugs | 59 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 57 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 34 | the patch passed |
   | +1 | compile | 28 | the patch passed |
   | +1 | javac | 28 | the patch passed |
   | +1 | checkstyle | 20 | the patch passed |
   | +1 | mvnsite | 32 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 780 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 27 | hadoop-tools_hadoop-aws generated 1 new + 5 unchanged 
- 0 fixed = 6 total (was 5) |
   | +1 | findbugs | 61 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 87 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 33 | The patch does not generate ASF License warnings. |
   | | | 3330 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1619/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1619 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux d042b3ad22c1 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 91320b4 |
   | Default Java | 1.8.0_222 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1619/1/artifact/out/diff-javadoc-javadoc-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1619/1/testReport/ |
   | Max. process+thread count | 411 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1619/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] arp7 commented on a change in pull request #1555: HDDS-1984. Fix listBucket API.

2019-10-08 Thread GitBox
arp7 commented on a change in pull request #1555: HDDS-1984. Fix listBucket API.
URL: https://github.com/apache/hadoop/pull/1555#discussion_r332629277
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/utils/db/cache/CacheKey.java
 ##
 @@ -53,4 +53,18 @@ public boolean equals(Object o) {
   public int hashCode() {
 return Objects.hash(key);
   }
+
+  @Override
+  public int compareTo(Object o) {
+if(Objects.equals(key, ((CacheKey)o).key)) {
+  return 0;
+} else {
+  if (key instanceof String) {
+return ((String) key).compareTo((String) ((CacheKey)o).key);
+  } else {
+// If not type string, convert to string and compare.
+return key.toString().compareTo((((CacheKey) o).key).toString());
 
 Review comment:
   Makes sense. However, let's see if there is an alternate way to do this. We 
should not use reflection for type-specific behavior; it is fragile.
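One alternative that avoids type-specific dispatch entirely is to bound the key type by `Comparable`, so the compiler enforces comparability. This is a hypothetical simplification of the class under review, not the committed fix:

```java
// Hypothetical CacheKey whose compareTo needs no instanceof checks: the
// Comparable bound on KEY pushes the type requirement to the call site.
public class CacheKey<KEY extends Comparable<KEY>>
        implements Comparable<CacheKey<KEY>> {
    private final KEY key;

    public CacheKey(KEY key) {
        this.key = key;
    }

    public KEY getCacheKey() {
        return key;
    }

    @Override
    public int compareTo(CacheKey<KEY> other) {
        return key.compareTo(other.key);
    }
}
```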
   
   





[GitHub] [hadoop] steveloughran opened a new pull request #1620: HADOOP-16642. ITestDynamoDBMetadataStoreScale fails when throttled.

2019-10-08 Thread GitBox
steveloughran opened a new pull request #1620: HADOOP-16642. 
ITestDynamoDBMetadataStoreScale fails when throttled.
URL: https://github.com/apache/hadoop/pull/1620
 
 
   Change-Id: I1bbb4692c7fe345a0e5c3d3660eeb644ca9ced2d
   
   Tests against S3 Ireland, but it's not my laptop seeing the failure; this was 
an in-EC2 test run. 





[jira] [Updated] (HADOOP-16642) ITestDynamoDBMetadataStoreScale fails when throttled.

2019-10-08 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16642:

Summary: ITestDynamoDBMetadataStoreScale fails when throttled.  (was: 
ITestDynamoDBMetadataStoreScale failing as the error text does not match 
expectations)

> ITestDynamoDBMetadataStoreScale fails when throttled.
> -
>
> Key: HADOOP-16642
> URL: https://issues.apache.org/jira/browse/HADOOP-16642
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Priority: Major
>
> ITestDynamoDBMetadataStoreScale tries to create a scale test iff the table 
> isn't PAYG. It's failing with the wrong text being returned.
> Proposed: don't look for any text
> {code} 
> 13:06:22 java.lang.AssertionError: 
> 13:06:22 Expected throttling message:  Expected to find ' This may be because 
> the write threshold of DynamoDB is set too low.' 
> but got unexpected exception: 
> org.apache.hadoop.fs.s3a.AWSServiceThrottledException: 
> Put tombstone on s3a://fake-bucket/moved-here: 
> com.amazonaws.services.dynamodbv2.model.ProvisionedThroughputExceededException:
>  
> The level of configured provisioned throughput for the table was exceeded. 
> Consider increasing your provisioning level with the UpdateTable API. 
> (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: 
> ProvisionedThroughputExceededException; 
> Request ID: L12H9UM7PE8K0ILPGGTF4QG367VV4KQNSO5AEMVJF66Q9ASUAAJG): 
> The level of configured provisioned throughput for the table was exceeded. 
> Consider increasing your provisioning level with the UpdateTable API. 
> (Service: AmazonDynamoDBv2; Status Code: 400; 
> Error Code: ProvisionedThroughputExceededException; Request ID: 
> L12H9UM7PE8K0ILPGGTF4QG367VV4KQNSO5AEMVJF66Q9ASUAAJG)
> 13:06:22  at 
> org.apache.hadoop.fs.s3a.S3AUtils.translateDynamoDBException(S3AUtils.java:402)
> 13
> {code}






[jira] [Assigned] (HADOOP-16642) ITestDynamoDBMetadataStoreScale fails when throttled.

2019-10-08 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-16642:
---

Assignee: Steve Loughran

> ITestDynamoDBMetadataStoreScale fails when throttled.
> -
>
> Key: HADOOP-16642
> URL: https://issues.apache.org/jira/browse/HADOOP-16642
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> ITestDynamoDBMetadataStoreScale tries to create a scale test iff the table 
> isn't PAYG. It's failing with the wrong text being returned.
> Proposed: don't look for any text
> {code} 
> 13:06:22 java.lang.AssertionError: 
> 13:06:22 Expected throttling message:  Expected to find ' This may be because 
> the write threshold of DynamoDB is set too low.' 
> but got unexpected exception: 
> org.apache.hadoop.fs.s3a.AWSServiceThrottledException: 
> Put tombstone on s3a://fake-bucket/moved-here: 
> com.amazonaws.services.dynamodbv2.model.ProvisionedThroughputExceededException:
>  
> The level of configured provisioned throughput for the table was exceeded. 
> Consider increasing your provisioning level with the UpdateTable API. 
> (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: 
> ProvisionedThroughputExceededException; 
> Request ID: L12H9UM7PE8K0ILPGGTF4QG367VV4KQNSO5AEMVJF66Q9ASUAAJG): 
> The level of configured provisioned throughput for the table was exceeded. 
> Consider increasing your provisioning level with the UpdateTable API. 
> (Service: AmazonDynamoDBv2; Status Code: 400; 
> Error Code: ProvisionedThroughputExceededException; Request ID: 
> L12H9UM7PE8K0ILPGGTF4QG367VV4KQNSO5AEMVJF66Q9ASUAAJG)
> 13:06:22  at 
> org.apache.hadoop.fs.s3a.S3AUtils.translateDynamoDBException(S3AUtils.java:402)
> 13
> {code}






[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1612: HDDS-2260. Avoid evaluation of LOG.trace and LOG.debug statement in the read/write path (HDDS).

2019-10-08 Thread GitBox
bharatviswa504 commented on a change in pull request #1612: HDDS-2260. Avoid 
evaluation of LOG.trace and LOG.debug statement in the read/write path (HDDS).
URL: https://github.com/apache/hadoop/pull/1612#discussion_r332621171
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/DeleteBlocksCommandHandler.java
 ##
 @@ -216,9 +218,11 @@ private void deleteKeyValueContainerBlocks(
   DFSUtil.string2Bytes(OzoneConsts.DELETED_KEY_PREFIX + blk);
   if (containerDB.getStore().get(deletingKeyBytes) != null
   || containerDB.getStore().get(deletedKeyBytes) != null) {
-LOG.debug(String.format(
-"Ignoring delete for block %d in container %d."
-+ " Entry already added.", blk, containerId));
+if (LOG.isDebugEnabled()) {
+  LOG.debug(String.format(
+  "Ignoring delete for block %d in container %d."
 
 Review comment:
   Use {}
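"Use {}" refers to SLF4J's parameterized form, e.g. `LOG.debug("Ignoring delete for block {} in container {}. Entry already added.", blk, containerId)`, which defers formatting (and argument stringification) until the level is known to be enabled. The helper below is a self-contained stand-in showing what the `{}` substitution does; it is not the real SLF4J implementation:

```java
// Tiny stand-in for SLF4J's "{}" placeholder substitution: each "{}" in the
// pattern is replaced, left to right, by the next argument's string form.
public class PlaceholderFormat {
    public static String format(String pattern, Object... args) {
        StringBuilder sb = new StringBuilder();
        int argIdx = 0, from = 0, at;
        while ((at = pattern.indexOf("{}", from)) >= 0 && argIdx < args.length) {
            sb.append(pattern, from, at).append(args[argIdx++]);
            from = at + 2;  // skip past the "{}" just consumed
        }
        return sb.append(pattern.substring(from)).toString();
    }
}
```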





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1612: HDDS-2260. Avoid evaluation of LOG.trace and LOG.debug statement in the read/write path (HDDS).

2019-10-08 Thread GitBox
bharatviswa504 commented on a change in pull request #1612: HDDS-2260. Avoid 
evaluation of LOG.trace and LOG.debug statement in the read/write path (HDDS).
URL: https://github.com/apache/hadoop/pull/1612#discussion_r332621128
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/DeleteBlocksCommandHandler.java
 ##
 @@ -196,9 +196,11 @@ private void deleteKeyValueContainerBlocks(
 }
 
 if (delTX.getTxID() < containerData.getDeleteTransactionId()) {
-  LOG.debug(String.format("Ignoring delete blocks for containerId: %d."
-  + " Outdated delete transactionId %d < %d", containerId,
-  delTX.getTxID(), containerData.getDeleteTransactionId()));
+  if (LOG.isDebugEnabled()) {
+LOG.debug(String.format("Ignoring delete blocks for containerId: %d."
++ " Outdated delete transactionId %d < %d", containerId,
 
 Review comment:
   Use {}





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1612: HDDS-2260. Avoid evaluation of LOG.trace and LOG.debug statement in the read/write path (HDDS).

2019-10-08 Thread GitBox
bharatviswa504 commented on a change in pull request #1612: HDDS-2260. Avoid 
evaluation of LOG.trace and LOG.debug statement in the read/write path (HDDS).
URL: https://github.com/apache/hadoop/pull/1612#discussion_r332617494
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java
 ##
 @@ -480,10 +486,12 @@ private ExecutorService getCommandExecutor(
   } else {
 metrics.incNumBytesWrittenCount(
 requestProto.getWriteChunk().getChunkData().getLen());
-LOG.debug(gid +
-": writeChunk writeStateMachineData  completed: blockId" +
-write.getBlockID() + " logIndex " + entryIndex + " chunkName " +
-write.getChunkData().getChunkName());
+if (LOG.isDebugEnabled()) {
+  LOG.debug(gid +
+  ": writeChunk writeStateMachineData  completed: blockId" +
+  write.getBlockID() + " logIndex " + entryIndex + " chunkName " +
 
 Review comment:
   Use {}





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1612: HDDS-2260. Avoid evaluation of LOG.trace and LOG.debug statement in the read/write path (HDDS).

2019-10-08 Thread GitBox
bharatviswa504 commented on a change in pull request #1612: HDDS-2260. Avoid 
evaluation of LOG.trace and LOG.debug statement in the read/write path (HDDS).
URL: https://github.com/apache/hadoop/pull/1612#discussion_r332617757
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/impl/BlockManagerImpl.java
 ##
 @@ -120,9 +120,11 @@ public long putBlock(Container container, BlockData data) 
throws IOException {
   container.updateBlockCommitSequenceId(bcsId);
   // Increment keycount here
   container.getContainerData().incrKeyCount();
-  LOG.debug(
-  "Block " + data.getBlockID() + " successfully committed with bcsId "
-  + bcsId + " chunk size " + data.getChunks().size());
+  if (LOG.isDebugEnabled()) {
+LOG.debug(
+"Block " + data.getBlockID() + " successfully committed with bcsId "
 
 Review comment:
   Use {}





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1612: HDDS-2260. Avoid evaluation of LOG.trace and LOG.debug statement in the read/write path (HDDS).

2019-10-08 Thread GitBox
bharatviswa504 commented on a change in pull request #1612: HDDS-2260. Avoid 
evaluation of LOG.trace and LOG.debug statement in the read/write path (HDDS).
URL: https://github.com/apache/hadoop/pull/1612#discussion_r332617411
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java
 ##
 @@ -462,9 +466,11 @@ private ExecutorService getCommandExecutor(
 }, chunkExecutor);
 
 writeChunkFutureMap.put(entryIndex, writeChunkFuture);
-LOG.debug(gid + ": writeChunk writeStateMachineData : blockId " +
-write.getBlockID() + " logIndex " + entryIndex + " chunkName "
-+ write.getChunkData().getChunkName());
+if (LOG.isDebugEnabled()) {
+  LOG.debug(gid + ": writeChunk writeStateMachineData : blockId " +
+  write.getBlockID() + " logIndex " + entryIndex + " chunkName "
 
 Review comment:
   Use {}





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1612: HDDS-2260. Avoid evaluation of LOG.trace and LOG.debug statement in the read/write path (HDDS).

2019-10-08 Thread GitBox
bharatviswa504 commented on a change in pull request #1612: HDDS-2260. Avoid 
evaluation of LOG.trace and LOG.debug statement in the read/write path (HDDS).
URL: https://github.com/apache/hadoop/pull/1612#discussion_r332616954
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/container/common/helpers/ContainerCommandRequestPBHelper.java
 ##
 @@ -134,9 +136,11 @@ private ContainerCommandRequestPBHelper() {
 auditParams.put("blockData",
 BlockData.getFromProtoBuf(msg.getPutSmallFile()
 .getBlock().getBlockData()).toString());
-  }catch (IOException ex){
-LOG.trace("Encountered error parsing BlockData from protobuf:"
-+ ex.getMessage());
+  } catch (IOException ex){
+if (LOG.isTraceEnabled()) {
+  LOG.trace("Encountered error parsing BlockData from protobuf: "
 
 Review comment:
   Use {}





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1612: HDDS-2260. Avoid evaluation of LOG.trace and LOG.debug statement in the read/write path (HDDS).

2019-10-08 Thread GitBox
bharatviswa504 commented on a change in pull request #1612: HDDS-2260. Avoid 
evaluation of LOG.trace and LOG.debug statement in the read/write path (HDDS).
URL: https://github.com/apache/hadoop/pull/1612#discussion_r332616504
 
 

 ##
 File path: 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/CommitWatcher.java
 ##
 @@ -153,7 +155,9 @@ public XceiverClientReply watchOnLastIndex()
   long index =
   commitIndex2flushedDataMap.keySet().stream().mapToLong(v -> v).max()
   .getAsLong();
-  LOG.debug("waiting for last flush Index " + index + " to catch up");
+  if (LOG.isDebugEnabled()) {
+LOG.debug("waiting for last flush Index " + index + " to catch up");
 
 Review comment:
   Same as above





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1612: HDDS-2260. Avoid evaluation of LOG.trace and LOG.debug statement in the read/write path (HDDS).

2019-10-08 Thread GitBox
bharatviswa504 commented on a change in pull request #1612: HDDS-2260. Avoid 
evaluation of LOG.trace and LOG.debug statement in the read/write path (HDDS).
URL: https://github.com/apache/hadoop/pull/1612#discussion_r332616579
 
 

 ##
 File path: 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/CommitWatcher.java
 ##
 @@ -131,7 +131,9 @@ public XceiverClientReply watchOnFirstIndex() throws 
IOException {
   long index =
   commitIndex2flushedDataMap.keySet().stream().mapToLong(v -> v).min()
   .getAsLong();
-  LOG.debug("waiting for first index " + index + " to catch up");
+  if (LOG.isDebugEnabled()) {
+LOG.debug("waiting for first index " + index + " to catch up");
 
 Review comment:
   Use {}





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1612: HDDS-2260. Avoid evaluation of LOG.trace and LOG.debug statement in the read/write path (HDDS).

2019-10-08 Thread GitBox
bharatviswa504 commented on a change in pull request #1612: HDDS-2260. Avoid 
evaluation of LOG.trace and LOG.debug statement in the read/write path (HDDS).
URL: https://github.com/apache/hadoop/pull/1612#discussion_r332616117
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/pipeline/Pipeline.java
 ##
 @@ -192,8 +192,10 @@ public boolean isEmpty() {
   }
 }
   }
-  LOG.debug("Serialize pipeline {} with nodesInOrder{ }", id.toString(),
-  nodes);
+  if (LOG.isDebugEnabled()) {
+LOG.debug("Serialize pipeline {} with nodesInOrder{ }", id.toString(),
 
 Review comment:
   Additional whitespace in the second {}





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1612: HDDS-2260. Avoid evaluation of LOG.trace and LOG.debug statement in the read/write path (HDDS).

2019-10-08 Thread GitBox
bharatviswa504 commented on a change in pull request #1612: HDDS-2260. Avoid 
evaluation of LOG.trace and LOG.debug statement in the read/write path (HDDS).
URL: https://github.com/apache/hadoop/pull/1612#discussion_r332616421
 
 

 ##
 File path: 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BlockOutputStream.java
 ##
 @@ -620,9 +626,11 @@ private void writeChunkToContainer(ByteBuffer chunk) 
throws IOException {
   throw new IOException(
   "Unexpected Storage Container Exception: " + e.toString(), e);
 }
-LOG.debug(
-"writing chunk " + chunkInfo.getChunkName() + " blockID " + blockID
-+ " length " + effectiveChunkSize);
+if (LOG.isDebugEnabled()) {
+  LOG.debug(
+  "writing chunk " + chunkInfo.getChunkName() + " blockID " + blockID
+  + " length " + effectiveChunkSize);
 
 Review comment:
   Use {}





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1612: HDDS-2260. Avoid evaluation of LOG.trace and LOG.debug statement in the read/write path (HDDS).

2019-10-08 Thread GitBox
bharatviswa504 commented on a change in pull request #1612: HDDS-2260. Avoid 
evaluation of LOG.trace and LOG.debug statement in the read/write path (HDDS).
URL: https://github.com/apache/hadoop/pull/1612#discussion_r332616377
 
 

 ##
 File path: 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BlockOutputStream.java
 ##
 @@ -609,9 +613,11 @@ private void writeChunkToContainer(ByteBuffer chunk) 
throws IOException {
 }
 return e;
   }, responseExecutor).exceptionally(e -> {
-LOG.debug(
-"writing chunk failed " + chunkInfo.getChunkName() + " blockID "
-+ blockID + " with exception " + e.getLocalizedMessage());
+if (LOG.isDebugEnabled()) {
+  LOG.debug(
+  "writing chunk failed " + chunkInfo.getChunkName() + " blockID "
+  + blockID + " with exception " + e.getLocalizedMessage());
 
 Review comment:
   Use {}





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1612: HDDS-2260. Avoid evaluation of LOG.trace and LOG.debug statement in the read/write path (HDDS).

2019-10-08 Thread GitBox
bharatviswa504 commented on a change in pull request #1612: HDDS-2260. Avoid 
evaluation of LOG.trace and LOG.debug statement in the read/write path (HDDS).
URL: https://github.com/apache/hadoop/pull/1612#discussion_r332614455
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/utils/HddsVersionInfo.java
 ##
 @@ -50,7 +50,9 @@ public static void main(String[] args) {
 "Compiled with protoc " + HDDS_VERSION_INFO.getProtocVersion());
 System.out.println(
 "From source with checksum " + HDDS_VERSION_INFO.getSrcChecksum());
-LOG.debug("This command was run using " +
-ClassUtil.findContainingJar(HddsVersionInfo.class));
+if (LOG.isDebugEnabled()) {
+  LOG.debug("This command was run using " +
 
 Review comment:
   Use {}





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1612: HDDS-2260. Avoid evaluation of LOG.trace and LOG.debug statement in the read/write path (HDDS).

2019-10-08 Thread GitBox
bharatviswa504 commented on a change in pull request #1612: HDDS-2260. Avoid 
evaluation of LOG.trace and LOG.debug statement in the read/write path (HDDS).
URL: https://github.com/apache/hadoop/pull/1612#discussion_r332613431
 
 

 ##
 File path: 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BlockOutputStream.java
 ##
 @@ -392,22 +392,26 @@ private void watchForCommit(boolean bufferFull) throws 
IOException {
   .equals(responseBlockID.getContainerBlockID()));
   // updates the bcsId of the block
   blockID = responseBlockID;
-  LOG.debug(
-  "Adding index " + asyncReply.getLogIndex() + " commitMap size "
-  + commitWatcher.getCommitInfoMapSize() + " flushLength "
-  + flushPos + " numBuffers " + byteBufferList.size()
-  + " blockID " + blockID + " bufferPool size" + bufferPool
-  .getSize() + " currentBufferIndex " + bufferPool
-  .getCurrentBufferIndex());
+  if (LOG.isDebugEnabled()) {
+LOG.debug(
+"Adding index " + asyncReply.getLogIndex() + " commitMap size "
++ commitWatcher.getCommitInfoMapSize() + " flushLength "
++ flushPos + " numBuffers " + byteBufferList.size()
 
 Review comment:
   Use {}





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1612: HDDS-2260. Avoid evaluation of LOG.trace and LOG.debug statement in the read/write path (HDDS).

2019-10-08 Thread GitBox
bharatviswa504 commented on a change in pull request #1612: HDDS-2260. Avoid 
evaluation of LOG.trace and LOG.debug statement in the read/write path (HDDS).
URL: https://github.com/apache/hadoop/pull/1612#discussion_r332612988
 
 

 ##
 File path: 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientGrpc.java
 ##
 @@ -283,7 +285,9 @@ private XceiverClientReply sendCommandWithRetry(
 }
 for (DatanodeDetails dn : datanodeList) {
   try {
-LOG.debug("Executing command " + request + " on datanode " + dn);
+if (LOG.isDebugEnabled()) {
+  LOG.debug("Executing command " + request + " on datanode " + dn);
 
 Review comment:
   Instead of +, can we use {}
   





[GitHub] [hadoop] steveloughran opened a new pull request #1619: HADOOP-16478. S3Guard bucket-info fails if the caller lacks s3:GetBucketLocation

2019-10-08 Thread GitBox
steveloughran opened a new pull request #1619: HADOOP-16478. S3Guard 
bucket-info fails if the caller lacks s3:GetBucketLocation
URL: https://github.com/apache/hadoop/pull/1619
 
 
   
   
   - Catch and downgrade to info
   - Add to javadocs
   - Review all other uses
   - Test in ITestAssumeRole; needs to open up a bit more of the tool for this.
   
   
   
   Tested against S3 Ireland. Initially tested without the downgrade, to verify
   the test created the failure mode.
   
   It did:
   
   ```
   [ERROR] 
testBucketLocationForbidden(org.apache.hadoop.fs.s3a.auth.ITestAssumeRole)  
Time elapsed: 3.957 s  <<< ERROR!
   java.nio.file.AccessDeniedException: hwdev-steve-ireland-new: 
getBucketLocation() on hwdev-steve-ireland-new: 
com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: 
Amazon S3; Status Code: 403; Error Code: AccessDenied;
at 
org.apache.hadoop.fs.s3a.auth.ITestAssumeRole.testBucketLocationForbidden(ITestAssumeRole.java:754)
   Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied 
(Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; 
at 
org.apache.hadoop.fs.s3a.auth.ITestAssumeRole.testBucketLocationForbidden(ITestAssumeRole.java:754)
   ```
   
   With the handler in the bucket info tool, the test worked.
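A minimal sketch of the catch-and-downgrade pattern the PR describes; `fetchBucketLocation` and `describeBucket` are hypothetical stand-ins rather than Hadoop's API, and `AccessDeniedException` here is the JDK's `java.nio.file` class (the same type S3A surfaces in the stack trace above):

```java
import java.nio.file.AccessDeniedException;

// Sketch: when the caller lacks s3:GetBucketLocation, report the region as
// unknown at info level and keep the rest of the diagnostics working,
// instead of failing the whole bucket-info command with a stack trace.
public class BucketInfoDemo {

    // Hypothetical stand-in for a location probe that the IAM policy may deny.
    static String fetchBucketLocation(boolean allowed) throws AccessDeniedException {
        if (!allowed) {
            throw new AccessDeniedException("getBucketLocation() denied (403)");
        }
        return "eu-west-1";
    }

    static String describeBucket(boolean locationAllowed) {
        String location;
        try {
            location = fetchBucketLocation(locationAllowed);
        } catch (AccessDeniedException e) {
            // Downgrade: note it at info/debug rather than aborting the tool.
            System.out.println("INFO: bucket location unknown: " + e.getMessage());
            location = "(unknown)";
        }
        return "Location: " + location;
    }

    public static void main(String[] args) {
        System.out.println(describeBucket(true));
        System.out.println(describeBucket(false));
    }
}
```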





[jira] [Commented] (HADOOP-16638) Use Relative URLs in Hadoop KMS WebApps

2019-10-08 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16947018#comment-16947018
 ] 

Hadoop QA commented on HADOOP-16638:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
43s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  2m 
55s{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
34m 19s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  2m 
49s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  2m 49s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 34s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
28s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 57m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.2 Server=19.03.2 Image:yetus/hadoop:1dde3efb91e |
| JIRA Issue | HADOOP-16638 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12982498/HADOOP-16638.3.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  |
| uname | Linux 1c7c29b051c9 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 91320b4 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| compile | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16582/artifact/out/branch-compile-root.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16582/artifact/out/patch-compile-root.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16582/artifact/out/patch-compile-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16582/testReport/ |
| Max. process+thread count | 342 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-kms U: 
hadoop-common-project/hadoop-kms |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16582/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Use Relative URLs in Hadoop KMS WebApps
> ---
>
> Key: HADOOP-16638
> URL: https://issues.apache.org/jira/browse/HADOOP-16638
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: 

[GitHub] [hadoop] swagle commented on issue #1612: HDDS-2260. Avoid evaluation of LOG.trace and LOG.debug statement in the read/write path (HDDS).

2019-10-08 Thread GitBox
swagle commented on issue #1612: HDDS-2260. Avoid evaluation of LOG.trace and 
LOG.debug statement in the read/write path (HDDS).
URL: https://github.com/apache/hadoop/pull/1612#issuecomment-539582515
 
 
   Test failures are unrelated, verified by running locally. 
TestOMKeyCreateRequest has failures without the patch as well.





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1588: HDDS-1986. Fix listkeys API.

2019-10-08 Thread GitBox
bharatviswa504 commented on a change in pull request #1588: HDDS-1986. Fix 
listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#discussion_r332593486
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
 ##
 @@ -680,26 +688,85 @@ public boolean isBucketEmpty(String volume, String 
bucket)
   seekPrefix = getBucketKey(volumeName, bucketName + OM_KEY_PREFIX);
 }
 int currentCount = 0;
-try (TableIterator<String, ? extends KeyValue<String, OmKeyInfo>> keyIter =
-getKeyTable()
-.iterator()) {
-  KeyValue<String, OmKeyInfo> kv = keyIter.seek(seekKey);
-  while (currentCount < maxKeys && keyIter.hasNext()) {
-kv = keyIter.next();
-// Skip the Start key if needed.
-if (kv != null && skipStartKey && kv.getKey().equals(seekKey)) {
-  continue;
+
+
+TreeMap<String, OmKeyInfo> cacheKeyMap = new TreeMap<>();
+Set<String> deletedKeySet = new TreeSet<>();
+Iterator<Map.Entry<CacheKey<String>, CacheValue<OmKeyInfo>>> iterator =
+keyTable.cacheIterator();
+
+//TODO: We can avoid this iteration if the table cache stored entries in a
+// TreeMap. Currently a HashMap is used in the cache. The HashMap get
+// operation is a constant-time operation, whereas a TreeMap get is log(n).
+// So if we move to a TreeMap, the get operation will be affected, and get
+// is a frequent operation on the table. So, for now, in list we iterate the
+// cache map and construct a TreeMap of entries which match the keyPrefix and
+// are greater than or equal to startKey. We can revisit this later if the
+// list operation becomes slow.
+while (iterator.hasNext()) {
 
 Review comment:
   With the current new code, when the list happens we should consider entries
from both the buffer and the DB (as we return the response to the end user after
adding entries to the cache). So, if the user does a list as the next operation
(right after creating a bucket), the bucket might or might not be there until the
double buffer flushes; until it flushes, the entries exist only in the cache.
(This is not a problem for non-HA, as there we return the response only after the
flush.)
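The cache-plus-DB merge being discussed can be sketched roughly as follows; the names below are hypothetical stand-ins (the real logic lives in `OmMetadataManagerImpl.listKeys`): a list must surface keys that exist only in the not-yet-flushed write cache and hide keys deleted there.

```java
import java.util.*;

// Minimal sketch of overlaying a write-back cache on a sorted DB view for a
// prefix listing: cache additions are merged in, cache deletions are masked.
public class ListMergeDemo {
    static List<String> listKeys(NavigableMap<String, String> db,
                                 Map<String, String> cacheAdds,
                                 Set<String> cacheDeletes,
                                 String prefix, int maxKeys) {
        // Start from DB entries at or after the prefix, keep only matches.
        TreeMap<String, String> merged = new TreeMap<>(db.tailMap(prefix, true));
        merged.keySet().removeIf(k -> !k.startsWith(prefix));
        // Overlay keys that are still only in the cache (not yet flushed).
        for (Map.Entry<String, String> e : cacheAdds.entrySet()) {
            if (e.getKey().startsWith(prefix)) {
                merged.put(e.getKey(), e.getValue());
            }
        }
        // Hide keys deleted in the cache but still present in the DB.
        merged.keySet().removeAll(cacheDeletes);
        List<String> result = new ArrayList<>(merged.keySet());
        return result.size() > maxKeys ? result.subList(0, maxKeys) : result;
    }

    public static void main(String[] args) {
        NavigableMap<String, String> db = new TreeMap<>();
        db.put("key-a1", "v1");
        db.put("key-b1", "v2");
        Map<String, String> cacheAdds = new HashMap<>();
        cacheAdds.put("key-a2", "v3"); // created, not yet flushed to the DB
        Set<String> cacheDeletes = new HashSet<>(Collections.singleton("key-b1"));
        System.out.println(listKeys(db, cacheAdds, cacheDeletes, "key-", 10));
    }
}
```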





[jira] [Commented] (HADOOP-16641) RPC: Heavy contention on Configuration.getClassByNameOrNull

2019-10-08 Thread Kihwal Lee (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16946997#comment-16946997
 ] 

Kihwal Lee commented on HADOOP-16641:
-

I believe some unit tests still use the writable engine.  E.g. 
{{RPCCallBenchmark}} allows use of {{WritableRpcEngine}}. 

> RPC: Heavy contention on Configuration.getClassByNameOrNull 
> 
>
> Key: HADOOP-16641
> URL: https://issues.apache.org/jira/browse/HADOOP-16641
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Reporter: Gopal Vijayaraghavan
>Priority: Major
>  Labels: performance
> Attachments: config-get-class-by-name.png, llap-rpc-locks.svg
>
>
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java#L2589
> {code}
> map = Collections.synchronizedMap(
>   new WeakHashMap<String, WeakReference<Class<?>>>());
> {code}
> This synchronizes all lookups across the same class-loader across all threads 
> & yields rpc threads.
>  !config-get-class-by-name.png! 
> When reading from HDFS with good locality, this fills up the contended lock 
> profile with almost no other contributors to the locking - see  
> [^llap-rpc-locks.svg] 
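One common way to relieve this kind of single-monitor contention, sketched here with hypothetical names and not as Hadoop's actual fix, is a lock-free `ConcurrentHashMap` cache; note it gives up `WeakHashMap`'s ability to let unreferenced entries be collected, which matters for class-loader caches:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: a ConcurrentHashMap allows concurrent class lookups without the
// shared monitor that Collections.synchronizedMap imposes on every caller.
public class ClassCacheDemo {
    private static final Map<String, Class<?>> CACHE = new ConcurrentHashMap<>();

    static Class<?> getClassByNameOrNull(String name) {
        return CACHE.computeIfAbsent(name, n -> {
            try {
                return Class.forName(n);
            } catch (ClassNotFoundException e) {
                // Returning null stores no mapping, so misses are re-resolved
                // on each call; a sentinel value would avoid that if needed.
                return null;
            }
        });
    }

    public static void main(String[] args) {
        System.out.println(getClassByNameOrNull("java.lang.String"));
        System.out.println(getClassByNameOrNull("no.such.Clazz"));
    }
}
```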



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1588: HDDS-1986. Fix listkeys API.

2019-10-08 Thread GitBox
bharatviswa504 commented on a change in pull request #1588: HDDS-1986. Fix 
listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#discussion_r332591385
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
 ##
 @@ -680,26 +688,85 @@ public boolean isBucketEmpty(String volume, String 
bucket)
   seekPrefix = getBucketKey(volumeName, bucketName + OM_KEY_PREFIX);
 }
 int currentCount = 0;
-try (TableIterator<String, ? extends KeyValue<String, OmKeyInfo>> keyIter =
-getKeyTable()
-.iterator()) {
-  KeyValue<String, OmKeyInfo> kv = keyIter.seek(seekKey);
-  while (currentCount < maxKeys && keyIter.hasNext()) {
-kv = keyIter.next();
-// Skip the Start key if needed.
-if (kv != null && skipStartKey && kv.getKey().equals(seekKey)) {
-  continue;
+
+
+TreeMap<String, OmKeyInfo> cacheKeyMap = new TreeMap<>();
+Set<String> deletedKeySet = new TreeSet<>();
+Iterator<Map.Entry<CacheKey<String>, CacheValue<OmKeyInfo>>> iterator =
+keyTable.cacheIterator();
+
+//TODO: We can avoid this iteration if the table cache stored entries in a
+// TreeMap. Currently a HashMap is used in the cache. The HashMap get
+// operation is a constant-time operation, whereas a TreeMap get is log(n).
+// So if we move to a TreeMap, the get operation will be affected, and get
+// is a frequent operation on the table. So, for now, in list we iterate the
+// cache map and construct a TreeMap of entries which match the keyPrefix and
+// are greater than or equal to startKey. We can revisit this later if the
+// list operation becomes slow.
+while (iterator.hasNext()) {
 
 Review comment:
   The key cache is not a full cache, so if the double buffer flush is keeping up
in the background, it should hold only a couple hundred entries. When I started
freon with 10 threads, I saw a maximum iteration count of 200, so the cache held
at most about 200 entries. (But this has not been tried on busy clusters with
slow disks.)





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1588: HDDS-1986. Fix listkeys API.

2019-10-08 Thread GitBox
bharatviswa504 commented on a change in pull request #1588: HDDS-1986. Fix 
listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#discussion_r332590228
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestOmMetadataManager.java
 ##
 @@ -0,0 +1,298 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.hadoop.ozone.om;
+import com.google.common.base.Optional;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.protocol.StorageType;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.utils.db.cache.CacheKey;
+import org.apache.hadoop.hdds.utils.db.cache.CacheValue;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.request.TestOMRequestUtils;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TemporaryFolder;
+
+import java.util.List;
+import java.util.TreeSet;
+
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_DB_DIRS;
+
+/**
+ * Tests OzoneManager MetadataManager.
+ */
+public class TestOmMetadataManager {
+
+  private OMMetadataManager omMetadataManager;
+  private OzoneConfiguration ozoneConfiguration;
+
+  @Rule
+  public TemporaryFolder folder = new TemporaryFolder();
+
+
+  @Before
+  public void setup() throws Exception {
+ozoneConfiguration = new OzoneConfiguration();
+ozoneConfiguration.set(OZONE_OM_DB_DIRS,
+folder.getRoot().getAbsolutePath());
+omMetadataManager = new OmMetadataManagerImpl(ozoneConfiguration);
+  }
+  @Test
+  public void testListKeys() throws Exception {
+
+String volumeNameA = "volumeA";
+String volumeNameB = "volumeB";
+String ozoneBucket = "ozoneBucket";
+String hadoopBucket = "hadoopBucket";
+
+
+// Create volumes and buckets.
+TestOMRequestUtils.addVolumeToDB(volumeNameA, omMetadataManager);
+TestOMRequestUtils.addVolumeToDB(volumeNameB, omMetadataManager);
+addBucketsToCache(volumeNameA, ozoneBucket);
+addBucketsToCache(volumeNameB, hadoopBucket);
+
+
+String prefixKeyA = "key-a";
+String prefixKeyB = "key-b";
+TreeSet<String> keysASet = new TreeSet<>();
+TreeSet<String> keysBSet = new TreeSet<>();
+for (int i=1; i<= 100; i++) {
+  if (i % 2 == 0) {
+keysASet.add(
+prefixKeyA + i);
+addKeysToOM(volumeNameA, ozoneBucket, prefixKeyA + i, i);
+  } else {
+keysBSet.add(
+prefixKeyB + i);
+addKeysToOM(volumeNameA, hadoopBucket, prefixKeyB + i, i);
+  }
+}
+
+
+TreeSet<String> keysAVolumeBSet = new TreeSet<>();
+TreeSet<String> keysBVolumeBSet = new TreeSet<>();
+for (int i=1; i<= 100; i++) {
+  if (i % 2 == 0) {
+keysAVolumeBSet.add(
+prefixKeyA + i);
+addKeysToOM(volumeNameB, ozoneBucket, prefixKeyA + i, i);
+  } else {
+keysBVolumeBSet.add(
+prefixKeyB + i);
+addKeysToOM(volumeNameB, hadoopBucket, prefixKeyB + i, i);
+  }
+}
+
+
+// List all keys which have prefix "key-a"
+List<OmKeyInfo> omKeyInfoList =
+omMetadataManager.listKeys(volumeNameA, ozoneBucket,
+null, prefixKeyA, 100);
+
+Assert.assertEquals(omKeyInfoList.size(),  50);
+
+for (OmKeyInfo omKeyInfo : omKeyInfoList) {
+  Assert.assertTrue(omKeyInfo.getKeyName().startsWith(
+  prefixKeyA));
+}
+
+
+String startKey = prefixKeyA + 10;
+omKeyInfoList =
+omMetadataManager.listKeys(volumeNameA, ozoneBucket,
+startKey, prefixKeyA, 100);
+
+Assert.assertEquals(keysASet.tailSet(
+startKey).size() - 1, omKeyInfoList.size());
+
+startKey = prefixKeyA + 38;
+omKeyInfoList =
+omMetadataManager.listKeys(volumeNameA, ozoneBucket,
+startKey, prefixKeyA, 100);
+
+Assert.assertEquals(keysASet.tailSet(
+startKey).size() - 1, omKeyInfoList.size());
+
+for (OmKeyInfo omKeyInfo : omKeyInfoList) {
+  Assert.assertTrue(omKeyInfo.getKeyName().startsWith(
+  prefixKeyA));
+  
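The `tailSet(startKey).size() - 1` arithmetic in the assertions above works because `TreeSet.tailSet` is inclusive of `startKey`, which `listKeys` itself excludes. A stand-alone sketch of the listing semantics being tested (a hypothetical helper, not the OM implementation):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeSet;

public class ListKeysSketch {

  /**
   * Return up to maxKeys keys that carry the given prefix, starting strictly
   * after startKey (a null startKey means start from the beginning). This
   * mirrors how the test derives expected counts: tailSet(startKey) is
   * inclusive, hence the "- 1" in the assertions.
   */
  static List<String> listKeys(TreeSet<String> keys, String startKey,
      String prefix, int maxKeys) {
    List<String> result = new ArrayList<>();
    // tailSet(startKey, false) excludes startKey itself.
    Iterable<String> candidates =
        startKey == null ? keys : keys.tailSet(startKey, false);
    for (String key : candidates) {
      if (result.size() >= maxKeys) {
        break;
      }
      if (key.startsWith(prefix)) {
        result.add(key);
      }
    }
    return result;
  }

  public static void main(String[] args) {
    TreeSet<String> keys = new TreeSet<>();
    for (int i = 1; i <= 100; i++) {
      keys.add((i % 2 == 0 ? "key-a" : "key-b") + i);
    }
    // 50 even-numbered keys carry the "key-a" prefix.
    System.out.println(listKeys(keys, null, "key-a", 100).size()); // prints 50
  }
}
```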

[jira] [Commented] (HADOOP-16478) S3Guard bucket-info fails if the bucket location is denied to the caller

2019-10-08 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16946990#comment-16946990
 ] 

Steve Loughran commented on HADOOP-16478:
-

The metastore already does this, but I reviewed the message and tuned both it and the exception.

> S3Guard bucket-info fails if the bucket location is denied to the caller
> 
>
> Key: HADOOP-16478
> URL: https://issues.apache.org/jira/browse/HADOOP-16478
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Priority: Major
>
> If you call "hadoop s3guard bucket-info" on a bucket and you don't have 
> permission to read the bucket location, you get a stack trace and all the 
> other diagnostics are missing.
> Preferred: catch the exception, warn that the location is unknown, and only 
> log the stack trace at debug.
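The preferred handling described above can be sketched as follows; the helper name and the `Callable`-based lookup are placeholders, not the actual S3Guard code:

```java
import java.nio.file.AccessDeniedException;
import java.util.concurrent.Callable;

public class BucketInfoSketch {

  /**
   * Resolve the bucket location, but degrade gracefully when the caller
   * is not permitted to read it: warn that it is unknown and carry on,
   * so the remaining diagnostics still print. A real tool would log the
   * full stack trace at debug level rather than swallowing it.
   */
  static String safeLocation(Callable<String> locationLookup) {
    try {
      return locationLookup.call();
    } catch (AccessDeniedException e) {
      System.err.println("WARN: bucket location unknown: " + e.getMessage());
      return "(unknown)";
    } catch (Exception e) {
      throw new RuntimeException(e);
    }
  }

  public static void main(String[] args) throws Exception {
    System.out.println(safeLocation(() -> "eu-west-1"));  // prints eu-west-1
    System.out.println(safeLocation(() -> {
      throw new AccessDeniedException("s3://bucket", null, "403");
    }));  // prints (unknown)
  }
}
```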



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1555: HDDS-1984. Fix listBucket API.

2019-10-08 Thread GitBox
bharatviswa504 commented on a change in pull request #1555: HDDS-1984. Fix 
listBucket API.
URL: https://github.com/apache/hadoop/pull/1555#discussion_r332589314
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/utils/db/cache/CacheKey.java
 ##
 @@ -53,4 +53,18 @@ public boolean equals(Object o) {
   public int hashCode() {
 return Objects.hash(key);
   }
+
+  @Override
+  public int compareTo(Object o) {
+if(Objects.equals(key, ((CacheKey)o).key)) {
+  return 0;
+} else {
+  if (key instanceof String) {
+return ((String) key).compareTo((String) ((CacheKey)o).key);
+  } else {
+// If not type string, convert to string and compare.
+return key.toString().compareTo((((CacheKey) o).key).toString());
 
 Review comment:
   Ya, you're right, my initial approach was exactly that. But since the Table 
interface is extended by RDBTable, which uses byte[] as the parameter type, that 
cannot be done. Also, since for now this is only used for the Bucket and Volume 
tables, we should be good.
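The fallback the patch implements (compare `String` keys directly, otherwise compare their `toString()` representations) can be sketched stand-alone like this; `CacheKeySketch` is a simplified stand-in, not the Ozone class:

```java
import java.util.Objects;

/** Simplified stand-in for Ozone's CacheKey, not the real class. */
public class CacheKeySketch implements Comparable<CacheKeySketch> {

  private final Object key;

  public CacheKeySketch(Object key) {
    this.key = key;
  }

  @Override
  public int compareTo(CacheKeySketch other) {
    if (Objects.equals(key, other.key)) {
      return 0;
    }
    if (key instanceof String && other.key instanceof String) {
      return ((String) key).compareTo((String) other.key);
    }
    // Not both Strings: fall back to the toString() representation.
    // This only yields a meaningful order when toString() is stable,
    // which holds for the String-keyed bucket and volume tables.
    return key.toString().compareTo(other.key.toString());
  }

  public static void main(String[] args) {
    System.out.println(
        new CacheKeySketch("a").compareTo(new CacheKeySketch("b")) < 0); // prints true
  }
}
```

Note that for non-String keys the `toString()` order is lexicographic, not numeric: a key of `10` sorts before a key of `2`.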


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16643) Update netty4 to the latest 4.1.42

2019-10-08 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16946985#comment-16946985
 ] 

Wei-Chiu Chuang commented on HADOOP-16643:
--

Thanks for the patch, [~leosun08].

We internally use netty 4.1, and upstream trunk uses netty 4.0. Updating my 
internal branch should be trivial, but for upstream Hadoop, a minor release 
update like this usually brings more changes than it appears. I'll apply this 
patch to our internal branch and test it out. Will report back.

> Update netty4 to the latest 4.1.42
> --
>
> Key: HADOOP-16643
> URL: https://issues.apache.org/jira/browse/HADOOP-16643
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HADOOP-16643.001.patch
>
>
> The latest netty is out. Let's update it.






[jira] [Commented] (HADOOP-16615) Add password check for credential provider

2019-10-08 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16946968#comment-16946968
 ] 

Steve Loughran commented on HADOOP-16615:
-

looks good; there's some minor changes in the test code I'd like. 

Could you submit this as a github PR?

> Add password check for credential provider
> --
>
> Key: HADOOP-16615
> URL: https://issues.apache.org/jira/browse/HADOOP-16615
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: hong dongdong
>Priority: Major
> Attachments: HADOOP-16615.patch
>
>
> When we use the Hadoop credential provider to store a password, we cannot be 
> sure that the stored password matches the one we remember.
> So I think we need a check tool.
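Such a check tool could compare the stored credential against a freshly entered one. The sketch below is a minimal stand-in: the `Map` lookup takes the place of the real provider call (an actual tool would resolve the alias through the credential provider API), and the comparison is constant-time to avoid timing leaks:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HashMap;
import java.util.Map;

public class CredentialCheckSketch {

  /**
   * Compare the stored password for an alias against a candidate.
   * MessageDigest.isEqual performs a constant-time byte comparison.
   */
  static boolean check(Map<String, char[]> store, String alias, char[] candidate) {
    char[] stored = store.get(alias);
    if (stored == null) {
      return false;  // unknown alias: nothing to compare against
    }
    return MessageDigest.isEqual(
        new String(stored).getBytes(StandardCharsets.UTF_8),
        new String(candidate).getBytes(StandardCharsets.UTF_8));
  }

  public static void main(String[] args) {
    Map<String, char[]> store = new HashMap<>();
    store.put("fs.s3a.secret.key", "hunter2".toCharArray());
    System.out.println(
        check(store, "fs.s3a.secret.key", "hunter2".toCharArray())); // prints true
  }
}
```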






[jira] [Updated] (HADOOP-16638) Use Relative URLs in Hadoop KMS WebApps

2019-10-08 Thread David Mollitor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HADOOP-16638:

Status: Patch Available  (was: Open)

> Use Relative URLs in Hadoop KMS WebApps
> ---
>
> Key: HADOOP-16638
> URL: https://issues.apache.org/jira/browse/HADOOP-16638
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: kms
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
> Attachments: HADOOP-16638.1.patch, HADOOP-16638.2.patch, 
> HADOOP-16638.3.patch
>
>







[jira] [Updated] (HADOOP-16638) Use Relative URLs in Hadoop KMS WebApps

2019-10-08 Thread David Mollitor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HADOOP-16638:

Attachment: HADOOP-16638.3.patch

> Use Relative URLs in Hadoop KMS WebApps
> ---
>
> Key: HADOOP-16638
> URL: https://issues.apache.org/jira/browse/HADOOP-16638
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: kms
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
> Attachments: HADOOP-16638.1.patch, HADOOP-16638.2.patch, 
> HADOOP-16638.3.patch
>
>







[jira] [Updated] (HADOOP-16638) Use Relative URLs in Hadoop KMS WebApps

2019-10-08 Thread David Mollitor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HADOOP-16638:

Status: Open  (was: Patch Available)

> Use Relative URLs in Hadoop KMS WebApps
> ---
>
> Key: HADOOP-16638
> URL: https://issues.apache.org/jira/browse/HADOOP-16638
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: kms
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
> Attachments: HADOOP-16638.1.patch, HADOOP-16638.2.patch, 
> HADOOP-16638.3.patch
>
>







[jira] [Commented] (HADOOP-16641) RPC: Heavy contention on Configuration.getClassByNameOrNull

2019-10-08 Thread Daryn Sharp (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16946946#comment-16946946
 ] 

Daryn Sharp commented on HADOOP-16641:
--

Is this a profile of a protocol using the essentially deprecated 
WritableRpcEngine?

> RPC: Heavy contention on Configuration.getClassByNameOrNull 
> 
>
> Key: HADOOP-16641
> URL: https://issues.apache.org/jira/browse/HADOOP-16641
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Reporter: Gopal Vijayaraghavan
>Priority: Major
>  Labels: performance
> Attachments: config-get-class-by-name.png, llap-rpc-locks.svg
>
>
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java#L2589
> {code}
> map = Collections.synchronizedMap(
>   new WeakHashMap<String, WeakReference<Class<?>>>());
> {code}
> This synchronizes all lookups across the same class-loader across all threads 
> & yields rpc threads.
>  !config-get-class-by-name.png! 
> When reading from HDFS with good locality, this fills up the contended lock 
> profile with almost no other contributors to the locking - see  
> [^llap-rpc-locks.svg] 
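One common way to reduce this kind of contention, assuming the cache semantics allow it, is to replace the synchronized map with a `ConcurrentHashMap` and `computeIfAbsent`, which only contends on the first load of each name. This is a stand-alone sketch, not the actual `Configuration` fix; note that the real cache is also keyed per class loader and holds weak references, which this sketch omits:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ClassCacheSketch {

  // Lock-free reads after the first load of each name, unlike
  // Collections.synchronizedMap, which serializes every lookup
  // across all RPC threads.
  private static final Map<String, Class<?>> CACHE = new ConcurrentHashMap<>();

  static Class<?> getClassByNameOrNull(String name) {
    return CACHE.computeIfAbsent(name, n -> {
      try {
        return Class.forName(n);
      } catch (ClassNotFoundException e) {
        // Returning null leaves no entry, so negative lookups are not
        // cached in this sketch (Hadoop's real code caches a sentinel).
        return null;
      }
    });
  }

  public static void main(String[] args) {
    System.out.println(getClassByNameOrNull("java.lang.String")); // prints class java.lang.String
    System.out.println(getClassByNameOrNull("no.such.Clazz"));    // prints null
  }
}
```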






[jira] [Commented] (HADOOP-16638) Use Relative URLs in Hadoop KMS WebApps

2019-10-08 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16946941#comment-16946941
 ] 

Hadoop QA commented on HADOOP-16638:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
51s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
16s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  3m  
5s{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
43m 13s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  2m 
56s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  2m 56s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 40s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
27s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 66m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.2 Server=19.03.2 Image:yetus/hadoop:1dde3efb91e |
| JIRA Issue | HADOOP-16638 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12982489/HADOOP-16638.2.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  |
| uname | Linux a55dec848c8d 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 91320b4 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| compile | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16581/artifact/out/branch-compile-root.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16581/artifact/out/patch-compile-root.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16581/artifact/out/patch-compile-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16581/testReport/ |
| Max. process+thread count | 307 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-kms U: 
hadoop-common-project/hadoop-kms |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16581/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Use Relative URLs in Hadoop KMS WebApps
> ---
>
> Key: HADOOP-16638
> URL: https://issues.apache.org/jira/browse/HADOOP-16638
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: 

[jira] [Commented] (HADOOP-16643) Update netty4 to the latest 4.1.42

2019-10-08 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16946927#comment-16946927
 ] 

Hadoop QA commented on HADOOP-16643:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
34m 14s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 58s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
12s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 52m 23s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.2 Server=19.03.2 Image:yetus/hadoop:1dde3efb91e |
| JIRA Issue | HADOOP-16643 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12982486/HADOOP-16643.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  xml  |
| uname | Linux 035cc07e4c01 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 91320b4 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16580/testReport/ |
| Max. process+thread count | 341 (vs. ulimit of 5500) |
| modules | C: hadoop-project U: hadoop-project |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16580/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Update netty4 to the latest 4.1.42
> --
>
> Key: HADOOP-16643
> URL: https://issues.apache.org/jira/browse/HADOOP-16643
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HADOOP-16643.001.patch
>
>
> The latest netty is out. Let's update it.




[jira] [Commented] (HADOOP-15616) Incorporate Tencent Cloud COS File System Implementation

2019-10-08 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16946911#comment-16946911
 ] 

Wei-Chiu Chuang commented on HADOOP-15616:
--

This is in trunk, right? We should update the Fix Version to 3.3.0, so that RM 
doesn't skip this one.

> Incorporate Tencent Cloud COS File System Implementation
> 
>
> Key: HADOOP-15616
> URL: https://issues.apache.org/jira/browse/HADOOP-15616
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/cos
>Reporter: Junping Du
>Assignee: YangY
>Priority: Major
> Attachments: HADOOP-15616.001.patch, HADOOP-15616.002.patch, 
> HADOOP-15616.003.patch, HADOOP-15616.004.patch, HADOOP-15616.005.patch, 
> HADOOP-15616.006.patch, HADOOP-15616.007.patch, HADOOP-15616.008.patch, 
> HADOOP-15616.009.patch, HADOOP-15616.010.patch, HADOOP-15616.011.patch, 
> Tencent-COS-Integrated-v2.pdf, Tencent-COS-Integrated.pdf
>
>
> Tencent Cloud is one of the top two cloud vendors in the Chinese market, and its 
> object store COS ([https://intl.cloud.tencent.com/product/cos]) is widely used 
> among China’s cloud users, but today it is hard for Hadoop users to access data 
> stored on COS because Hadoop has no native support for it.
> This work aims to integrate Tencent cloud COS with Hadoop/Spark/Hive, just 
> like what we do before for S3, ADL, OSS, etc. With simple configuration, 
> Hadoop applications can read/write data from COS without any code change.






[GitHub] [hadoop] hadoop-yetus commented on issue #763: [WIP] HADOOP-15984. Update jersey from 1.19 to 2.x

2019-10-08 Thread GitBox
hadoop-yetus commented on issue #763: [WIP] HADOOP-15984. Update jersey from 
1.19 to 2.x
URL: https://github.com/apache/hadoop/pull/763#issuecomment-539519765
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 35 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 24 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1236 | trunk passed |
   | -1 | compile | 198 | root in trunk failed. |
   | +1 | checkstyle | 166 | trunk passed |
   | +1 | mvnsite | 256 | trunk passed |
   | +1 | shadedclient | 1281 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 270 | trunk passed |
   | 0 | spotbugs | 72 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | 0 | findbugs | 17 | branch/hadoop-project no findbugs output file 
(findbugsXml.xml) |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 25 | Maven dependency ordering for patch |
   | -1 | mvninstall | 20 | hadoop-hdfs-rbf in the patch failed. |
   | -1 | compile | 205 | root in the patch failed. |
   | -1 | javac | 205 | root in the patch failed. |
   | -0 | checkstyle | 169 | root: The patch generated 11 new + 255 unchanged - 
14 fixed = 266 total (was 269) |
   | -1 | mvnsite | 27 | hadoop-hdfs-rbf in the patch failed. |
   | -1 | whitespace | 0 | The patch has 1 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | +1 | xml | 13 | The patch has no ill-formed XML file. |
   | -1 | shadedclient | 253 | patch has errors when building and testing our 
client artifacts. |
   | -1 | javadoc | 48 | hadoop-hdfs-project_hadoop-hdfs-rbf generated 7 new + 
0 unchanged - 0 fixed = 7 total (was 0) |
   | 0 | findbugs | 13 | hadoop-project has no data from findbugs |
   | -1 | findbugs | 43 | hadoop-common-project/hadoop-kms generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | -1 | findbugs | 23 | hadoop-hdfs-rbf in the patch failed. |
   ||| _ Other Tests _ |
   | +1 | unit | 12 | hadoop-project in the patch passed. |
   | +1 | unit | 575 | hadoop-common in the patch passed. |
   | +1 | unit | 212 | hadoop-kms in the patch passed. |
   | -1 | unit | 7059 | hadoop-hdfs in the patch failed. |
   | -1 | unit | 298 | hadoop-hdfs-httpfs in the patch failed. |
   | -1 | unit | 32 | hadoop-hdfs-rbf in the patch failed. |
   | +1 | asflicense | 41 | The patch does not generate ASF License warnings. |
   | | | 13806 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-common-project/hadoop-kms |
   |  |  Dead store to requestURL in 
org.apache.hadoop.crypto.key.kms.server.KMS.createKey(Map)  At 
KMS.java:org.apache.hadoop.crypto.key.kms.server.KMS.createKey(Map)  At 
KMS.java:[line 181] |
   | Failed junit tests | hadoop.hdfs.TestEncryptionZones |
   |   | hadoop.hdfs.web.TestWebHDFSForHA |
   |   | hadoop.hdfs.TestEncryptionZonesWithKMS |
   |   | hadoop.hdfs.TestQuota |
   |   | hadoop.hdfs.web.TestHttpsFileSystem |
   |   | hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes |
   |   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
   |   | hadoop.hdfs.TestDistributedFileSystem |
   |   | hadoop.hdfs.TestMultipleNNPortQOP |
   |   | hadoop.hdfs.security.TestDelegationToken |
   |   | hadoop.hdfs.server.namenode.TestAuditLogger |
   |   | hadoop.hdfs.web.TestWebHdfsWithRestCsrfPreventionFilter |
   |   | hadoop.hdfs.web.TestWebHDFS |
   |   | hadoop.hdfs.web.TestWebHdfsWithAuthenticationFilter |
   |   | hadoop.hdfs.TestWriteReadStripedFile |
   |   | hadoop.hdfs.tools.TestWebHDFSStoragePolicyCommands |
   |   | hadoop.hdfs.server.namenode.TestAuditLogs |
   |   | hadoop.hdfs.web.TestWebHdfsTokens |
   |   | hadoop.hdfs.security.TestDelegationTokenForProxyUser |
   |   | hadoop.hdfs.web.TestFSMainOperationsWebHdfs |
   |   | hadoop.hdfs.web.TestWebHdfsUrl |
   |   | hadoop.hdfs.server.namenode.TestRedudantBlocks |
   |   | hadoop.hdfs.tools.TestViewFSStoragePolicyCommands |
   |   | hadoop.hdfs.server.namenode.ha.TestHAAppend |
   |   | hadoop.hdfs.web.TestWebHDFSXAttr |
   |   | hadoop.hdfs.web.TestWebHdfsFileSystemContract |
   |   | hadoop.fs.http.client.TestHttpFSFWithSWebhdfsFileSystem |
   |   | hadoop.fs.http.server.TestHttpFSServer |
   |   | hadoop.fs.http.client.TestHttpFSFWithWebhdfsFileSystem |
   |   | hadoop.fs.http.client.TestHttpFSWithHttpFSFileSystem |
   |   | hadoop.fs.http.client.TestHttpFSFileSystemLocalFileSystem |
   |   | hadoop.fs.http.server.TestHttpFSServerWebServer |
   |   | hadoop.fs.http.server.TestHttpFSServerWebServerWithRandomSecret |
   |   | 

[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #763: [WIP] HADOOP-15984. Update jersey from 1.19 to 2.x

2019-10-08 Thread GitBox
hadoop-yetus commented on a change in pull request #763: [WIP] HADOOP-15984. 
Update jersey from 1.19 to 2.x
URL: https://github.com/apache/hadoop/pull/763#discussion_r332518365
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/wsrs/ParametersProvider.java
 ##
 @@ -57,13 +54,11 @@ public ParametersProvider(String driverParam, Class enumClass,
   }
 
   @Override
-  @SuppressWarnings("unchecked")
-  public Parameters getValue(HttpContext httpContext) {
+  public Parameters provide() {
 Map<String, List<Parameter<?>>> map = new HashMap<String, List<Parameter<?>>>();
-Map<String, List<String>> queryString =
-  httpContext.getRequest().getQueryParameters();
-String str = ((MultivaluedMap<String, String>) queryString).
-getFirst(driverParam);
+
 
 Review comment:
   whitespace:end of line
   





[jira] [Assigned] (HADOOP-16643) Update netty4 to the latest 4.1.42

2019-10-08 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena reassigned HADOOP-16643:
-

Assignee: Lisheng Sun

> Update netty4 to the latest 4.1.42
> --
>
> Key: HADOOP-16643
> URL: https://issues.apache.org/jira/browse/HADOOP-16643
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HADOOP-16643.001.patch
>
>
> The latest netty is out. Let's update it.






[jira] [Updated] (HADOOP-16638) Use Relative URLs in Hadoop KMS WebApps

2019-10-08 Thread David Mollitor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HADOOP-16638:

Attachment: HADOOP-16638.2.patch

> Use Relative URLs in Hadoop KMS WebApps
> ---
>
> Key: HADOOP-16638
> URL: https://issues.apache.org/jira/browse/HADOOP-16638
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: kms
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
> Attachments: HADOOP-16638.1.patch, HADOOP-16638.2.patch
>
>







[jira] [Updated] (HADOOP-16638) Use Relative URLs in Hadoop KMS WebApps

2019-10-08 Thread David Mollitor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HADOOP-16638:

Status: Patch Available  (was: Open)

> Use Relative URLs in Hadoop KMS WebApps
> ---
>
> Key: HADOOP-16638
> URL: https://issues.apache.org/jira/browse/HADOOP-16638
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: kms
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
> Attachments: HADOOP-16638.1.patch, HADOOP-16638.2.patch
>
>







[jira] [Updated] (HADOOP-16638) Use Relative URLs in Hadoop KMS WebApps

2019-10-08 Thread David Mollitor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HADOOP-16638:

Status: Open  (was: Patch Available)

> Use Relative URLs in Hadoop KMS WebApps
> ---
>
> Key: HADOOP-16638
> URL: https://issues.apache.org/jira/browse/HADOOP-16638
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: kms
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
> Attachments: HADOOP-16638.1.patch, HADOOP-16638.2.patch
>
>







[GitHub] [hadoop] christeoh commented on issue #1582: HDDS-2217. Removed redundant LOG4J lines from docker configurations

2019-10-08 Thread GitBox
christeoh commented on issue #1582: HDDS-2217. Removed redundant LOG4J lines 
from docker configurations
URL: https://github.com/apache/hadoop/pull/1582#issuecomment-539503434
 
 
   Is there more detail to the log that explains why the smoketest failed?





[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #763: [WIP] HADOOP-15984. Update jersey from 1.19 to 2.x

2019-10-08 Thread GitBox
hadoop-yetus commented on a change in pull request #763: [WIP] HADOOP-15984. 
Update jersey from 1.19 to 2.x
URL: https://github.com/apache/hadoop/pull/763#discussion_r332491991
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/wsrs/ParametersProvider.java
 ##
 @@ -57,13 +54,11 @@ public ParametersProvider(String driverParam, Class enumClass,
   }
 
   @Override
-  @SuppressWarnings("unchecked")
-  public Parameters getValue(HttpContext httpContext) {
+  public Parameters provide() {
 Map<String, List<Param<?>>> map = new HashMap<String, List<Param<?>>>();
-Map<String, List<String>> queryString =
-  httpContext.getRequest().getQueryParameters();
-String str = ((MultivaluedMap) queryString).
-getFirst(driverParam);
+
 
 Review comment:
   whitespace:end of line
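For context on the diff above (part of the HADOOP-15984 Jersey 1.19 to 2.x migration): Jersey 1's `AbstractHttpContextInjectable.getValue(HttpContext)` receives the request context as a parameter, whereas Jersey 2 providers implement HK2's no-argument `Factory.provide()` and have request state injected into the factory instead. A minimal, dependency-free sketch of that inversion follows; the `Factory` interface and all names below are local illustrative stand-ins, not code from the patch:

```java
import java.util.List;
import java.util.Map;

public class ProvideSketch {

  // Local stand-in mirroring the shape of org.glassfish.hk2.api.Factory<T>,
  // declared here only so the sketch compiles without Jersey/HK2 on the
  // classpath.
  interface Factory<T> {
    T provide();
    void dispose(T instance);
  }

  // Jersey 2 style: the framework injects request state (simulated here by a
  // constructor argument; in real code a @Context-annotated field) and then
  // calls the no-argument provide() once per request.
  static class QueryParamsFactory implements Factory<Map<String, List<String>>> {
    private final Map<String, List<String>> injectedQueryParams;

    QueryParamsFactory(Map<String, List<String>> injectedQueryParams) {
      this.injectedQueryParams = injectedQueryParams;
    }

    @Override
    public Map<String, List<String>> provide() {
      // Jersey 1 code would instead have read these from the HttpContext
      // argument of getValue(HttpContext).
      return injectedQueryParams;
    }

    @Override
    public void dispose(Map<String, List<String>> instance) {
      // nothing to release
    }
  }

  public static void main(String[] args) {
    Map<String, List<String>> query = Map.of("op", List.of("LISTSTATUS"));
    Factory<Map<String, List<String>>> factory = new QueryParamsFactory(query);
    System.out.println(factory.provide().get("op").get(0)); // prints LISTSTATUS
  }
}
```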
   





[GitHub] [hadoop] hadoop-yetus commented on issue #763: [WIP] HADOOP-15984. Update jersey from 1.19 to 2.x

2019-10-08 Thread GitBox
hadoop-yetus commented on issue #763: [WIP] HADOOP-15984. Update jersey from 
1.19 to 2.x
URL: https://github.com/apache/hadoop/pull/763#issuecomment-539497460
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 1135 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 70 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1388 | trunk passed |
   | -1 | compile | 199 | root in trunk failed. |
   | +1 | checkstyle | 166 | trunk passed |
   | +1 | mvnsite | 290 | trunk passed |
   | +1 | shadedclient | 1293 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 293 | trunk passed |
   | 0 | spotbugs | 73 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | 0 | findbugs | 18 | branch/hadoop-project no findbugs output file 
(findbugsXml.xml) |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 35 | Maven dependency ordering for patch |
   | -1 | mvninstall | 21 | hadoop-hdfs-rbf in the patch failed. |
   | -1 | compile | 194 | root in the patch failed. |
   | -1 | javac | 194 | root in the patch failed. |
   | -0 | checkstyle | 171 | root: The patch generated 11 new + 254 unchanged - 
14 fixed = 265 total (was 268) |
   | -1 | mvnsite | 25 | hadoop-hdfs-rbf in the patch failed. |
   | -1 | whitespace | 0 | The patch has 1 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | +1 | xml | 8 | The patch has no ill-formed XML file. |
   | -1 | shadedclient | 233 | patch has errors when building and testing our 
client artifacts. |
   | -1 | javadoc | 44 | hadoop-hdfs-project_hadoop-hdfs-rbf generated 7 new + 
0 unchanged - 0 fixed = 7 total (was 0) |
   | 0 | findbugs | 14 | hadoop-project has no data from findbugs |
   | -1 | findbugs | 44 | hadoop-common-project/hadoop-kms generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | -1 | findbugs | 26 | hadoop-hdfs-rbf in the patch failed. |
   ||| _ Other Tests _ |
   | +1 | unit | 18 | hadoop-project in the patch passed. |
   | +1 | unit | 623 | hadoop-common in the patch passed. |
   | +1 | unit | 248 | hadoop-kms in the patch passed. |
   | -1 | unit | 7701 | hadoop-hdfs in the patch failed. |
   | -1 | unit | 231 | hadoop-hdfs-httpfs in the patch failed. |
   | -1 | unit | 26 | hadoop-hdfs-rbf in the patch failed. |
   | +1 | asflicense | 34 | The patch does not generate ASF License warnings. |
   | | | 15791 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-common-project/hadoop-kms |
   |  |  Dead store to requestURL in 
org.apache.hadoop.crypto.key.kms.server.KMS.createKey(Map)  At 
KMS.java:org.apache.hadoop.crypto.key.kms.server.KMS.createKey(Map)  At 
KMS.java:[line 181] |
   | Failed junit tests | hadoop.hdfs.web.TestWebHdfsTokens |
   |   | hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes |
   |   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
   |   | hadoop.hdfs.web.TestWebHdfsWithRestCsrfPreventionFilter |
   |   | hadoop.hdfs.tools.TestWebHDFSStoragePolicyCommands |
   |   | hadoop.hdfs.server.namenode.TestAuditLogger |
   |   | hadoop.hdfs.web.TestWebHDFSXAttr |
   |   | hadoop.hdfs.web.TestWebHdfsUrl |
   |   | hadoop.hdfs.TestDistributedFileSystem |
   |   | hadoop.hdfs.server.namenode.TestAuditLogs |
   |   | hadoop.hdfs.web.TestWebHdfsWithAuthenticationFilter |
   |   | hadoop.hdfs.security.TestDelegationTokenForProxyUser |
   |   | hadoop.hdfs.web.TestFSMainOperationsWebHdfs |
   |   | hadoop.hdfs.TestEncryptionZonesWithKMS |
   |   | hadoop.hdfs.tools.TestViewFSStoragePolicyCommands |
   |   | hadoop.hdfs.web.TestHttpsFileSystem |
   |   | hadoop.hdfs.web.TestWebHDFS |
   |   | hadoop.hdfs.TestEncryptionZones |
   |   | hadoop.fs.TestSymlinkHdfsFileSystem |
   |   | hadoop.hdfs.web.TestWebHDFSForHA |
   |   | hadoop.fs.TestSymlinkHdfsFileContext |
   |   | hadoop.hdfs.TestWriteReadStripedFile |
   |   | hadoop.hdfs.web.TestWebHdfsFileSystemContract |
   |   | hadoop.hdfs.TestQuota |
   |   | hadoop.fs.http.client.TestHttpFSFWithSWebhdfsFileSystem |
   |   | hadoop.fs.http.server.TestHttpFSServerNoXAttrs |
   |   | hadoop.fs.http.server.TestHttpFSServerWebServer |
   |   | hadoop.fs.http.server.TestHttpFSServer |
   |   | hadoop.fs.http.client.TestHttpFSWithHttpFSFileSystem |
   |   | hadoop.fs.http.client.TestHttpFSFWithWebhdfsFileSystem |
   |   | hadoop.fs.http.server.TestHttpFSServerWebServerWithRandomSecret |
   |   | hadoop.fs.http.server.TestHttpFSServerNoACLs |
   |   | hadoop.fs.http.client.TestHttpFSFileSystemLocalFileSystem |
   
   
   | Subsystem | 

[jira] [Updated] (HADOOP-16643) Update netty4 to the latest 4.1.42

2019-10-08 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-16643:
-
Status: Patch Available  (was: Open)

> Update netty4 to the latest 4.1.42
> --
>
> Key: HADOOP-16643
> URL: https://issues.apache.org/jira/browse/HADOOP-16643
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Priority: Major
> Attachments: HADOOP-16643.001.patch
>
>
> The latest netty is out. Let's update it.






[jira] [Updated] (HADOOP-16643) Update netty4 to the latest 4.1.42

2019-10-08 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-16643:
-
Attachment: HADOOP-16643.001.patch

> Update netty4 to the latest 4.1.42
> --
>
> Key: HADOOP-16643
> URL: https://issues.apache.org/jira/browse/HADOOP-16643
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Priority: Major
> Attachments: HADOOP-16643.001.patch
>
>
> The latest netty is out. Let's update it.






[jira] [Commented] (HADOOP-16492) Support HuaweiCloud Object Storage - as a file system in Hadoop

2019-10-08 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16946805#comment-16946805
 ] 

Hadoop QA commented on HADOOP-16492:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m  
4s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 39 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 52s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-cloud-storage-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
20s{color} | {color:red} hadoop-cloud-storage-project in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
10s{color} | {color:red} hadoop-huaweicloud in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
15s{color} | {color:red} hadoop-cloud-storage-project in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 15s{color} 
| {color:red} hadoop-cloud-storage-project in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 14s{color} | {color:orange} The patch fails to run checkstyle in 
hadoop-cloud-storage-project {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
18s{color} | {color:red} hadoop-cloud-storage-project in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
10s{color} | {color:red} hadoop-huaweicloud in the patch failed. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 18 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
1s{color} | {color:red} The patch 5 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
7s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 30s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-cloud-storage-project {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
12s{color} | {color:red} hadoop-huaweicloud in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
18s{color} | {color:red} hadoop-cloud-storage-project in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
13s{color} | {color:red} hadoop-huaweicloud in the patch failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 18s{color} 
| {color:red} hadoop-cloud-storage-project in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 
