[jira] [Commented] (HADOOP-15926) Document upgrading the section in NOTICE.txt when upgrading the version of AWS SDK

2018-11-14 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16687607#comment-16687607
 ] 

Hudson commented on HADOOP-15926:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15430 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15430/])
HADOOP-15926. Document upgrading the section in NOTICE.txt when (aajisaka: rev 
66b1335bb3a9a6f3a3db455540c973d4a85bef73)
* (edit) hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md


> Document upgrading the section in NOTICE.txt when upgrading the version of 
> AWS SDK
> --
>
> Key: HADOOP-15926
> URL: https://issues.apache.org/jira/browse/HADOOP-15926
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: newbie
> Fix For: 3.3.0, 3.2.1
>
> Attachments: HADOOP-15926.001.patch, HADOOP-15926.002.patch
>
>
> Reported by [~ste...@apache.org]
> bq. Hadoop 3.2 + has a section in 
> hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md about 
> what to do when updating the SDK...this needs to be added there. Anyone fancy 
> supplying a patch?
> https://issues.apache.org/jira/browse/HADOOP-15899?focusedCommentId=16675121&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16675121



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] hadoop pull request #439: YARN-8833 fix compute shares may lock the scheduli...

2018-11-14 Thread yoelee
GitHub user yoelee opened a pull request:

https://github.com/apache/hadoop/pull/439

YARN-8833 fix compute shares may lock the scheduling process

When computing fair shares, an integer overflow can be triggered, sending 
the computation into an infinite loop that blocks the scheduling process.
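
A minimal, self-contained illustration of the overflow pattern described 
above (hypothetical values; not the actual ComputeFairShares code):

{code:java}
public class OverflowLoop {
    public static void main(String[] args) {
        long target = 3_000_000_000L; // larger than Integer.MAX_VALUE
        int share = 1;
        // Without an overflow guard, doubling wraps the int to a negative
        // value and then to 0, so 'share < target' stays true forever and
        // the loop never exits.
        while (share < target) {
            share *= 2;
            if (share <= 0) { // the kind of guard the fix needs
                System.out.println("overflow detected, breaking");
                break;
            }
        }
    }
}
{code}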

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/yoelee/hadoop trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/439.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #439


commit 39a6f7cab193be910bfb34265ceb696ddbd78da5
Author: liyakun.hit 
Date:   2018-11-15T07:28:34Z

YARN-8833 fix compute shares may lock the scheduling process




---

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15926) Document upgrading the section in NOTICE.txt when upgrading the version of AWS SDK

2018-11-14 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15926:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.2.1
   3.3.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk and branch-3.2. Thanks [~dineshchitlangia] for the 
contribution!

> Document upgrading the section in NOTICE.txt when upgrading the version of 
> AWS SDK
> --
>
> Key: HADOOP-15926
> URL: https://issues.apache.org/jira/browse/HADOOP-15926
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: newbie
> Fix For: 3.3.0, 3.2.1
>
> Attachments: HADOOP-15926.001.patch, HADOOP-15926.002.patch
>
>
> Reported by [~ste...@apache.org]
> bq. Hadoop 3.2 + has a section in 
> hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md about 
> what to do when updating the SDK...this needs to be added there. Anyone fancy 
> supplying a patch?
> https://issues.apache.org/jira/browse/HADOOP-15899?focusedCommentId=16675121&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16675121



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15926) Document upgrading the section in NOTICE.txt when upgrading the version of AWS SDK

2018-11-14 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16687586#comment-16687586
 ] 

Akira Ajisaka commented on HADOOP-15926:


+1

> Document upgrading the section in NOTICE.txt when upgrading the version of 
> AWS SDK
> --
>
> Key: HADOOP-15926
> URL: https://issues.apache.org/jira/browse/HADOOP-15926
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-15926.001.patch, HADOOP-15926.002.patch
>
>
> Reported by [~ste...@apache.org]
> bq. Hadoop 3.2 + has a section in 
> hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md about 
> what to do when updating the SDK...this needs to be added there. Anyone fancy 
> supplying a patch?
> https://issues.apache.org/jira/browse/HADOOP-15899?focusedCommentId=16675121&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16675121



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15926) Document upgrading the section in NOTICE.txt when upgrading the version of AWS SDK

2018-11-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16687579#comment-16687579
 ] 

Hadoop QA commented on HADOOP-15926:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
31m 12s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 32s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 44m 46s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15926 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948267/HADOOP-15926.002.patch
 |
| Optional Tests |  dupname  asflicense  mvnsite  |
| uname | Linux 8b589022f0be 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / df5e863 |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 414 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15526/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Document upgrading the section in NOTICE.txt when upgrading the version of 
> AWS SDK
> --
>
> Key: HADOOP-15926
> URL: https://issues.apache.org/jira/browse/HADOOP-15926
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-15926.001.patch, HADOOP-15926.002.patch
>
>
> Reported by [~ste...@apache.org]
> bq. Hadoop 3.2 + has a section in 
> hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md about 
> what to do when updating the SDK...this needs to be added there. Anyone fancy 
> supplying a patch?
> https://issues.apache.org/jira/browse/HADOOP-15899?focusedCommentId=16675121&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16675121



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15922) DelegationTokenAuthenticationFilter get wrong doAsUser since it does not decode URL

2018-11-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16687556#comment-16687556
 ] 

Hadoop QA commented on HADOOP-15922:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
34s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 14s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
41s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 52s{color} | {color:orange} hadoop-common-project: The patch generated 1 new 
+ 93 unchanged - 0 fixed = 94 total (was 93) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 39s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m  5s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
12s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
59s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}104m 43s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.util.TestReadWriteDiskValidator |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15922 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948256/HADOOP-15922.002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 1a49156d0113 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / df5e863 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |

[jira] [Commented] (HADOOP-14739) Add build instruction for docker for Mac instead of docker toolbox.

2018-11-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16687553#comment-16687553
 ] 

Hadoop QA commented on HADOOP-14739:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 12m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 15s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 12m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 1s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
13s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  0s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 17m 
27s{color} | {color:green} root in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}114m 20s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-14739 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948255/HADOOP-14739.001.patch
 |
| Optional Tests |  dupname  asflicense  mvnsite  unit  shellcheck  shelldocs  |
| uname | Linux db822977f070 4.4.0-134-generic #160~14.04.1-Ubuntu SMP Fri Aug 
17 11:07:07 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / df5e863 |
| maven | version: Apache Maven 3.3.9 |
| shellcheck | v0.4.6 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15522/testReport/ |
| Max. process+thread count | 318 (vs. ulimit of 1) |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15522/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Add build instruction for docker for Mac instead of docker toolbox.
> ---
>
> Key: HADOOP-14739
> URL: https://issues.apache.org/jira/browse/HADOOP-14739
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, documentation
>Reporter: Akira Ajisaka
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-14739.001.patch
>
>
> HADOOP-12575 added build instruction for docker toolbox.
> Now Docker for Mac (https://www.docker.com/docker-mac) is available and it 
> can skip some procedures written in BUILDING.txt.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15928) libhdfs logs errors when opened FS doesn't support ByteBufferReadable

2018-11-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16687550#comment-16687550
 ] 

Hadoop QA commented on HADOOP-15928:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
37m  4s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  6m 
15s{color} | {color:green} hadoop-hdfs-native-client in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 60m 51s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15928 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948258/HADOOP-15928.002.patch
 |
| Optional Tests |  dupname  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux c73a200f55b3 4.4.0-134-generic #160~14.04.1-Ubuntu SMP Fri Aug 
17 11:07:07 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / df5e863 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15524/testReport/ |
| Max. process+thread count | 341 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15524/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> libhdfs logs errors when opened FS doesn't support ByteBufferReadable
> -
>
> Key: HADOOP-15928
> URL: https://issues.apache.org/jira/browse/HADOOP-15928
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Major
>  Labels: libhdfs
> Fix For: 3.0.3
>
> Attachments: HADOOP-15928.001.patch, HADOOP-15928.002.patch
>
>
> Problem:
> 
> There is excessive error logging when a file is opened by libhdfs 
> (DFSClient/HDFS) in S3 

[jira] [Commented] (HADOOP-15926) Document upgrading the section in NOTICE.txt when upgrading the version of AWS SDK

2018-11-14 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16687534#comment-16687534
 ] 

Dinesh Chitlangia commented on HADOOP-15926:


[~ajisakaa] - Thanks for catching that! Attached patch 002 that addresses 
review comments.

> Document upgrading the section in NOTICE.txt when upgrading the version of 
> AWS SDK
> --
>
> Key: HADOOP-15926
> URL: https://issues.apache.org/jira/browse/HADOOP-15926
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-15926.001.patch, HADOOP-15926.002.patch
>
>
> Reported by [~ste...@apache.org]
> bq. Hadoop 3.2 + has a section in 
> hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md about 
> what to do when updating the SDK...this needs to be added there. Anyone fancy 
> supplying a patch?
> https://issues.apache.org/jira/browse/HADOOP-15899?focusedCommentId=16675121&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16675121



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15926) Document upgrading the section in NOTICE.txt when upgrading the version of AWS SDK

2018-11-14 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HADOOP-15926:
---
Attachment: HADOOP-15926.002.patch

> Document upgrading the section in NOTICE.txt when upgrading the version of 
> AWS SDK
> --
>
> Key: HADOOP-15926
> URL: https://issues.apache.org/jira/browse/HADOOP-15926
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-15926.001.patch, HADOOP-15926.002.patch
>
>
> Reported by [~ste...@apache.org]
> bq. Hadoop 3.2 + has a section in 
> hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md about 
> what to do when updating the SDK...this needs to be added there. Anyone fancy 
> supplying a patch?
> https://issues.apache.org/jira/browse/HADOOP-15899?focusedCommentId=16675121&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16675121



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15937) [JDK 11] Update maven-shade-plugin.version to 3.2.1

2018-11-14 Thread Devaraj K (JIRA)
Devaraj K created HADOOP-15937:
--

 Summary: [JDK 11] Update maven-shade-plugin.version to 3.2.1
 Key: HADOOP-15937
 URL: https://issues.apache.org/jira/browse/HADOOP-15937
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
 Environment: openjdk version "11" 2018-09-25
Reporter: Devaraj K


Build fails with the below error:
{code:xml}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-shade-plugin:3.2.0:shade (default) on project 
hadoop-yarn-csi: Error creating shaded jar: Problem shading JAR 
/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/target/hadoop-yarn-csi-3.3.0-SNAPSHOT.jar
 entry csi/v0/Csi$GetPluginInfoRequestOrBuilder.class: 
org.apache.maven.plugin.MojoExecutionException: Error in ASM processing class 
csi/v0/Csi$GetPluginInfoRequestOrBuilder.class: UnsupportedOperationException 
-> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal 
org.apache.maven.plugins:maven-shade-plugin:3.2.0:shade (default) on project 
hadoop-yarn-csi: Error creating shaded jar: Problem shading JAR 
/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/target/hadoop-yarn-csi-3.3.0-SNAPSHOT.jar
 entry csi/v0/Csi$GetPluginInfoRequestOrBuilder.class: 
org.apache.maven.plugin.MojoExecutionException: Error in ASM processing class 
csi/v0/Csi$GetPluginInfoRequestOrBuilder.class
at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
(MojoExecutor.java:213)
...
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-yarn-csi

{code}


Updating maven-shade-plugin.version to 3.2.1 fixes the issue.
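
For reference, a sketch of the change, assuming the property is defined in 
hadoop-project/pom.xml (the property name comes from the issue title):

{code:xml}
<!-- hadoop-project/pom.xml: bump the shade plugin so its bundled ASM can
     process JDK 11 class files -->
<maven-shade-plugin.version>3.2.1</maven-shade-plugin.version>
{code}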



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15936) [JDK 11] MiniDFSClusterManager & MiniHadoopClusterManager compilation fails due to the usage of '_' as identifier

2018-11-14 Thread Devaraj K (JIRA)
Devaraj K created HADOOP-15936:
--

 Summary: [JDK 11] MiniDFSClusterManager & MiniHadoopClusterManager 
compilation fails due to the usage of '_' as identifier
 Key: HADOOP-15936
 URL: https://issues.apache.org/jira/browse/HADOOP-15936
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
 Environment: openjdk version "11" 2018-09-25
Reporter: Devaraj K


{code:xml}
[ERROR] COMPILATION ERROR :
[INFO] -
[ERROR] 
/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/test/MiniDFSClusterManager.java:[130,37]
 as of release 9, '_' is a keyword, and may not be used as an identifier
[INFO] 1 error
{code}

{code:xml}
[ERROR] COMPILATION ERROR :
[INFO] -
[ERROR] 
/hadoop/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/MiniHadoopClusterManager.java:[140,37]
 as of release 9, '_' is a keyword, and may not be used as an identifier
[INFO] 1 error
{code}
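
The fix is mechanical: rename the '_' identifier. A minimal illustration 
(not the actual MiniDFSClusterManager/MiniHadoopClusterManager code):

{code:java}
public class Underscore {
    public static void main(String[] args) {
        // JDK 8 only warns here; JDK 9+ rejects it outright:
        // String _ = "value";  // error: as of release 9, '_' is a keyword
        String unused = "value"; // fix: give the identifier a real name
        System.out.println(unused);
    }
}
{code}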




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15935) [JDK 11] Update maven.plugin-tools.version to 3.6.0

2018-11-14 Thread Devaraj K (JIRA)
Devaraj K created HADOOP-15935:
--

 Summary: [JDK 11] Update maven.plugin-tools.version to 3.6.0
 Key: HADOOP-15935
 URL: https://issues.apache.org/jira/browse/HADOOP-15935
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
 Environment: openjdk version "11" 2018-09-25
Reporter: Devaraj K


Build fails with the below error:

{code:xml}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-plugin-plugin:3.5.1:descriptor 
(default-descriptor) on project hadoop-maven-plugins: Execution 
default-descriptor of goal 
org.apache.maven.plugins:maven-plugin-plugin:3.5.1:descriptor failed.: 
IllegalArgumentException -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal 
org.apache.maven.plugins:maven-plugin-plugin:3.5.1:descriptor 
(default-descriptor) on project hadoop-maven-plugins: Execution 
default-descriptor of goal 
org.apache.maven.plugins:maven-plugin-plugin:3.5.1:descriptor failed.
at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
(MojoExecutor.java:213)
...
at org.codehaus.plexus.classworlds.launcher.Launcher.main 
(Launcher.java:356)
[ERROR]
[ERROR]
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/PluginExecutionException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-maven-plugins

{code}


Updating maven.plugin-tools.version to 3.6.0 fixes this issue.
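
Likewise, a sketch of the corresponding property bump, assuming the same 
hadoop-project/pom.xml location:

{code:xml}
<maven.plugin-tools.version>3.6.0</maven.plugin-tools.version>
{code}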



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15926) Document upgrading the section in NOTICE.txt when upgrading the version of AWS SDK

2018-11-14 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16687525#comment-16687525
 ] 

Akira Ajisaka commented on HADOOP-15926:


I'm thinking it's better to place the sentence between

bq. Create a private git branch of trunk for JIRA, and in 
hadoop-project/pom.xml update the aws-java-sdk.version to the new SDK version.

and

bq. Do a clean build and rerun all the hadoop-aws tests, with and without the 
-Ds3guard -Ddynamodb options. This includes the -Pscale set, with a role 
defined for the assumed role tests. in fs.s3a.assumed.role.arn for testing 
assumed roles, and fs.s3a.server-side-encryption.key for encryption, for full 
coverage. If you can, scale up the scale tests.

because it's better to make all the source changes in one step.
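
With that placement, the SDK update steps in testing.md would read roughly 
as follows (a sketch; the exact wording is up to the patch):

{code}
1. Create a private git branch of trunk for the JIRA, and in
   hadoop-project/pom.xml update the aws-java-sdk.version to the new SDK
   version.
2. Update the AWS SDK section of NOTICE.txt to match the new SDK version.  (new step)
3. Do a clean build and rerun all the hadoop-aws tests, with and without
   the -Ds3guard -Ddynamodb options.
{code}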

> Document upgrading the section in NOTICE.txt when upgrading the version of 
> AWS SDK
> --
>
> Key: HADOOP-15926
> URL: https://issues.apache.org/jira/browse/HADOOP-15926
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-15926.001.patch
>
>
> Reported by [~ste...@apache.org]
> bq. Hadoop 3.2 + has a section in 
> hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md about 
> what to do when updating the SDK...this needs to be added there. Anyone fancy 
> supplying a patch?
> https://issues.apache.org/jira/browse/HADOOP-15899?focusedCommentId=16675121&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16675121



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12558) distcp documentation is woefully out of date

2018-11-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-12558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16687523#comment-16687523
 ] 

Hadoop QA commented on HADOOP-12558:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  9s{color} 
| {color:red} HADOOP-12558 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-12558 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948260/HADOOP-12558.001.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15525/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> distcp documentation is woefully out of date
> 
>
> Key: HADOOP-12558
> URL: https://issues.apache.org/jira/browse/HADOOP-12558
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, tools/distcp
>Reporter: Allen Wittenauer
>Assignee: Dinesh Chitlangia
>Priority: Critical
>  Labels: newbie
> Attachments: HADOOP-12558.001.patch
>
>
> There are a ton of distcp tune-ables that have zero documentation outside of 
> the source code.  This should be fixed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12558) distcp documentation is woefully out of date

2018-11-14 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-12558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16687517#comment-16687517
 ] 

Dinesh Chitlangia commented on HADOOP-12558:


[~arpitagarwal], [~aw] - Attached patch 001 for your review. Thanks!

> distcp documentation is woefully out of date
> 
>
> Key: HADOOP-12558
> URL: https://issues.apache.org/jira/browse/HADOOP-12558
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, tools/distcp
>Reporter: Allen Wittenauer
>Assignee: Dinesh Chitlangia
>Priority: Critical
>  Labels: newbie
> Attachments: HADOOP-12558.001.patch
>
>
> There are a ton of distcp tune-ables that have zero documentation outside of 
> the source code.  This should be fixed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12558) distcp documentation is woefully out of date

2018-11-14 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-12558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HADOOP-12558:
---
Attachment: HADOOP-12558.001.patch
Status: Patch Available  (was: Reopened)

> distcp documentation is woefully out of date
> 
>
> Key: HADOOP-12558
> URL: https://issues.apache.org/jira/browse/HADOOP-12558
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, tools/distcp
>Reporter: Allen Wittenauer
>Assignee: Dinesh Chitlangia
>Priority: Critical
>  Labels: newbie
> Attachments: HADOOP-12558.001.patch
>
>
> There are a ton of distcp tune-ables that have zero documentation outside of 
> the source code.  This should be fixed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15928) libhdfs logs errors when opened FS doesn't support ByteBufferReadable

2018-11-14 Thread Pranay Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pranay Singh updated HADOOP-15928:
--
Status: Patch Available  (was: In Progress)

> libhdfs logs errors when opened FS doesn't support ByteBufferReadable
> -
>
> Key: HADOOP-15928
> URL: https://issues.apache.org/jira/browse/HADOOP-15928
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Major
>  Labels: libhdfs
> Fix For: 3.0.3
>
> Attachments: HADOOP-15928.001.patch, HADOOP-15928.002.patch
>
>
> Problem:
> 
> There is excessive error logging when a file is opened by libhdfs 
> (DFSClient/HDFS) in an S3 environment. The issue arises because byte-buffer 
> reads are not supported in S3 (HADOOP-14603, "S3A input stream to 
> support ByteBufferReadable").
> The following message is printed repeatedly in the error log/to STDERR:
> --
> UnsupportedOperationException: Byte-buffer read unsupported by input 
> streamjava.lang.UnsupportedOperationException: Byte-buffer read unsupported 
> by input stream
> at 
> org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:150)
> Root cause
> 
> The above exception is printed because opening a file via 
> hdfsOpenFileImpl() calls readDirect(), which hits this exception.
> Fix:
> 
> Since the hdfs client does not explicitly initiate the byte-buffer read (it 
> happens implicitly), we should not generate the error log when opening 
> a file.
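
For context, FSDataInputStream.read(ByteBuffer) throws 
UnsupportedOperationException unless the wrapped stream implements 
ByteBufferReadable, so a caller such as readDirect() has to probe and fall 
back. A minimal sketch of that probe in Java (illustrative only; libhdfs 
itself is C):

{code:java}
import java.nio.ByteBuffer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ByteBufferProbe {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        try (FSDataInputStream in = fs.open(new Path(args[0]))) {
            ByteBuffer buf = ByteBuffer.allocate(4096);
            try {
                // Succeeds only if the wrapped stream is ByteBufferReadable
                System.out.println("byte-buffer read: " + in.read(buf));
            } catch (UnsupportedOperationException e) {
                // Expected for streams like S3A's: fall back quietly to
                // the byte[] read path instead of logging an error.
                byte[] b = new byte[4096];
                System.out.println("fallback read: " + in.read(b, 0, b.length));
            }
        }
    }
}
{code}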



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15928) libhdfs logs errors when opened FS doesn't support ByteBufferReadable

2018-11-14 Thread Pranay Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pranay Singh updated HADOOP-15928:
--
Attachment: HADOOP-15928.002.patch

> libhdfs logs errors when opened FS doesn't support ByteBufferReadable
> -
>
> Key: HADOOP-15928
> URL: https://issues.apache.org/jira/browse/HADOOP-15928
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Major
>  Labels: libhdfs
> Fix For: 3.0.3
>
> Attachments: HADOOP-15928.001.patch, HADOOP-15928.002.patch
>
>
> Problem:
> 
> There is excessive error logging when a file is opened by libhdfs 
> (DFSClient/HDFS) in an S3 environment. The issue arises because byte-buffer 
> reads are not supported in S3 (HADOOP-14603, "S3A input stream to 
> support ByteBufferReadable").
> The following message is printed repeatedly in the error log/to STDERR:
> --
> UnsupportedOperationException: Byte-buffer read unsupported by input 
> streamjava.lang.UnsupportedOperationException: Byte-buffer read unsupported 
> by input stream
> at 
> org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:150)
> Root cause
> 
> The above exception is printed because opening a file via 
> hdfsOpenFileImpl() calls readDirect(), which hits this exception.
> Fix:
> 
> Since the hdfs client does not explicitly initiate the byte-buffer read (it 
> happens implicitly), we should not generate the error log when opening 
> a file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15928) libhdfs logs errors when opened FS doesn't support ByteBufferReadable

2018-11-14 Thread Pranay Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pranay Singh updated HADOOP-15928:
--
Status: In Progress  (was: Patch Available)

> libhdfs logs errors when opened FS doesn't support ByteBufferReadable
> -
>
> Key: HADOOP-15928
> URL: https://issues.apache.org/jira/browse/HADOOP-15928
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Major
>  Labels: libhdfs
> Fix For: 3.0.3
>
> Attachments: HADOOP-15928.001.patch
>
>
> Problem:
> 
> There is excessive error logging when a file is opened by libhdfs 
> (DFSClient/HDFS) in an S3 environment. The issue arises because byte-buffer 
> reads are not supported in S3 (HADOOP-14603, "S3A input stream to 
> support ByteBufferReadable").
> The following message is printed repeatedly in the error log/to STDERR:
> --
> UnsupportedOperationException: Byte-buffer read unsupported by input 
> streamjava.lang.UnsupportedOperationException: Byte-buffer read unsupported 
> by input stream
> at 
> org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:150)
> Root cause
> 
> The above exception is printed because opening a file via 
> hdfsOpenFileImpl() calls readDirect(), which hits this exception.
> Fix:
> 
> Since the hdfs client does not explicitly initiate the byte-buffer read (it 
> happens implicitly), we should not generate the error log when opening 
> a file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15928) libhdfs logs errors when opened FS doesn't support ByteBufferReadable

2018-11-14 Thread Pranay Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16687495#comment-16687495
 ] 

Pranay Singh commented on HADOOP-15928:
---

Thanks [~xiaochen] for the review, I'll update the comment.

> libhdfs logs errors when opened FS doesn't support ByteBufferReadable
> -
>
> Key: HADOOP-15928
> URL: https://issues.apache.org/jira/browse/HADOOP-15928
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Major
>  Labels: libhdfs
> Fix For: 3.0.3
>
> Attachments: HADOOP-15928.001.patch
>
>
> Problem:
> 
> There is excessive error logging when a file is opened by libhdfs 
> (DFSClient/HDFS) in an S3 environment. The issue arises because byte-buffer 
> reads are not supported in S3 (HADOOP-14603, "S3A input stream to 
> support ByteBufferReadable").
> The following message is printed repeatedly in the error log/to STDERR:
> --
> UnsupportedOperationException: Byte-buffer read unsupported by input 
> streamjava.lang.UnsupportedOperationException: Byte-buffer read unsupported 
> by input stream
> at 
> org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:150)
> Root cause
> 
> The above exception is printed because opening a file via 
> hdfsOpenFileImpl() calls readDirect(), which hits this exception.
> Fix:
> 
> Since the hdfs client does not explicitly initiate the byte-buffer read (it 
> happens implicitly), we should not generate the error log when opening 
> a file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15922) DelegationTokenAuthenticationFilter get wrong doAsUser since it does not decode URL

2018-11-14 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16687481#comment-16687481
 ] 

He Xiaoqiao commented on HADOOP-15922:
--

Submitted the v002 patch with a unit test and triggered Jenkins again.
Hi [~ste...@apache.org], would you help review this patch?

> DelegationTokenAuthenticationFilter get wrong doAsUser since it does not 
> decode URL
> ---
>
> Key: HADOOP-15922
> URL: https://issues.apache.org/jira/browse/HADOOP-15922
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, kms
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HADOOP-15922.001.patch, HADOOP-15922.002.patch
>
>
> DelegationTokenAuthenticationFilter gets the wrong doAsUser when the proxy 
> user from the client is a complete Kerberos name (e.g., user/hostn...@realm.com, 
> which is actually acceptable), because DelegationTokenAuthenticationFilter 
> does not decode the DOAS parameter in the URL, which is encoded by 
> {{URLEncoder}} at the client.
> Take KMS as an example:
> a. KMSClientProvider creates a connection to the KMS Server using 
> DelegationTokenAuthenticatedURL#openConnection.
> b. If KMSClientProvider is acting as a doAsUser, it puts {{doas}} with the 
> URL-encoded user as one parameter of the HTTP request.
> {code:java}
> // proxyuser
> if (doAs != null) {
>   extraParams.put(DO_AS, URLEncoder.encode(doAs, "UTF-8"));
> }
> {code}
> c. When the KMS server receives the request, it does not decode the proxy user.
> As a result, the KMS Server will get the wrong proxy user if the proxy user is 
> a complete Kerberos name or includes special characters, and authentication 
> and authorization exceptions will follow.
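
A hedged sketch of the symmetric server-side decode the description implies 
(hypothetical helper and names, not the actual patch):

{code:java}
import java.net.URLDecoder;

public class DoAsDecode {
    // Decode the DOAS query parameter that the client encoded with
    // URLEncoder.encode(doAs, "UTF-8").
    static String decodeDoAs(String doAsParam) throws Exception {
        return doAsParam == null ? null : URLDecoder.decode(doAsParam, "UTF-8");
    }

    public static void main(String[] args) throws Exception {
        // "user/hostname@REALM.COM" arrives URL-encoded from KMSClientProvider
        System.out.println(decodeDoAs("user%2Fhostname%40REALM.COM"));
        // prints: user/hostname@REALM.COM
    }
}
{code}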



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14784) [KMS] Improve KeyAuthorizationKeyProvider#toString()

2018-11-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16687480#comment-16687480
 ] 

Hadoop QA commented on HADOOP-14784:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 35s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
12s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 86m  8s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-14784 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882490/HADOOP-14784.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c1dfc368fe95 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / df5e863 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15521/testReport/ |
| Max. process+thread count | 444 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-kms U: 
hadoop-common-project/hadoop-kms |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15521/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> [KMS] Improve 

[jira] [Updated] (HADOOP-15922) DelegationTokenAuthenticationFilter get wrong doAsUser since it does not decode URL

2018-11-14 Thread He Xiaoqiao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

He Xiaoqiao updated HADOOP-15922:
-
Attachment: HADOOP-15922.002.patch

> DelegationTokenAuthenticationFilter get wrong doAsUser since it does not 
> decode URL
> ---
>
> Key: HADOOP-15922
> URL: https://issues.apache.org/jira/browse/HADOOP-15922
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, kms
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HADOOP-15922.001.patch, HADOOP-15922.002.patch
>
>
> DelegationTokenAuthenticationFilter gets the wrong doAsUser when the proxy 
> user from the client is a complete Kerberos name (e.g., user/hostn...@realm.com, 
> which is actually acceptable), because DelegationTokenAuthenticationFilter 
> does not decode the DOAS parameter in the URL, which is encoded by 
> {{URLEncoder}} at the client.
> Take KMS as an example:
> a. KMSClientProvider creates a connection to the KMS Server using 
> DelegationTokenAuthenticatedURL#openConnection.
> b. If KMSClientProvider is acting as a doAsUser, it puts {{doas}} with the 
> URL-encoded user as one parameter of the HTTP request.
> {code:java}
> // proxyuser
> if (doAs != null) {
>   extraParams.put(DO_AS, URLEncoder.encode(doAs, "UTF-8"));
> }
> {code}
> c. When the KMS server receives the request, it does not decode the proxy user.
> As a result, the KMS Server will get the wrong proxy user if the proxy user is 
> a complete Kerberos name or includes special characters, and authentication 
> and authorization exceptions will follow.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14739) Add build instruction for docker for Mac instead of docker toolbox.

2018-11-14 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HADOOP-14739:
---
Attachment: HADOOP-14739.001.patch
Status: Patch Available  (was: Open)

[~ajisakaa] san - Attached patch 001 for your review. Thanks!

> Add build instruction for docker for Mac instead of docker toolbox.
> ---
>
> Key: HADOOP-14739
> URL: https://issues.apache.org/jira/browse/HADOOP-14739
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, documentation
>Reporter: Akira Ajisaka
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-14739.001.patch
>
>
> HADOOP-12575 added build instruction for docker toolbox.
> Now Docker for Mac (https://www.docker.com/docker-mac) is available and it 
> can skip some procedures written in BUILDING.txt.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15926) Document upgrading the section in NOTICE.txt when upgrading the version of AWS SDK

2018-11-14 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16687460#comment-16687460
 ] 

Dinesh Chitlangia commented on HADOOP-15926:


[~ajisakaa] - Attached patch 001 for your review. Thanks!

> Document upgrading the section in NOTICE.txt when upgrading the version of 
> AWS SDK
> --
>
> Key: HADOOP-15926
> URL: https://issues.apache.org/jira/browse/HADOOP-15926
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-15926.001.patch
>
>
> Reported by [~ste...@apache.org]
> bq. Hadoop 3.2 + has a section in 
> hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md about 
> what to do when updating the SDK...this needs to be added there. Anyone fancy 
> supplying a patch?
> https://issues.apache.org/jira/browse/HADOOP-15899?focusedCommentId=16675121=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16675121



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-14739) Add build instruction for docker for Mac instead of docker toolbox.

2018-11-14 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia reassigned HADOOP-14739:
--

Assignee: Dinesh Chitlangia

> Add build instruction for docker for Mac instead of docker toolbox.
> ---
>
> Key: HADOOP-14739
> URL: https://issues.apache.org/jira/browse/HADOOP-14739
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, documentation
>Reporter: Akira Ajisaka
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: newbie
>
> HADOOP-12575 added build instruction for docker toolbox.
> Now Docker for Mac (https://www.docker.com/docker-mac) is available, and with 
> it some procedures written in BUILDING.txt can be skipped.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14739) Add build instruction for docker for Mac instead of docker toolbox.

2018-11-14 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16687458#comment-16687458
 ] 

Dinesh Chitlangia commented on HADOOP-14739:


[~ajisakaa] +1 to that thought. This will simplify it for everyone. I will post 
a patch later this week.

> Add build instruction for docker for Mac instead of docker toolbox.
> ---
>
> Key: HADOOP-14739
> URL: https://issues.apache.org/jira/browse/HADOOP-14739
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, documentation
>Reporter: Akira Ajisaka
>Priority: Minor
>  Labels: newbie
>
> HADOOP-12575 added build instruction for docker toolbox.
> Now Docker for Mac (https://www.docker.com/docker-mac) is available, and with 
> it some procedures written in BUILDING.txt can be skipped.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12558) distcp documentation is woefully out of date

2018-11-14 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-12558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16687456#comment-16687456
 ] 

Dinesh Chitlangia commented on HADOOP-12558:


From my initial investigation, the only option not documented is {{xtrack}}: 
Save information about missing source files to the specified directory.

I will post a patch later this week.

> distcp documentation is woefully out of date
> 
>
> Key: HADOOP-12558
> URL: https://issues.apache.org/jira/browse/HADOOP-12558
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, tools/distcp
>Reporter: Allen Wittenauer
>Assignee: Dinesh Chitlangia
>Priority: Critical
>  Labels: newbie
>
> There are a ton of distcp tune-ables that have zero documentation outside of 
> the source code.  This should be fixed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14739) Add build instruction for docker for Mac instead of docker toolbox.

2018-11-14 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16687455#comment-16687455
 ] 

Akira Ajisaka commented on HADOOP-14739:


Now I'm thinking the boot2docker-specific part can be removed from 
{{./start-build-env.sh}}, and BUILDING.txt can be changed from
{noformat}
On Linux:
Install Docker and run this command:

$ ./start-build-env.sh

On Mac:
First make sure Virtualbox and docker toolbox are installed.
You can use docker toolbox as described in 
http://docs.docker.com/mac/step_one/.
$ docker-machine create --driver virtualbox \
--virtualbox-memory "4096" hadoopdev
$ eval $(docker-machine env hadoopdev)
$ ./start-build-env.sh
{noformat}
to
{noformat}
Install Docker and run this command:

$ ./start-build-env.sh
{noformat}

> Add build instruction for docker for Mac instead of docker toolbox.
> ---
>
> Key: HADOOP-14739
> URL: https://issues.apache.org/jira/browse/HADOOP-14739
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, documentation
>Reporter: Akira Ajisaka
>Priority: Minor
>  Labels: newbie
>
> HADOOP-12575 added build instruction for docker toolbox.
> Now Docker for Mac (https://www.docker.com/docker-mac) is available, and with 
> it some procedures written in BUILDING.txt can be skipped.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15926) Document upgrading the section in NOTICE.txt when upgrading the version of AWS SDK

2018-11-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16687454#comment-16687454
 ] 

Hadoop QA commented on HADOOP-15926:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
34m 42s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  6s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m  5s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15926 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948249/HADOOP-15926.001.patch
 |
| Optional Tests |  dupname  asflicense  mvnsite  |
| uname | Linux 636c29e02ce7 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / df5e863 |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 414 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15520/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Document upgrading the section in NOTICE.txt when upgrading the version of 
> AWS SDK
> --
>
> Key: HADOOP-15926
> URL: https://issues.apache.org/jira/browse/HADOOP-15926
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-15926.001.patch
>
>
> Reported by [~ste...@apache.org]
> bq. Hadoop 3.2 + has a section in 
> hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md about 
> what to do when updating the SDK...this needs to be added there. Anyone fancy 
> supplying a patch?
> https://issues.apache.org/jira/browse/HADOOP-15899?focusedCommentId=16675121=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16675121



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15928) libhdfs logs errors when opened FS doesn't support ByteBufferReadable

2018-11-14 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16687451#comment-16687451
 ] 

Xiao Chen commented on HADOOP-15928:


Thanks for working on this Pranay. Code seems fine. I can't understand this 
part of the comment - could you update it?
{code}
 //..., it means that this is the test direct read
 // that is called from hdfsOpenFIleImpl().
{code}

I don't have better ideas for unit testing this and can live with the manual 
test. Please move the jira to HDFS as Steve suggested, by clicking {{More}} 
and then {{Move}}.

> libhdfs logs errors when opened FS doesn't support ByteBufferReadable
> -
>
> Key: HADOOP-15928
> URL: https://issues.apache.org/jira/browse/HADOOP-15928
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Major
>  Labels: libhdfs
> Fix For: 3.0.3
>
> Attachments: HADOOP-15928.001.patch
>
>
> Problem:
> 
> There is excessive error logging when a file is opened by libhdfs 
> (DFSClient/HDFS) in an S3 environment. This happens because byte-buffer read 
> is not supported there; see HADOOP-14603 "S3A input stream to support 
> ByteBufferReadable".
> The following message is printed repeatedly to the error log/STDERR:
> --
> UnsupportedOperationException: Byte-buffer read unsupported by input 
> streamjava.lang.UnsupportedOperationException: Byte-buffer read unsupported 
> by input stream
> at 
> org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:150)
> Root cause
> 
> After investigating the issue, it appears that the above exception is printed 
> because opening a file via hdfsOpenFileImpl() calls readDirect(), which hits 
> this exception.
> Fix:
> 
> Since the hdfs client is not initiating the byte-buffered read, which happens 
> in an implicit manner, we should not generate the error log when opening a 
> file.
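
For reference, a minimal sketch of the behaviour the fix aims for, assuming a 
plain byte[] read is an acceptable fallback when the wrapped stream does not 
implement ByteBufferReadable. libhdfs itself is C, so the idea is shown here 
in Java against the public FileSystem API; this is not the actual patch:

{code:java}
import java.io.IOException;
import java.nio.ByteBuffer;
import org.apache.hadoop.fs.ByteBufferReadable;
import org.apache.hadoop.fs.FSDataInputStream;

public class DirectReadFallback {
  /** Probe the capability up front instead of triggering the exception. */
  static int read(FSDataInputStream in, ByteBuffer buf) throws IOException {
    if (in.getWrappedStream() instanceof ByteBufferReadable) {
      return in.read(buf);                     // zero-copy byte-buffer read
    }
    byte[] tmp = new byte[buf.remaining()];    // fallback: heap-array read
    int n = in.read(tmp, 0, tmp.length);
    if (n > 0) {
      buf.put(tmp, 0, n);
    }
    return n;
  }
}
{code}

Checking for ByteBufferReadable before attempting the direct read avoids 
provoking the UnsupportedOperationException at open time, which is what floods 
the log.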



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14739) Add build instruction for docker for Mac instead of docker toolbox.

2018-11-14 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16687446#comment-16687446
 ] 

Dinesh Chitlangia commented on HADOOP-14739:


[~elek], [~ajisakaa] - Do we still need to add this to the documentation?

I suggest adding it as a sub-section under 'On Mac'.

 

> Add build instruction for docker for Mac instead of docker toolbox.
> ---
>
> Key: HADOOP-14739
> URL: https://issues.apache.org/jira/browse/HADOOP-14739
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, documentation
>Reporter: Akira Ajisaka
>Priority: Minor
>  Labels: newbie
>
> HADOOP-12575 added build instruction for docker toolbox.
> Now Docker for Mac (https://www.docker.com/docker-mac) is available, and with 
> it some procedures written in BUILDING.txt can be skipped.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15926) Document upgrading the section in NOTICE.txt when upgrading the version of AWS SDK

2018-11-14 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HADOOP-15926:
---
Attachment: HADOOP-15926.001.patch
Status: Patch Available  (was: Open)

> Document upgrading the section in NOTICE.txt when upgrading the version of 
> AWS SDK
> --
>
> Key: HADOOP-15926
> URL: https://issues.apache.org/jira/browse/HADOOP-15926
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-15926.001.patch
>
>
> Reported by [~ste...@apache.org]
> bq. Hadoop 3.2 + has a section in 
> hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md about 
> what to do when updating the SDK...this needs to be added there. Anyone fancy 
> supplying a patch?
> https://issues.apache.org/jira/browse/HADOOP-15899?focusedCommentId=16675121=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16675121



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14784) [KMS] Improve KeyAuthorizationKeyProvider#toString()

2018-11-14 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16687439#comment-16687439
 ] 

Dinesh Chitlangia commented on HADOOP-14784:


+1 LGTM

> [KMS] Improve KeyAuthorizationKeyProvider#toString()
> 
>
> Key: HADOOP-14784
> URL: https://issues.apache.org/jira/browse/HADOOP-14784
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Priority: Trivial
>  Labels: newbie
> Attachments: HADOOP-14784.001.patch
>
>
> When the KMS server starts, it loads KeyProviderCryptoExtension and prints 
> the following message:
> {noformat}
> 2017-08-17 04:57:13,348 INFO 
> org.apache.hadoop.crypto.key.kms.server.KMSWebApp: Initialized 
> KeyProviderCryptoExtension EagerKeyGeneratorKeyProviderCryptoExtension: 
> KeyProviderCryptoExtension: CachingKeyProvider: 
> jceks://file@/var/lib/kms/kms.keystore
> {noformat}
> However, this is confusing as KeyAuthorizationKeyProvider is loaded but not 
> shown in this message. KeyAuthorizationKeyProvider#toString should be 
> improved so that, in addition to its internal provider, it also prints its 
> own class name when loaded.
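
For reference, a minimal sketch of the suggested improvement; the wrapper 
class and field below are illustrative, not the actual KMS source:

{code:java}
import org.apache.hadoop.crypto.key.KeyProvider;

// Illustrative stand-in for KeyAuthorizationKeyProvider's inner reference.
public abstract class DescribedKeyProvider extends KeyProvider {
  private final KeyProvider provider;

  protected DescribedKeyProvider(KeyProvider provider) {
    super(provider.getConf());
    this.provider = provider;
  }

  @Override
  public String toString() {
    // Prepend this class's own name so the KMS startup message shows the
    // full provider chain instead of only the innermost provider.
    return getClass().getSimpleName() + ": " + provider.toString();
  }
}
{code}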



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15925) The config and log of gpg-agent are removed in create-release script

2018-11-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16687418#comment-16687418
 ] 

Hadoop QA commented on HADOOP-15925:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 24s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} shellcheck {color} | {color:red}  0m  
1s{color} | {color:red} The patch generated 1 new + 0 unchanged - 0 fixed = 1 
total (was 0) {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
14s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 52s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 27m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15925 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948239/HADOOP-15925.001.patch
 |
| Optional Tests |  dupname  asflicense  shellcheck  shelldocs  |
| uname | Linux daf4142bc203 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / df5e863 |
| maven | version: Apache Maven 3.3.9 |
| shellcheck | v0.4.6 |
| shellcheck | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15519/artifact/out/diff-patch-shellcheck.txt
 |
| Max. process+thread count | 434 (vs. ulimit of 1) |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15519/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> The config and log of gpg-agent are removed in create-release script
> 
>
> Key: HADOOP-15925
> URL: https://issues.apache.org/jira/browse/HADOOP-15925
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
> Attachments: HADOOP-15925.001.patch
>
>
> The config file and log file of gpg-agent are located in the {{patchprocess}} 
> directory, and then {{git clean -xdf}} removes that directory. As a result, 
> the config and log of gpg-agent are lost.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15925) The config and log of gpg-agent are removed in create-release script

2018-11-14 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16687391#comment-16687391
 ] 

Dinesh Chitlangia edited comment on HADOOP-15925 at 11/15/18 2:11 AM:
--

[~ajisakaa] san - Attached patch 001 for your review.

I chose the pattern as *{{*gpgagent.**}} to ensure we only retain the required 
conf and log files.


was (Author: dineshchitlangia):
[~ajisakaa] san - Attached patch 001 for your review.

I chose the pattern as {{*gpgagent.*}} to ensure we only retain the required 
conf and log files.

> The config and log of gpg-agent are removed in create-release script
> 
>
> Key: HADOOP-15925
> URL: https://issues.apache.org/jira/browse/HADOOP-15925
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
> Attachments: HADOOP-15925.001.patch
>
>
> The config file and log file of gpg-agent are located in the {{patchprocess}} 
> directory, and then {{git clean -xdf}} removes that directory. As a result, 
> the config and log of gpg-agent are lost.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15925) The config and log of gpg-agent are removed in create-release script

2018-11-14 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HADOOP-15925:
---
Attachment: HADOOP-15925.001.patch
Status: Patch Available  (was: Open)

[~ajisakaa] san - Attached patch 001 for your review.

I chose the pattern as {{*gpgagent.*}} to ensure we only retain the required 
conf and log files.

> The config and log of gpg-agent are removed in create-release script
> 
>
> Key: HADOOP-15925
> URL: https://issues.apache.org/jira/browse/HADOOP-15925
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
> Attachments: HADOOP-15925.001.patch
>
>
> The config file and log file of gpg-agent are located in the {{patchprocess}} 
> directory, and then {{git clean -xdf}} removes that directory. As a result, 
> the config and log of gpg-agent are lost.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15930) Exclude MD5 checksum files from release artifact

2018-11-14 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16687388#comment-16687388
 ] 

Hudson commented on HADOOP-15930:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15429 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15429/])
HADOOP-15930. Exclude MD5 checksum files from release artifact. (aajisaka: rev 
df5e863fee544c9283e28a21c2788c008d7e3e04)
* (edit) dev-support/bin/create-release


> Exclude MD5 checksum files from release artifact
> 
>
> Key: HADOOP-15930
> URL: https://issues.apache.org/jira/browse/HADOOP-15930
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Critical
> Fix For: 2.10.0, 3.0.4, 3.3.0, 3.1.2, 2.8.6, 3.2.1, 2.9.3
>
> Attachments: HADOOP-15930.01.patch
>
>
> The create-release script creates MD5 checksum files, but MD5 checksums are 
> now useless:
> https://www.apache.org/dev/release-distribution.html#sigs-and-sums
> bq. For new releases, PMCs MUST supply SHA-256 and/or SHA-512; and SHOULD NOT 
> supply MD5 or SHA-1. Existing releases do not need to be changed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15925) The config and log of gpg-agent are removed in create-release script

2018-11-14 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia reassigned HADOOP-15925:
--

Assignee: Dinesh Chitlangia

> The config and log of gpg-agent are removed in create-release script
> 
>
> Key: HADOOP-15925
> URL: https://issues.apache.org/jira/browse/HADOOP-15925
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
>
> The config file and log file of gpg-agent are located in the {{patchprocess}} 
> directory, and then {{git clean -xdf}} removes that directory. As a result, 
> the config and log of gpg-agent are lost.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15930) Exclude MD5 checksum files from release artifact

2018-11-14 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15930:
---
   Resolution: Fixed
Fix Version/s: 2.9.3
   3.2.1
   2.8.6
   3.1.2
   3.3.0
   3.0.4
   2.10.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk, branch-3.2, branch-3.1, branch-3.0, branch-2, 
branch-2.9, and branch-2.8. Thanks [~ste...@apache.org] for reviewing the patch!

> Exclude MD5 checksum files from release artifact
> 
>
> Key: HADOOP-15930
> URL: https://issues.apache.org/jira/browse/HADOOP-15930
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Critical
> Fix For: 2.10.0, 3.0.4, 3.3.0, 3.1.2, 2.8.6, 3.2.1, 2.9.3
>
> Attachments: HADOOP-15930.01.patch
>
>
> The create-release script creates MD5 checksum files, but MD5 checksums are 
> now useless:
> https://www.apache.org/dev/release-distribution.html#sigs-and-sums
> bq. For new releases, PMCs MUST supply SHA-256 and/or SHA-512; and SHOULD NOT 
> supply MD5 or SHA-1. Existing releases do not need to be changed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15926) Document upgrading the section in NOTICE.txt when upgrading the version of AWS SDK

2018-11-14 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia reassigned HADOOP-15926:
--

Assignee: Dinesh Chitlangia

> Document upgrading the section in NOTICE.txt when upgrading the version of 
> AWS SDK
> --
>
> Key: HADOOP-15926
> URL: https://issues.apache.org/jira/browse/HADOOP-15926
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: newbie
>
> Reported by [~ste...@apache.org]
> bq. Hadoop 3.2 + has a section in 
> hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md about 
> what to do when updating the SDK...this needs to be added there. Anyone fancy 
> supplying a patch?
> https://issues.apache.org/jira/browse/HADOOP-15899?focusedCommentId=16675121=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16675121



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15933) Need for more stats in DFSClient

2018-11-14 Thread Pranay Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pranay Singh updated HADOOP-15933:
--
Affects Version/s: 3.0.0

> Need for more stats in DFSClient
> 
>
> Key: HADOOP-15933
> URL: https://issues.apache.org/jira/browse/HADOOP-15933
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 3.0.0
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Major
>
> The usage of HDFS has changed: from being used as a map-reduce filesystem, it 
> is now becoming more like a general-purpose filesystem. In most cases the 
> issues are with the Namenode, so we have metrics to know the workload or 
> stress on the Namenode.
> However, there is a need to have more statistics collected for different 
> operations/RPCs in DFSClient, to know which RPC operations are taking longer 
> or how frequently an operation is performed. These statistics can be exposed 
> to the users of DFSClient, who can periodically log them or do some sort of 
> flow control if the response is slow. This will also help to isolate HDFS 
> issues in a mixed environment where on a node, say, we have Spark, HBase and 
> Impala running together. We can check the throughput of different operations 
> across clients and isolate problems caused by a noisy neighbor, network 
> congestion, or a shared JVM.
> We have dealt with several problems from the field for which there is no 
> conclusive evidence as to what caused the problem. If we had metrics or stats 
> in DFSClient, we would be better equipped to solve such complex problems.
> List of jiras for reference:
> -
>  HADOOP-15538 HADOOP-15530 ( client side deadlock)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15933) Need for more stats in DFSClient

2018-11-14 Thread Pranay Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pranay Singh updated HADOOP-15933:
--
Component/s: hdfs-client

> Need for more stats in DFSClient
> 
>
> Key: HADOOP-15933
> URL: https://issues.apache.org/jira/browse/HADOOP-15933
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Major
>
> The usage of HDFS has changed: from being used as a map-reduce filesystem, it 
> is now becoming more like a general-purpose filesystem. In most cases the 
> issues are with the Namenode, so we have metrics to know the workload or 
> stress on the Namenode.
> However, there is a need to have more statistics collected for different 
> operations/RPCs in DFSClient, to know which RPC operations are taking longer 
> or how frequently an operation is performed. These statistics can be exposed 
> to the users of DFSClient, who can periodically log them or do some sort of 
> flow control if the response is slow. This will also help to isolate HDFS 
> issues in a mixed environment where on a node, say, we have Spark, HBase and 
> Impala running together. We can check the throughput of different operations 
> across clients and isolate problems caused by a noisy neighbor, network 
> congestion, or a shared JVM.
> We have dealt with several problems from the field for which there is no 
> conclusive evidence as to what caused the problem. If we had metrics or stats 
> in DFSClient, we would be better equipped to solve such complex problems.
> List of jiras for reference:
> -
>  HADOOP-15538 HADOOP-15530 ( client side deadlock)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15928) libhdfs logs errors when opened FS doesn't support ByteBufferReadable

2018-11-14 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16687270#comment-16687270
 ] 

Steve Loughran commented on HADOOP-15928:
-

Move the JIRA to HDFS, ask the team there what they expect.

> libhdfs logs errors when opened FS doesn't support ByteBufferReadable
> -
>
> Key: HADOOP-15928
> URL: https://issues.apache.org/jira/browse/HADOOP-15928
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Major
>  Labels: libhdfs
> Fix For: 3.0.3
>
> Attachments: HADOOP-15928.001.patch
>
>
> Problem:
> 
> There is excessive error logging when a file is opened by libhdfs 
> (DFSClient/HDFS) in an S3 environment. This happens because byte-buffer read 
> is not supported there; see HADOOP-14603 "S3A input stream to support 
> ByteBufferReadable".
> The following message is printed repeatedly to the error log/STDERR:
> --
> UnsupportedOperationException: Byte-buffer read unsupported by input 
> streamjava.lang.UnsupportedOperationException: Byte-buffer read unsupported 
> by input stream
> at 
> org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:150)
> Root cause
> 
> After investigating the issue, it appears that the above exception is printed 
> because opening a file via hdfsOpenFileImpl() calls readDirect(), which hits 
> this exception.
> Fix:
> 
> Since the hdfs client is not initiating the byte-buffered read, which happens 
> in an implicit manner, we should not generate the error log when opening a 
> file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15934) ABFS: make retry policy configurable

2018-11-14 Thread Da Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Da Zhou updated HADOOP-15934:
-
Affects Version/s: 3.2.0

> ABFS: make retry policy configurable
> 
>
> Key: HADOOP-15934
> URL: https://issues.apache.org/jira/browse/HADOOP-15934
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
>
> Currently the retry policy parameters are hard-coded; they should be made 
> configurable for users.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15934) ABFS: make retry policy configurable

2018-11-14 Thread Da Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Da Zhou updated HADOOP-15934:
-
Component/s: fs/azure

> ABFS: make retry policy configurable
> 
>
> Key: HADOOP-15934
> URL: https://issues.apache.org/jira/browse/HADOOP-15934
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
>
> Currently the retry policy parameters are hard-coded; they should be made 
> configurable for users.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15934) ABFS: make retry policy configurable

2018-11-14 Thread Da Zhou (JIRA)
Da Zhou created HADOOP-15934:


 Summary: ABFS: make retry policy configurable
 Key: HADOOP-15934
 URL: https://issues.apache.org/jira/browse/HADOOP-15934
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Da Zhou
Assignee: Da Zhou


Currently the retry policy parameters are hard-coded; they should be made 
configurable for users.
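
For reference, a hedged sketch of what such configuration could look like 
through Hadoop's Configuration API; the key names and defaults below are 
hypothetical placeholders, not the keys this patch actually introduces:

{code:java}
import org.apache.hadoop.conf.Configuration;

public class RetryPolicyConfig {
  // Hypothetical keys, chosen for illustration only.
  static final String MAX_RETRIES_KEY = "fs.azure.io.retry.max.retries";
  static final String BACKOFF_MS_KEY = "fs.azure.io.retry.backoff.interval";

  final int maxRetries;
  final int backoffMs;

  RetryPolicyConfig(Configuration conf) {
    // Fall back to the previously hard-coded values when unset.
    this.maxRetries = conf.getInt(MAX_RETRIES_KEY, 30);
    this.backoffMs = conf.getInt(BACKOFF_MS_KEY, 3000);
  }
}
{code}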



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15872) ABFS: Update to target 2018-11-09 REST version for ADLS Gen 2

2018-11-14 Thread Da Zhou (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16687202#comment-16687202
 ] 

Da Zhou commented on HADOOP-15872:
--

Hi Steve, 
{quote}if you run it now, everything fails with an error about headers : "The 
value for one of the HTTP headers is not in the correct format.".
{quote}
Reason: I checked the test tenant you used; it doesn't contain the new service 
change related to this patch. I also reran the tests with my test account, 
which contains the new service change, and all tests passed.
{quote}getFileStatus() fails on the cli (hadoop fs -ls; cloudstore store diag) 
As far as I can tell, this is a change in the behaviour in the ADLS endpoint
{quote}
Reason: abfs://stevel-testing@ACCOUNT.*blob.core.windows.net*/ was used, but 
this is for WASB, not ABFS. For ABFS you should use *dfs.core.windows.net*.

> ABFS: Update to target 2018-11-09 REST version for ADLS Gen 2
> -
>
> Key: HADOOP-15872
> URL: https://issues.apache.org/jira/browse/HADOOP-15872
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Thomas Marquardt
>Assignee: junhua gu
>Priority: Major
> Attachments: HADOOP-15872-001.patch, HADOOP-15872-002.patch, 
> HADOOP-15872-003.patch
>
>
> This update to the latest REST version (2018-11-09) will make the following 
> changes to the ABFS driver:
> 1) The ABFS implementation of getFileStatus currently requires read 
> permission.  According to the HDFS permissions guide, it should only require 
> execute on the parent folders (traversal access).  A new REST API has been 
> introduced in REST version "2018-11-09" of ADLS Gen 2 to fix this problem.
> 2) The new "2018-11-09" REST version introduces support to i) automatically 
> translate UPNs to OIDs when setting the owner, owning group, or ACL and ii) 
> optionally translate OIDs to UPNs in the responses when getting the owner, 
> owning group, or ACL.  Configuration will be introduced to optionally 
> translate OIDs to UPNs in the responses.  Since translation has a performance 
> impact, the default will be to perform no translation and return the OIDs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15933) Need for more stats in DFSClient

2018-11-14 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16687188#comment-16687188
 ] 

Steve Loughran commented on HADOOP-15933:
-

* can you move this to HDFS
* add version markers, component, etc.
* Hadoop 3.x added the StorageStatistics stats collection; all new stats 
should go in there (see the sketch below). thanks
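
For reference, a minimal sketch of reading the Hadoop 3.x per-filesystem 
StorageStatistics from a client, which is where the proposed DFSClient 
counters would surface; the path used is arbitrary:

{code:java}
import java.util.Iterator;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.StorageStatistics;
import org.apache.hadoop.fs.StorageStatistics.LongStatistic;

public class DumpClientStats {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    fs.getFileStatus(new Path("/"));   // issue one RPC so the counters move
    StorageStatistics stats = fs.getStorageStatistics();
    for (Iterator<LongStatistic> it = stats.getLongStatistics(); it.hasNext();) {
      LongStatistic s = it.next();
      System.out.println(s.getName() + " = " + s.getValue());
    }
  }
}
{code}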

> Need for more stats in DFSClient
> 
>
> Key: HADOOP-15933
> URL: https://issues.apache.org/jira/browse/HADOOP-15933
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Major
>
> The usage of HDFS has changed: from being used as a map-reduce filesystem, it 
> is now becoming more like a general-purpose filesystem. In most cases the 
> issues are with the Namenode, so we have metrics to know the workload or 
> stress on the Namenode.
> However, there is a need to have more statistics collected for different 
> operations/RPCs in DFSClient, to know which RPC operations are taking longer 
> or how frequently an operation is performed. These statistics can be exposed 
> to the users of DFSClient, who can periodically log them or do some sort of 
> flow control if the response is slow. This will also help to isolate HDFS 
> issues in a mixed environment where on a node, say, we have Spark, HBase and 
> Impala running together. We can check the throughput of different operations 
> across clients and isolate problems caused by a noisy neighbor, network 
> congestion, or a shared JVM.
> We have dealt with several problems from the field for which there is no 
> conclusive evidence as to what caused the problem. If we had metrics or stats 
> in DFSClient, we would be better equipped to solve such complex problems.
> List of jiras for reference:
> -
>  HADOOP-15538 HADOOP-15530 ( client side deadlock)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15932) Oozie unable to create sharelib in s3a filesystem

2018-11-14 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16687183#comment-16687183
 ] 

Steve Loughran commented on HADOOP-15932:
-

I think what's happened here is that HADOOP-14432 contains an assumption, 
"it's a file", which turns out to be wrong: it doesn't handle directories.

Immediate patch: remove that subclass of copyFromLocal() and strip the tests 
down. That's what I propose for 3.0-3.2 initially.

Longer term: if the source is a directory, kick off a recursive list and then 
submit the uploads (a rough sketch follows below).

To be more efficient: shuffle for less throttling and pick the largest files 
first, which is what HADOOP-15364 demonstrated.
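
For reference, a rough sketch of that longer-term shape, assuming a per-file 
copyFromLocalFile is acceptable as the upload step; this is not the S3A 
internals, just the recursive-list-then-upload idea:

{code:java}
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;

public class TreeUpload {
  /** Recursively list the local tree, then upload file by file. */
  static void copyTree(FileSystem local, Path src, FileSystem dest, Path destDir)
      throws Exception {
    RemoteIterator<LocatedFileStatus> files = local.listFiles(src, true);
    while (files.hasNext()) {
      LocatedFileStatus f = files.next();
      // Rebuild each file's path relative to the source root.
      String rel = src.toUri().relativize(f.getPath().toUri()).getPath();
      dest.copyFromLocalFile(f.getPath(), new Path(destDir, rel));
    }
  }
}
{code}

Throttling and ordering (e.g. largest files first, as HADOOP-15364 showed) 
would layer on top of this enumeration.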

> Oozie unable to create sharelib in s3a filesystem
> -
>
> Key: HADOOP-15932
> URL: https://issues.apache.org/jira/browse/HADOOP-15932
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/s3
>Affects Versions: 3.0.0
>Reporter: Soumitra Sulav
>Priority: Critical
>
> The Oozie server is unable to start because of the exception below.
> s3a expects a file to copy into the store, but sharelib is a folder 
> containing all the needed component jars.
> Hence it throws the exception:
> _Not a file: /usr/hdp/current/oozie-server/share/lib_
> {code:java}
> [oozie@sg-hdp1 ~]$ /usr/hdp/current/oozie-server/bin/oozie-setup.sh sharelib 
> create -fs s3a://hdp -locallib /usr/hdp/current/oozie-server/share
>   setting OOZIE_CONFIG=${OOZIE_CONFIG:-/usr/hdp/current/oozie-client/conf}
>   setting 
> CATALINA_BASE=${CATALINA_BASE:-/usr/hdp/current/oozie-client/oozie-server}
>   setting CATALINA_TMPDIR=${CATALINA_TMPDIR:-/var/tmp/oozie}
>   setting OOZIE_CATALINA_HOME=/usr/lib/bigtop-tomcat
>   setting JAVA_HOME=/usr/jdk64/jdk1.8.0_112
>   setting JRE_HOME=${JAVA_HOME}
>   setting CATALINA_OPTS="$CATALINA_OPTS -Xmx2048m"
>   setting OOZIE_LOG=/var/log/oozie
>   setting CATALINA_PID=/var/run/oozie/oozie.pid
>   setting OOZIE_DATA=/hadoop/oozie/data
>   setting OOZIE_HTTP_PORT=11000
>   setting OOZIE_ADMIN_PORT=11001
>   setting 
> JAVA_LIBRARY_PATH=/usr/hdp/3.0.0.0-1634/hadoop/lib/native/Linux-amd64-64
>   setting OOZIE_CLIENT_OPTS="${OOZIE_CLIENT_OPTS} 
> -Doozie.connection.retry.count=5 "
>   setting OOZIE_CONFIG=${OOZIE_CONFIG:-/usr/hdp/current/oozie-client/conf}
>   setting 
> CATALINA_BASE=${CATALINA_BASE:-/usr/hdp/current/oozie-client/oozie-server}
>   setting CATALINA_TMPDIR=${CATALINA_TMPDIR:-/var/tmp/oozie}
>   setting OOZIE_CATALINA_HOME=/usr/lib/bigtop-tomcat
>   setting JAVA_HOME=/usr/jdk64/jdk1.8.0_112
>   setting JRE_HOME=${JAVA_HOME}
>   setting CATALINA_OPTS="$CATALINA_OPTS -Xmx2048m"
>   setting OOZIE_LOG=/var/log/oozie
>   setting CATALINA_PID=/var/run/oozie/oozie.pid
>   setting OOZIE_DATA=/hadoop/oozie/data
>   setting OOZIE_HTTP_PORT=11000
>   setting OOZIE_ADMIN_PORT=11001
>   setting 
> JAVA_LIBRARY_PATH=/usr/hdp/3.0.0.0-1634/hadoop/lib/native/Linux-amd64-64
>   setting OOZIE_CLIENT_OPTS="${OOZIE_CLIENT_OPTS} 
> -Doozie.connection.retry.count=5 "
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/usr/hdp/3.0.0.0-1634/oozie/lib/slf4j-simple-1.6.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/usr/hdp/3.0.0.0-1634/oozie/libserver/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/usr/hdp/3.0.0.0-1634/oozie/libserver/slf4j-log4j12-1.6.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.SimpleLoggerFactory]
> 518 [main] WARN org.apache.hadoop.util.NativeCodeLoader - Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> 605 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - 
> mapred.local.dir is deprecated. Instead, use mapreduce.cluster.local.dir
> 619 [main] INFO org.apache.hadoop.security.SecurityUtil - Updating 
> Configuration
> the destination path for sharelib is: /user/oozie/share/lib/lib_20181114154552
> log4j:WARN No appenders could be found for logger 
> (org.apache.htrace.core.Tracer).
> log4j:WARN Please initialize the log4j system properly.
> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
> info.
> 1118 [main] WARN org.apache.hadoop.metrics2.impl.MetricsConfig - Cannot 
> locate configuration: tried 
> hadoop-metrics2-s3a-file-system.properties,hadoop-metrics2.properties
> 1172 [main] INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl - 
> Scheduled Metric snapshot period at 10 second(s).
> 1172 [main] INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl - 
> s3a-file-system metrics system started
> 2255 [main] INFO 

[jira] [Updated] (HADOOP-15932) Oozie unable to create sharelib in s3a filesystem

2018-11-14 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15932:

Issue Type: Sub-task  (was: Bug)
Parent: HADOOP-15620

> Oozie unable to create sharelib in s3a filesystem
> -
>
> Key: HADOOP-15932
> URL: https://issues.apache.org/jira/browse/HADOOP-15932
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/s3
>Affects Versions: 3.0.0
>Reporter: Soumitra Sulav
>Priority: Critical
>
> The Oozie server is unable to start because of the exception below.
> s3a expects a file to copy into the store, but sharelib is a folder 
> containing all the needed component jars.
> Hence it throws the exception:
> _Not a file: /usr/hdp/current/oozie-server/share/lib_
> {code:java}
> [oozie@sg-hdp1 ~]$ /usr/hdp/current/oozie-server/bin/oozie-setup.sh sharelib 
> create -fs s3a://hdp -locallib /usr/hdp/current/oozie-server/share
>   setting OOZIE_CONFIG=${OOZIE_CONFIG:-/usr/hdp/current/oozie-client/conf}
>   setting 
> CATALINA_BASE=${CATALINA_BASE:-/usr/hdp/current/oozie-client/oozie-server}
>   setting CATALINA_TMPDIR=${CATALINA_TMPDIR:-/var/tmp/oozie}
>   setting OOZIE_CATALINA_HOME=/usr/lib/bigtop-tomcat
>   setting JAVA_HOME=/usr/jdk64/jdk1.8.0_112
>   setting JRE_HOME=${JAVA_HOME}
>   setting CATALINA_OPTS="$CATALINA_OPTS -Xmx2048m"
>   setting OOZIE_LOG=/var/log/oozie
>   setting CATALINA_PID=/var/run/oozie/oozie.pid
>   setting OOZIE_DATA=/hadoop/oozie/data
>   setting OOZIE_HTTP_PORT=11000
>   setting OOZIE_ADMIN_PORT=11001
>   setting 
> JAVA_LIBRARY_PATH=/usr/hdp/3.0.0.0-1634/hadoop/lib/native/Linux-amd64-64
>   setting OOZIE_CLIENT_OPTS="${OOZIE_CLIENT_OPTS} 
> -Doozie.connection.retry.count=5 "
>   setting OOZIE_CONFIG=${OOZIE_CONFIG:-/usr/hdp/current/oozie-client/conf}
>   setting 
> CATALINA_BASE=${CATALINA_BASE:-/usr/hdp/current/oozie-client/oozie-server}
>   setting CATALINA_TMPDIR=${CATALINA_TMPDIR:-/var/tmp/oozie}
>   setting OOZIE_CATALINA_HOME=/usr/lib/bigtop-tomcat
>   setting JAVA_HOME=/usr/jdk64/jdk1.8.0_112
>   setting JRE_HOME=${JAVA_HOME}
>   setting CATALINA_OPTS="$CATALINA_OPTS -Xmx2048m"
>   setting OOZIE_LOG=/var/log/oozie
>   setting CATALINA_PID=/var/run/oozie/oozie.pid
>   setting OOZIE_DATA=/hadoop/oozie/data
>   setting OOZIE_HTTP_PORT=11000
>   setting OOZIE_ADMIN_PORT=11001
>   setting 
> JAVA_LIBRARY_PATH=/usr/hdp/3.0.0.0-1634/hadoop/lib/native/Linux-amd64-64
>   setting OOZIE_CLIENT_OPTS="${OOZIE_CLIENT_OPTS} 
> -Doozie.connection.retry.count=5 "
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/usr/hdp/3.0.0.0-1634/oozie/lib/slf4j-simple-1.6.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/usr/hdp/3.0.0.0-1634/oozie/libserver/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/usr/hdp/3.0.0.0-1634/oozie/libserver/slf4j-log4j12-1.6.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.SimpleLoggerFactory]
> 518 [main] WARN org.apache.hadoop.util.NativeCodeLoader - Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> 605 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - 
> mapred.local.dir is deprecated. Instead, use mapreduce.cluster.local.dir
> 619 [main] INFO org.apache.hadoop.security.SecurityUtil - Updating 
> Configuration
> the destination path for sharelib is: /user/oozie/share/lib/lib_20181114154552
> log4j:WARN No appenders could be found for logger 
> (org.apache.htrace.core.Tracer).
> log4j:WARN Please initialize the log4j system properly.
> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
> info.
> 1118 [main] WARN org.apache.hadoop.metrics2.impl.MetricsConfig - Cannot 
> locate configuration: tried 
> hadoop-metrics2-s3a-file-system.properties,hadoop-metrics2.properties
> 1172 [main] INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl - 
> Scheduled Metric snapshot period at 10 second(s).
> 1172 [main] INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl - 
> s3a-file-system metrics system started
> 2255 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - 
> fs.s3a.server-side-encryption-key is deprecated. Instead, use 
> fs.s3a.server-side-encryption.key
> Error: Not a file: /usr/hdp/current/oozie-server/share/lib
> Stack trace for the error was (for debug purposes):
> --
> java.io.FileNotFoundException: Not a file: 
> /usr/hdp/current/oozie-server/share/lib
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerCopyFromLocalFile(S3AFileSystem.java:2375)
>   at 
> 

[jira] [Assigned] (HADOOP-15933) Need for more stats in DFSClient

2018-11-14 Thread Pranay Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pranay Singh reassigned HADOOP-15933:
-

Assignee: Pranay Singh

> Need for more stats in DFSClient
> 
>
> Key: HADOOP-15933
> URL: https://issues.apache.org/jira/browse/HADOOP-15933
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Major
>
> The usage of HDFS has changed: from being used as a map-reduce filesystem, it 
> is now becoming more like a general-purpose filesystem. In most cases the 
> issues are with the Namenode, so we have metrics to know the workload or 
> stress on the Namenode.
> However, there is a need to have more statistics collected for different 
> operations/RPCs in DFSClient, to know which RPC operations are taking longer 
> or how frequently an operation is performed. These statistics can be exposed 
> to the users of DFSClient, who can periodically log them or do some sort of 
> flow control if the response is slow. This will also help to isolate HDFS 
> issues in a mixed environment where on a node we have HBase and Impala 
> running together. We can check the throughput of different operations 
> across clients and isolate problems caused by a noisy neighbor, network 
> congestion, or a shared JVM.
> We have dealt with several problems from the field for which there is no 
> conclusive evidence as to what caused the problem. If we had metrics or stats 
> in DFSClient, we would be better equipped to solve such complex problems.
> List of jiras for reference:
> -
>  HADOOP-15538 HADOOP-15530 ( client side deadlock)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15933) Need for more stats in DFSClient

2018-11-14 Thread Pranay Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pranay Singh updated HADOOP-15933:
--
Description: 
The usage of HDFS has changed: from being used as a map-reduce filesystem, it 
is now becoming more like a general-purpose filesystem. In most cases the 
issues are with the Namenode, so we have metrics to know the workload or 
stress on the Namenode.

However, there is a need to have more statistics collected for different 
operations/RPCs in DFSClient, to know which RPC operations are taking longer 
or how frequently an operation is performed. These statistics can be exposed 
to the users of DFSClient, who can periodically log them or do some sort of 
flow control if the response is slow. This will also help to isolate HDFS 
issues in a mixed environment where on a node, say, we have Spark, HBase and 
Impala running together. We can check the throughput of different operations 
across clients and isolate problems caused by a noisy neighbor, network 
congestion, or a shared JVM.

We have dealt with several problems from the field for which there is no 
conclusive evidence as to what caused the problem. If we had metrics or stats 
in DFSClient, we would be better equipped to solve such complex problems.

List of jiras for reference:
-
 HADOOP-15538 HADOOP-15530 ( client side deadlock)

  was:
The usage of HDFS has changed: from being used as a map-reduce filesystem, it 
is now becoming more like a general-purpose filesystem. In most cases the 
issues are with the Namenode, so we have metrics to know the workload or 
stress on the Namenode.

However, there is a need to have more statistics collected for different 
operations/RPCs in DFSClient, to know which RPC operations are taking longer 
or how frequently an operation is performed. These statistics can be exposed 
to the users of DFSClient, who can periodically log them or do some sort of 
flow control if the response is slow. This will also help to isolate HDFS 
issues in a mixed environment where on a node we have HBase and Impala 
running together. We can check the throughput of different operations 
across clients and isolate problems caused by a noisy neighbor, network 
congestion, or a shared JVM.

We have dealt with several problems from the field for which there is no 
conclusive evidence as to what caused the problem. If we had metrics or stats 
in DFSClient, we would be better equipped to solve such complex problems.

List of jiras for reference:
-
 HADOOP-15538 HADOOP-15530 ( client side deadlock)


> Need for more stats in DFSClient
> 
>
> Key: HADOOP-15933
> URL: https://issues.apache.org/jira/browse/HADOOP-15933
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Major
>
> The usage of HDFS has changed: from being a filesystem for MapReduce, it is
> becoming more of a general-purpose filesystem. In most cases the issues are
> with the Namenode, so we have metrics to gauge the workload or stress on the
> Namenode.
> However, more statistics need to be collected for the different
> operations/RPCs in DFSClient, to know which RPC operations are taking longer
> and how frequently each operation is issued. These statistics can be exposed
> to the users of DFSClient, who can periodically log them or do some sort of
> flow control if responses are slow. This will also help to isolate HDFS
> issues in a mixed environment where, on a node, say, Spark, HBase and Impala
> run together. We can check the throughput of different operations across
> clients and isolate problems caused by a noisy neighbor, network congestion,
> or a shared JVM.
> We have dealt with several problems from the field for which there is no
> conclusive evidence as to what caused them. If we had metrics or stats in
> DFSClient we would be better equipped to solve such complex problems.
> List of JIRAs for reference: HADOOP-15538, HADOOP-15530 (client-side deadlock)






[jira] [Created] (HADOOP-15933) Need for more stats in DFSClient

2018-11-14 Thread Pranay Singh (JIRA)
Pranay Singh created HADOOP-15933:
-

 Summary: Need for more stats in DFSClient
 Key: HADOOP-15933
 URL: https://issues.apache.org/jira/browse/HADOOP-15933
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Pranay Singh


The usage of HDFS has changed: from being a filesystem for MapReduce, it is
becoming more of a general-purpose filesystem. In most cases the issues are
with the Namenode, so we have metrics to gauge the workload or stress on the
Namenode.

However, more statistics need to be collected for the different operations/RPCs
in DFSClient, to know which RPC operations are taking longer and how frequently
each operation is issued. These statistics can be exposed to the users of
DFSClient, who can periodically log them or do some sort of flow control if
responses are slow. This will also help to isolate HDFS issues in a mixed
environment where, on a node, HBase and Impala run together. We can check the
throughput of different operations across clients and isolate problems caused
by a noisy neighbor, network congestion, or a shared JVM.

We have dealt with several problems from the field for which there is no
conclusive evidence as to what caused them. If we had metrics or stats in
DFSClient we would be better equipped to solve such complex problems.

List of JIRAs for reference: HADOOP-15538, HADOOP-15530 (client-side deadlock)
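
As a purely illustrative sketch of the kind of per-operation client-side stat
being asked for here (all names are hypothetical, not from any attached patch):

{code:java}
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Hypothetical helper: count calls and accumulate latency per RPC name, so a
// client can periodically log the numbers or throttle when responses are slow.
public class ClientOpStats {
  private final ConcurrentHashMap<String, LongAdder> counts = new ConcurrentHashMap<>();
  private final ConcurrentHashMap<String, LongAdder> nanos = new ConcurrentHashMap<>();

  public <T> T timed(String op, Callable<T> rpc) throws Exception {
    long start = System.nanoTime();
    try {
      return rpc.call();
    } finally {
      counts.computeIfAbsent(op, k -> new LongAdder()).increment();
      nanos.computeIfAbsent(op, k -> new LongAdder()).add(System.nanoTime() - start);
    }
  }

  public long count(String op) {
    LongAdder c = counts.get(op);
    return c == null ? 0 : c.sum();
  }
}
{code}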






[jira] [Commented] (HADOOP-15928) libhdfs logs errors when opened FS doesn't support ByteBufferReadable

2018-11-14 Thread Pranay Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16687009#comment-16687009
 ] 

Pranay Singh commented on HADOOP-15928:
---

Did some investigation around the libhdfs tests; currently none of them use 
DummyFileSystem, so testing this would involve creating a setup for 
DummyFileSystem. Wouldn't manual testing be sufficient to test this fix?

> libhdfs logs errors when opened FS doesn't support ByteBufferReadable
> -
>
> Key: HADOOP-15928
> URL: https://issues.apache.org/jira/browse/HADOOP-15928
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Major
>  Labels: libhdfs
> Fix For: 3.0.3
>
> Attachments: HADOOP-15928.001.patch
>
>
> Problem:
> There is excessive error logging when a file is opened by libhdfs
> (DFSClient/HDFS) in an S3 environment. The issue is caused because buffered
> read is not supported there; see HADOOP-14603, "S3A input stream to support
> ByteBufferReadable".
> The following message is printed repeatedly to the error log / STDERR:
> --
> java.lang.UnsupportedOperationException: Byte-buffer read unsupported by
> input stream
> at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:150)
> Root cause:
> After investigating the issue, it appears that the above exception is printed
> because, when a file is opened, hdfsOpenFileImpl() calls readDirect(), which
> hits this exception.
> Fix:
> Since the hdfs client is not explicitly initiating the byte-buffer read (it
> happens implicitly), we should not generate the error log when opening a file.
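
The fix direction this implies, sketched in Java for clarity (libhdfs itself is
C; this is only an assumed illustration of the capability check, not the
attached patch):

{code:java}
import java.io.IOException;
import java.nio.ByteBuffer;
import org.apache.hadoop.fs.ByteBufferReadable;
import org.apache.hadoop.fs.FSDataInputStream;

public class ReadProbe {
  // Probe once for byte-buffer read support instead of attempting
  // read(ByteBuffer) and logging the UnsupportedOperationException.
  static int readSome(FSDataInputStream in, byte[] buf) throws IOException {
    if (in.getWrappedStream() instanceof ByteBufferReadable) {
      return in.read(ByteBuffer.wrap(buf));   // direct byte-buffer path
    }
    return in.read(buf, 0, buf.length);       // plain read fallback, nothing logged
  }
}
{code}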






[jira] [Comment Edited] (HADOOP-15928) libhdfs logs errors when opened FS doesn't support ByteBufferReadable

2018-11-14 Thread Pranay Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16687009#comment-16687009
 ] 

Pranay Singh edited comment on HADOOP-15928 at 11/14/18 7:16 PM:
-

Did some investigation around the libhdfs tests; currently none of them use 
DummyFileSystem, so testing this would involve creating a setup for 
DummyFileSystem. Wouldn't manual testing be sufficient to test this fix?


was (Author: pranay_singh):
Did some investigation around libhdfs tests, currently none of those tests use 
DummyFileSystem, so it will involve creating a setup for DummyFileSystem, 
wouldn't a manual testing be sufficient to test this fix.

> libhdfs logs errors when opened FS doesn't support ByteBufferReadable
> -
>
> Key: HADOOP-15928
> URL: https://issues.apache.org/jira/browse/HADOOP-15928
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Major
>  Labels: libhdfs
> Fix For: 3.0.3
>
> Attachments: HADOOP-15928.001.patch
>
>
> Problem:
> There is excessive error logging when a file is opened by libhdfs
> (DFSClient/HDFS) in an S3 environment. The issue is caused because buffered
> read is not supported there; see HADOOP-14603, "S3A input stream to support
> ByteBufferReadable".
> The following message is printed repeatedly to the error log / STDERR:
> --
> java.lang.UnsupportedOperationException: Byte-buffer read unsupported by
> input stream
> at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:150)
> Root cause:
> After investigating the issue, it appears that the above exception is printed
> because, when a file is opened, hdfsOpenFileImpl() calls readDirect(), which
> hits this exception.
> Fix:
> Since the hdfs client is not explicitly initiating the byte-buffer read (it
> happens implicitly), we should not generate the error log when opening a file.






[jira] [Updated] (HADOOP-15932) Oozie unable to create sharelib in s3a filesystem

2018-11-14 Thread Soumitra Sulav (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Soumitra Sulav updated HADOOP-15932:

Component/s: fs/s3
 fs

> Oozie unable to create sharelib in s3a filesystem
> -
>
> Key: HADOOP-15932
> URL: https://issues.apache.org/jira/browse/HADOOP-15932
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, fs/s3
>Affects Versions: 3.0.0
>Reporter: Soumitra Sulav
>Priority: Critical
>
> The Oozie server is unable to start because of the exception below.
> s3a expects a file to copy into the store, but sharelib is a folder
> containing all the needed component jars.
> Hence it throws the exception:
> _Not a file: /usr/hdp/current/oozie-server/share/lib_
> {code:java}
> [oozie@sg-hdp1 ~]$ /usr/hdp/current/oozie-server/bin/oozie-setup.sh sharelib 
> create -fs s3a://hdp -locallib /usr/hdp/current/oozie-server/share
>   setting OOZIE_CONFIG=${OOZIE_CONFIG:-/usr/hdp/current/oozie-client/conf}
>   setting 
> CATALINA_BASE=${CATALINA_BASE:-/usr/hdp/current/oozie-client/oozie-server}
>   setting CATALINA_TMPDIR=${CATALINA_TMPDIR:-/var/tmp/oozie}
>   setting OOZIE_CATALINA_HOME=/usr/lib/bigtop-tomcat
>   setting JAVA_HOME=/usr/jdk64/jdk1.8.0_112
>   setting JRE_HOME=${JAVA_HOME}
>   setting CATALINA_OPTS="$CATALINA_OPTS -Xmx2048m"
>   setting OOZIE_LOG=/var/log/oozie
>   setting CATALINA_PID=/var/run/oozie/oozie.pid
>   setting OOZIE_DATA=/hadoop/oozie/data
>   setting OOZIE_HTTP_PORT=11000
>   setting OOZIE_ADMIN_PORT=11001
>   setting 
> JAVA_LIBRARY_PATH=/usr/hdp/3.0.0.0-1634/hadoop/lib/native/Linux-amd64-64
>   setting OOZIE_CLIENT_OPTS="${OOZIE_CLIENT_OPTS} 
> -Doozie.connection.retry.count=5 "
>   setting OOZIE_CONFIG=${OOZIE_CONFIG:-/usr/hdp/current/oozie-client/conf}
>   setting 
> CATALINA_BASE=${CATALINA_BASE:-/usr/hdp/current/oozie-client/oozie-server}
>   setting CATALINA_TMPDIR=${CATALINA_TMPDIR:-/var/tmp/oozie}
>   setting OOZIE_CATALINA_HOME=/usr/lib/bigtop-tomcat
>   setting JAVA_HOME=/usr/jdk64/jdk1.8.0_112
>   setting JRE_HOME=${JAVA_HOME}
>   setting CATALINA_OPTS="$CATALINA_OPTS -Xmx2048m"
>   setting OOZIE_LOG=/var/log/oozie
>   setting CATALINA_PID=/var/run/oozie/oozie.pid
>   setting OOZIE_DATA=/hadoop/oozie/data
>   setting OOZIE_HTTP_PORT=11000
>   setting OOZIE_ADMIN_PORT=11001
>   setting 
> JAVA_LIBRARY_PATH=/usr/hdp/3.0.0.0-1634/hadoop/lib/native/Linux-amd64-64
>   setting OOZIE_CLIENT_OPTS="${OOZIE_CLIENT_OPTS} 
> -Doozie.connection.retry.count=5 "
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/usr/hdp/3.0.0.0-1634/oozie/lib/slf4j-simple-1.6.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/usr/hdp/3.0.0.0-1634/oozie/libserver/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/usr/hdp/3.0.0.0-1634/oozie/libserver/slf4j-log4j12-1.6.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.SimpleLoggerFactory]
> 518 [main] WARN org.apache.hadoop.util.NativeCodeLoader - Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> 605 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - 
> mapred.local.dir is deprecated. Instead, use mapreduce.cluster.local.dir
> 619 [main] INFO org.apache.hadoop.security.SecurityUtil - Updating 
> Configuration
> the destination path for sharelib is: /user/oozie/share/lib/lib_20181114154552
> log4j:WARN No appenders could be found for logger 
> (org.apache.htrace.core.Tracer).
> log4j:WARN Please initialize the log4j system properly.
> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
> info.
> 1118 [main] WARN org.apache.hadoop.metrics2.impl.MetricsConfig - Cannot 
> locate configuration: tried 
> hadoop-metrics2-s3a-file-system.properties,hadoop-metrics2.properties
> 1172 [main] INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl - 
> Scheduled Metric snapshot period at 10 second(s).
> 1172 [main] INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl - 
> s3a-file-system metrics system started
> 2255 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - 
> fs.s3a.server-side-encryption-key is deprecated. Instead, use 
> fs.s3a.server-side-encryption.key
> Error: Not a file: /usr/hdp/current/oozie-server/share/lib
> Stack trace for the error was (for debug purposes):
> --
> java.io.FileNotFoundException: Not a file: 
> /usr/hdp/current/oozie-server/share/lib
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerCopyFromLocalFile(S3AFileSystem.java:2375)
>   at 
> 

[jira] [Updated] (HADOOP-15932) Oozie unable to create sharelib in s3a filesystem

2018-11-14 Thread Soumitra Sulav (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Soumitra Sulav updated HADOOP-15932:

Description: 
The Oozie server is unable to start because of the exception below.
s3a expects a file to copy into the store, but sharelib is a folder containing
all the needed component jars.
Hence it throws the exception:
_Not a file: /usr/hdp/current/oozie-server/share/lib_
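
One possible fix direction, as a hedged sketch (hypothetical helper, not an
actual Oozie patch): walk the local sharelib tree and upload file by file,
since S3AFileSystem.copyFromLocalFile() rejects directories. The original
setup log follows below.

{code:java}
import java.io.File;
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ShareLibUpload {
  // Copy a local directory tree into an object store one file at a time.
  static void copyDirToStore(FileSystem fs, File localDir, Path dest) throws IOException {
    File[] entries = localDir.listFiles();
    if (entries == null) {
      throw new IOException(localDir + " is not a readable directory");
    }
    for (File f : entries) {
      Path target = new Path(dest, f.getName());
      if (f.isDirectory()) {
        fs.mkdirs(target);                    // directories become key prefixes
        copyDirToStore(fs, f, target);
      } else {
        fs.copyFromLocalFile(new Path(f.toURI()), target);
      }
    }
  }
}
{code}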

{code:java}
[oozie@sg-hdp1 ~]$ /usr/hdp/current/oozie-server/bin/oozie-setup.sh sharelib 
create -fs s3a://hdp -locallib /usr/hdp/current/oozie-server/share
  setting OOZIE_CONFIG=${OOZIE_CONFIG:-/usr/hdp/current/oozie-client/conf}
  setting 
CATALINA_BASE=${CATALINA_BASE:-/usr/hdp/current/oozie-client/oozie-server}
  setting CATALINA_TMPDIR=${CATALINA_TMPDIR:-/var/tmp/oozie}
  setting OOZIE_CATALINA_HOME=/usr/lib/bigtop-tomcat
  setting JAVA_HOME=/usr/jdk64/jdk1.8.0_112
  setting JRE_HOME=${JAVA_HOME}
  setting CATALINA_OPTS="$CATALINA_OPTS -Xmx2048m"
  setting OOZIE_LOG=/var/log/oozie
  setting CATALINA_PID=/var/run/oozie/oozie.pid
  setting OOZIE_DATA=/hadoop/oozie/data
  setting OOZIE_HTTP_PORT=11000
  setting OOZIE_ADMIN_PORT=11001
  setting 
JAVA_LIBRARY_PATH=/usr/hdp/3.0.0.0-1634/hadoop/lib/native/Linux-amd64-64
  setting OOZIE_CLIENT_OPTS="${OOZIE_CLIENT_OPTS} 
-Doozie.connection.retry.count=5 "
  setting OOZIE_CONFIG=${OOZIE_CONFIG:-/usr/hdp/current/oozie-client/conf}
  setting 
CATALINA_BASE=${CATALINA_BASE:-/usr/hdp/current/oozie-client/oozie-server}
  setting CATALINA_TMPDIR=${CATALINA_TMPDIR:-/var/tmp/oozie}
  setting OOZIE_CATALINA_HOME=/usr/lib/bigtop-tomcat
  setting JAVA_HOME=/usr/jdk64/jdk1.8.0_112
  setting JRE_HOME=${JAVA_HOME}
  setting CATALINA_OPTS="$CATALINA_OPTS -Xmx2048m"
  setting OOZIE_LOG=/var/log/oozie
  setting CATALINA_PID=/var/run/oozie/oozie.pid
  setting OOZIE_DATA=/hadoop/oozie/data
  setting OOZIE_HTTP_PORT=11000
  setting OOZIE_ADMIN_PORT=11001
  setting 
JAVA_LIBRARY_PATH=/usr/hdp/3.0.0.0-1634/hadoop/lib/native/Linux-amd64-64
  setting OOZIE_CLIENT_OPTS="${OOZIE_CLIENT_OPTS} 
-Doozie.connection.retry.count=5 "
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/usr/hdp/3.0.0.0-1634/oozie/lib/slf4j-simple-1.6.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/usr/hdp/3.0.0.0-1634/oozie/libserver/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/usr/hdp/3.0.0.0-1634/oozie/libserver/slf4j-log4j12-1.6.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.SimpleLoggerFactory]
518 [main] WARN org.apache.hadoop.util.NativeCodeLoader - Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable
605 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - 
mapred.local.dir is deprecated. Instead, use mapreduce.cluster.local.dir
619 [main] INFO org.apache.hadoop.security.SecurityUtil - Updating Configuration
the destination path for sharelib is: /user/oozie/share/lib/lib_20181114154552
log4j:WARN No appenders could be found for logger 
(org.apache.htrace.core.Tracer).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
info.
1118 [main] WARN org.apache.hadoop.metrics2.impl.MetricsConfig - Cannot locate 
configuration: tried 
hadoop-metrics2-s3a-file-system.properties,hadoop-metrics2.properties
1172 [main] INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl - Scheduled 
Metric snapshot period at 10 second(s).
1172 [main] INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl - 
s3a-file-system metrics system started
2255 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - 
fs.s3a.server-side-encryption-key is deprecated. Instead, use 
fs.s3a.server-side-encryption.key

Error: Not a file: /usr/hdp/current/oozie-server/share/lib

Stack trace for the error was (for debug purposes):
--
java.io.FileNotFoundException: Not a file: 
/usr/hdp/current/oozie-server/share/lib
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.innerCopyFromLocalFile(S3AFileSystem.java:2375)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.copyFromLocalFile(S3AFileSystem.java:2339)
at 
org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:2386)
at 
org.apache.oozie.tools.OozieSharelibCLI.run(OozieSharelibCLI.java:182)
at 
org.apache.oozie.tools.OozieSharelibCLI.main(OozieSharelibCLI.java:67)
--

2268 [pool-2-thread-1] INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl - 
Stopping s3a-file-system metrics system...
2268 [pool-2-thread-1] INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl - 
s3a-file-system metrics system stopped.
2268 

[jira] [Updated] (HADOOP-15932) Oozie unable to create sharelib in s3a filesystem

2018-11-14 Thread Soumitra Sulav (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Soumitra Sulav updated HADOOP-15932:

Affects Version/s: 3.0.0

> Oozie unable to create sharelib in s3a filesystem
> -
>
> Key: HADOOP-15932
> URL: https://issues.apache.org/jira/browse/HADOOP-15932
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Soumitra Sulav
>Priority: Critical
>
> The Oozie server is unable to start because of the exception below.
> s3a expects a file to copy into the store, but sharelib is a folder
> containing all the needed component jars.
> Hence it throws the exception:
> _Not a file: /usr/hdp/current/oozie-server/share/lib_
> {code:java}
> [oozie@sg-hdp1 ~]$ /usr/hdp/current/oozie-server/bin/oozie-setup.sh sharelib 
> create -fs s3a://hdp -locallib /usr/hdp/current/oozie-server/share
>   setting OOZIE_CONFIG=${OOZIE_CONFIG:-/usr/hdp/current/oozie-client/conf}
>   setting 
> CATALINA_BASE=${CATALINA_BASE:-/usr/hdp/current/oozie-client/oozie-server}
>   setting CATALINA_TMPDIR=${CATALINA_TMPDIR:-/var/tmp/oozie}
>   setting OOZIE_CATALINA_HOME=/usr/lib/bigtop-tomcat
>   setting JAVA_HOME=/usr/jdk64/jdk1.8.0_112
>   setting JRE_HOME=${JAVA_HOME}
>   setting CATALINA_OPTS="$CATALINA_OPTS -Xmx2048m"
>   setting OOZIE_LOG=/var/log/oozie
>   setting CATALINA_PID=/var/run/oozie/oozie.pid
>   setting OOZIE_DATA=/hadoop/oozie/data
>   setting OOZIE_HTTP_PORT=11000
>   setting OOZIE_ADMIN_PORT=11001
>   setting 
> JAVA_LIBRARY_PATH=/usr/hdp/3.0.0.0-1634/hadoop/lib/native/Linux-amd64-64
>   setting OOZIE_CLIENT_OPTS="${OOZIE_CLIENT_OPTS} 
> -Doozie.connection.retry.count=5 "
>   setting OOZIE_CONFIG=${OOZIE_CONFIG:-/usr/hdp/current/oozie-client/conf}
>   setting 
> CATALINA_BASE=${CATALINA_BASE:-/usr/hdp/current/oozie-client/oozie-server}
>   setting CATALINA_TMPDIR=${CATALINA_TMPDIR:-/var/tmp/oozie}
>   setting OOZIE_CATALINA_HOME=/usr/lib/bigtop-tomcat
>   setting JAVA_HOME=/usr/jdk64/jdk1.8.0_112
>   setting JRE_HOME=${JAVA_HOME}
>   setting CATALINA_OPTS="$CATALINA_OPTS -Xmx2048m"
>   setting OOZIE_LOG=/var/log/oozie
>   setting CATALINA_PID=/var/run/oozie/oozie.pid
>   setting OOZIE_DATA=/hadoop/oozie/data
>   setting OOZIE_HTTP_PORT=11000
>   setting OOZIE_ADMIN_PORT=11001
>   setting 
> JAVA_LIBRARY_PATH=/usr/hdp/3.0.0.0-1634/hadoop/lib/native/Linux-amd64-64
>   setting OOZIE_CLIENT_OPTS="${OOZIE_CLIENT_OPTS} 
> -Doozie.connection.retry.count=5 "
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/usr/hdp/3.0.0.0-1634/oozie/lib/slf4j-simple-1.6.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/usr/hdp/3.0.0.0-1634/oozie/libserver/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/usr/hdp/3.0.0.0-1634/oozie/libserver/slf4j-log4j12-1.6.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.SimpleLoggerFactory]
> 518 [main] WARN org.apache.hadoop.util.NativeCodeLoader - Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> 605 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - 
> mapred.local.dir is deprecated. Instead, use mapreduce.cluster.local.dir
> 619 [main] INFO org.apache.hadoop.security.SecurityUtil - Updating 
> Configuration
> the destination path for sharelib is: /user/oozie/share/lib/lib_20181114154552
> log4j:WARN No appenders could be found for logger 
> (org.apache.htrace.core.Tracer).
> log4j:WARN Please initialize the log4j system properly.
> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
> info.
> 1118 [main] WARN org.apache.hadoop.metrics2.impl.MetricsConfig - Cannot 
> locate configuration: tried 
> hadoop-metrics2-s3a-file-system.properties,hadoop-metrics2.properties
> 1172 [main] INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl - 
> Scheduled Metric snapshot period at 10 second(s).
> 1172 [main] INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl - 
> s3a-file-system metrics system started
> 2255 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - 
> fs.s3a.server-side-encryption-key is deprecated. Instead, use 
> fs.s3a.server-side-encryption.key
> Error: Not a file: /usr/hdp/current/oozie-server/share/lib
> Stack trace for the error was (for debug purposes):
> --
> java.io.FileNotFoundException: Not a file: 
> /usr/hdp/current/oozie-server/share/lib
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerCopyFromLocalFile(S3AFileSystem.java:2375)
>   at 
> 

[jira] [Created] (HADOOP-15932) Oozie unable to create sharelib in s3a filesystem

2018-11-14 Thread Soumitra Sulav (JIRA)
Soumitra Sulav created HADOOP-15932:
---

 Summary: Oozie unable to create sharelib in s3a filesystem
 Key: HADOOP-15932
 URL: https://issues.apache.org/jira/browse/HADOOP-15932
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Soumitra Sulav


The Oozie server is unable to start because of the exception below.
s3a expects a file to copy into the store, but sharelib is a folder containing
all the needed component jars.
Hence it throws the exception:
_Not a file: /usr/hdp/current/oozie-server/share/lib_

{code:java}
[oozie@sg-hdp1 ~]$ /usr/hdp/current/oozie-server/bin/oozie-setup.sh sharelib 
create -fs s3a://hdp -locallib /usr/hdp/current/oozie-server/share
  setting OOZIE_CONFIG=${OOZIE_CONFIG:-/usr/hdp/current/oozie-client/conf}
  setting 
CATALINA_BASE=${CATALINA_BASE:-/usr/hdp/current/oozie-client/oozie-server}
  setting CATALINA_TMPDIR=${CATALINA_TMPDIR:-/var/tmp/oozie}
  setting OOZIE_CATALINA_HOME=/usr/lib/bigtop-tomcat
  setting JAVA_HOME=/usr/jdk64/jdk1.8.0_112
  setting JRE_HOME=${JAVA_HOME}
  setting CATALINA_OPTS="$CATALINA_OPTS -Xmx2048m"
  setting OOZIE_LOG=/var/log/oozie
  setting CATALINA_PID=/var/run/oozie/oozie.pid
  setting OOZIE_DATA=/hadoop/oozie/data
  setting OOZIE_HTTP_PORT=11000
  setting OOZIE_ADMIN_PORT=11001
  setting 
JAVA_LIBRARY_PATH=/usr/hdp/3.0.0.0-1634/hadoop/lib/native/Linux-amd64-64
  setting OOZIE_CLIENT_OPTS="${OOZIE_CLIENT_OPTS} 
-Doozie.connection.retry.count=5 "
  setting OOZIE_CONFIG=${OOZIE_CONFIG:-/usr/hdp/current/oozie-client/conf}
  setting 
CATALINA_BASE=${CATALINA_BASE:-/usr/hdp/current/oozie-client/oozie-server}
  setting CATALINA_TMPDIR=${CATALINA_TMPDIR:-/var/tmp/oozie}
  setting OOZIE_CATALINA_HOME=/usr/lib/bigtop-tomcat
  setting JAVA_HOME=/usr/jdk64/jdk1.8.0_112
  setting JRE_HOME=${JAVA_HOME}
  setting CATALINA_OPTS="$CATALINA_OPTS -Xmx2048m"
  setting OOZIE_LOG=/var/log/oozie
  setting CATALINA_PID=/var/run/oozie/oozie.pid
  setting OOZIE_DATA=/hadoop/oozie/data
  setting OOZIE_HTTP_PORT=11000
  setting OOZIE_ADMIN_PORT=11001
  setting 
JAVA_LIBRARY_PATH=/usr/hdp/3.0.0.0-1634/hadoop/lib/native/Linux-amd64-64
  setting OOZIE_CLIENT_OPTS="${OOZIE_CLIENT_OPTS} 
-Doozie.connection.retry.count=5 "
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/usr/hdp/3.0.0.0-1634/oozie/lib/slf4j-simple-1.6.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/usr/hdp/3.0.0.0-1634/oozie/libserver/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/usr/hdp/3.0.0.0-1634/oozie/libserver/slf4j-log4j12-1.6.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.SimpleLoggerFactory]
518 [main] WARN org.apache.hadoop.util.NativeCodeLoader - Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable
605 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - 
mapred.local.dir is deprecated. Instead, use mapreduce.cluster.local.dir
619 [main] INFO org.apache.hadoop.security.SecurityUtil - Updating Configuration
the destination path for sharelib is: /user/oozie/share/lib/lib_20181114154552
log4j:WARN No appenders could be found for logger 
(org.apache.htrace.core.Tracer).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
info.
1118 [main] WARN org.apache.hadoop.metrics2.impl.MetricsConfig - Cannot locate 
configuration: tried 
hadoop-metrics2-s3a-file-system.properties,hadoop-metrics2.properties
1172 [main] INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl - Scheduled 
Metric snapshot period at 10 second(s).
1172 [main] INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl - 
s3a-file-system metrics system started
2255 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - 
fs.s3a.server-side-encryption-key is deprecated. Instead, use 
fs.s3a.server-side-encryption.key

Error: Not a file: /usr/hdp/current/oozie-server/share/lib

Stack trace for the error was (for debug purposes):
--
java.io.FileNotFoundException: Not a file: 
/usr/hdp/current/oozie-server/share/lib
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.innerCopyFromLocalFile(S3AFileSystem.java:2375)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.copyFromLocalFile(S3AFileSystem.java:2339)
at 
org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:2386)
at 
org.apache.oozie.tools.OozieSharelibCLI.run(OozieSharelibCLI.java:182)
at 
org.apache.oozie.tools.OozieSharelibCLI.main(OozieSharelibCLI.java:67)
--

2268 [pool-2-thread-1] INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl - 
Stopping s3a-file-system metrics system...

[jira] [Commented] (HADOOP-15918) Namenode gets stuck when deleting large dir in trash

2018-11-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686438#comment-16686438
 ] 

Hadoop QA commented on HADOOP-15918:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 28s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m  
3s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 17m  3s{color} 
| {color:red} root generated 4 new + 1449 unchanged - 0 fixed = 1453 total (was 
1449) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 55s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 1 new + 126 unchanged - 0 fixed = 127 total (was 126) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 16s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 51s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}107m 38s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.util.TestReadWriteDiskValidator |
|   | hadoop.util.TestBasicDiskValidator |
|   | hadoop.util.TestDiskCheckerWithDiskIo |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15918 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948108/HADOOP-15918.002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 5f9a1fc60c16 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a948281 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| javac | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15518/artifact/out/diff-compile-javac-root.txt
 |
| 

[jira] [Comment Edited] (HADOOP-15930) Exclude MD5 checksum files from release artifact

2018-11-14 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686400#comment-16686400
 ] 

Steve Loughran edited comment on HADOOP-15930 at 11/14/18 11:38 AM:


+1


was (Author: ste...@apache.org):
+!

> Exclude MD5 checksum files from release artifact
> 
>
> Key: HADOOP-15930
> URL: https://issues.apache.org/jira/browse/HADOOP-15930
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Critical
> Attachments: HADOOP-15930.01.patch
>
>
> The create-release script creates MD5 checksum files, but MD5 checksums are 
> now useless:
> https://www.apache.org/dev/release-distribution.html#sigs-and-sums
> bq. For new releases, PMCs MUST supply SHA-256 and/or SHA-512; and SHOULD NOT 
> supply MD5 or SHA-1. Existing releases do not need to be changed.






[jira] [Commented] (HADOOP-15930) Exclude MD5 checksum files from release artifact

2018-11-14 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686400#comment-16686400
 ] 

Steve Loughran commented on HADOOP-15930:
-

+!

> Exclude MD5 checksum files from release artifact
> 
>
> Key: HADOOP-15930
> URL: https://issues.apache.org/jira/browse/HADOOP-15930
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Critical
> Attachments: HADOOP-15930.01.patch
>
>
> The create-release script creates MD5 checksum files, but MD5 checksums are 
> now useless:
> https://www.apache.org/dev/release-distribution.html#sigs-and-sums
> bq. For new releases, PMCs MUST supply SHA-256 and/or SHA-512; and SHOULD NOT 
> supply MD5 or SHA-1. Existing releases do not need to be changed.






[jira] [Assigned] (HADOOP-15908) hadoop-build-tools jar is downloaded from remote repository instead of using from local

2018-11-14 Thread Oleksandr Shevchenko (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleksandr Shevchenko reassigned HADOOP-15908:
-

Assignee: Oleksandr Shevchenko

> hadoop-build-tools jar is downloaded from remote repository instead of using 
> from local
> ---
>
> Key: HADOOP-15908
> URL: https://issues.apache.org/jira/browse/HADOOP-15908
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Oleksandr Shevchenko
>Assignee: Oleksandr Shevchenko
>Priority: Minor
> Attachments: HADOOP-15908.001.patch, HADOOP-15908.002.patch
>
>
> HADOOP-12893 added the "maven-remote-resources-plugin" to
> hadoop-project/pom.xml to verify the LICENSE.txt and NOTICE.txt files; it
> pulls in the "hadoop-build-tools" remote resource bundle.
> {code}
> <plugin>
>   <groupId>org.apache.maven.plugins</groupId>
>   <artifactId>maven-remote-resources-plugin</artifactId>
>   <version>${maven-remote-resources-plugin.version}</version>
>   <configuration>
>     <resourceBundles>
>       <resourceBundle>org.apache.hadoop:hadoop-build-tools:${hadoop.version}</resourceBundle>
>     </resourceBundles>
>   </configuration>
>   <dependencies>
>     <dependency>
>       <groupId>org.apache.hadoop</groupId>
>       <artifactId>hadoop-build-tools</artifactId>
>       <version>${hadoop.version}</version>
>     </dependency>
>   </dependencies>
>   <executions>
>     <execution>
>       <goals>
>         <goal>process</goal>
>       </goals>
>     </execution>
>   </executions>
> </plugin>
> {code}
> If we build only some module, we always download "hadoop-build-tools" from
> the Maven repository.
> For example, run:
> cd hadoop-common-project/
> mvn test
> Then we will get the following output:
> {noformat}
> [INFO] --- maven-remote-resources-plugin:1.5:process (default) @ 
> hadoop-annotations ---
> Downloading from apache.snapshots: 
> http://repository.apache.org/snapshots/org/apache/hadoop/hadoop-build-tools/3.3.0-SNAPSHOT/maven-metadata.xml
> Downloaded from apache.snapshots: 
> http://repository.apache.org/snapshots/org/apache/hadoop/hadoop-build-tools/3.3.0-SNAPSHOT/maven-metadata.xml
>  (791 B at 684 B/s)
> Downloading from apache.snapshots: 
> http://repository.apache.org/snapshots/org/apache/hadoop/hadoop-main/3.3.0-SNAPSHOT/maven-metadata.xml
> Downloaded from apache.snapshots: 
> http://repository.apache.org/snapshots/org/apache/hadoop/hadoop-main/3.3.0-SNAPSHOT/maven-metadata.xml
>  (609 B at 547 B/s)
> Downloading from apache.snapshots.https: 
> https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-build-tools/3.3.0-SNAPSHOT/maven-metadata.xml
> Downloaded from apache.snapshots.https: 
> https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-build-tools/3.3.0-SNAPSHOT/maven-metadata.xml
>  (791 B at 343 B/s)
> Downloading from apache.snapshots.https: 
> https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-build-tools/3.3.0-SNAPSHOT/hadoop-build-tools-3.3.0-20181022.232020-179.jar
> Downloaded from apache.snapshots.https: 
> https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-build-tools/3.3.0-SNAPSHOT/hadoop-build-tools-3.3.0-20181022.232020-179.jar
>  (0 B at 0 B/s)
> {noformat}
> If "hadoop-build-tools" jar doesn't exist in maven repository (for example we 
> try to build new version locally before repository will be created ) we can't 
> build some module:
> For example run:
> cd hadoop-common-project/
> mvn test
> Then we will get the following output:
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process (default) 
> on project hadoop-annotations: Execution default of goal 
> org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process failed: 
> Plugin org.apache.maven.plugins:maven-remote-resources-plugin:1.5 or one of 
> its dependencies could not be resolved: Failure to find 
> org.apache.hadoop:hadoop-build-tools:jar:3.2.0 in 
> https://repo.maven.apache.org/maven2 was cached in the local repository, 
> resolution will not be reattempted until the update interval of central has 
> elapsed or updates are forced -> [Help 1]
> {noformat}
> Therefore, we need to limit execution of the Remote Resources Plugin to the
> root directory in which the build was run.
> To accomplish this, we can use the "runOnlyAtExecutionRoot" parameter; see
> the sketch below.
> From the Maven documentation:
> http://maven.apache.org/plugins/maven-remote-resources-plugin/usage.html
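
A minimal sketch of what the proposed change might look like in
hadoop-project/pom.xml (hedged; the actual patch attached here may differ),
adding the parameter to the plugin configuration quoted above:

{code:xml}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-remote-resources-plugin</artifactId>
  <configuration>
    <!-- Run only from the directory the build was started in, so submodule
         builds stop resolving hadoop-build-tools from remote repositories. -->
    <runOnlyAtExecutionRoot>true</runOnlyAtExecutionRoot>
    <resourceBundles>
      <resourceBundle>org.apache.hadoop:hadoop-build-tools:${hadoop.version}</resourceBundle>
    </resourceBundles>
  </configuration>
</plugin>
{code}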






[jira] [Resolved] (HADOOP-8842) local file system behavior of mv into an empty directory is inconsistent with HDFS

2018-11-14 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-8842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-8842.

Resolution: Won't Fix

The semantics of rename() are complex, and in places the Hadoop FS APIs and the 
hadoop fs -mv command are wrong. I don't think we can fix this, though if 
someone were to add to or extend the FS shell's mv command, we could change the UI.
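
A hedged illustration of the inconsistency (paths invented for the example, not
from the original report):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RenameRepro {
  public static void main(String[] args) throws Exception {
    // On HDFS, renaming a directory onto an existing empty directory moves it
    // *inside* the destination; the local FS replaces the destination instead.
    FileSystem local = FileSystem.getLocal(new Configuration());
    local.mkdirs(new Path("/tmp/repro/a"));
    local.mkdirs(new Path("/tmp/repro/b/c"));
    local.rename(new Path("/tmp/repro/a"), new Path("/tmp/repro/b/c"));
    // HDFS semantics would leave /tmp/repro/b/c/a; the local FS leaves
    // /tmp/repro/b/c with a's contents, i.e. "a replaces c" as reported below.
  }
}
{code}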

> local file system behavior of mv into an empty directory is inconsistent with 
> HDFS
> --
>
> Key: HADOOP-8842
> URL: https://issues.apache.org/jira/browse/HADOOP-8842
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.20.2
>Reporter: Julien Le Dem
>Priority: Major
>
> Moving into an empty directory replaces the directory instead.
> See the output of the attached script to reproduce:
> repro.sh
> {noformat}
> rm -rf local_fs_bug
> mkdir local_fs_bug
> hdfs -rmr local_fs_bug
> hdfs -mkdir local_fs_bug
> echo ">>> HDFS: normal behavior"
> touch part-
> hdfs -mkdir local_fs_bug/a
> hdfs -copyFromLocal part- local_fs_bug/a
> hdfs -mkdir local_fs_bug/b
> hdfs -mkdir local_fs_bug/b/c
> echo "content of a: 1 part"
> hdfs -ls local_fs_bug/a
> echo "content of b/c: empty"
> hdfs -ls local_fs_bug/b/c
> echo "mv a b/c"
> hdfs -mv local_fs_bug/a local_fs_bug/b/c
> echo "resulting content of b/c"
> hdfs -ls local_fs_bug/b/c
> echo "a is moved inside of c"
> echo
> echo ">>> local fs: bug"
> mkdir -p local_fs_bug/a
> touch local_fs_bug/a/part-
> mkdir -p local_fs_bug/b/c
> echo "content of a: 1 part"
> hdfs -fs local -ls local_fs_bug/a
> echo "content of b/c: empty"
> hdfs -fs local -ls local_fs_bug/b/c
> echo "mv a b/c"
> hdfs -fs local -mv local_fs_bug/a local_fs_bug/b/c
> echo "resulting content of b/c"
> hdfs -fs local -ls local_fs_bug/b/c
> echo "bug: a replaces c"
> echo
> echo ">>> but it works if the destination is not empty"
> mkdir local_fs_bug/a2
> touch local_fs_bug/a2/part-
> mkdir -p local_fs_bug/b2/c2
> touch local_fs_bug/b2/c2/dummy
> echo "content of a2: 1 part"
> hdfs -fs local -ls local_fs_bug/a2
> echo "content of b2/c2: 1 dummy file"
> hdfs -fs local -ls local_fs_bug/b2/c2
> echo "mv a2 b2/c2"
> hdfs -fs local -mv local_fs_bug/a2 local_fs_bug/b2/c2
> echo "resulting content of b/c"
> hdfs -fs local -ls local_fs_bug/b2/c2
> echo "a2 is moved inside of c2"
> {noformat}
> Output:
> {noformat}
> >>> HDFS: normal behavior
> content of a: 1 part
> Found 1 items
> -rw-r--r--   3 julien g  0 2012-09-25 17:16 
> /user/julien/local_fs_bug/a/part-
> content of b/c: empty
> mv a b/c
> resulting content of b/c
> Found 1 items
> drwxr-xr-x   - julien g  0 2012-09-25 17:16 
> /user/julien/local_fs_bug/b/c/a
> a is moved inside of c
> >>> local fs: bug
> content of a: 1 part
> 12/09/25 17:16:34 WARN fs.FileSystem: "local" is a deprecated filesystem 
> name. Use "file:///" instead.
> Found 1 items
> -rw-r--r--   1 julien g  0 2012-09-25 17:16 
> /home/julien/local_fs_bug/a/part-
> content of b/c: empty
> 12/09/25 17:16:34 WARN fs.FileSystem: "local" is a deprecated filesystem 
> name. Use "file:///" instead.
> mv a b/c
> 12/09/25 17:16:35 WARN fs.FileSystem: "local" is a deprecated filesystem 
> name. Use "file:///" instead.
> resulting content of b/c
> 12/09/25 17:16:35 WARN fs.FileSystem: "local" is a deprecated filesystem 
> name. Use "file:///" instead.
> Found 1 items
> -rw-r--r--   1 julien g  0 2012-09-25 17:16 
> /home/julien/local_fs_bug/b/c/part-
> bug: a replaces c
> >>> but it works if the destination is not empty
> content of a2: 1 part
> 12/09/25 17:16:36 WARN fs.FileSystem: "local" is a deprecated filesystem 
> name. Use "file:///" instead.
> Found 1 items
> -rw-r--r--   1 julien g  0 2012-09-25 17:16 
> /home/julien/local_fs_bug/a2/part-
> content of b2/c2: 1 dummy file
> 12/09/25 17:16:37 WARN fs.FileSystem: "local" is a deprecated filesystem 
> name. Use "file:///" instead.
> Found 1 items
> -rw-r--r--   1 julien g  0 2012-09-25 17:16 
> /home/julien/local_fs_bug/b2/c2/dummy
> mv a2 b2/c2
> 12/09/25 17:16:37 WARN fs.FileSystem: "local" is a deprecated filesystem 
> name. Use "file:///" instead.
> resulting content of b/c
> 12/09/25 17:16:38 WARN fs.FileSystem: "local" is a deprecated filesystem 
> name. Use "file:///" instead.
> Found 2 items
> drwxr-xr-x   - julien g   4096 2012-09-25 17:16 
> /home/julien/local_fs_bug/b2/c2/a2
> -rw-r--r--   1 julien g  0 2012-09-25 17:16 
> /home/julien/local_fs_bug/b2/c2/dummy
> a2 is moved inside of c2
> {noformat}




[jira] [Commented] (HADOOP-15919) AliyunOSS: Enable Yarn to use OSS

2018-11-14 Thread wujinhu (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686330#comment-16686330
 ] 

wujinhu commented on HADOOP-15919:
--

[~Sammi] [~ste...@apache.org] Please help review this patch, thanks :)

I will fix the code style issue later.

> AliyunOSS: Enable Yarn to use OSS
> -
>
> Key: HADOOP-15919
> URL: https://issues.apache.org/jira/browse/HADOOP-15919
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/oss
>Affects Versions: 2.10.0, 2.9.1, 3.2.0, 3.1.1, 3.0.3
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Major
> Attachments: HADOOP-15919.001.patch, HADOOP-15919.002.patch
>
>
> Uses DelegateToFileSystem to expose AliyunOSSFileSystem as an 
> AbstractFileSystem
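
A hedged sketch of the DelegateToFileSystem pattern (class name and constructor
details assumed here, not necessarily what the attached patch does):

{code:java}
import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.DelegateToFileSystem;
import org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem;

// Minimal AbstractFileSystem adapter: delegate every operation to the
// existing FileSystem implementation for the "oss" scheme.
public class AliyunOSS extends DelegateToFileSystem {
  public AliyunOSS(URI theUri, Configuration conf)
      throws IOException, URISyntaxException {
    super(theUri, new AliyunOSSFileSystem(), conf, "oss", false);
  }
}
{code}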






[jira] [Commented] (HADOOP-15919) AliyunOSS: Enable Yarn to use OSS

2018-11-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686317#comment-16686317
 ] 

Hadoop QA commented on HADOOP-15919:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 11 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 16s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 11s{color} | {color:orange} hadoop-tools/hadoop-aliyun: The patch generated 
2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 48s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
19s{color} | {color:green} hadoop-aliyun in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 55s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15919 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948099/HADOOP-15919.002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  xml  findbugs  checkstyle  |
| uname | Linux 6a45336ffe23 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3fade86 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15517/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aliyun.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15517/testReport/ |
| Max. process+thread count | 443 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-aliyun U: hadoop-tools/hadoop-aliyun 

[jira] [Commented] (HADOOP-15931) support 'hadoop key create' with user specified key material

2018-11-14 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16686261#comment-16686261
 ] 

Hadoop QA commented on HADOOP-15931:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 19m 
44s{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 39s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  9m  
5s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  9m  5s{color} 
| {color:red} root in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 50s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 1 new + 22 unchanged - 0 fixed = 23 total (was 22) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 53s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 36s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 85m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.io.nativeio.TestNativeIO |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15931 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12948089/HADOOP-15931-01.patch 
|
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux a8ac3b06d7af 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3fade86 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15516/artifact/out/branch-mvninstall-root.txt
 |
| findbugs | v3.1.0-RC1 |
| compile | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15516/artifact/out/patch-compile-root.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15516/artifact/out/patch-compile-root.txt
 |
| checkstyle | 

[jira] [Updated] (HADOOP-15918) Namenode gets stuck when deleting large dir in trash

2018-11-14 Thread Tao Jie (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Jie updated HADOOP-15918:
-
Attachment: HADOOP-15918.002.patch

> Namenode gets stuck when deleting large dir in trash
> 
>
> Key: HADOOP-15918
> URL: https://issues.apache.org/jira/browse/HADOOP-15918
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.2, 3.1.0
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Major
> Attachments: HADOOP-15918.001.patch, HADOOP-15918.002.patch, 
> HDFS-13769.001.patch, HDFS-13769.002.patch, HDFS-13769.003.patch, 
> HDFS-13769.004.patch
>
>
> Similar to the situation discussed in HDFS-13671, the Namenode gets stuck for
> a long time when deleting a trash dir with a large amount of data. We found
> this log in the namenode:
> {quote}
> 2018-06-08 20:00:59,042 INFO namenode.FSNamesystem 
> (FSNamesystemLock.java:writeUnlock(252)) - FSNamesystem write lock held for 
> 23018 ms via
> java.lang.Thread.getStackTrace(Thread.java:1552)
> org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:1033)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystemLock.writeUnlock(FSNamesystemLock.java:254)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.writeUnlock(FSNamesystem.java:1567)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:2820)
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:1047)
> {quote}
> One simple solution is to avoid deleting a large amount of data in one delete
> RPC call. We implement a TrashPolicy that divides the delete operation into
> several delete RPCs, so that each single deletion does not delete too many
> files.
> Any thoughts? [~linyiqun]
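
A rough sketch of the batching idea (hypothetical helper, not the attached
TrashPolicy patch):

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BatchedTrashDelete {
  // Issue one delete RPC per immediate child of the trash dir, so each
  // namenode write-lock section stays small, instead of one huge recursive
  // delete. A real policy would recurse further when a child subtree is
  // itself still too large.
  static void deleteTrashDirInBatches(FileSystem fs, Path trashDir) throws IOException {
    for (FileStatus child : fs.listStatus(trashDir)) {
      fs.delete(child.getPath(), true);
    }
    fs.delete(trashDir, false); // now empty
  }
}
{code}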






[jira] [Updated] (HADOOP-15917) AliyunOSS: fix incorrect ReadOps and WriteOps in statistics

2018-11-14 Thread wujinhu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wujinhu updated HADOOP-15917:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> AliyunOSS: fix incorrect ReadOps and WriteOps in statistics
> ---
>
> Key: HADOOP-15917
> URL: https://issues.apache.org/jira/browse/HADOOP-15917
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 2.9.1, 3.2.0, 3.1.1, 3.0.3
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Major
> Fix For: 2.10.0, 3.0.4, 3.3.0, 3.1.2, 3.2.1, 2.9.3
>
> Attachments: HADOOP-15917.001.patch, HADOOP-15917.002.patch
>
>







[jira] [Updated] (HADOOP-15919) AliyunOSS: Enable Yarn to use OSS

2018-11-14 Thread wujinhu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wujinhu updated HADOOP-15919:
-
Attachment: HADOOP-15919.002.patch

> AliyunOSS: Enable Yarn to use OSS
> -
>
> Key: HADOOP-15919
> URL: https://issues.apache.org/jira/browse/HADOOP-15919
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/oss
>Affects Versions: 2.10.0, 2.9.1, 3.2.0, 3.1.1, 3.0.3
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Major
> Attachments: HADOOP-15919.001.patch, HADOOP-15919.002.patch
>
>
> Uses DelegateToFileSystem to expose AliyunOSSFileSystem as an 
> AbstractFileSystem


