[jira] [Commented] (HADOOP-15899) Update AWS Java SDK versions in NOTICE.txt

2018-11-12 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16684801#comment-16684801
 ] 

Akira Ajisaka commented on HADOOP-15899:


Nice catch, [~ste...@apache.org]. Filed HADOOP-15926.

> Update AWS Java SDK versions in NOTICE.txt
> --
>
> Key: HADOOP-15899
> URL: https://issues.apache.org/jira/browse/HADOOP-15899
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 2.10.0, 2.9.2, 3.0.4, 3.3.0, 3.1.2, 3.2.1
>
> Attachments: HADOOP-15899-branch-2-01.patch, 
> HADOOP-15899-branch-3.1-01.patch, HADOOP-15899.01.patch
>
>
>  
> The version of the AWS Java SDK documented in NOTICE.txt and the
> version bundled in the binary tarball differ in most branches.
> || ||pom.xml||NOTICE.txt||
> |trunk|1.11.375|1.11.134|
> |branch-3.2|1.11.375|1.11.134|
> |branch-3.1|1.11.271|1.11.134|
> |branch-3.0|1.11.271|1.11.134|
> |branch-2|1.11.199|1.10.6|
> |branch-2.9|1.11.199|1.10.6|
> |branch-2.8|1.10.6|1.10.6|
>  
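The mismatch in the table above can be caught mechanically. A minimal sketch, with the version strings hard-coded from the trunk row of the table rather than parsed out of the real pom.xml and NOTICE.txt:

```shell
# Compare the SDK version the build bundles against the one NOTICE.txt
# documents; both values below are copied from the table (trunk row).
pom_version="1.11.375"      # as listed for hadoop-project/pom.xml
notice_version="1.11.134"   # as listed for NOTICE.txt
if [ "$pom_version" != "$notice_version" ]; then
  echo "NOTICE.txt is stale: pom=$pom_version notice=$notice_version"
fi
```

A real check would grep the two files instead of hard-coding the values.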



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15926) Document upgrading the section in NOTICE.txt when upgrading the version of AWS SDK

2018-11-12 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HADOOP-15926:
--

 Summary: Document upgrading the section in NOTICE.txt when 
upgrading the version of AWS SDK
 Key: HADOOP-15926
 URL: https://issues.apache.org/jira/browse/HADOOP-15926
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Reporter: Akira Ajisaka


Reported by [~ste...@apache.org]

bq. Hadoop 3.2 + has a section in 
hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md about 
what to do when updating the SDK...this needs to be added there. Anyone fancy 
supplying a patch?

https://issues.apache.org/jira/browse/HADOOP-15899?focusedCommentId=16675121&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16675121






[jira] [Commented] (HADOOP-15912) start-build-env.sh still creates an invalid /etc/sudoers.d/hadoop-build-${USER_ID} file entry after HADOOP-15802

2018-11-12 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16684732#comment-16684732
 ] 

Hudson commented on HADOOP-15912:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15416 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15416/])
HADOOP-15912. start-build-env.sh still creates an invalid (tasanuma: rev 
a67642c3776156ee941f12f9481160c729c56027)
* (edit) start-build-env.sh


> start-build-env.sh still creates an invalid 
> /etc/sudoers.d/hadoop-build-${USER_ID} file entry after HADOOP-15802
> 
>
> Key: HADOOP-15912
> URL: https://issues.apache.org/jira/browse/HADOOP-15912
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
> Environment: CentOS 7.5 and Docker 1.13.1
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.3.0, 3.1.2, 3.2.1
>
> Attachments: HADOOP-15912.01.patch
>
>
> {noformat}
> RUN echo -e "${USER_NAME}\tALL=NOPASSWD: ALL" > 
> "/etc/sudoers.d/hadoop-build-${USER_ID}"
> {noformat}
> creates
> {noformat}
> -e ALL=NOPASSWD: ALL
> {noformat}
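The root cause is that the image's /bin/sh builtin echo does not interpret the -e option, so the flag is emitted literally, producing the invalid sudoers line quoted above. A sketch of the portable alternative, with an illustrative USER_NAME value (printf expands \t itself, so no flag is needed):

```shell
# Illustrative value; in start-build-env.sh this comes from the host user.
USER_NAME="builder"
# Portable fix: printf interprets \t in its format string on any POSIX
# shell, unlike `echo -e`, which some shells print verbatim.
printf '%s\tALL=NOPASSWD: ALL\n' "${USER_NAME}"
```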






[jira] [Updated] (HADOOP-15925) The config and log of gpg-agent are removed in create-release script

2018-11-12 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15925:
---
 Labels: newbie  (was: )
Component/s: build

> The config and log of gpg-agent are removed in create-release script
> 
>
> Key: HADOOP-15925
> URL: https://issues.apache.org/jira/browse/HADOOP-15925
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Priority: Major
>  Labels: newbie
>
> The config file and log file of gpg-agent are located in the
> {{patchprocess}} directory, which {{git clean -xdf}} then removes, so
> the gpg-agent config and log are lost.






[jira] [Commented] (HADOOP-15912) start-build-env.sh still creates an invalid /etc/sudoers.d/hadoop-build-${USER_ID} file entry after HADOOP-15802

2018-11-12 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16684730#comment-16684730
 ] 

Akira Ajisaka commented on HADOOP-15912:


Thank you, [~jonBoone] and [~tasanuma0829]!

> start-build-env.sh still creates an invalid 
> /etc/sudoers.d/hadoop-build-${USER_ID} file entry after HADOOP-15802
> 
>
> Key: HADOOP-15912
> URL: https://issues.apache.org/jira/browse/HADOOP-15912
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
> Environment: CentOS 7.5 and Docker 1.13.1
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.3.0, 3.1.2, 3.2.1
>
> Attachments: HADOOP-15912.01.patch
>
>
> {noformat}
> RUN echo -e "${USER_NAME}\tALL=NOPASSWD: ALL" > 
> "/etc/sudoers.d/hadoop-build-${USER_ID}"
> {noformat}
> creates
> {noformat}
> -e ALL=NOPASSWD: ALL
> {noformat}






[jira] [Commented] (HADOOP-15925) The config and log of gpg-agent are removed in create-release script

2018-11-12 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16684727#comment-16684727
 ] 

Akira Ajisaka commented on HADOOP-15925:


The {{git clean --exclude=}} option can make it skip that directory.
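A self-contained sketch of that suggestion: clean untracked files but spare the directory holding the gpg-agent state. The directory name comes from the issue report; the throwaway repository here is purely illustrative.

```shell
# Build a scratch repo with one file to keep and one to clean away.
repo="$(mktemp -d)"
cd "$repo"
git init -q
mkdir patchprocess
touch patchprocess/gpg-agent.conf stray-artifact
# -x ignores .gitignore rules, but patterns given with -e/--exclude
# still apply, so patchprocess/ survives while stray-artifact is removed.
git clean -xdf --exclude=patchprocess
```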

> The config and log of gpg-agent are removed in create-release script
> 
>
> Key: HADOOP-15925
> URL: https://issues.apache.org/jira/browse/HADOOP-15925
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Priority: Major
>
> The config file and log file of gpg-agent are located in the
> {{patchprocess}} directory, which {{git clean -xdf}} then removes, so
> the gpg-agent config and log are lost.






[jira] [Updated] (HADOOP-15912) start-build-env.sh still creates an invalid /etc/sudoers.d/hadoop-build-${USER_ID} file entry after HADOOP-15802

2018-11-12 Thread Takanobu Asanuma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HADOOP-15912:
--
   Resolution: Fixed
Fix Version/s: 3.2.1
   3.1.2
   3.3.0
   Status: Resolved  (was: Patch Available)

Committed to trunk, branch-3.2 and branch-3.1. Thanks!

> start-build-env.sh still creates an invalid 
> /etc/sudoers.d/hadoop-build-${USER_ID} file entry after HADOOP-15802
> 
>
> Key: HADOOP-15912
> URL: https://issues.apache.org/jira/browse/HADOOP-15912
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
> Environment: CentOS 7.5 and Docker 1.13.1
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.3.0, 3.1.2, 3.2.1
>
> Attachments: HADOOP-15912.01.patch
>
>
> {noformat}
> RUN echo -e "${USER_NAME}\tALL=NOPASSWD: ALL" > 
> "/etc/sudoers.d/hadoop-build-${USER_ID}"
> {noformat}
> creates
> {noformat}
> -e ALL=NOPASSWD: ALL
> {noformat}






[jira] [Created] (HADOOP-15925) The config and log of gpg-agent are removed in create-release script

2018-11-12 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HADOOP-15925:
--

 Summary: The config and log of gpg-agent are removed in 
create-release script
 Key: HADOOP-15925
 URL: https://issues.apache.org/jira/browse/HADOOP-15925
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Akira Ajisaka


The config file and log file of gpg-agent are located in the
{{patchprocess}} directory, which {{git clean -xdf}} then removes, so the
gpg-agent config and log are lost.






[jira] [Commented] (HADOOP-15923) create-release script should set max-cache-ttl as well as default-cache-ttl for gpg-agent

2018-11-12 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16684722#comment-16684722
 ] 

Hudson commented on HADOOP-15923:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15415 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15415/])
HADOOP-15923. create-release script should set max-cache-ttl as well as 
(aajisaka: rev 703b2860a49577629e7b3ef461d8a61292e79c88)
* (edit) dev-support/bin/create-release


> create-release script should set max-cache-ttl as well as default-cache-ttl 
> for gpg-agent
> -
>
> Key: HADOOP-15923
> URL: https://issues.apache.org/jira/browse/HADOOP-15923
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Blocker
> Fix For: 2.10.0, 2.9.2, 3.0.4, 3.3.0, 3.1.2, 2.8.6, 3.2.1
>
> Attachments: HADOOP-15923.01.patch
>
>
> The create-release script sets default-cache-ttl for gpg-agent to
> 14400 (4 hours); however, max-cache-ttl is not set, and its default
> value is 7200 (2 hours). If a full mvn install takes more than 2
> hours, gpg-agent will fail to sign.
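A sketch of the resulting gpg-agent settings: both TTLs (in seconds) must cover the build, since the effective cache lifetime is capped by max-cache-ttl. 14400 is the 4-hour value the script already uses for default-cache-ttl; a temp file stands in for the real ~/.gnupg/gpg-agent.conf here.

```shell
# Write both TTLs so a passphrase cached at the start of the build is
# still usable when signing happens hours later.
conf="$(mktemp)"   # stand-in for ~/.gnupg/gpg-agent.conf
cat > "$conf" <<'EOF'
default-cache-ttl 14400
max-cache-ttl 14400
EOF
grep -c 'cache-ttl' "$conf"
```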






[jira] [Commented] (HADOOP-15912) start-build-env.sh still creates an invalid /etc/sudoers.d/hadoop-build-${USER_ID} file entry after HADOOP-15802

2018-11-12 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16684721#comment-16684721
 ] 

Takanobu Asanuma commented on HADOOP-15912:
---

+1. Thanks [~ajisakaa] for the contribution, and [~jonBoone] for the review. I 
will commit it soon.

> start-build-env.sh still creates an invalid 
> /etc/sudoers.d/hadoop-build-${USER_ID} file entry after HADOOP-15802
> 
>
> Key: HADOOP-15912
> URL: https://issues.apache.org/jira/browse/HADOOP-15912
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
> Environment: CentOS 7.5 and Docker 1.13.1
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-15912.01.patch
>
>
> {noformat}
> RUN echo -e "${USER_NAME}\tALL=NOPASSWD: ALL" > 
> "/etc/sudoers.d/hadoop-build-${USER_ID}"
> {noformat}
> creates
> {noformat}
> -e ALL=NOPASSWD: ALL
> {noformat}






[jira] [Updated] (HADOOP-15923) create-release script should set max-cache-ttl as well as default-cache-ttl for gpg-agent

2018-11-12 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15923:
---
   Resolution: Fixed
Fix Version/s: 3.2.1
   2.8.6
   3.1.2
   3.3.0
   3.0.4
   2.9.2
   2.10.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk, branch-3.2, branch-3.1, branch-3.0, branch-2, 
branch-2.9, branch-2.9.2, and branch-2.8. Thanks [~tasanuma0829] for the review!

> create-release script should set max-cache-ttl as well as default-cache-ttl 
> for gpg-agent
> -
>
> Key: HADOOP-15923
> URL: https://issues.apache.org/jira/browse/HADOOP-15923
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Blocker
> Fix For: 2.10.0, 2.9.2, 3.0.4, 3.3.0, 3.1.2, 2.8.6, 3.2.1
>
> Attachments: HADOOP-15923.01.patch
>
>
> The create-release script sets default-cache-ttl for gpg-agent to
> 14400 (4 hours); however, max-cache-ttl is not set, and its default
> value is 7200 (2 hours). If a full mvn install takes more than 2
> hours, gpg-agent will fail to sign.






[jira] [Commented] (HADOOP-15923) create-release script should set max-cache-ttl as well as default-cache-ttl for gpg-agent

2018-11-12 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16684705#comment-16684705
 ] 

Takanobu Asanuma commented on HADOOP-15923:
---

Thanks for catching the issue, [~ajisakaa]. +1.

> create-release script should set max-cache-ttl as well as default-cache-ttl 
> for gpg-agent
> -
>
> Key: HADOOP-15923
> URL: https://issues.apache.org/jira/browse/HADOOP-15923
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Blocker
> Attachments: HADOOP-15923.01.patch
>
>
> The create-release script sets default-cache-ttl for gpg-agent to
> 14400 (4 hours); however, max-cache-ttl is not set, and its default
> value is 7200 (2 hours). If a full mvn install takes more than 2
> hours, gpg-agent will fail to sign.






[jira] [Commented] (HADOOP-15924) Hadoop aws cannot be used with shaded jars

2018-11-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16684602#comment-16684602
 ] 

Hadoop QA commented on HADOOP-15924:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 30 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 22s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
37s{color} | {color:red} hadoop-aws in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 43s{color} 
| {color:red} hadoop-aws in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 35s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.s3a.commit.staging.TestStagingCommitter |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15924 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12947922/HADOOP-15924.00.patch 
|
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  xml  findbugs  checkstyle  |
| uname | Linux a33bc56970f2 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / b6d4e19 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15509/artifact/out/patch-mvninstall-hadoop-tools_hadoop-aws.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15509/artifact/out/patch-unit-hadoop-tools_hadoop-aws.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Bui

[jira] [Commented] (HADOOP-15924) Hadoop aws cannot be used with shaded jars

2018-11-12 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16684558#comment-16684558
 ] 

Bharat Viswanadham commented on HADOOP-15924:
-

Marking this as Patch Available to get a Jenkins run on it.

> Hadoop aws cannot be used with shaded jars
> --
>
> Key: HADOOP-15924
> URL: https://issues.apache.org/jira/browse/HADOOP-15924
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Priority: Major
> Attachments: HADOOP-15924.00.patch
>
>
> The issue is that hadoop-aws cannot be used with the shaded jars.
> The recommended client-side jars for Hadoop 3 are the shaded
> client-api/runtime jars. They shade guava etc., so a class such as
> SemaphoredDelegatingExecutor refers to the shaded guava classes.
> hadoop-aws ships the S3AFileSystem implementation, which calls the
> SemaphoredDelegatingExecutor constructor that takes an unshaded guava
> ListeningExecutorService. When S3AFileSystem is created against the
> client-api jar, it finds SemaphoredDelegatingExecutor but not the
> expected constructor, because in that jar the constructor takes the
> shaded guava ListeningExecutorService.
> So essentially none of the aws/azure/adl Hadoop FS implementations
> will work with the shaded Hadoop client runtime jars.
>  
> This Jira tracks the work required to make hadoop-aws work with the
> shaded Hadoop client jars.
>  
> A possible solution is for hadoop-aws to depend on the Hadoop shaded
> jars; that way the mismatch does not arise. Currently, hadoop-aws
> depends on aws-sdk-bundle, and all other jars are provided
> dependencies.
>  
> cc [~steve_l]
>  
>  
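The proposed direction ("hadoop-aws depends on hadoop shaded jars") would amount to a dependency block along these lines. This is a hedged sketch, not the attached patch: the artifact IDs are Hadoop's real shaded client artifacts, but the version is illustrative.

```xml
<!-- Resolve hadoop-aws together with the shaded client jars so both
     sides see the same (shaded) guava types. Version is illustrative. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client-api</artifactId>
  <version>3.2.1</version>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client-runtime</artifactId>
  <version>3.2.1</version>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-aws</artifactId>
  <version>3.2.1</version>
</dependency>
```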






[jira] [Updated] (HADOOP-15924) Hadoop aws cannot be used with shaded jars

2018-11-12 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HADOOP-15924:

Status: Patch Available  (was: Open)

> Hadoop aws cannot be used with shaded jars
> --
>
> Key: HADOOP-15924
> URL: https://issues.apache.org/jira/browse/HADOOP-15924
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Priority: Major
> Attachments: HADOOP-15924.00.patch
>
>
> The issue is that hadoop-aws cannot be used with the shaded jars.
> The recommended client-side jars for Hadoop 3 are the shaded
> client-api/runtime jars. They shade guava etc., so a class such as
> SemaphoredDelegatingExecutor refers to the shaded guava classes.
> hadoop-aws ships the S3AFileSystem implementation, which calls the
> SemaphoredDelegatingExecutor constructor that takes an unshaded guava
> ListeningExecutorService. When S3AFileSystem is created against the
> client-api jar, it finds SemaphoredDelegatingExecutor but not the
> expected constructor, because in that jar the constructor takes the
> shaded guava ListeningExecutorService.
> So essentially none of the aws/azure/adl Hadoop FS implementations
> will work with the shaded Hadoop client runtime jars.
>  
> This Jira tracks the work required to make hadoop-aws work with the
> shaded Hadoop client jars.
>  
> A possible solution is for hadoop-aws to depend on the Hadoop shaded
> jars; that way the mismatch does not arise. Currently, hadoop-aws
> depends on aws-sdk-bundle, and all other jars are provided
> dependencies.
>  
> cc [~steve_l]
>  
>  






[jira] [Commented] (HADOOP-15924) Hadoop aws cannot be used with shaded jars

2018-11-12 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16684542#comment-16684542
 ] 

Bharat Viswanadham commented on HADOOP-15924:
-

I have not run the test suite against an AWS S3 endpoint; I only ran
tests against an S3 gateway endpoint to check for CNFE errors, and saw
none.

> Hadoop aws cannot be used with shaded jars
> --
>
> Key: HADOOP-15924
> URL: https://issues.apache.org/jira/browse/HADOOP-15924
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Priority: Major
> Attachments: HADOOP-15924.00.patch
>
>
> The issue is that hadoop-aws cannot be used with the shaded jars.
> The recommended client-side jars for Hadoop 3 are the shaded
> client-api/runtime jars. They shade guava etc., so a class such as
> SemaphoredDelegatingExecutor refers to the shaded guava classes.
> hadoop-aws ships the S3AFileSystem implementation, which calls the
> SemaphoredDelegatingExecutor constructor that takes an unshaded guava
> ListeningExecutorService. When S3AFileSystem is created against the
> client-api jar, it finds SemaphoredDelegatingExecutor but not the
> expected constructor, because in that jar the constructor takes the
> shaded guava ListeningExecutorService.
> So essentially none of the aws/azure/adl Hadoop FS implementations
> will work with the shaded Hadoop client runtime jars.
>  
> This Jira tracks the work required to make hadoop-aws work with the
> shaded Hadoop client jars.
>  
> A possible solution is for hadoop-aws to depend on the Hadoop shaded
> jars; that way the mismatch does not arise. Currently, hadoop-aws
> depends on aws-sdk-bundle, and all other jars are provided
> dependencies.
>  
> cc [~steve_l]
>  
>  






[jira] [Updated] (HADOOP-15924) Hadoop aws cannot be used with shaded jars

2018-11-12 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HADOOP-15924:

Attachment: HADOOP-15924.00.patch

> Hadoop aws cannot be used with shaded jars
> --
>
> Key: HADOOP-15924
> URL: https://issues.apache.org/jira/browse/HADOOP-15924
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Priority: Major
> Attachments: HADOOP-15924.00.patch
>
>
> The issue is that hadoop-aws cannot be used with the shaded jars.
> The recommended client-side jars for Hadoop 3 are the shaded
> client-api/runtime jars. They shade guava etc., so a class such as
> SemaphoredDelegatingExecutor refers to the shaded guava classes.
> hadoop-aws ships the S3AFileSystem implementation, which calls the
> SemaphoredDelegatingExecutor constructor that takes an unshaded guava
> ListeningExecutorService. When S3AFileSystem is created against the
> client-api jar, it finds SemaphoredDelegatingExecutor but not the
> expected constructor, because in that jar the constructor takes the
> shaded guava ListeningExecutorService.
> So essentially none of the aws/azure/adl Hadoop FS implementations
> will work with the shaded Hadoop client runtime jars.
>  
> This Jira tracks the work required to make hadoop-aws work with the
> shaded Hadoop client jars.
>  
> A possible solution is for hadoop-aws to depend on the Hadoop shaded
> jars; that way the mismatch does not arise. Currently, hadoop-aws
> depends on aws-sdk-bundle, and all other jars are provided
> dependencies.
>  
>  






[jira] [Updated] (HADOOP-15924) Hadoop aws cannot be used with shaded jars

2018-11-12 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HADOOP-15924:

Description: 
The issue is that hadoop-aws cannot be used with the shaded jars.

The recommended client-side jars for Hadoop 3 are the shaded
client-api/runtime jars. They shade guava etc., so a class such as
SemaphoredDelegatingExecutor refers to the shaded guava classes.

hadoop-aws ships the S3AFileSystem implementation, which calls the
SemaphoredDelegatingExecutor constructor that takes an unshaded guava
ListeningExecutorService. When S3AFileSystem is created against the
client-api jar, it finds SemaphoredDelegatingExecutor but not the
expected constructor, because in that jar the constructor takes the
shaded guava ListeningExecutorService.

So essentially none of the aws/azure/adl Hadoop FS implementations will
work with the shaded Hadoop client runtime jars.

This Jira tracks the work required to make hadoop-aws work with the
shaded Hadoop client jars.

A possible solution is for hadoop-aws to depend on the Hadoop shaded
jars; that way the mismatch does not arise. Currently, hadoop-aws
depends on aws-sdk-bundle, and all other jars are provided dependencies.

cc [~steve_l]

 

 

  was:
Issue is hadoop-aws cannot be used with shaded jars.

The recommended client side jars for hadoop 3 are client-api/runtime shaded 
jars.
They shade guava etc. So something like SemaphoredDelegatingExecutor refers to 
shaded guava classes.

hadoop-aws has S3AFileSystem implementation which refers to 
SemaphoredDelegatingExecutor with unshaded guava ListeningService in the 
constructor. When S3AFileSystem is created then it uses the hadoop-api jar and 
finds SemaphoredDelegatingExecutor but not the right constructor because in 
client-api jar SemaphoredDelegatingExecutor constructor has the shaded guava 
ListenerService.

So essentially none of the aws/azure/adl hadoop FS implementations will work 
with the shaded Hadoop client runtime jars.

 

This Jira is created to track the work required to make hadoop-aws work with 
hadoop shaded client jars.

 

The solution for this can be, hadoop-aws depends on hadoop shaded jars. In this 
way, we shall not see the issue. Currently, hadoop-aws depends on 
aws-sdk-bundle and all other remaining jars are provided dependencies.

 

 


> Hadoop aws cannot be used with shaded jars
> --
>
> Key: HADOOP-15924
> URL: https://issues.apache.org/jira/browse/HADOOP-15924
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Priority: Major
> Attachments: HADOOP-15924.00.patch
>
>
> The issue is that hadoop-aws cannot be used with the shaded jars.
> The recommended client-side jars for Hadoop 3 are the shaded
> client-api/runtime jars. They shade guava etc., so a class such as
> SemaphoredDelegatingExecutor refers to the shaded guava classes.
> hadoop-aws ships the S3AFileSystem implementation, which calls the
> SemaphoredDelegatingExecutor constructor that takes an unshaded guava
> ListeningExecutorService. When S3AFileSystem is created against the
> client-api jar, it finds SemaphoredDelegatingExecutor but not the
> expected constructor, because in that jar the constructor takes the
> shaded guava ListeningExecutorService.
> So essentially none of the aws/azure/adl Hadoop FS implementations
> will work with the shaded Hadoop client runtime jars.
>  
> This Jira tracks the work required to make hadoop-aws work with the
> shaded Hadoop client jars.
>  
> A possible solution is for hadoop-aws to depend on the Hadoop shaded
> jars; that way the mismatch does not arise. Currently, hadoop-aws
> depends on aws-sdk-bundle, and all other jars are provided
> dependencies.
>  
> cc [~steve_l]
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15924) Hadoop aws cannot be used with shaded jars

2018-11-12 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HADOOP-15924:
---

 Summary: Hadoop aws cannot be used with shaded jars
 Key: HADOOP-15924
 URL: https://issues.apache.org/jira/browse/HADOOP-15924
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Bharat Viswanadham


The issue is that hadoop-aws cannot be used with the shaded jars.

The recommended client-side jars for Hadoop 3 are the shaded client-api/client-runtime 
jars. They shade guava and other dependencies, so a class such as 
SemaphoredDelegatingExecutor refers to the shaded guava classes.

hadoop-aws has the S3AFileSystem implementation, which refers to 
SemaphoredDelegatingExecutor with the unshaded guava ListeningExecutorService in its 
constructor. When S3AFileSystem is created, it uses the client-api jar and finds 
SemaphoredDelegatingExecutor, but not the right constructor, because in the 
client-api jar the SemaphoredDelegatingExecutor constructor takes the shaded guava 
ListeningExecutorService.

So essentially none of the aws/azure/adl hadoop FS implementations will work 
with the shaded Hadoop client runtime jars.

 

This Jira is created to track the work required to make hadoop-aws work with 
hadoop shaded client jars.

 

The solution for this can be to make hadoop-aws depend on the shaded Hadoop jars; 
that way the mismatch would not occur. Currently, hadoop-aws depends on 
aws-sdk-bundle, and all other jars are provided dependencies.

 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12125) Retrying UnknownHostException on a proxy does not actually retry hostname resolution

2018-11-12 Thread John Zhuge (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-12125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16684427#comment-16684427
 ] 

John Zhuge commented on HADOOP-12125:
-

Working on it [~medb].

> Retrying UnknownHostException on a proxy does not actually retry hostname 
> resolution
> 
>
> Key: HADOOP-12125
> URL: https://issues.apache.org/jira/browse/HADOOP-12125
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Reporter: Jason Lowe
>Assignee: John Zhuge
>Priority: Major
>
> When RetryInvocationHandler attempts to retry an UnknownHostException the 
> hostname fails to be resolved again.  The InetSocketAddress in the 
> ConnectionId has cached the fact that the hostname is unresolvable, and when 
> the proxy tries to setup a new Connection object with that ConnectionId it 
> checks if the (cached) resolution result is unresolved and immediately throws.
> The end result is we sleep and retry for no benefit.  The hostname resolution 
> is never attempted again.
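The caching behavior can be seen directly with java.net.InetSocketAddress, independent of the Hadoop IPC code: once an instance is constructed against an unresolvable host, it stays unresolved forever, so a retry only helps if a fresh instance is built. A minimal sketch (the hostname is a deliberately unresolvable placeholder):

```java
import java.net.InetSocketAddress;

/** Shows that InetSocketAddress caches its DNS resolution result. */
public class CachedResolution {
    public static void main(String[] args) {
        // ".invalid" is reserved (RFC 2606) and never resolves.
        InetSocketAddress addr = new InetSocketAddress("no-such-host.invalid", 8020);
        System.out.println(addr.isUnresolved()); // true

        // The same instance never re-resolves, even if DNS later recovers.
        // A retry must construct a new InetSocketAddress, which performs a
        // fresh lookup -- this is what the ConnectionId code fails to do.
        InetSocketAddress retry =
            new InetSocketAddress(addr.getHostName(), addr.getPort());
        System.out.println(retry.isUnresolved()); // still true for this host,
        // but this instance at least attempted resolution again.
    }
}
```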



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15923) create-release script should set max-cache-ttl as well as default-cache-ttl for gpg-agent

2018-11-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16684246#comment-16684246
 ] 

Hadoop QA commented on HADOOP-15923:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 22s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 1s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
15s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 30s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 29m 20s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15923 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12947870/HADOOP-15923.01.patch 
|
| Optional Tests |  dupname  asflicense  shellcheck  shelldocs  |
| uname | Linux c26962c43adf 4.4.0-134-generic #160~14.04.1-Ubuntu SMP Fri Aug 
17 11:07:07 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 18fe65d |
| maven | version: Apache Maven 3.3.9 |
| shellcheck | v0.4.6 |
| Max. process+thread count | 306 (vs. ulimit of 1) |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15508/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> create-release script should set max-cache-ttl as well as default-cache-ttl 
> for gpg-agent
> -
>
> Key: HADOOP-15923
> URL: https://issues.apache.org/jira/browse/HADOOP-15923
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Blocker
> Attachments: HADOOP-15923.01.patch
>
>
> create-release script sets default-cache-ttl for gpg-agent to 14400 (4 
> hours), however, max-cache-ttl is not set and the default value is 7200 (2 
> hours).
> If mvn full install takes more than 2 hours, gpg-agent will fail to sign.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15923) create-release script should set max-cache-ttl as well as default-cache-ttl for gpg-agent

2018-11-12 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15923:
---
Summary: create-release script should set max-cache-ttl as well as 
default-cache-ttl for gpg-agent  (was: create-release script should set 
max-cache-ttl for gpg-agent as well as default-cache-ttl)

> create-release script should set max-cache-ttl as well as default-cache-ttl 
> for gpg-agent
> -
>
> Key: HADOOP-15923
> URL: https://issues.apache.org/jira/browse/HADOOP-15923
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Blocker
> Attachments: HADOOP-15923.01.patch
>
>
> create-release script sets default-cache-ttl for gpg-agent to 14400 (4 
> hours), however, max-cache-ttl is not set and the default value is 7200 (2 
> hours).
> If mvn full install takes more than 2 hours, gpg-agent will fail to sign.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15923) create-release script should set max-cache-ttl for gpg-agent as well as default-cache-ttl

2018-11-12 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15923:
---
Status: Patch Available  (was: Open)

> create-release script should set max-cache-ttl for gpg-agent as well as 
> default-cache-ttl
> -
>
> Key: HADOOP-15923
> URL: https://issues.apache.org/jira/browse/HADOOP-15923
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Blocker
> Attachments: HADOOP-15923.01.patch
>
>
> create-release script sets default-cache-ttl for gpg-agent to 14400 (4 
> hours), however, max-cache-ttl is not set and the default value is 7200 (2 
> hours).
> If mvn full install takes more than 2 hours, gpg-agent will fail to sign.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15923) create-release script should set max-cache-ttl for gpg-agent as well as default-cache-ttl

2018-11-12 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15923:
---
Attachment: HADOOP-15923.01.patch

> create-release script should set max-cache-ttl for gpg-agent as well as 
> default-cache-ttl
> -
>
> Key: HADOOP-15923
> URL: https://issues.apache.org/jira/browse/HADOOP-15923
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Blocker
> Attachments: HADOOP-15923.01.patch
>
>
> create-release script sets default-cache-ttl for gpg-agent to 14400 (4 
> hours), however, max-cache-ttl is not set and the default value is 7200 (2 
> hours).
> If mvn full install takes more than 2 hours, gpg-agent will fail to sign.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15923) create-release script should set max-cache-ttl for gpg-agent as well as default-cache-ttl

2018-11-12 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15923:
---
Description: 
create-release script sets default-cache-ttl for gpg-agent to 14400 (4 hours), 
however, max-cache-ttl is not set and the default value is 7200 (2 hours).
If mvn full install takes more than 2 hours, gpg-agent will fail to sign.

  was:
create-release script sets default-cache-ttl for gpg-agent to 14400 (4 hours), 
however, max-cache-ttl is not set and the default value is 7200 (2 hours).
If mvn full install takes more than 2 hours, gpg-agent will fail to sign.

Unfortunately maven full install in branch-2.9.2 took 2.5 hours in my Docker on 
MacBookPro.



> create-release script should set max-cache-ttl for gpg-agent as well as 
> default-cache-ttl
> -
>
> Key: HADOOP-15923
> URL: https://issues.apache.org/jira/browse/HADOOP-15923
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Blocker
>
> create-release script sets default-cache-ttl for gpg-agent to 14400 (4 
> hours), however, max-cache-ttl is not set and the default value is 7200 (2 
> hours).
> If mvn full install takes more than 2 hours, gpg-agent will fail to sign.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15923) create-release script should set max-cache-ttl for gpg-agent as well as default-cache-ttl

2018-11-12 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15923:
---
Description: 
create-release script sets default-cache-ttl for gpg-agent to 14400 (4 hours), 
however, max-cache-ttl is not set and the default value is 7200 (2 hours).
If mvn full install takes more than 2 hours, gpg-agent will fail to sign.

Unfortunately maven full install in branch-2.9.2 took 2.5 hours in my Docker on 
MacBookPro.

  was:
create-release script sets default-cache-ttl for gpg-agent to 14400 (4 hours), 
however, max-cache-ttl is not set and the default value is 7200 (2 hours).
If mvn full install takes more than 2 hours, gpg-agent will fail to sign.

Maven full install in branch-2.9.2 takes 2.5 hours in my Docker on MacBookPro.


> create-release script should set max-cache-ttl for gpg-agent as well as 
> default-cache-ttl
> -
>
> Key: HADOOP-15923
> URL: https://issues.apache.org/jira/browse/HADOOP-15923
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Blocker
>
> create-release script sets default-cache-ttl for gpg-agent to 14400 (4 
> hours), however, max-cache-ttl is not set and the default value is 7200 (2 
> hours).
> If mvn full install takes more than 2 hours, gpg-agent will fail to sign.
> Unfortunately maven full install in branch-2.9.2 took 2.5 hours in my Docker 
> on MacBookPro.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15923) create-release script should set max-cache-ttl for gpg-agent as well as default-cache-ttl

2018-11-12 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HADOOP-15923:
--

 Summary: create-release script should set max-cache-ttl for 
gpg-agent as well as default-cache-ttl
 Key: HADOOP-15923
 URL: https://issues.apache.org/jira/browse/HADOOP-15923
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Reporter: Akira Ajisaka
Assignee: Akira Ajisaka


create-release script sets default-cache-ttl for gpg-agent to 14400 (4 hours), 
however, max-cache-ttl is not set and the default value is 7200 (2 hours).
If mvn full install takes more than 2 hours, gpg-agent will fail to sign.

Maven full install in branch-2.9.2 takes 2.5 hours in my Docker on MacBookPro.
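The two TTLs interact: cache entries expire at max-cache-ttl no matter what default-cache-ttl says, so raising only the latter leaves the effective lifetime at 2 hours. A sketch of starting the agent with both raised to 4 hours (the invocation and values are illustrative, not the actual create-release code):

```shell
# Raise both TTLs so a signing key stays cached for the whole build;
# max-cache-ttl caps the effective lifetime regardless of default-cache-ttl.
gpg-agent --daemon --default-cache-ttl 14400 --max-cache-ttl 14400
```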



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15922) DelegationTokenAuthenticationFilter get wrong doAsUser since it does not decode URL

2018-11-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16683987#comment-16683987
 ] 

Hadoop QA commented on HADOOP-15922:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 25m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 13s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 21m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 21s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 
56s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
58s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}126m 22s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15922 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12947819/HADOOP-15922.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux a542ecdae351 3.13.0-144-generic #193-Ubuntu SMP Thu Mar 15 
17:03:53 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3c9d97b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15507/testReport/ |
| Max. process+thread count | 1623 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15507/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Delegation

[jira] [Updated] (HADOOP-15922) DelegationTokenAuthenticationFilter get wrong doAsUser since it does not decode URL

2018-11-12 Thread He Xiaoqiao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

He Xiaoqiao updated HADOOP-15922:
-
Status: Patch Available  (was: Open)

Submitted the v001 patch to trigger Jenkins. A unit test will follow later.

> DelegationTokenAuthenticationFilter get wrong doAsUser since it does not 
> decode URL
> ---
>
> Key: HADOOP-15922
> URL: https://issues.apache.org/jira/browse/HADOOP-15922
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, kms
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HADOOP-15922.001.patch
>
>
> DelegationTokenAuthenticationFilter gets the wrong doAsUser when the proxy user 
> from the client is a complete Kerberos name (e.g., user/hostn...@realm.com, 
> which is actually acceptable), because DelegationTokenAuthenticationFilter does 
> not decode the {{doas}} parameter in the URL, which is encoded with 
> {{URLEncoder}} on the client.
> Taking KMS as an example:
> a. KMSClientProvider creates a connection to the KMS server using 
> DelegationTokenAuthenticatedURL#openConnection.
> b. If KMSClientProvider acts as a doAsUser, it puts {{doas}} with the 
> URL-encoded user as one parameter of the HTTP request. 
> {code:java}
> // proxyuser
> if (doAs != null) {
>   extraParams.put(DO_AS, URLEncoder.encode(doAs, "UTF-8"));
> }
> {code}
> c. When the KMS server receives the request, it does not decode the proxy user.
> As a result, the KMS server will get the wrong proxy user if the proxy user is 
> a complete Kerberos name or includes special characters. Authentication and 
> authorization exceptions then follow.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15922) DelegationTokenAuthenticationFilter get wrong doAsUser since it does not decode URL

2018-11-12 Thread He Xiaoqiao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

He Xiaoqiao updated HADOOP-15922:
-
Attachment: HADOOP-15922.001.patch

> DelegationTokenAuthenticationFilter get wrong doAsUser since it does not 
> decode URL
> ---
>
> Key: HADOOP-15922
> URL: https://issues.apache.org/jira/browse/HADOOP-15922
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, kms
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HADOOP-15922.001.patch
>
>
> DelegationTokenAuthenticationFilter gets the wrong doAsUser when the proxy user 
> from the client is a complete Kerberos name (e.g., user/hostn...@realm.com, 
> which is actually acceptable), because DelegationTokenAuthenticationFilter does 
> not decode the {{doas}} parameter in the URL, which is encoded with 
> {{URLEncoder}} on the client.
> Taking KMS as an example:
> a. KMSClientProvider creates a connection to the KMS server using 
> DelegationTokenAuthenticatedURL#openConnection.
> b. If KMSClientProvider acts as a doAsUser, it puts {{doas}} with the 
> URL-encoded user as one parameter of the HTTP request. 
> {code:java}
> // proxyuser
> if (doAs != null) {
>   extraParams.put(DO_AS, URLEncoder.encode(doAs, "UTF-8"));
> }
> {code}
> c. When the KMS server receives the request, it does not decode the proxy user.
> As a result, the KMS server will get the wrong proxy user if the proxy user is 
> a complete Kerberos name or includes special characters. Authentication and 
> authorization exceptions then follow.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15110) Gauges are getting logged in exceptions from AutoRenewalThreadForUserCreds

2018-11-12 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16683793#comment-16683793
 ] 

Hudson commented on HADOOP-15110:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15406 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15406/])
HADOOP-15110. Gauges are getting logged in exceptions from (stevel: rev 
3c9d97b8f7d6eb75f08fc6d37cee37c22760bb86)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/lib/MutableGaugeFloat.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/lib/MutableGaugeInt.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/lib/MutableGaugeLong.java


> Gauges are getting logged in exceptions from AutoRenewalThreadForUserCreds
> --
>
> Key: HADOOP-15110
> URL: https://issues.apache.org/jira/browse/HADOOP-15110
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: metrics, security
>Affects Versions: 2.8.0, 3.0.0-alpha2
>Reporter: Harshakiran Reddy
>Assignee: LiXin Ge
>Priority: Minor
> Fix For: 3.1.2
>
> Attachments: HADOOP-15110.001.patch
>
>
> *scenario*:
> -
> While Running the renewal command for principal it's printing the direct 
> objects for *renewalFailures *and *renewalFailuresTotal*
> {noformat}
> bin> ./hdfs dfs -ls /
> 2017-12-12 12:31:50,910 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> 2017-12-12 12:31:52,312 WARN security.UserGroupInformation: Exception 
> encountered while running the renewal command for principal_name. (TGT end 
> time:1513070122000, renewalFailures: 
> org.apache.hadoop.metrics2.lib.MutableGaugeInt@1bbb43eb,renewalFailuresTotal: 
> org.apache.hadoop.metrics2.lib.MutableGaugeLong@424a0549)
> ExitCodeException exitCode=1: kinit: KDC can't fulfill requested option while 
> renewing credentials
> at org.apache.hadoop.util.Shell.runCommand(Shell.java:994)
> at org.apache.hadoop.util.Shell.run(Shell.java:887)
> at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1212)
> at org.apache.hadoop.util.Shell.execCommand(Shell.java:1306)
> at org.apache.hadoop.util.Shell.execCommand(Shell.java:1288)
> at 
> org.apache.hadoop.security.UserGroupInformation$1.run(UserGroupInformation.java:1067)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> *Expected Result*:
> It should be a user-understandable value.
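The `MutableGaugeInt@1bbb43eb` strings in the log are the default `Object.toString()` output (class name plus hash code); the fix is for the gauge classes to override `toString()` and print their value. A stand-in sketch (GaugeInt/FixedGaugeInt are illustrative classes, not the Hadoop metrics types):

```java
/** Shows why a metrics object logs as ClassName@hash unless toString is overridden. */
public class GaugeDemo {
    static class GaugeInt {                 // no toString() override
        private volatile int value;
        void incr() { value++; }
        int value() { return value; }
    }

    static class FixedGaugeInt extends GaugeInt {
        @Override
        public String toString() {          // log-friendly representation
            return "value=" + value();
        }
    }

    public static void main(String[] args) {
        GaugeInt raw = new GaugeInt();
        FixedGaugeInt fixed = new FixedGaugeInt();
        raw.incr();
        fixed.incr();
        System.out.println("renewalFailures: " + raw);    // GaugeDemo$GaugeInt@<hash>
        System.out.println("renewalFailures: " + fixed);  // renewalFailures: value=1
    }
}
```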



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14556) S3A to support Delegation Tokens

2018-11-12 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16683789#comment-16683789
 ] 

Steve Loughran commented on HADOOP-14556:
-

+aw, see comment on 
 
https://issues.apache.org/jira/browse/HADOOP-12563?focusedCommentId=16635508&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16635508

 

I think we should harden dtclient; I'm just trying to avoid JAR-spanning code 
changes here or adding too many dependencies on other patches. This is 
big/complex enough as it is.

> S3A to support Delegation Tokens
> 
>
> Key: HADOOP-14556
> URL: https://issues.apache.org/jira/browse/HADOOP-14556
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-14556-001.patch, HADOOP-14556-002.patch, 
> HADOOP-14556-003.patch, HADOOP-14556-004.patch, HADOOP-14556-005.patch, 
> HADOOP-14556-007.patch, HADOOP-14556-008.patch, HADOOP-14556-009.patch, 
> HADOOP-14556-010.patch, HADOOP-14556-010.patch, HADOOP-14556-011.patch, 
> HADOOP-14556-012.patch, HADOOP-14556-013.patch, HADOOP-14556-014.patch, 
> HADOOP-14556-015.patch, HADOOP-14556-016.patch, HADOOP-14556-017.patch, 
> HADOOP-14556-018a.patch, HADOOP-14556.oath-002.patch, HADOOP-14556.oath.patch
>
>
> S3A to support delegation tokens where
> * an authenticated client can request a token via 
> {{FileSystem.getDelegationToken()}}
> * Amazon's token service is used to request short-lived session secret & id; 
> these will be saved in the token and  marshalled with jobs
> * A new authentication provider will look for a token for the current user 
> and authenticate the user if found
> This will not support renewals; the lifespan of a token will be limited to 
> the initial duration. Also, as you can't request an STS token from a 
> temporary session, IAM instances won't be able to issue tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15922) DelegationTokenAuthenticationFilter get wrong doAsUser since it does not decode URL

2018-11-12 Thread He Xiaoqiao (JIRA)
He Xiaoqiao created HADOOP-15922:


 Summary: DelegationTokenAuthenticationFilter get wrong doAsUser 
since it does not decode URL
 Key: HADOOP-15922
 URL: https://issues.apache.org/jira/browse/HADOOP-15922
 Project: Hadoop Common
  Issue Type: Bug
  Components: common, kms
Reporter: He Xiaoqiao
Assignee: He Xiaoqiao


DelegationTokenAuthenticationFilter gets the wrong doAsUser when the proxy user from 
the client is a complete Kerberos name (e.g., user/hostn...@realm.com, which is 
actually acceptable), because DelegationTokenAuthenticationFilter does not decode 
the {{doas}} parameter in the URL, which is encoded with {{URLEncoder}} on the 
client.
Taking KMS as an example:
a. KMSClientProvider creates a connection to the KMS server using 
DelegationTokenAuthenticatedURL#openConnection.
b. If KMSClientProvider acts as a doAsUser, it puts {{doas}} with the URL-encoded 
user as one parameter of the HTTP request. 
{code:java}
// proxyuser
if (doAs != null) {
  extraParams.put(DO_AS, URLEncoder.encode(doAs, "UTF-8"));
}
{code}
c. When the KMS server receives the request, it does not decode the proxy user.

As a result, the KMS server will get the wrong proxy user if the proxy user is a 
complete Kerberos name or includes special characters. Authentication and 
authorization exceptions then follow.
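The round trip can be sketched with the JDK's URLEncoder/URLDecoder. The class below is an illustrative stand-in, not the actual filter code; it only demonstrates that a Kerberos-style principal survives the trip if (and only if) the server decodes it:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLDecoder;
import java.net.URLEncoder;

/** Illustrates the doAs round trip: client encodes, server must decode. */
public class DoAsRoundTrip {
    public static void main(String[] args) throws UnsupportedEncodingException {
        String doAs = "user/host@EXAMPLE.COM";  // a complete Kerberos name

        // Client side: encoded before being put on the query string.
        String encoded = URLEncoder.encode(doAs, "UTF-8");
        System.out.println(encoded);             // user%2Fhost%40EXAMPLE.COM

        // Server side, without decoding: the raw value is not the principal.
        System.out.println(encoded.equals(doAs)); // false

        // Server side, with decoding: the original principal is recovered.
        String decoded = URLDecoder.decode(encoded, "UTF-8");
        System.out.println(decoded.equals(doAs)); // true
    }
}
```

A simple name without `/` or `@` encodes to itself, which is why the bug only surfaces for full principals or names with special characters.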



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15808) Harden Token service loader use

2018-11-12 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16683781#comment-16683781
 ] 

Steve Loughran commented on HADOOP-15808:
-

bq. I don't fully understand why the Iterator#next() triggers the exception but 
service loading has lazy behaviors so I guess that must be it.

Exactly. They build a list of classes to load, but the load only happens in the 
next() call. And almost everything that uses the service loader API doesn't expect 
failures here.

w.r.t those test options

#  Create a fake type that triggers this by not having the dependencies.
# Spy some of the types to trigger the exception always.

I thought of #1 but felt it was too risky; imagine if it was in 
hadoop-common-test and somehow that JAR ended up on the CP of a hadoop version 
which didn't have the hardening. You'd not be able to load any tokens.

The other option would be to factor out the service loading into some 
templated class for use everywhere: you pass in the details needed for the 
load and get back the list of successfully loaded classes. We could also take 
an arg for the log level of errors (debug, info, error). For FS loading, 
debug is all we need. 

With this factored out, the test would be to load some explicitly failing 
service purely for hadoop-common-test, so it wouldn't interfere with token 
loading, and we'd have something we knew worked everywhere.
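A minimal sketch of that factored-out helper, assuming nothing about the eventual Hadoop API (the class and method names here are invented for illustration):

```java
import java.nio.file.spi.FileSystemProvider;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.ServiceConfigurationError;
import java.util.ServiceLoader;

public class ResilientServiceLoading {

  /** Load every provider of a service, skipping the ones that fail to load. */
  public static <S> List<S> loadAll(Class<S> service) {
    List<S> loaded = new ArrayList<>();
    Iterator<S> it = ServiceLoader.load(service).iterator();
    while (it.hasNext()) {
      try {
        loaded.add(it.next()); // lazy class load + instantiation happens here
      } catch (ServiceConfigurationError | LinkageError e) {
        // one bad provider (missing dependency, classloading problem) is
        // skipped instead of failing the whole mechanism
        System.err.println("Skipping unloadable provider: " + e);
      }
    }
    return loaded;
  }

  public static void main(String[] args) {
    // FileSystemProvider is just a convenient JDK service to demonstrate with
    List<FileSystemProvider> providers = loadAll(FileSystemProvider.class);
    System.out.println("loaded " + providers.size() + " provider(s)");
  }
}
```

A log-level argument, as suggested above, would replace the hard-coded stderr line.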



> Harden Token service loader use
> ---
>
> Key: HADOOP-15808
> URL: https://issues.apache.org/jira/browse/HADOOP-15808
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.9.1, 3.1.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15808-001.patch, HADOOP-15808-002.patch, 
> HADOOP-15808-003.patch
>
>
> The Hadoop token service loading (identifiers, renewers...) works provided 
> there's no problems loading any registered implementation. If there's a 
> classloading or classcasting problem, the exception raised will stop all 
> token support working; possibly the application not starting.
> This matters for S3A/HADOOP-14556 as things may not load if aws-sdk isn't on 
> the classpath. It probably lurks in the wasb/abfs support too, but things 
> have worked there because the installations with DT support there have always 
> had correctly set up classpaths.
> Fix: do what we did for the FS service loader. Catch failures to instantiate 
> a service provider impl and skip it






[jira] [Updated] (HADOOP-15110) Gauges are getting logged in exceptions from AutoRenewalThreadForUserCreds

2018-11-12 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15110:

   Resolution: Fixed
Fix Version/s: 3.1.2
   Status: Resolved  (was: Patch Available)

+1

committed to branch-3.1+.

happy to backport to earlier versions if you want it there too.

> Gauges are getting logged in exceptions from AutoRenewalThreadForUserCreds
> --
>
> Key: HADOOP-15110
> URL: https://issues.apache.org/jira/browse/HADOOP-15110
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: metrics, security
>Affects Versions: 2.8.0, 3.0.0-alpha2
>Reporter: Harshakiran Reddy
>Assignee: LiXin Ge
>Priority: Minor
> Fix For: 3.1.2
>
> Attachments: HADOOP-15110.001.patch
>
>
> *scenario*:
> -
> While running the renewal command for a principal, it prints the raw 
> objects for *renewalFailures* and *renewalFailuresTotal*
> {noformat}
> bin> ./hdfs dfs -ls /
> 2017-12-12 12:31:50,910 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> 2017-12-12 12:31:52,312 WARN security.UserGroupInformation: Exception 
> encountered while running the renewal command for principal_name. (TGT end 
> time:1513070122000, renewalFailures: 
> org.apache.hadoop.metrics2.lib.MutableGaugeInt@1bbb43eb,renewalFailuresTotal: 
> org.apache.hadoop.metrics2.lib.MutableGaugeLong@424a0549)
> ExitCodeException exitCode=1: kinit: KDC can't fulfill requested option while 
> renewing credentials
> at org.apache.hadoop.util.Shell.runCommand(Shell.java:994)
> at org.apache.hadoop.util.Shell.run(Shell.java:887)
> at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1212)
> at org.apache.hadoop.util.Shell.execCommand(Shell.java:1306)
> at org.apache.hadoop.util.Shell.execCommand(Shell.java:1288)
> at 
> org.apache.hadoop.security.UserGroupInformation$1.run(UserGroupInformation.java:1067)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> *Expected Result*:
> It should be a user-understandable value.
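A minimal stand-in (not the Hadoop metrics classes) showing why the raw object reference appears in the log line, and what the readable form looks like:

```java
public class GaugeLogging {

  /** Stand-in for org.apache.hadoop.metrics2.lib.MutableGaugeInt. */
  static class MutableGauge {
    private int value;
    void incr() { value++; }
    int value() { return value; }
    // no toString() override, so string concatenation yields ClassName@hash
  }

  public static void main(String[] args) {
    MutableGauge renewalFailures = new MutableGauge();
    renewalFailures.incr();

    // what the warning message effectively logged:
    System.out.println("renewalFailures: " + renewalFailures);

    // the user-understandable form, via an explicit value() call:
    System.out.println("renewalFailures: " + renewalFailures.value());
  }
}
```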






[jira] [Created] (HADOOP-15921) UGI.createLoginUser to log token filename & token identifiers on load

2018-11-12 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-15921:
---

 Summary: UGI.createLoginUser to log token filename & token 
identifiers on load
 Key: HADOOP-15921
 URL: https://issues.apache.org/jira/browse/HADOOP-15921
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: 3.1.1
Reporter: Steve Loughran


{{UGI.createLoginUser()}} can read in a token file and add the tokens, but it 
doesn't log anything when it does this.

proposed: log at debug:
# the path of the file being loaded
# all the tokens being read in.

Listing the tokens needs to be hardened against failures in decode/toString.
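One way that hardening could look, sketched with invented names; only the try/catch fallback pattern is the point:

```java
import java.util.Arrays;
import java.util.List;

public class SafeTokenListing {

  /** Render a token for a debug log without letting toString() failures escape. */
  static String safeToString(Object token) {
    try {
      return String.valueOf(token);
    } catch (RuntimeException e) {
      // fall back to something that cannot fail
      return token.getClass().getName() + " (toString failed: " + e.getMessage() + ")";
    }
  }

  public static void main(String[] args) {
    Object good = "Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:ns1";
    Object corrupt = new Object() {
      @Override public String toString() {
        throw new IllegalStateException("corrupt identifier");
      }
    };
    // a token whose identifier fails to decode no longer breaks the listing
    for (Object token : Arrays.asList(good, corrupt)) {
      System.out.println("Loaded token: " + safeToString(token));
    }
  }
}
```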






[jira] [Commented] (HADOOP-15920) get patch for S3a nextReadPos(), through Yetus

2018-11-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16683715#comment-16683715
 ] 

Hadoop QA commented on HADOOP-15920:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue}  0m  
3s{color} | {color:blue} The patch file was not named according to hadoop's 
naming conventions. Please see https://wiki.apache.org/hadoop/HowToContribute 
for instructions. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
7s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m  1s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
55s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 23s{color} | {color:orange} root: The patch generated 25 new + 10 unchanged 
- 0 fixed = 35 total (was 10) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 28s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
45s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
49s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}114m 36s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.contract.localfs.TestLocalFSContractSeek |
|   | hadoop.fs.contract.rawlocal.TestRawlocalContractSeek |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15920 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12947799/HADOOP-15870-001.diff 
|
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 3f6b4ec

[jira] [Commented] (HADOOP-14556) S3A to support Delegation Tokens

2018-11-12 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16683651#comment-16683651
 ] 

Steve Loughran commented on HADOOP-14556:
-

Allen, 

good q. 

# Dtutil only fetches DTs if UGI is in secure mode, whereas fetchdt asks the FS 
irrespective of the local security state, so it can issue DTs without 
Kerberos. You can't use them for job submission, as MR's token fetching (also 
used by Distcp) requires Kerberos, as does the Spark token collection. But you 
can use the tokens collected by fetchdt in other apps, as the [latest release of 
cloudstore 
does|https://github.com/steveloughran/cloudstore/releases/tag/tag_2018_11_09b]

# Because the probe for "are tokens available" doesn't take the FS URI, the 
impl has to say "yes" without knowing whether the FS actually issues them.

# Dtutil expects that when a token is requested, the impl always returns 1+ 
tokens. Because s3a token issuing is optional (as it is on azure, abfs), if you 
ask the FS for a token and it doesn't issue one, you get a stack trace (array 
out of bounds or something similar).

For fetchdt to work in this world, it needs:

* service loading to be resilient to classpath problems (FWIW, so does the 
whole token mechanism: HADOOP-15808)
* FS (or at least s3a FS) code to say "true" whenever probed to see if tokens 
are available
* dtutil to be ready to handle the case where "no tokens actually get issued" 
(at the very least make it an option)

That means changes in dtutil and the FS binding.
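The third point amounts to a guard around the returned token array; the types below are stand-ins, not the real dtutil code:

```java
import java.util.Optional;

public class FetchFirstToken {

  /**
   * Instead of indexing tokens[0] and failing with
   * ArrayIndexOutOfBoundsException when a filesystem (s3a, abfs)
   * legitimately issues no tokens, treat "no token" as a valid outcome.
   */
  static Optional<String> firstToken(String[] tokens) {
    if (tokens == null || tokens.length == 0) {
      return Optional.empty(); // FS declined to issue a token
    }
    return Optional.of(tokens[0]);
  }

  public static void main(String[] args) {
    System.out.println(firstToken(new String[0]));          // empty: no crash
    System.out.println(firstToken(new String[]{"s3a-dt"})); // token present
  }
}
```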




> S3A to support Delegation Tokens
> 
>
> Key: HADOOP-14556
> URL: https://issues.apache.org/jira/browse/HADOOP-14556
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-14556-001.patch, HADOOP-14556-002.patch, 
> HADOOP-14556-003.patch, HADOOP-14556-004.patch, HADOOP-14556-005.patch, 
> HADOOP-14556-007.patch, HADOOP-14556-008.patch, HADOOP-14556-009.patch, 
> HADOOP-14556-010.patch, HADOOP-14556-010.patch, HADOOP-14556-011.patch, 
> HADOOP-14556-012.patch, HADOOP-14556-013.patch, HADOOP-14556-014.patch, 
> HADOOP-14556-015.patch, HADOOP-14556-016.patch, HADOOP-14556-017.patch, 
> HADOOP-14556-018a.patch, HADOOP-14556.oath-002.patch, HADOOP-14556.oath.patch
>
>
> S3A to support delegation tokens where
> * an authenticated client can request a token via 
> {{FileSystem.getDelegationToken()}}
> * Amazon's token service is used to request short-lived session secret & id; 
> these will be saved in the token and  marshalled with jobs
> * A new authentication provider will look for a token for the current user 
> and authenticate the user if found
> This will not support renewals; the lifespan of a token will be limited to 
> the initial duration. Also, as you can't request an STS token from a 
> temporary session, IAM instances won't be able to issue tokens.






[jira] [Commented] (HADOOP-15872) ABFS: Update to target 2018-11-09 REST version for ADLS Gen 2

2018-11-12 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16683565#comment-16683565
 ] 

Steve Loughran commented on HADOOP-15872:
-

OK, LGTM, +1

Ping me as soon as the new endpoint is live and we'll switch branch-3.2+ to this 
API

> ABFS: Update to target 2018-11-09 REST version for ADLS Gen 2
> -
>
> Key: HADOOP-15872
> URL: https://issues.apache.org/jira/browse/HADOOP-15872
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Thomas Marquardt
>Assignee: junhua gu
>Priority: Major
> Attachments: HADOOP-15872-001.patch, HADOOP-15872-002.patch, 
> HADOOP-15872-003.patch
>
>
> This update to the latest REST version (2018-11-09) will make the following 
> changes to the ABFS driver:
> 1) The ABFS implementation of getFileStatus currently requires read 
> permission.  According to the HDFS permissions guide, it should only require 
> execute on the parent folders (traversal access).  A new REST API has been 
> introduced in REST version "2018-11-09" of ADLS Gen 2 to fix this problem.
> 2) The new "2018-11-09" REST version introduces support to i) automatically 
> translate UPNs to OIDs when setting the owner, owning group, or ACL and ii) 
> optionally translate OIDs to UPNs in the responses when getting the owner, 
> owning group, or ACL.  Configuration will be introduced to optionally 
> translate OIDs to UPNs in the responses.  Since translation has a performance 
> impact, the default will be to perform no translation and return the OIDs.






[jira] [Updated] (HADOOP-15871) Some input streams does not obey "java.io.InputStream.available" contract

2018-11-12 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15871:

Attachment: (was: HADOOP-15870-001.diff)

> Some input streams does not obey "java.io.InputStream.available" contract 
> --
>
> Key: HADOOP-15871
> URL: https://issues.apache.org/jira/browse/HADOOP-15871
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, fs/s3
>Reporter: Shixiong Zhu
>Priority: Major
>
> E.g., DFSInputStream and S3AInputStream return the size of the remaining 
> available bytes, but the javadoc of "available" says it should "Returns an 
> estimate of the number of bytes that can be read (or skipped over) from this 
> input stream *without blocking* by the next invocation of a method for this 
> input stream."
> I understand that some applications may rely on the current behavior. It 
> would be great if there were an interface to document how "available" should 
> be implemented.
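The distinction can be illustrated with a toy stream that buffers part of a remote object locally; this is not any real Hadoop class, just the shape of the contract:

```java
import java.io.InputStream;

public class AvailableContract {

  /** Toy stream: some bytes fetched locally, the rest still on a server. */
  static class BufferedRemoteStream extends InputStream {
    private final byte[] buffer;        // bytes already fetched locally
    private int pos;
    private final long remoteRemaining; // bytes still on the server

    BufferedRemoteStream(byte[] buffer, long remoteRemaining) {
      this.buffer = buffer;
      this.remoteRemaining = remoteRemaining;
    }

    @Override public int read() {
      // sketch: only reads the local buffer
      return pos < buffer.length ? buffer[pos++] & 0xFF : -1;
    }

    @Override public int available() {
      // contract-compliant: what can be read *without blocking*
      return buffer.length - pos;
    }

    long remainingInFile() {
      // what DFSInputStream/S3AInputStream effectively report instead
      return (buffer.length - pos) + remoteRemaining;
    }
  }

  public static void main(String[] args) {
    BufferedRemoteStream in = new BufferedRemoteStream(new byte[16], 1024);
    System.out.println("available():       " + in.available());       // 16
    System.out.println("remainingInFile(): " + in.remainingInFile()); // 1040
  }
}
```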






[jira] [Updated] (HADOOP-15871) Some input streams does not obey "java.io.InputStream.available" contract

2018-11-12 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15871:

Status: Open  (was: Patch Available)

> Some input streams does not obey "java.io.InputStream.available" contract 
> --
>
> Key: HADOOP-15871
> URL: https://issues.apache.org/jira/browse/HADOOP-15871
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, fs/s3
>Reporter: Shixiong Zhu
>Priority: Major
>
> E.g., DFSInputStream and S3AInputStream return the size of the remaining 
> available bytes, but the javadoc of "available" says it should "Returns an 
> estimate of the number of bytes that can be read (or skipped over) from this 
> input stream *without blocking* by the next invocation of a method for this 
> input stream."
> I understand that some applications may rely on the current behavior. It 
> would be great if there were an interface to document how "available" should 
> be implemented.






[jira] [Updated] (HADOOP-15920) get patch for S3a nextReadPos(), through Yetus

2018-11-12 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15920:

Issue Type: Sub-task  (was: Bug)
Parent: HADOOP-15620

> get patch for S3a nextReadPos(), through Yetus
> --
>
> Key: HADOOP-15920
> URL: https://issues.apache.org/jira/browse/HADOOP-15920
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.1.1
>Reporter: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15870-001.diff
>
>







[jira] [Updated] (HADOOP-15920) get patch for S3a nextReadPos(), through Yetus

2018-11-12 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15920:

Summary: get patch for S3a nextReadPos(), through Yetus  (was: get patch 
for HADOOP-15870, S3a nextReadPos(), through yeus)

> get patch for S3a nextReadPos(), through Yetus
> --
>
> Key: HADOOP-15920
> URL: https://issues.apache.org/jira/browse/HADOOP-15920
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3, test
>Affects Versions: 3.1.1
>Reporter: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15870-001.diff
>
>







[jira] [Updated] (HADOOP-15920) get patch for S3a nextReadPos(), through Yetus

2018-11-12 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15920:

Status: Patch Available  (was: Open)

> get patch for S3a nextReadPos(), through Yetus
> --
>
> Key: HADOOP-15920
> URL: https://issues.apache.org/jira/browse/HADOOP-15920
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3, test
>Affects Versions: 3.1.1
>Reporter: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15870-001.diff
>
>







[jira] [Updated] (HADOOP-15920) get patch for S3a nextReadPos(), through Yetus

2018-11-12 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15920:

Attachment: HADOOP-15870-001.diff

> get patch for S3a nextReadPos(), through Yetus
> --
>
> Key: HADOOP-15920
> URL: https://issues.apache.org/jira/browse/HADOOP-15920
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3, test
>Affects Versions: 3.1.1
>Reporter: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15870-001.diff
>
>







[jira] [Updated] (HADOOP-15871) Some input streams does not obey "java.io.InputStream.available" contract

2018-11-12 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15871:

Status: Patch Available  (was: Open)

> Some input streams does not obey "java.io.InputStream.available" contract 
> --
>
> Key: HADOOP-15871
> URL: https://issues.apache.org/jira/browse/HADOOP-15871
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, fs/s3
>Reporter: Shixiong Zhu
>Priority: Major
> Attachments: HADOOP-15870-001.diff
>
>
> E.g., DFSInputStream and S3AInputStream return the size of the remaining 
> available bytes, but the javadoc of "available" says it should "Returns an 
> estimate of the number of bytes that can be read (or skipped over) from this 
> input stream *without blocking* by the next invocation of a method for this 
> input stream."
> I understand that some applications may rely on the current behavior. It 
> would be great if there were an interface to document how "available" should 
> be implemented.






[jira] [Updated] (HADOOP-15871) Some input streams does not obey "java.io.InputStream.available" contract

2018-11-12 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15871:

Attachment: HADOOP-15870-001.diff

> Some input streams does not obey "java.io.InputStream.available" contract 
> --
>
> Key: HADOOP-15871
> URL: https://issues.apache.org/jira/browse/HADOOP-15871
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, fs/s3
>Reporter: Shixiong Zhu
>Priority: Major
> Attachments: HADOOP-15870-001.diff
>
>
> E.g,  DFSInputStream  and S3AInputStream return the size of the remaining 
> available bytes, but the javadoc of "available" says it should "Returns an 
> estimate of the number of bytes that can be read (or skipped over) from this 
> input stream *without blocking* by the next invocation of a method for this 
> input stream."
> I understand that some applications may rely on the current behavior. It 
> would be great that there is an interface to document how "available" should 
> be implemented.






[jira] [Created] (HADOOP-15920) get patch for HADOOP-15870, S3a nextReadPos(), through yeus

2018-11-12 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-15920:
---

 Summary: get patch for HADOOP-15870, S3a nextReadPos(), through 
yeus
 Key: HADOOP-15920
 URL: https://issues.apache.org/jira/browse/HADOOP-15920
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3, test
Affects Versions: 3.1.1
Reporter: Steve Loughran









[jira] [Updated] (HADOOP-15870) S3AInputStream.remainingInFile should use nextReadPos

2018-11-12 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15870:

Target Version/s: 2.8.5, 2.9.1, 3.2.0, 3.0.4, 3.1.2  (was: 2.9.1, 3.2.0, 
2.8.5, 3.0.4, 3.1.2)
  Status: Open  (was: Patch Available)

> S3AInputStream.remainingInFile should use nextReadPos
> -
>
> Key: HADOOP-15870
> URL: https://issues.apache.org/jira/browse/HADOOP-15870
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.1, 2.8.4
>Reporter: Shixiong Zhu
>Priority: Major
>
> Otherwise `remainingInFile` will not change after `seek`.
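The proposed fix amounts to computing the remaining byte count from the seek position (the position the next read will use) rather than the wrapped stream's current position; a sketch with illustrative names:

```java
public class RemainingInFile {

  /**
   * Remaining bytes based on nextReadPos, so a lazy seek() (which only
   * records the target position without reopening the stream) is reflected
   * immediately.
   */
  static long remainingInFile(long contentLength, long nextReadPos) {
    return Math.max(0, contentLength - nextReadPos);
  }

  public static void main(String[] args) {
    long len = 100;
    long nextReadPos = 0;
    System.out.println(remainingInFile(len, nextReadPos)); // 100
    nextReadPos = 40; // after seek(40): no reopen yet, but remaining updates
    System.out.println(remainingInFile(len, nextReadPos)); // 60
  }
}
```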






[jira] [Commented] (HADOOP-15891) Provide Regex Based Mount Point In Inode Tree

2018-11-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16683406#comment-16683406
 ] 

Hadoop QA commented on HADOOP-15891:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
6s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 37s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
58s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 22s{color} | {color:orange} root: The patch generated 1 new + 92 unchanged - 
3 fixed = 93 total (was 95) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
42s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 99m  1s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 1s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}214m  7s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestRollingUpgrade |
|   | hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15891 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12947775/HDFS-13948.014.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c81f37e2cc71 3.13.0-144-generic #193-Ubuntu SMP Thu Mar 15 
17:03:53 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / b5ec85d |
| maven | version: Apache Maven 3.3.9 |
|