[jira] [Updated] (HADOOP-12406) AbstractMapWritable.readFields throws ClassNotFoundException with custom writables

2016-08-29 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-12406:
-
Flags: Patch  (was: Patch,Important)

> AbstractMapWritable.readFields throws ClassNotFoundException with custom 
> writables
> --
>
> Key: HADOOP-12406
> URL: https://issues.apache.org/jira/browse/HADOOP-12406
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 2.7.1
> Environment: Ubuntu Linux 14.04 LTS amd64
>Reporter: Nadeem Douba
>Assignee: Nadeem Douba
>Priority: Blocker
> Fix For: 2.7.3, 3.0.0-alpha1
>
> Attachments: HADOOP-12406.1.patch, HADOOP-12406.patch
>
>
> Note: I am not an expert at Java, class loaders, or Hadoop. I am just a 
> hacker. My solution might be entirely wrong.
> AbstractMapWritable.readFields throws a ClassNotFoundException when reading 
> custom writables. Remote debugging of the job in IntelliJ revealed that the 
> class loader used by Class.forName() is different from the thread's context 
> class loader (Thread.currentThread().getContextClassLoader()). The class path 
> of the system class loader does not include the libraries in the job jar, 
> whereas the class path of the context class loader does. The proposed patch 
> changes the class-loading mechanism in readFields to use the thread's context 
> class loader instead of the system's default class loader.
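
A minimal sketch of the idea behind the patch, not the actual HADOOP-12406 diff; the class and method names below are illustrative:

{code}
// Sketch only: swap the class lookup in a readFields-style method from the
// default class loader to the thread's context class loader, which the
// MapReduce framework points at the job jar.
public class ContextClassLoaderLookupSketch {

  // Before: resolves against the loader that loaded this class (often the
  // system class loader, which cannot see the job jar's libraries).
  static Class<?> resolveWithDefaultLoader(String className)
      throws ClassNotFoundException {
    return Class.forName(className);
  }

  // After: prefer the thread's context class loader, falling back if unset.
  static Class<?> resolveWithContextLoader(String className)
      throws ClassNotFoundException {
    ClassLoader loader = Thread.currentThread().getContextClassLoader();
    if (loader == null) {
      loader = ContextClassLoaderLookupSketch.class.getClassLoader();
    }
    return Class.forName(className, true, loader);
  }
}
{code}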



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13538) Deprecate getInstance and initialize methods with Path in TrashPolicy

2016-08-29 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13538:
-
Assignee: Yiqun Lin

> Deprecate getInstance and initialize methods with Path in TrashPolicy
> -
>
> Key: HADOOP-13538
> URL: https://issues.apache.org/jira/browse/HADOOP-13538
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HADOOP-13538.001.patch, HADOOP-13538.002.patch
>
>
> As HADOOP-13534 mentioned, the getInstance and initialize APIs that take a 
> Path are not used anymore. We should deprecate these methods before removing 
> them.
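
A minimal sketch of what such a deprecation usually looks like; the signatures are inferred from the summary, not copied from the real TrashPolicy:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Illustrative only: mark the Path-taking overload deprecated and forward to
// the replacement that drops the unused Path argument.
public abstract class TrashPolicySketch {

  /**
   * @deprecated The home Path is no longer used; use
   * {@link #initialize(Configuration, FileSystem)} instead.
   */
  @Deprecated
  public void initialize(Configuration conf, FileSystem fs, Path home) {
    initialize(conf, fs);
  }

  public abstract void initialize(Configuration conf, FileSystem fs);
}
{code}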



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13163) Reuse pre-computed filestatus in Distcp-CopyMapper

2016-08-29 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13163:
-
Assignee: Rajesh Balamohan

> Reuse pre-computed filestatus in Distcp-CopyMapper
> --
>
> Key: HADOOP-13163
> URL: https://issues.apache.org/jira/browse/HADOOP-13163
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.8.0
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HADOOP-13163.001.patch
>
>
> https://github.com/apache/hadoop/blob/af942585a108d70e0946f6dd4c465a54d068eabf/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyMapper.java#L185
> targetStatus is already computed and can be reused in the checkUpdate() 
> function. This wouldn't be a major issue with the NN/HDFS, but in the case of 
> S3, getFileStatus calls can be expensive.
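
A rough sketch of the change being described; the method names are illustrative, not the actual CopyMapper code:

{code}
import java.io.IOException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class CopyCheckSketch {

  // Before: a second getFileStatus() round trip, which is cheap against the
  // NameNode but expensive against S3.
  static boolean needsUpdate(FileSystem targetFs, Path target, long sourceLen)
      throws IOException {
    FileStatus targetStatus = targetFs.getFileStatus(target);
    return targetStatus.getLen() != sourceLen;
  }

  // After: reuse the FileStatus the mapper has already computed.
  static boolean needsUpdate(FileStatus targetStatus, long sourceLen) {
    return targetStatus == null || targetStatus.getLen() != sourceLen;
  }
}
{code}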



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12726) Unsupported FS operations should throw UnsupportedOperationException

2016-08-29 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-12726:
-
Release Note: Unsupported FileSystem operations now throw an 
UnsupportedOperationException rather than an IOException.

> Unsupported FS operations should throw UnsupportedOperationException
> 
>
> Key: HADOOP-12726
> URL: https://issues.apache.org/jira/browse/HADOOP-12726
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.7.1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-12726.001.patch, HADOOP-12726.002.patch, 
> HADOOP-12726.003.patch
>
>
> In the {{FileSystem}} implementation classes, unsupported operations throw 
> {{new IOException("Not supported")}}, which makes it needlessly difficult to 
> distinguish an actual error from an unsupported operation.  They should 
> instead throw {{new UnsupportedOperationException()}}.
> It's possible that this anti-idiom is used elsewhere in the code base.  This 
> JIRA should include finding and cleaning up those instances as well.
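
A small before/after sketch of the anti-idiom being described; the method is generic, not a specific FileSystem subclass from the patch:

{code}
import java.io.IOException;

class UnsupportedOpSketch {

  // Before: an unsupported operation hides behind a generic IOException, so
  // callers can't tell it apart from a real I/O failure.
  void truncateBefore() throws IOException {
    throw new IOException("Not supported");
  }

  // After: the intent is explicit and callers can handle it separately.
  void truncateAfter() {
    throw new UnsupportedOperationException(
        getClass().getSimpleName() + " doesn't support truncate");
  }
}
{code}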



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13382) remove unneeded commons-httpclient dependencies from POM files in Hadoop and sub-projects

2016-08-29 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13382:
-
Release Note: Dependencies on commons-httpclient have been removed. 
Projects with undeclared transitive dependencies on commons-httpclient, 
previously provided via hadoop-common or hadoop-client, may find this to be an 
incompatible change. Such projects are also potentially exposed to the 
commons-httpclient CVE and should be fixed for that reason as well.

> remove unneeded commons-httpclient dependencies from POM files in Hadoop and 
> sub-projects
> -
>
> Key: HADOOP-13382
> URL: https://issues.apache.org/jira/browse/HADOOP-13382
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.8.0
>Reporter: Matt Foley
>Assignee: Matt Foley
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HADOOP-13382-branch-2.000.patch, 
> HADOOP-13382-branch-2.8.000.patch, HADOOP-13382.000.patch
>
>
> In branch-2.8 and later, the patches for various child and related bugs 
> listed in HADOOP-10105, most recently including HADOOP-11613, HADOOP-12710, 
> HADOOP-12711, HADOOP-12552, and HDFS-10623, eliminate all use of 
> "commons-httpclient" from Hadoop and its sub-projects (except for 
> hadoop-tools/hadoop-openstack; see HADOOP-11614).
> However, after incorporating these patches, "commons-httpclient" is still 
> listed as a dependency in these POM files:
> * hadoop-project/pom.xml
> * hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/pom.xml
> We wish to remove these, but since commons-httpclient is still used in many 
> files in hadoop-tools/hadoop-openstack, we'll need to _add_ the dependency to
> * hadoop-tools/hadoop-openstack/pom.xml
> (We'll add a note to HADOOP-11614 to undo this when commons-httpclient is 
> removed from hadoop-openstack.)
> In 2.8, this was mostly done by HADOOP-12552, but the version info formerly 
> inherited from hadoop-project/pom.xml also needs to be added, so that is in 
> the branch-2.8 version of the patch.
> Other projects with undeclared transitive dependencies on commons-httpclient, 
> previously provided via hadoop-common or hadoop-client, may find this to be 
> an incompatible change.  Of course that also means such projects are exposed 
> to the commons-httpclient CVE and need to be fixed for that reason as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12269) Update aws-sdk dependency to 1.10.6; move to aws-sdk-s3

2016-08-29 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-12269:
-
Release Note: The Maven dependency on aws-sdk has been changed to 
aws-sdk-s3 and the version bumped. Applications depending on transitive 
dependencies pulled in by aws-sdk and not aws-sdk-s3 might not work.

> Update aws-sdk dependency to 1.10.6; move to aws-sdk-s3
> ---
>
> Key: HADOOP-12269
> URL: https://issues.apache.org/jira/browse/HADOOP-12269
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Thomas Demoor
>Assignee: Thomas Demoor
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HADOOP-12269-001.patch, HADOOP-12269-002.patch
>
>
> This was originally part of HADOOP-11684; it was pulled out into this 
> separate subtask as requested by [~ste...@apache.org]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12081) Fix UserGroupInformation.java to support 64-bit zLinux

2016-08-29 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-12081:
-
Flags:   (was: Important)

> Fix UserGroupInformation.java to support 64-bit zLinux
> --
>
> Key: HADOOP-12081
> URL: https://issues.apache.org/jira/browse/HADOOP-12081
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
> Environment: zLinux
>Reporter: Adam Roberts
>Assignee: Akira Ajisaka
>  Labels: zlinux
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HADOOP-12081.001.patch
>
>
> Currently the 64-bit check in security/UserGroupInformation.java uses os.arch 
> and looks for "64". IBM's z platform reports s390x, which is 64-bit. Without 
> this change, trying to use HDFS with Spark fails with a fatal error (unable 
> to log in because the login class can't be found).
> This patch fixes the issue by identifying s390x as a 64-bit platform, thus 
> allowing Spark to run on zLinux. A simple fix with very big implications!
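
A minimal sketch of the detection change being described (the real check lives in security/UserGroupInformation.java; this is illustrative only):

{code}
class ArchCheckSketch {
  // Treat s390x as 64-bit in addition to any os.arch value containing "64".
  static boolean is64Bit() {
    String osArch = System.getProperty("os.arch", "");
    return osArch.contains("64") || "s390x".equals(osArch);
  }
}
{code}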



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13546) Override equals and hashCode to avoid connection leakage

2016-08-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15447869#comment-15447869
 ] 

Hadoop QA commented on HADOOP-13546:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
43s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 23s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 7 new + 58 unchanged - 5 fixed = 65 total (was 63) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m  
9s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
21s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 38m 22s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13546 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12826121/HADOOP-13546-HADOOP-13436.005.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 6fddec3fe9e6 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / cd5e10c |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10408/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10408/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10408/artifact/patchprocess/patch-asflicense-problems.txt
 |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10408/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Override equals and hashCode to avoid connection leakage
> 
>
> Key: HADOOP-13546
> URL: https://issues.apache.org/jira/browse/HADOOP-13546
>  

[jira] [Updated] (HADOOP-13546) Override equals and hashCode to avoid connection leakage

2016-08-29 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HADOOP-13546:
---
Attachment: HADOOP-13546-HADOOP-13436.005.patch

> Override equals and hashCode to avoid connection leakage
> 
>
> Key: HADOOP-13546
> URL: https://issues.apache.org/jira/browse/HADOOP-13546
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 2.7.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-13546-HADOOP-13436.000.patch, 
> HADOOP-13546-HADOOP-13436.001.patch, HADOOP-13546-HADOOP-13436.002.patch, 
> HADOOP-13546-HADOOP-13436.003.patch, HADOOP-13546-HADOOP-13436.004.patch, 
> HADOOP-13546-HADOOP-13436.005.patch
>
>
> Override #equals and #hashCode to ensure that multiple instances are treated 
> as equivalent, so that they end up sharing the same RPC connection when the 
> other arguments used to construct the ConnectionId are the same.
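
A minimal sketch of the equals/hashCode contract being asked for, on a simplified connection-key value object; the fields are illustrative, not the real ConnectionId:

{code}
import java.net.InetSocketAddress;
import java.util.Objects;

// Equal keys hash to the same bucket, so a connection cache keyed by this
// object reuses one RPC connection instead of leaking a new one per instance.
final class ConnectionKeySketch {
  private final InetSocketAddress address;
  private final Class<?> protocol;
  private final int rpcTimeout;

  ConnectionKeySketch(InetSocketAddress address, Class<?> protocol,
      int rpcTimeout) {
    this.address = address;
    this.protocol = protocol;
    this.rpcTimeout = rpcTimeout;
  }

  @Override
  public boolean equals(Object o) {
    if (this == o) {
      return true;
    }
    if (!(o instanceof ConnectionKeySketch)) {
      return false;
    }
    ConnectionKeySketch other = (ConnectionKeySketch) o;
    return rpcTimeout == other.rpcTimeout
        && Objects.equals(address, other.address)
        && Objects.equals(protocol, other.protocol);
  }

  @Override
  public int hashCode() {
    return Objects.hash(address, protocol, rpcTimeout);
  }
}
{code}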



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13546) Override equals and hashCode to avoid connection leakage

2016-08-29 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15447774#comment-15447774
 ] 

Xiaobing Zhou commented on HADOOP-13546:


v005 fixed some checkstyle issues.

> Override equals and hashCode to avoid connection leakage
> 
>
> Key: HADOOP-13546
> URL: https://issues.apache.org/jira/browse/HADOOP-13546
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 2.7.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-13546-HADOOP-13436.000.patch, 
> HADOOP-13546-HADOOP-13436.001.patch, HADOOP-13546-HADOOP-13436.002.patch, 
> HADOOP-13546-HADOOP-13436.003.patch, HADOOP-13546-HADOOP-13436.004.patch, 
> HADOOP-13546-HADOOP-13436.005.patch
>
>
> Override #equals and #hashCode to ensure that multiple instances are treated 
> as equivalent, so that they end up sharing the same RPC connection when the 
> other arguments used to construct the ConnectionId are the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13546) Override equals and hashCode to avoid connection leakage

2016-08-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15447553#comment-15447553
 ] 

Hadoop QA commented on HADOOP-13546:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
30s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 26s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 7 new + 58 unchanged - 5 fixed = 65 total (was 63) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
18s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
21s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 43m 59s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13546 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12826105/HADOOP-13546-HADOOP-13436.004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 915ba599e6ed 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 05ede00 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10407/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10407/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10407/artifact/patchprocess/patch-asflicense-problems.txt
 |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10407/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Override equals and hashCode to avoid connection leakage
> 
>
> Key: HADOOP-13546
> URL: https://issues.apache.org/jira/browse/HADOOP-13546
>  

[jira] [Comment Edited] (HADOOP-13546) Override equals and hashCode to avoid connection leakage

2016-08-29 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15447443#comment-15447443
 ] 

Xiaobing Zhou edited comment on HADOOP-13546 at 8/29/16 11:50 PM:
--

Thanks [~jingzhao] for the reviews. I posted patch v004, which addresses all 
your comments. The unit tests are split between the io.retry and hadoop.ipc 
packages since the tests for reusing connections need access to classes in the 
ipc space. Having TestReuseRpcConnections extend TestRpcBase makes it easy to 
set up the RPC server.


was (Author: xiaobingo):
Thanks [~jingzhao] for the reviews. I posted patch v004, which addresses all 
your comments. The unit tests are split between the io.retry and hadoop.ipc 
packages since the tests for reusing connections need access to classes in the 
ipc space. 

> Override equals and hashCode to avoid connection leakage
> 
>
> Key: HADOOP-13546
> URL: https://issues.apache.org/jira/browse/HADOOP-13546
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 2.7.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-13546-HADOOP-13436.000.patch, 
> HADOOP-13546-HADOOP-13436.001.patch, HADOOP-13546-HADOOP-13436.002.patch, 
> HADOOP-13546-HADOOP-13436.003.patch, HADOOP-13546-HADOOP-13436.004.patch
>
>
> Override #equals and #hashCode to ensure that multiple instances are treated 
> as equivalent, so that they end up sharing the same RPC connection when the 
> other arguments used to construct the ConnectionId are the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13546) Override equals and hashCode to avoid connection leakage

2016-08-29 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HADOOP-13546:
---
Attachment: (was: HADOOP-13546-HADOOP-13436.004.patch)

> Override equals and hashCode to avoid connection leakage
> 
>
> Key: HADOOP-13546
> URL: https://issues.apache.org/jira/browse/HADOOP-13546
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 2.7.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-13546-HADOOP-13436.000.patch, 
> HADOOP-13546-HADOOP-13436.001.patch, HADOOP-13546-HADOOP-13436.002.patch, 
> HADOOP-13546-HADOOP-13436.003.patch, HADOOP-13546-HADOOP-13436.004.patch
>
>
> Override #equals and #hashCode to ensure that multiple instances are treated 
> as equivalent, so that they end up sharing the same RPC connection when the 
> other arguments used to construct the ConnectionId are the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13546) Override equals and hashCode to avoid connection leakage

2016-08-29 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HADOOP-13546:
---
Attachment: HADOOP-13546-HADOOP-13436.004.patch

> Override equals and hashCode to avoid connection leakage
> 
>
> Key: HADOOP-13546
> URL: https://issues.apache.org/jira/browse/HADOOP-13546
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 2.7.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-13546-HADOOP-13436.000.patch, 
> HADOOP-13546-HADOOP-13436.001.patch, HADOOP-13546-HADOOP-13436.002.patch, 
> HADOOP-13546-HADOOP-13436.003.patch, HADOOP-13546-HADOOP-13436.004.patch
>
>
> Override #equals and #hashCode to ensure that multiple instances are treated 
> as equivalent, so that they end up sharing the same RPC connection when the 
> other arguments used to construct the ConnectionId are the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13546) Override equals and hashCode to avoid connection leakage

2016-08-29 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15447443#comment-15447443
 ] 

Xiaobing Zhou commented on HADOOP-13546:


Thanks [~jingzhao] for the reviews. I posted patch v004, which addresses all 
your comments. The unit tests are split between the io.retry and hadoop.ipc 
packages since the tests for reusing connections need access to classes in the 
ipc space. 

> Override equals and hashCode to avoid connection leakage
> 
>
> Key: HADOOP-13546
> URL: https://issues.apache.org/jira/browse/HADOOP-13546
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 2.7.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-13546-HADOOP-13436.000.patch, 
> HADOOP-13546-HADOOP-13436.001.patch, HADOOP-13546-HADOOP-13436.002.patch, 
> HADOOP-13546-HADOOP-13436.003.patch, HADOOP-13546-HADOOP-13436.004.patch
>
>
> Override #equals and #hashCode to ensure that multiple instances are treated 
> as equivalent, so that they end up sharing the same RPC connection when the 
> other arguments used to construct the ConnectionId are the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13546) Override equals and hashCode to avoid connection leakage

2016-08-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15447436#comment-15447436
 ] 

Hadoop QA commented on HADOOP-13546:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} HADOOP-13546 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-13546 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12826098/HADOOP-13546-HADOOP-13436.004.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10406/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Override equals and hashCode to avoid connection leakage
> 
>
> Key: HADOOP-13546
> URL: https://issues.apache.org/jira/browse/HADOOP-13546
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 2.7.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-13546-HADOOP-13436.000.patch, 
> HADOOP-13546-HADOOP-13436.001.patch, HADOOP-13546-HADOOP-13436.002.patch, 
> HADOOP-13546-HADOOP-13436.003.patch, HADOOP-13546-HADOOP-13436.004.patch
>
>
> Override #equals and #hashCode to ensure that multiple instances are treated 
> as equivalent, so that they end up sharing the same RPC connection when the 
> other arguments used to construct the ConnectionId are the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13546) Override equals and hashCode to avoid connection leakage

2016-08-29 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HADOOP-13546:
---
Attachment: HADOOP-13546-HADOOP-13436.004.patch

> Override equals and hashCode to avoid connection leakage
> 
>
> Key: HADOOP-13546
> URL: https://issues.apache.org/jira/browse/HADOOP-13546
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 2.7.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-13546-HADOOP-13436.000.patch, 
> HADOOP-13546-HADOOP-13436.001.patch, HADOOP-13546-HADOOP-13436.002.patch, 
> HADOOP-13546-HADOOP-13436.003.patch, HADOOP-13546-HADOOP-13436.004.patch
>
>
> Override #equals and #hashCode to ensure that multiple instances are treated 
> as equivalent, so that they end up sharing the same RPC connection when the 
> other arguments used to construct the ConnectionId are the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-12976) s3a toString to be meaningful in logs

2016-08-29 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang resolved HADOOP-12976.
--
Resolution: Duplicate

> s3a toString to be meaningful in logs
> -
>
> Key: HADOOP-12976
> URL: https://issues.apache.org/jira/browse/HADOOP-12976
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Trivial
> Fix For: 2.8.0
>
>
> today's toString value is just the object ref; better to include the URL of 
> the FS
> Example:
> {code}
> Cleaning filesystem org.apache.hadoop.fs.s3a.S3AFileSystem@1f069dc1 
> {code}
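
A tiny sketch of the kind of toString the summary asks for; the field and format are illustrative, not the actual S3AFileSystem change:

{code}
import java.net.URI;

class FsToStringSketch {
  private final URI uri;

  FsToStringSketch(URI uri) {
    this.uri = uri;
  }

  @Override
  public String toString() {
    // Include the filesystem URI instead of the default Object reference.
    return getClass().getSimpleName() + "{uri=" + uri + '}';
  }
}
{code}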



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13438) Optimize IPC server protobuf decoding

2016-08-29 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15447393#comment-15447393
 ] 

Andrew Wang commented on HADOOP-13438:
--

For future git log greppers, this JIRA # was typo'd as HADOOP-13483.

> Optimize IPC server protobuf decoding
> -
>
> Key: HADOOP-13438
> URL: https://issues.apache.org/jira/browse/HADOOP-13438
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13438.patch, HADOOP-13438.patch.1
>
>
> The current use of the protobuf API takes an expensive code path.  The builder 
> uses the parser to instantiate a message, then copies the message into the 
> builder.  The parser creates multi-layered, internally buffering streams 
> that cause excessive byte[] allocations.
> Using the parser directly with a coded input stream backed by the byte[] from 
> the wire takes a fast path straight to the pb message's constructor, and 
> substantially less garbage is generated.
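
A sketch of the two code paths being contrasted, against a hypothetical protobuf-generated message type RpcRequestProto (any generated class would do); this shows the shape of the calls, not the actual IPC server change:

{code}
import java.io.IOException;
import com.google.protobuf.CodedInputStream;

class ProtobufDecodeSketch {

  // Slower path: the builder parses a message internally and then copies it
  // into the builder, buffering the bytes along the way.
  static RpcRequestProto decodeViaBuilder(byte[] wire) throws IOException {
    return RpcRequestProto.newBuilder().mergeFrom(wire).build();
  }

  // Faster path: hand the parser a CodedInputStream wrapped directly around
  // the bytes from the wire, going straight to the message constructor.
  static RpcRequestProto decodeViaParser(byte[] wire) throws IOException {
    CodedInputStream in = CodedInputStream.newInstance(wire);
    return RpcRequestProto.parseFrom(in);
  }
}
{code}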



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12783) TestWebDelegationToken failure: login options not compatible with IBM JDK

2016-08-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15447391#comment-15447391
 ] 

Hadoop QA commented on HADOOP-12783:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
13s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 44m 58s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-12783 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12787027/HADOOP-12783-1.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d9e61685196a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8b57be1 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10404/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10404/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestWebDelegationToken failure: login options not compatible with IBM JDK
> -
>
> Key: HADOOP-12783
> URL: https://issues.apache.org/jira/browse/HADOOP-12783
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security, test
>Affects Versions: 2.7.1
> Environment: IBM JDK 1.8 + s390x architecture
>Reporter: Devendra Vishwakarma
>Assignee: Devendra Vishwakarma
>  Labels: Hadoop, IBM_JAVA
> Attachments: 

[jira] [Reopened] (HADOOP-13286) add a S3A scale test to do gunzip and linecount

2016-08-29 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang reopened HADOOP-13286:
--

> add a S3A scale test to do gunzip and linecount
> ---
>
> Key: HADOOP-13286
> URL: https://issues.apache.org/jira/browse/HADOOP-13286
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13286-branch-2-001.patch
>
>
> The HADOOP-13203 patch proposal showed that there were performance problems 
> downstream which weren't surfacing in the current scale tests.
> Decompressing the .gz test file and then going through it with LineReader 
> models a basic use case: parsing a .csv.gz data source. 
> Add this test, with metric printing.
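
A rough sketch of such a test's core loop (not the patch itself): open the .gz source through the codec factory and count lines with LineReader, modelling a .csv.gz parse; path handling and metric printing are left out:

{code}
import java.io.IOException;
import java.io.InputStream;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionCodecFactory;
import org.apache.hadoop.util.LineReader;

class GunzipLineCountSketch {

  static long countLines(FileSystem fs, Path gzFile, Configuration conf)
      throws IOException {
    // Pick the codec from the file extension (.gz -> GzipCodec).
    CompressionCodec codec = new CompressionCodecFactory(conf).getCodec(gzFile);
    long lines = 0;
    try (InputStream in = codec.createInputStream(fs.open(gzFile))) {
      LineReader reader = new LineReader(in, conf);
      Text line = new Text();
      while (reader.readLine(line) > 0) {
        lines++;
      }
    }
    return lines;
  }
}
{code}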



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-13286) add a S3A scale test to do gunzip and linecount

2016-08-29 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang resolved HADOOP-13286.
--
   Resolution: Duplicate
Fix Version/s: (was: 2.8.0)

> add a S3A scale test to do gunzip and linecount
> ---
>
> Key: HADOOP-13286
> URL: https://issues.apache.org/jira/browse/HADOOP-13286
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13286-branch-2-001.patch
>
>
> The HADOOP-13203 patch proposal showed that there were performance problems 
> downstream which weren't surfacing in the current scale tests.
> Decompressing the .gz test file and then going through it with LineReader 
> models a basic use case: parsing a .csv.gz data source. 
> Add this test, with metric printing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12608) Fix exception message in WASB when connecting with anonymous credential

2016-08-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15447385#comment-15447385
 ] 

Hudson commented on HADOOP-12608:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10370 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10370/])
HADOOP-12608. Fix exception message in WASB when connecting with (wang: rev 
8b57be108f9de3b74c5d6465828241fd436bcb99)
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java
* (add) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/TestFileSystemOperationExceptionMessage.java


> Fix exception message in WASB when connecting with anonymous credential
> ---
>
> Key: HADOOP-12608
> URL: https://issues.apache.org/jira/browse/HADOOP-12608
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.8.0
>Reporter: Dushyanth
>Assignee: Dushyanth
> Fix For: 2.8.0
>
> Attachments: HADOOP-12608.001.patch, HADOOP-12608.002.patch, 
> HADOOP-12608.003.patch, HADOOP-12608.004.patch, HADOOP-12608.005.patch
>
>
> Users of WASB have raised complaints about the error message returned by 
> WASB when they try to connect to Azure storage with anonymous credentials. 
> The current implementation returns the correct message when a 
> StorageException is encountered. However, for scenarios such as querying 
> whether a container exists, no StorageException is thrown when the URI is 
> specified directly (anonymous access); the call simply returns false, and 
> the resulting error message does not clearly state that credentials for the 
> storage account were not provided. This JIRA tracks fixing the error message 
> to return what is returned when a storage exception is hit, and also 
> correcting spelling mistakes in the error message.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-12997) s3a to pass PositionedReadable contract tests, improve readFully perf.

2016-08-29 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang resolved HADOOP-12997.
--
   Resolution: Duplicate
Fix Version/s: (was: 2.8.0)

> s3a to pass PositionedReadable contract tests, improve readFully perf.
> --
>
> Key: HADOOP-12997
> URL: https://issues.apache.org/jira/browse/HADOOP-12997
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> Fix s3a so that it passes the new tests in HADOOP-12994
> Also: optimise readFully so that instead of a sequence of seek-read-seek 
> operations, it does an opening seek and retains that position as it loops 
> through the data
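
A sketch of the readFully shape being described, over a generic seekable stream (not the actual S3AInputStream change): one opening seek, then a plain read loop that rides the advancing position instead of re-seeking per chunk:

{code}
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;
import org.apache.hadoop.fs.Seekable;

class ReadFullySketch {

  static <S extends InputStream & Seekable> void readFully(
      S in, long position, byte[] buffer, int offset, int length)
      throws IOException {
    in.seek(position);  // single opening seek
    int read = 0;
    while (read < length) {
      int n = in.read(buffer, offset + read, length - read);
      if (n < 0) {
        throw new EOFException("Premature EOF after " + read + " bytes");
      }
      read += n;  // the stream position advances with the reads
    }
  }
}
{code}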



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12976) s3a toString to be meaningful in logs

2016-08-29 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-12976:
-
Fix Version/s: (was: 2.8.0)

> s3a toString to be meaningful in logs
> -
>
> Key: HADOOP-12976
> URL: https://issues.apache.org/jira/browse/HADOOP-12976
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Trivial
>
> today's toString value is just the object ref; better to include the URL of 
> the FS
> Example:
> {code}
> Cleaning filesystem org.apache.hadoop.fs.s3a.S3AFileSystem@1f069dc1 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-12997) s3a to pass PositionedReadable contract tests, improve readFully perf.

2016-08-29 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang reopened HADOOP-12997:
--

> s3a to pass PositionedReadable contract tests, improve readFully perf.
> --
>
> Key: HADOOP-12997
> URL: https://issues.apache.org/jira/browse/HADOOP-12997
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> Fix s3a so that it passes the new tests in HADOOP-12994
> Also: optimise readFully so that instead of a sequence of seek-read-seek 
> operations, it does an opening seek and retains that position as it loops 
> through the data



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-12976) s3a toString to be meaningful in logs

2016-08-29 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang reopened HADOOP-12976:
--

> s3a toString to be meaningful in logs
> -
>
> Key: HADOOP-12976
> URL: https://issues.apache.org/jira/browse/HADOOP-12976
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Trivial
> Fix For: 2.8.0
>
>
> today's toString value is just the object ref; better to include the URL of 
> the FS
> Example:
> {code}
> Cleaning filesystem org.apache.hadoop.fs.s3a.S3AFileSystem@1f069dc1 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12784) TestKMS failure: login options not compatible with IBM JDK

2016-08-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15447360#comment-15447360
 ] 

Hadoop QA commented on HADOOP-12784:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
9s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 32m 43s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-12784 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12787029/HADOOP-12784-1.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 829b96964f55 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8b57be1 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10405/testReport/ |
| modules | C: hadoop-common-project/hadoop-kms U: 
hadoop-common-project/hadoop-kms |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10405/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestKMS failure: login options not compatible with IBM JDK
> --
>
> Key: HADOOP-12784
> URL: https://issues.apache.org/jira/browse/HADOOP-12784
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.7.1
> Environment: IBM JDK 1.8 + s390x architecture
>Reporter: Devendra Vishwakarma
>Assignee: Devendra Vishwakarma
>  Labels: Hadoop, IBM_JAVA
> Attachments: HADOOP-12784-1.patch
>
>
> When running test with IBM JDK, the 

[jira] [Updated] (HADOOP-12784) TestKMS failure: login options not compatible with IBM JDK

2016-08-29 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-12784:
-
Fix Version/s: (was: 2.7.1)

> TestKMS failure: login options not compatible with IBM JDK
> --
>
> Key: HADOOP-12784
> URL: https://issues.apache.org/jira/browse/HADOOP-12784
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.7.1
> Environment: IBM JDK 1.8 + s390x architecture
>Reporter: Devendra Vishwakarma
>Assignee: Devendra Vishwakarma
>  Labels: Hadoop, IBM_JAVA
> Attachments: HADOOP-12784-1.patch
>
>
> When running tests with the IBM JDK, the test cases in 
> /hadoop-common-/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/token/delegation/web/TestWebDelegationToken
>  failed due to login options that are incompatible with IBM Java.
> The login options need to be updated to account for IBM Java (see the sketch 
> after the stack trace below).
> Testcases which failed are - 
>  1.org.apache.hadoop.crypto.key.kms.server.TestKMS.testKMSRestartKerberosAuth
>  2.org.apache.hadoop.crypto.key.kms.server.TestKMS.testDelegationTokenAccess
>  3.org.apache.hadoop.crypto.key.kms.server.TestKMS.testServicePrincipalACLs
>  4.org.apache.hadoop.crypto.key.kms.server.TestKMS.testKeyACLs
>  5.org.apache.hadoop.crypto.key.kms.server.TestKMS.testKMSWithZKDTSM
>  6.org.apache.hadoop.crypto.key.kms.server.TestKMS.testACLs
>  7.org.apache.hadoop.crypto.key.kms.server.TestKMS.testKMSWithZKSignerAndDTSM
>  8.org.apache.hadoop.crypto.key.kms.server.TestKMS.testKMSWithZKSigner
>  9.org.apache.hadoop.crypto.key.kms.server.TestKMS.testKMSRestartSimpleAuth
>  10.org.apache.hadoop.crypto.key.kms.server.TestKMS.testStartStopHttpKerberos
>  11.org.apache.hadoop.crypto.key.kms.server.TestKMS.testWebHDFSProxyUserKerb
>  12.org.apache.hadoop.crypto.key.kms.server.TestKMS.testKMSBlackList
>  13.org.apache.hadoop.crypto.key.kms.server.TestKMS.testStartStopHttpsKerberos
>  14.org.apache.hadoop.crypto.key.kms.server.TestKMS.testKMSAuthFailureRetry
> Testcases failed with the following stack:
> javax.security.auth.login.LoginException: Bad JAAS configuration: 
> unrecognized option: isInitiator
> at 
> com.ibm.security.jgss.i18n.I18NException.throwLoginException(I18NException.java:27)
> at com.ibm.security.auth.module.Krb5LoginModule.d(Krb5LoginModule.java:541)
> at com.ibm.security.auth.module.Krb5LoginModule.a(Krb5LoginModule.java:169)
> at 
> com.ibm.security.auth.module.Krb5LoginModule.login(Krb5LoginModule.java:231)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:95)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
> at java.lang.reflect.Method.invoke(Method.java:507)
> at javax.security.auth.login.LoginContext.invoke(LoginContext.java:788)
> at javax.security.auth.login.LoginContext.access$000(LoginContext.java:196)
> at javax.security.auth.login.LoginContext$5.run(LoginContext.java:721)
> at javax.security.auth.login.LoginContext$5.run(LoginContext.java:719)
> at java.security.AccessController.doPrivileged(AccessController.java:595)
> at 
> javax.security.auth.login.LoginContext.invokeCreatorPriv(LoginContext.java:719)
> at javax.security.auth.login.LoginContext.login(LoginContext.java:593)
> at org.apache.hadoop.crypto.key.kms.server.TestKMS.doAs(TestKMS.java:262)
> at org.apache.hadoop.crypto.key.kms.server.TestKMS.access$100(TestKMS.java:75)
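The failures above boil down to a JAAS option set that is hard-coded for the Sun/Oracle Krb5LoginModule; the IBM module rejects options such as isInitiator. Below is a minimal sketch of a vendor-aware JAAS configuration, assuming the vendor can be detected from the java.vendor system property and that the IBM option spellings shown (useKeytab, credsType) are correct; it only illustrates the general direction and is not the attached patch.

{code}
import java.util.HashMap;
import java.util.Map;
import javax.security.auth.login.AppConfigurationEntry;

// Sketch only: picks Kerberos login options by JDK vendor so that Oracle-only
// options such as "isInitiator" are never handed to the IBM login module.
// The IBM option names used here are assumptions, not verified documentation.
public class VendorAwareJaasConfig extends javax.security.auth.login.Configuration {
  private static final boolean IBM_JAVA =
      System.getProperty("java.vendor", "").contains("IBM");

  private final String principal;
  private final String keytab;

  public VendorAwareJaasConfig(String principal, String keytab) {
    this.principal = principal;
    this.keytab = keytab;
  }

  @Override
  public AppConfigurationEntry[] getAppConfigurationEntry(String name) {
    Map<String, String> options = new HashMap<>();
    String loginModule;
    if (IBM_JAVA) {
      loginModule = "com.ibm.security.auth.module.Krb5LoginModule";
      options.put("principal", principal);
      options.put("useKeytab", keytab);   // assumed IBM spelling
      options.put("credsType", "both");   // assumed IBM option
    } else {
      loginModule = "com.sun.security.auth.module.Krb5LoginModule";
      options.put("principal", principal);
      options.put("keyTab", keytab);
      options.put("useKeyTab", "true");
      options.put("storeKey", "true");
      options.put("isInitiator", "true"); // accepted by the Sun module, rejected by IBM's
    }
    return new AppConfigurationEntry[] {
        new AppConfigurationEntry(loginModule,
            AppConfigurationEntry.LoginModuleControlFlag.REQUIRED, options)
    };
  }
}
{code}

A test could then install such a configuration via javax.security.auth.login.Configuration.setConfiguration(...) instead of embedding Sun-specific options directly.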



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12783) TestWebDelegationToken failure: login options not compatible with IBM JDK

2016-08-29 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-12783:
-
Fix Version/s: (was: 2.7.1)

> TestWebDelegationToken failure: login options not compatible with IBM JDK
> -
>
> Key: HADOOP-12783
> URL: https://issues.apache.org/jira/browse/HADOOP-12783
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security, test
>Affects Versions: 2.7.1
> Environment: IBM JDK 1.8 + s390x architecture
>Reporter: Devendra Vishwakarma
>Assignee: Devendra Vishwakarma
>  Labels: Hadoop, IBM_JAVA
> Attachments: HADOOP-12783-1.patch
>
>
> When running tests with the IBM JDK, the testcases in 
> /hadoop-common-/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/token/delegation/web/TestWebDelegationToken
>  failed due to login options that are incompatible with IBM Java.
> The login options need to be updated to take IBM Java into account.
> Testcases which failed are - 
> 1. 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator
> 2. 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticatorWithDoAs
> Testcases failed with the following stack:
> javax.security.auth.login.LoginException: Bad JAAS configuration: 
> unrecognized option: isInitiator
> at 
> com.ibm.security.jgss.i18n.I18NException.throwLoginException(I18NException.java:27)
> at com.ibm.security.auth.module.Krb5LoginModule.d(Krb5LoginModule.java:541)
> at com.ibm.security.auth.module.Krb5LoginModule.a(Krb5LoginModule.java:169)
> at 
> com.ibm.security.auth.module.Krb5LoginModule.login(Krb5LoginModule.java:231)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:95)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
> at java.lang.reflect.Method.invoke(Method.java:507)
> at javax.security.auth.login.LoginContext.invoke(LoginContext.java:788)
> at javax.security.auth.login.LoginContext.access$000(LoginContext.java:196)
> at javax.security.auth.login.LoginContext$5.run(LoginContext.java:721)
> at javax.security.auth.login.LoginContext$5.run(LoginContext.java:719)
> at java.security.AccessController.doPrivileged(AccessController.java:595)
> at 
> javax.security.auth.login.LoginContext.invokeCreatorPriv(LoginContext.java:719)
> at javax.security.auth.login.LoginContext.login(LoginContext.java:593)
> at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.doAsKerberosUser(TestWebDelegationToken.java:710)
> at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator(TestWebDelegationToken.java:777)
> at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator(TestWebDelegationToken.java:729)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12762) task: null java.lang.unsupportedoperationexception: this is supposed to be overridden by subclasses.

2016-08-29 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-12762:
-
Fix Version/s: (was: 2.6.2)

> task: null java.lang.unsupportedoperationexception: this is supposed to be 
> overridden by subclasses.
> 
>
> Key: HADOOP-12762
> URL: https://issues.apache.org/jira/browse/HADOOP-12762
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.2
>Reporter: Padma
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-12762) task: null java.lang.unsupportedoperationexception: this is supposed to be overridden by subclasses.

2016-08-29 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang resolved HADOOP-12762.
--
Resolution: Cannot Reproduce

> task: null java.lang.unsupportedoperationexception: this is supposed to be 
> overridden by subclasses.
> 
>
> Key: HADOOP-12762
> URL: https://issues.apache.org/jira/browse/HADOOP-12762
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.2
>Reporter: Padma
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12608) Fix exception message in WASB when connecting with anonymous credential

2016-08-29 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15447263#comment-15447263
 ] 

Andrew Wang commented on HADOOP-12608:
--

I believe this was mistakenly not pushed to trunk, so I went ahead and 
cherry-picked the branch-2 patch over.

> Fix exception message in WASB when connecting with anonymous credential
> ---
>
> Key: HADOOP-12608
> URL: https://issues.apache.org/jira/browse/HADOOP-12608
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.8.0
>Reporter: Dushyanth
>Assignee: Dushyanth
> Fix For: 2.8.0
>
> Attachments: HADOOP-12608.001.patch, HADOOP-12608.002.patch, 
> HADOOP-12608.003.patch, HADOOP-12608.004.patch, HADOOP-12608.005.patch
>
>
> Users of WASB have raised complaints about the error message returned 
> from WASB when they try to connect to Azure storage with anonymous 
> credentials. The current implementation returns the correct message when we 
> encounter a StorageException; however, for scenarios such as querying whether 
> a container exists, no StorageException is thrown and the call simply returns 
> false when the URI is specified directly (anonymous access), and the error 
> message returned does not clearly state that credentials for the storage 
> account were not provided. This JIRA tracks fixing the error message to 
> return what is returned when a storage exception is hit, and also correcting 
> spelling mistakes in the error message.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-12420) While trying to access Amazon S3 through hadoop-aws(Spark basically) I was getting Exception in thread "main" java.lang.NoSuchMethodError: com.amazonaws.services.s3.tr

2016-08-29 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang resolved HADOOP-12420.
--
   Resolution: Duplicate
Fix Version/s: (was: 2.8.0)

> While trying to access Amazon S3 through hadoop-aws(Spark basically) I was 
> getting Exception in thread "main" java.lang.NoSuchMethodError: 
> com.amazonaws.services.s3.transfer.TransferManagerConfiguration.setMultipartUploadThreshold(I)V
> --
>
> Key: HADOOP-12420
> URL: https://issues.apache.org/jira/browse/HADOOP-12420
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.7.1
>Reporter: Tariq Mohammad
>Assignee: Tariq Mohammad
>Priority: Minor
>
> While trying to access data stored in Amazon S3 through Apache Spark, which 
> internally uses the hadoop-aws jar, I was getting the following exception:
> Exception in thread "main" java.lang.NoSuchMethodError: 
> com.amazonaws.services.s3.transfer.TransferManagerConfiguration.setMultipartUploadThreshold(I)V
> The probable reason is that the AWS Java SDK expects a long parameter 
> for the setMultipartUploadThreshold(long multiPartThreshold) method, but 
> hadoop-aws was passing a parameter of type int (multiPartThreshold). 
> I tried both the downloaded hadoop-aws jar and a build through its Maven 
> dependency, and in both cases I encountered the same exception. Although 
> I can see private long multiPartThreshold; in the hadoop-aws GitHub repo, it 
> is not reflected in the downloaded jar or in the jar created from the Maven 
> dependency.
> The following lines in the S3AFileSystem class create this difference:
> Build from trunk : 
> private long multiPartThreshold;
> this.multiPartThreshold = conf.getLong("fs.s3a.multipart.threshold", 
> 2147483647L); => Line 267
> Build through maven dependency : 
> private int multiPartThreshold;
> multiPartThreshold = conf.getInt(MIN_MULTIPART_THRESHOLD, 
> DEFAULT_MIN_MULTIPART_THRESHOLD); => Line 249
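For context, a minimal sketch of the long-based call used on trunk is below. Whether it links cleanly depends entirely on which aws-java-sdk version is on the classpath, which is exactly the mismatch reported here; it illustrates the descriptor problem and is not a proposed patch.

{code}
import org.apache.hadoop.conf.Configuration;
import com.amazonaws.services.s3.transfer.TransferManagerConfiguration;

public class MultipartThresholdExample {
  // Sketch: hold the threshold as a long and call the long-taking setter.
  // If the SDK on the classpath only declares setMultipartUploadThreshold(int),
  // this call fails at link time with NoSuchMethodError ...(J)V, the mirror
  // image of the ...(I)V failure quoted above.
  static TransferManagerConfiguration configure(Configuration conf) {
    long threshold = conf.getLong("fs.s3a.multipart.threshold", 2147483647L);
    TransferManagerConfiguration tmc = new TransferManagerConfiguration();
    tmc.setMultipartUploadThreshold(threshold);
    return tmc;
  }
}
{code}

The practical takeaway is that the hadoop-aws jar and the aws-java-sdk jar have to be a matched pair.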



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-12420) While trying to access Amazon S3 through hadoop-aws(Spark basically) I was getting Exception in thread "main" java.lang.NoSuchMethodError: com.amazonaws.services.s3.tr

2016-08-29 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang reopened HADOOP-12420:
--

> While trying to access Amazon S3 through hadoop-aws(Spark basically) I was 
> getting Exception in thread "main" java.lang.NoSuchMethodError: 
> com.amazonaws.services.s3.transfer.TransferManagerConfiguration.setMultipartUploadThreshold(I)V
> --
>
> Key: HADOOP-12420
> URL: https://issues.apache.org/jira/browse/HADOOP-12420
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.7.1
>Reporter: Tariq Mohammad
>Assignee: Tariq Mohammad
>Priority: Minor
>
> While trying to access data stored in Amazon S3 through Apache Spark, which 
> internally uses the hadoop-aws jar, I was getting the following exception:
> Exception in thread "main" java.lang.NoSuchMethodError: 
> com.amazonaws.services.s3.transfer.TransferManagerConfiguration.setMultipartUploadThreshold(I)V
> The probable reason is that the AWS Java SDK expects a long parameter 
> for the setMultipartUploadThreshold(long multiPartThreshold) method, but 
> hadoop-aws was passing a parameter of type int (multiPartThreshold). 
> I tried both the downloaded hadoop-aws jar and a build through its Maven 
> dependency, and in both cases I encountered the same exception. Although 
> I can see private long multiPartThreshold; in the hadoop-aws GitHub repo, it 
> is not reflected in the downloaded jar or in the jar created from the Maven 
> dependency.
> The following lines in the S3AFileSystem class create this difference:
> Build from trunk : 
> private long multiPartThreshold;
> this.multiPartThreshold = conf.getLong("fs.s3a.multipart.threshold", 
> 2147483647L); => Line 267
> Build through maven dependency : 
> private int multiPartThreshold;
> multiPartThreshold = conf.getInt(MIN_MULTIPART_THRESHOLD, 
> DEFAULT_MIN_MULTIPART_THRESHOLD); => Line 249



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12425) Branch-2 pom has conflicting curator dependency declarations

2016-08-29 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-12425:
-
Fix Version/s: (was: 2.8.0)

> Branch-2 pom has conflicting curator dependency declarations
> 
>
> Key: HADOOP-12425
> URL: https://issues.apache.org/jira/browse/HADOOP-12425
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-12425-branch-2-001.patch
>
>
> Post-HADOOP-11492, there are duplicate entries of curator in branch-2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12419) branch-2 stopped compiling; DfsTestUtils not found from TestJMXGet

2016-08-29 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-12419:
-
Fix Version/s: (was: 2.8.0)

> branch-2 stopped compiling; DfsTestUtils not found from TestJMXGet
> --
>
> Key: HADOOP-12419
> URL: https://issues.apache.org/jira/browse/HADOOP-12419
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
>
> Backporting HDFS-9072 has broken the branch-2 build; DFSUtil needs to be 
> pulled over from trunk



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-12319) S3AFastOutputStream has no ability to apply backpressure

2016-08-29 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang resolved HADOOP-12319.
--
   Resolution: Duplicate
Fix Version/s: (was: 2.8.0)

> S3AFastOutputStream has no ability to apply backpressure
> 
>
> Key: HADOOP-12319
> URL: https://issues.apache.org/jira/browse/HADOOP-12319
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.7.0
>Reporter: Colin Marc
>Priority: Critical
>
> Currently, users of S3AFastOutputStream can control memory usage with a few 
> settings: {{fs.s3a.threads.core,max}}, which control the number of active 
> uploads (specifically as arguments to a {{ThreadPoolExecutor}}), and 
> {{fs.s3a.max.total.tasks}}, which controls the size of the feeding queue for 
> the {{ThreadPoolExecutor}}.
> However, a user can get an almost *guaranteed* crash if the throughput of the 
> writing job is higher than the total S3 throughput, because there is never 
> any backpressure or blocking on calls to {{write}}.
> If {{fs.s3a.max.total.tasks}} is set high (the default is 1000), then 
> {{write}} calls will continue to add data to the queue, which can eventually 
> OOM. But if the user tries to set it lower, then writes will fail when the 
> queue is full; the {{ThreadPoolExecutor}} will reject the part with 
> {{java.util.concurrent.RejectedExecutionException}}.
> Ideally, calls to {{write}} should *block, not fail* when the queue is full, 
> so as to apply backpressure on whatever the writing process is.
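One common way to get the blocking behaviour asked for here is a rejection handler that re-queues with a blocking put instead of throwing. The sketch below is a generic illustration of that pattern, not the S3A code or any committed fix; class and method names are invented.

{code}
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BlockingUploadExecutor {
  // Bounded queue + blocking rejection handler: submitters (i.e. write() callers)
  // wait for a free slot instead of growing the queue without limit or failing.
  public static ThreadPoolExecutor create(int threads, int queuedTasks) {
    RejectedExecutionHandler blockWhenFull = new RejectedExecutionHandler() {
      @Override
      public void rejectedExecution(Runnable task, ThreadPoolExecutor executor) {
        try {
          executor.getQueue().put(task);   // blocks until the queue has room
        } catch (InterruptedException e) {
          Thread.currentThread().interrupt();
          throw new RejectedExecutionException("Interrupted while queueing upload", e);
        }
      }
    };
    return new ThreadPoolExecutor(threads, threads, 60L, TimeUnit.SECONDS,
        new ArrayBlockingQueue<Runnable>(queuedTasks), blockWhenFull);
  }
}
{code}

The handler carries the usual caveat that a task re-queued after shutdown can block or be dropped, so a real fix would likely bound the wait (offer with a timeout) or put a semaphore in front of the executor.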



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-12319) S3AFastOutputStream has no ability to apply backpressure

2016-08-29 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang reopened HADOOP-12319:
--

> S3AFastOutputStream has no ability to apply backpressure
> 
>
> Key: HADOOP-12319
> URL: https://issues.apache.org/jira/browse/HADOOP-12319
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.7.0
>Reporter: Colin Marc
>Priority: Critical
>
> Currently, users of S3AFastOutputStream can control memory usage with a few 
> settings: {{fs.s3a.threads.core,max}}, which control the number of active 
> uploads (specifically as arguments to a {{ThreadPoolExecutor}}), and 
> {{fs.s3a.max.total.tasks}}, which controls the size of the feeding queue for 
> the {{ThreadPoolExecutor}}.
> However, a user can get an almost *guaranteed* crash if the throughput of the 
> writing job is higher than the total S3 throughput, because there is never 
> any backpressure or blocking on calls to {{write}}.
> If {{fs.s3a.max.total.tasks}} is set high (the default is 1000), then 
> {{write}} calls will continue to add data to the queue, which can eventually 
> OOM. But if the user tries to set it lower, then writes will fail when the 
> queue is full; the {{ThreadPoolExecutor}} will reject the part with 
> {{java.util.concurrent.RejectedExecutionException}}.
> Ideally, calls to {{write}} should *block, not fail* when the queue is full, 
> so as to apply backpressure on whatever the writing process is.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12408) java8 build failing in javadocs at AuthenticationToken.java

2016-08-29 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-12408:
-
Fix Version/s: (was: 2.8.0)

> java8 build failing in javadocs at AuthenticationToken.java
> ---
>
> Key: HADOOP-12408
> URL: https://issues.apache.org/jira/browse/HADOOP-12408
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha1
> Environment: JDK8
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
>
> Jenkins is failing in javadocs; {{AuthenticationToken.java}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12021) Augmenting Configuration to accomodate

2016-08-29 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-12021:
-
Fix Version/s: (was: 2.8.0)
   (was: 1.3.0)

> Augmenting Configuration to accomodate 
> 
>
> Key: HADOOP-12021
> URL: https://issues.apache.org/jira/browse/HADOOP-12021
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: conf
>Reporter: Lewis John McGibbney
>Assignee: Lewis John McGibbney
>Priority: Minor
> Attachments: Screen Shot 2015-05-26 at 2.22.26 PM (2).png
>
>
> Over on the 
> [common-dev|http://www.mail-archive.com/common-dev%40hadoop.apache.org/msg16099.html]
>  ML I explained a use case which requires me to obtain the value of the 
> Configuration  tags.
> [~cnauroth] advised me to raise the issue to Jira for discussion.
> I am happy to provide a patch so that the  values are parsed out 
> of the various XML files and stored, and also that the Configuration class is 
> augmented to provide accessors to accommodate the use case.
> I wanted to find out what people think about this one and whether I should 
> check out Hadoop source and submit a patch. If you guys could provide some 
> advice it would be appreciated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-11874) s3a can throw spurious IOEs on close()

2016-08-29 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-11874:
-
Fix Version/s: (was: 2.8.0)

> s3a can throw spurious IOEs on close()
> --
>
> Key: HADOOP-11874
> URL: https://issues.apache.org/jira/browse/HADOOP-11874
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> from a code review, it's clear that the issue seen in HADOOP-11851 can 
> surface in S3a, though with HADOOP-11570, it's less likely. It will only 
> happen on those cases when abort() isn't called.
> The "clean" close() code path needs to catch IOEs from the wrappedStream and 
> call abort() in that situation too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-11874) s3a can throw spurious IOEs on close()

2016-08-29 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang resolved HADOOP-11874.
--
Resolution: Duplicate

> s3a can throw spurious IOEs on close()
> --
>
> Key: HADOOP-11874
> URL: https://issues.apache.org/jira/browse/HADOOP-11874
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 2.8.0
>
>
> from a code review, it's clear that the issue seen in HADOOP-11851 can 
> surface in S3a, though with HADOOP-11570, it's less likely. It will only 
> happen on those cases when abort() isn't called.
> The "clean" close() code path needs to catch IOEs from the wrappedStream and 
> call abort() in that situation too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-11874) s3a can throw spurious IOEs on close()

2016-08-29 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang reopened HADOOP-11874:
--

> s3a can throw spurious IOEs on close()
> --
>
> Key: HADOOP-11874
> URL: https://issues.apache.org/jira/browse/HADOOP-11874
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 2.8.0
>
>
> from a code review, it's clear that the issue seen in HADOOP-11851 can 
> surface in S3a, though with HADOOP-11570, it's less likely. It will only 
> happen on those cases when abort() isn't called.
> The "clean" close() code path needs to catch IOEs from the wrappedStream and 
> call abort() in that situation too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13544) JDiff reports unnecessarily show unannotated APIs and cause confusion while our javadocs only show annotated and public APIs

2016-08-29 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15447220#comment-15447220
 ] 

Wangda Tan commented on HADOOP-13544:
-

[~vinodkv],

Thanks for working on this issue. Could you update the Apache license header 
for the following files, or exclude them from the rat plugin?

- 
hadoop-mapreduce-project/dev-support/jdiff/Apache_Hadoop_MapReduce_JobClient_2.7.2.xml
- 
hadoop-mapreduce-project/dev-support/jdiff/Apache_Hadoop_MapReduce_Core_2.7.2.xml
- 
hadoop-mapreduce-project/dev-support/jdiff/Apache_Hadoop_MapReduce_Common_2.7.2.xml

Beyond this, patch looks good to me.

> JDiff reports unnecessarily show unannotated APIs and cause confusion while 
> our javadocs only show annotated and public APIs
> ---
>
> Key: HADOOP-13544
> URL: https://issues.apache.org/jira/browse/HADOOP-13544
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Vinod Kumar Vavilapalli
>Priority: Blocker
> Attachments: HADOOP-13544-20160825.txt
>
>
> Our javadocs only show annotated and @Public APIs (original JIRAs 
> HADOOP-7782, HADOOP-6658).
> But the jdiff shows all APIs that are not annotated @Private. This causes 
> confusion on how we read the reports and what APIs we really broke.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-11003) org.apache.hadoop.util.Shell should not take a dependency on binaries being deployed when used as a library

2016-08-29 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-11003:
-
Fix Version/s: (was: 2.8.0)

> org.apache.hadoop.util.Shell should not take a dependency on binaries being 
> deployed when used as a library
> ---
>
> Key: HADOOP-11003
> URL: https://issues.apache.org/jira/browse/HADOOP-11003
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
> Environment: Windows
>Reporter: Remus Rusanu
>Assignee: Steve Loughran
>
> HIVE-7845 shows how an exception is being thrown when 
> org.apache.hadoop.util.Shell is being used as a library, not as part of a 
> deployed Hadoop environment.
> {code}
> 13:20:00 [ERROR pool-2-thread-4 Shell.getWinUtilsPath] Failed to locate the 
> winutils binary in the hadoop binary path
> java.io.IOException: Could not locate executable null\bin\winutils.exe in the 
> Hadoop binaries.
>at 
> org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:324)
>at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:339)
>at org.apache.hadoop.util.Shell.(Shell.java:332)
>at 
> org.apache.hadoop.hive.conf.HiveConf$ConfVars.findHadoopBinary(HiveConf.java:918)
>at 
> org.apache.hadoop.hive.conf.HiveConf$ConfVars.(HiveConf.java:228)
> {code}
> There are similar native dependencies (e.g. NativeIO and hadoop.dll) that 
> handle the lack of binaries by falling back to non-native code paths.
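As a rough illustration of the fail-soft behaviour being asked for, the sketch below returns null instead of throwing when no deployed Hadoop is present, so a library caller can fall back to a non-native path. The class and method names are invented, not the actual Shell API.

{code}
import java.io.File;

public final class WinUtilsLocator {
  // Sketch only: locate winutils.exe if a Hadoop deployment is present,
  // otherwise report "not available" without raising an exception.
  public static String findWinUtils() {
    String home = System.getenv("HADOOP_HOME");
    if (home == null) {
      home = System.getProperty("hadoop.home.dir");
    }
    if (home == null) {
      return null;              // library use: no deployed Hadoop, no error
    }
    File exe = new File(new File(home, "bin"), "winutils.exe");
    return exe.isFile() ? exe.getAbsolutePath() : null;
  }
}
{code}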



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13271) Intermittent failure of TestS3AContractRootDir.testListEmptyRootDirectory

2016-08-29 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13271:
-
Fix Version/s: (was: 2.8.0)

> Intermittent failure of TestS3AContractRootDir.testListEmptyRootDirectory
> -
>
> Key: HADOOP-13271
> URL: https://issues.apache.org/jira/browse/HADOOP-13271
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Priority: Minor
>
> I'm seeing an intermittent failure of 
> {{TestS3AContractRootDir.testListEmptyRootDirectory}}
> The sequence deleteFiles(listStatus(Path("/"))) is failing because the 
> file to delete is root ... yet the code is passing in the children of /, not / 
> itself.
> Hypothesis: when you call listStatus on an empty root dir, you get back a file 
> entry that says isFile, not isDirectory.
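A quick probe for that hypothesis could look like the sketch below, which simply asserts that nothing in a root listing claims to be a file at /. It is an ad-hoc check with invented names, not the contract test itself.

{code}
import java.io.IOException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RootListingProbe {
  // Ad-hoc check: list "/" and fail if any returned entry is "/" itself
  // reported as a file, which is what the hypothesis above predicts.
  static void probe(FileSystem fs) throws IOException {
    for (FileStatus status : fs.listStatus(new Path("/"))) {
      String p = status.getPath().toUri().getPath();
      if ("/".equals(p) && status.isFile()) {
        throw new AssertionError("root came back as a file entry: " + status);
      }
    }
  }
}
{code}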



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13543) [Umbrella] Analyse 2.8.0 and 3.0.0-alpha1 jdiff reports and fix any issues

2016-08-29 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15447120#comment-15447120
 ] 

Vinod Kumar Vavilapalli commented on HADOOP-13543:
--

bq. FWIW I'm working on a wrapper for Java ACC that provides more user-friendly 
API reports than JDiff. My WIP patch should already be usable.
Sure, once we have something ready there, we can compare and contrast the 
reports.

This JIRA's focus is on the unnecessary incompatible changes that jdiff 
already recognized, and on fixing them so as to unblock 2.8.0 and 3.0.0-alpha1.

> [Umbrella] Analyse 2.8.0 and 3.0.0-alpha1 jdiff reports and fix any issues
> --
>
> Key: HADOOP-13543
> URL: https://issues.apache.org/jira/browse/HADOOP-13543
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Vinod Kumar Vavilapalli
>Priority: Blocker
>
> Now that we have fixed JDiff report generation for 2.8.0 and above, we should 
> analyse them.
> For the previous releases, I was applying the jdiff patches myself, and 
> analysed them offline. It's better to track them here now that the reports 
> are automatically getting generated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13535) Add jetty6 acceptor startup issue workaround to branch-2

2016-08-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15447074#comment-15447074
 ] 

Hadoop QA commented on HADOOP-13535:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
46s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 22s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m  7s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 39m 14s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ha.TestZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13535 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12826066/HADOOP-13535.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux bb05b87f63ac 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 6fcb04c |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10403/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10403/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10403/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10403/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add jetty6 

[jira] [Updated] (HADOOP-13409) Andrew's test JIRA

2016-08-29 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13409:
-
Fix Version/s: (was: 3.0.0-alpha1)
   (was: 2.6.2)

> Andrew's test JIRA
> --
>
> Key: HADOOP-13409
> URL: https://issues.apache.org/jira/browse/HADOOP-13409
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>
> Test JIRA for JIRA interaction script



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13409) Andrew's test JIRA

2016-08-29 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13409:
-
Fix Version/s: 3.0.0-alpha1

> Andrew's test JIRA
> --
>
> Key: HADOOP-13409
> URL: https://issues.apache.org/jira/browse/HADOOP-13409
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Fix For: 2.6.2, 3.0.0-alpha1
>
>
> Test JIRA for JIRA interaction script



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13409) Andrew's test JIRA

2016-08-29 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13409:
-
Fix Version/s: (was: 3.0.0-alpha1)

> Andrew's test JIRA
> --
>
> Key: HADOOP-13409
> URL: https://issues.apache.org/jira/browse/HADOOP-13409
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Fix For: 2.6.2
>
>
> Test JIRA for JIRA interaction script



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13409) Andrew's test JIRA

2016-08-29 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13409:
-
Fix Version/s: (was: 3.0.0-alpha2)
   3.0.0-alpha1

> Andrew's test JIRA
> --
>
> Key: HADOOP-13409
> URL: https://issues.apache.org/jira/browse/HADOOP-13409
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Fix For: 2.6.2, 3.0.0-alpha1
>
>
> Test JIRA for JIRA interaction script



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13409) Andrew's test JIRA

2016-08-29 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13409:
-
Fix Version/s: 2.6.2

> Andrew's test JIRA
> --
>
> Key: HADOOP-13409
> URL: https://issues.apache.org/jira/browse/HADOOP-13409
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Fix For: 2.6.2, 3.0.0-alpha1
>
>
> Test JIRA for JIRA interaction script



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13409) Andrew's test JIRA

2016-08-29 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13409:
-
Fix Version/s: 3.0.0-alpha2

> Andrew's test JIRA
> --
>
> Key: HADOOP-13409
> URL: https://issues.apache.org/jira/browse/HADOOP-13409
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Fix For: 3.0.0-alpha2
>
>
> Test JIRA for JIRA interaction script



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13535) Add jetty6 acceptor startup issue workaround to branch-2

2016-08-29 Thread Min Shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Min Shen updated HADOOP-13535:
--
Attachment: HADOOP-13535.002.patch

> Add jetty6 acceptor startup issue workaround to branch-2
> 
>
> Key: HADOOP-13535
> URL: https://issues.apache.org/jira/browse/HADOOP-13535
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Wei-Chiu Chuang
>Assignee: Min Shen
> Attachments: HADOOP-13535.001.patch, HADOOP-13535.002.patch
>
>
> Now that HADOOP-12765 is committed to branch-2, the handling of SSL connections 
> by HttpServer2 may suffer from the same Jetty bug described in HADOOP-10588. We 
> should consider adding the same workaround for SSL connections.
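For readers not familiar with the earlier fix, the general shape of such a workaround is a bounded close-and-reopen retry around the connector. The sketch below only illustrates that shape with invented names; it is not a copy of the HADOOP-10588 patch or of the attached one.

{code}
import java.io.IOException;

public final class ListenerRetry {
  /** Stand-in for a Jetty connector handle; hypothetical interface. */
  public interface Listener {
    void open() throws IOException;
    void close() throws IOException;
  }

  // Bounded retry: if the acceptor fails to come up, reset the connector and
  // try again a few times before giving up.
  public static void openWithRetries(Listener listener, int maxRetries)
      throws IOException {
    for (int attempt = 1; ; attempt++) {
      try {
        listener.open();
        return;
      } catch (IOException e) {
        if (attempt > maxRetries) {
          throw e;
        }
        listener.close();
      }
    }
  }
}
{code}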



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13559) Remove close() within try-with-resources in ChecksumFileSystem/ChecksumFs classes

2016-08-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446951#comment-15446951
 ] 

Hudson commented on HADOOP-13559:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10367 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10367/])
HADOOP-13559. Remove close() within try-with-resources in (liuml07: rev 
6fcb04c1780ac3dca5b986f1bcd558fffccb3eb9)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ChecksumFileSystem.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ChecksumFs.java


> Remove close() within try-with-resources in ChecksumFileSystem/ChecksumFs 
> classes
> -
>
> Key: HADOOP-13559
> URL: https://issues.apache.org/jira/browse/HADOOP-13559
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.0, 2.9.0
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-13559-001.patch
>
>
> My colleague noticed that HADOOP-12994 introduced two places where close() was 
> still called manually within a try-with-resources block.
> I'll attach a patch to remove the manual close() calls. 
> These extra calls to close() are probably safe, as InputStream is a Closeable, 
> not just an AutoCloseable (the latter does not specify close() as idempotent).
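For illustration only (this is not the ChecksumFileSystem code), the before/after shape of the change is:

{code}
import java.io.IOException;
import java.io.InputStream;

public class TryWithResourcesExample {
  // Before: the explicit close() is redundant; try-with-resources closes the
  // stream again when the block exits.
  static long redundantClose(InputStream in) throws IOException {
    try (InputStream stream = in) {
      long skipped = stream.skip(16);
      stream.close();           // redundant second close
      return skipped;
    }
  }

  // After: let try-with-resources do the closing.
  static long cleanedUp(InputStream in) throws IOException {
    try (InputStream stream = in) {
      return stream.skip(16);
    }
  }
}
{code}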



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13559) Remove close() within try-with-resources in ChecksumFileSystem/ChecksumFs classes

2016-08-29 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13559:
---
   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Committed to {{trunk}}, {{branch-2}} and {{branch-2.8}}. Thanks for the patch, 
[~fabbri].

> Remove close() within try-with-resources in ChecksumFileSystem/ChecksumFs 
> classes
> -
>
> Key: HADOOP-13559
> URL: https://issues.apache.org/jira/browse/HADOOP-13559
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.0, 2.9.0
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-13559-001.patch
>
>
> My colleague noticed that HADOOP-12994 introduced two places where close() was 
> still called manually within a try-with-resources block.
> I'll attach a patch to remove the manual close() calls. 
> These extra calls to close() are probably safe, as InputStream is a Closeable, 
> not just an AutoCloseable (the latter does not specify close() as idempotent).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13558) UserGroupInformation created from a Subject incorrectly tries to renew the Kerberos ticket

2016-08-29 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446915#comment-15446915
 ] 

Alejandro Abdelnur commented on HADOOP-13558:
-

[~xiaochen], I think that, when given a Subject, renewal of the ticket is the 
responsibility of the owner of the Subject, so as you suggest {{isKeytab}} 
should be FALSE.

> UserGroupInformation created from a Subject incorrectly tries to renew the 
> Kerberos ticket
> --
>
> Key: HADOOP-13558
> URL: https://issues.apache.org/jira/browse/HADOOP-13558
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2, 2.6.4, 3.0.0-alpha2
>Reporter: Alejandro Abdelnur
>
> The UGI {{checkTGTAndReloginFromKeytab()}} method checks certain conditions 
> and if they are met it invokes the {{reloginFromKeytab()}}. The 
> {{reloginFromKeytab()}} method then fails with an {{IOException}} 
> "loginUserFromKeyTab must be done first" because there is no keytab 
> associated with the UGI.
> The {{checkTGTAndReloginFromKeytab()}} method checks if there is a keytab 
> ({{isKeytab}} UGI instance variable) associated with the UGI, if there is one 
> it triggers a call to {{reloginFromKeytab()}}. The problem is that the 
> {{keytabFile}} UGI instance variable is NULL, and that triggers the mentioned 
> {{IOException}}.
> The root of the problem seems to be when creating a UGI via the 
> {{UGI.loginUserFromSubject(Subject)}} method, this method uses the 
> {{UserGroupInformation(Subject)}} constructor, and this constructor does the 
> following to determine if there is a keytab or not.
> {code}
>   this.isKeytab = KerberosUtil.hasKerberosKeyTab(subject);
> {code}
> If the {{Subject}} given had a keytab, then the UGI instance will have the 
> {{isKeytab}} set to TRUE.
> It sets the UGI instance as it would have a keytab because the Subject has a 
> keytab. This has 2 problems:
> First, it does not set the keytab file (and this, having the {{isKeytab}} set 
> to TRUE and the {{keytabFile}} set to NULL) is what triggers the 
> {{IOException}} in the method {{reloginFromKeytab()}}.
> Second (and even if the first problem is fixed, this still is a problem), it 
> assumes that because the subject has a keytab it is up to UGI to do the 
> relogin using the keytab. This is incorrect if the UGI was created using the 
> {{UGI.loginUserFromSubject(Subject)}} method. In such case, the owner of the 
> Subject is not the UGI, but the caller, so the caller is responsible for 
> renewing the Kerberos tickets and the UGI should not try to do so.
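A minimal sketch of the behaviour being argued for is below, with invented names (UgiSketch, isLoginExternal): when the UGI is built from a caller-supplied Subject, record that fact and make the relogin check a no-op, leaving ticket renewal to the owner of the Subject. It is not the actual UserGroupInformation code.

{code}
import javax.security.auth.Subject;

public class UgiSketch {
  private final Subject subject;
  private final boolean isLoginExternal;   // true when the Subject is caller-owned

  UgiSketch(Subject subject, boolean externalLogin) {
    this.subject = subject;
    this.isLoginExternal = externalLogin;
  }

  Subject getSubject() {
    return subject;
  }

  void checkTGTAndReloginFromKeytab() {
    if (isLoginExternal) {
      return;   // the Subject's owner is responsible for ticket renewal
    }
    // keytab-based relogin path would go here
  }
}
{code}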



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13557) UserGroupInformation created from a Subject incorrectly tries to renew the Kerberos ticket

2016-08-29 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-13557:

Summary: UserGroupInformation created from a Subject incorrectly tries to 
renew the Kerberos ticket  (was: UserGroupInformation created from a Subject 
incorrectly tries to renew the Keberos ticket)

> UserGroupInformation created from a Subject incorrectly tries to renew the 
> Kerberos ticket
> --
>
> Key: HADOOP-13557
> URL: https://issues.apache.org/jira/browse/HADOOP-13557
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2, 2.6.4, 3.0.0-alpha2
>Reporter: Alejandro Abdelnur
>
> The UGI {{checkTGTAndReloginFromKeytab()}} method checks certain conditions 
> and if they are met it invokes the {{reloginFromKeytab()}}. The 
> {{reloginFromKeytab()}} method then fails with an {{IOException}} 
> "loginUserFromKeyTab must be done first" because there is no keytab 
> associated with the UGI.
> The {{checkTGTAndReloginFromKeytab()}} method checks if there is a keytab 
> ({{isKeytab}} UGI instance variable) associated with the UGI, if there is one 
> it triggers a call to {{reloginFromKeytab()}}. The problem is that the 
> {{keytabFile}} UGI instance variable is NULL, and that triggers the mentioned 
> {{IOException}}.
> The root of the problem seems to be when creating a UGI via the 
> {{UGI.loginUserFromSubject(Subject)}} method, this method uses the 
> {{UserGroupInformation(Subject)}} constructor, and this constructor does the 
> following to determine if there is a keytab or not.
> {code}
>   this.isKeytab = KerberosUtil.hasKerberosKeyTab(subject);
> {code}
> If the {{Subject}} given had a keytab, then the UGI instance will have the 
> {{isKeytab}} set to TRUE.
> It sets the UGI instance as it would have a keytab because the Subject has a 
> keytab. This has 2 problems:
> First, it does not set the keytab file (and this, having the {{isKeytab}} set 
> to TRUE and the {{keytabFile}} set to NULL) is what triggers the 
> {{IOException}} in the method {{reloginFromKeytab()}}.
> Second (and even if the first problem is fixed, this still is a problem), it 
> assumes that because the subject has a keytab it is up to UGI to do the 
> relogin using the keytab.
> {{UGI.loginUserFromSubject(Subject)}} method. In such case, the owner of the 
> Subject is not the UGI, but the caller, so the caller is responsible for 
> renewing the Kerberos tickets and the UGI should not try to do so.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13558) UserGroupInformation created from a Subject incorrectly tries to renew the Kerberos ticket

2016-08-29 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-13558:

Summary: UserGroupInformation created from a Subject incorrectly tries to 
renew the Kerberos ticket  (was: UserGroupInformation created from a Subject 
incorrectly tries to renew the Keberos ticket)

> UserGroupInformation created from a Subject incorrectly tries to renew the 
> Kerberos ticket
> --
>
> Key: HADOOP-13558
> URL: https://issues.apache.org/jira/browse/HADOOP-13558
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2, 2.6.4, 3.0.0-alpha2
>Reporter: Alejandro Abdelnur
>
> The UGI {{checkTGTAndReloginFromKeytab()}} method checks certain conditions 
> and if they are met it invokes the {{reloginFromKeytab()}}. The 
> {{reloginFromKeytab()}} method then fails with an {{IOException}} 
> "loginUserFromKeyTab must be done first" because there is no keytab 
> associated with the UGI.
> The {{checkTGTAndReloginFromKeytab()}} method checks if there is a keytab 
> ({{isKeytab}} UGI instance variable) associated with the UGI, if there is one 
> it triggers a call to {{reloginFromKeytab()}}. The problem is that the 
> {{keytabFile}} UGI instance variable is NULL, and that triggers the mentioned 
> {{IOException}}.
> The root of the problem seems to be when creating a UGI via the 
> {{UGI.loginUserFromSubject(Subject)}} method, this method uses the 
> {{UserGroupInformation(Subject)}} constructor, and this constructor does the 
> following to determine if there is a keytab or not.
> {code}
>   this.isKeytab = KerberosUtil.hasKerberosKeyTab(subject);
> {code}
> If the {{Subject}} given had a keytab, then the UGI instance will have the 
> {{isKeytab}} set to TRUE.
> It sets the UGI instance as it would have a keytab because the Subject has a 
> keytab. This has 2 problems:
> First, it does not set the keytab file (and this, having the {{isKeytab}} set 
> to TRUE and the {{keytabFile}} set to NULL) is what triggers the 
> {{IOException}} in the method {{reloginFromKeytab()}}.
> Second (and even if the first problem is fixed, this still is a problem), it 
> assumes that because the subject has a keytab it is up to UGI to do the 
> relogin using the keytab. This is incorrect if the UGI was created using the 
> {{UGI.loginUserFromSubject(Subject)}} method. In such case, the owner of the 
> Subject is not the UGI, but the caller, so the caller is responsible for 
> renewing the Kerberos tickets and the UGI should not try to do so.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13559) Remove close() within try-with-resources in ChecksumFileSystem/ChecksumFs classes

2016-08-29 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13559:
---
Summary: Remove close() within try-with-resources in 
ChecksumFileSystem/ChecksumFs classes  (was: Remove close() within 
try-with-resources)

> Remove close() within try-with-resources in ChecksumFileSystem/ChecksumFs 
> classes
> -
>
> Key: HADOOP-13559
> URL: https://issues.apache.org/jira/browse/HADOOP-13559
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.0, 2.9.0
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>Priority: Minor
> Attachments: HADOOP-13559-001.patch
>
>
> My colleague noticed that HADOOP-12994 introduced two places where close() was 
> still called manually within a try-with-resources block.
> I'll attach a patch to remove the manual close() calls. 
> These extra calls to close() are probably safe, as InputStream is a Closeable, 
> not just an AutoCloseable (the latter does not specify close() as idempotent).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13557) UserGroupInformation created from a Subject incorrectly tries to renew the Keberos ticket

2016-08-29 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446854#comment-15446854
 ] 

Alejandro Abdelnur commented on HADOOP-13557:
-

Thanks [~xiaochen]. JIRA hung on me on the first submission and didn't report 
the ticket creation, so I clicked create again.

> UserGroupInformation created from a Subject incorrectly tries to renew the 
> Keberos ticket
> -
>
> Key: HADOOP-13557
> URL: https://issues.apache.org/jira/browse/HADOOP-13557
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2, 2.6.4, 3.0.0-alpha2
>Reporter: Alejandro Abdelnur
>
> The UGI {{checkTGTAndReloginFromKeytab()}} method checks certain conditions 
> and if they are met it invokes the {{reloginFromKeytab()}}. The 
> {{reloginFromKeytab()}} method then fails with an {{IOException}} 
> "loginUserFromKeyTab must be done first" because there is no keytab 
> associated with the UGI.
> The {{checkTGTAndReloginFromKeytab()}} method checks if there is a keytab 
> ({{isKeytab}} UGI instance variable) associated with the UGI, if there is one 
> it triggers a call to {{reloginFromKeytab()}}. The problem is that the 
> {{keytabFile}} UGI instance variable is NULL, and that triggers the mentioned 
> {{IOException}}.
> The root of the problem seems to be when creating a UGI via the 
> {{UGI.loginUserFromSubject(Subject)}} method, this method uses the 
> {{UserGroupInformation(Subject)}} constructor, and this constructor does the 
> following to determine if there is a keytab or not.
> {code}
>   this.isKeytab = KerberosUtil.hasKerberosKeyTab(subject);
> {code}
> If the {{Subject}} given had a keytab, then the UGI instance will have the 
> {{isKeytab}} set to TRUE.
> It sets the UGI instance as it would have a keytab because the Subject has a 
> keytab. This has 2 problems:
> First, it does not set the keytab file (and this, having the {{isKeytab}} set 
> to TRUE and the {{keytabFile}} set to NULL) is what triggers the 
> {{IOException}} in the method {{reloginFromKeytab()}}.
> Second (and even if the first problem is fixed, this still is a problem), it 
> assumes that because the subject has a keytab it is up to UGI to do the 
> relogin using the keytab.
> {{UGI.loginUserFromSubject(Subject)}} method. In such case, the owner of the 
> Subject is not the UGI, but the caller, so the caller is responsible for 
> renewing the Kerberos tickets and the UGI should not try to do so.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13559) Remove close() within try-with-resources

2016-08-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446835#comment-15446835
 ] 

Hadoop QA commented on HADOOP-13559:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m  
7s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 43m 25s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13559 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12826033/HADOOP-13559-001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f14fae2cd10d 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 6742fb6 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10402/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10402/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Remove close() within try-with-resources
> 
>
> Key: HADOOP-13559
> URL: https://issues.apache.org/jira/browse/HADOOP-13559
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.0, 2.9.0
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>Priority: Minor
> Attachments: HADOOP-13559-001.patch
>
>
> My 

[jira] [Commented] (HADOOP-13558) UserGroupInformation created from a Subject incorrectly tries to renew the Kerberos ticket

2016-08-29 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446832#comment-15446832
 ] 

Xiao Chen commented on HADOOP-13558:


Thanks Tucu for reporting the issue.

IIUC, the concern is that, with {{loginUserFromSubject}}, 
{{checkTGTAndReloginFromKeytab}} always throws the IOE, since {{isKeytab == 
true}} and {{keytabFile == null}}?

I'm not entirely familiar with this area, so I'm not sure how feasible / secure 
it is to set {{keytabFile}} from the subject. If not, maybe we should set 
{{isKeytab}} to false instead. That would also get rid of the second problem, 
since no renewal would be attempted.
The alternative of not throwing the IOE (just logging and returning) seems incompatible.

I'm hoping [~revans2] or [~daryn], as the original authors of HADOOP-10164, can 
shed some light on this, thanks in advance!
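
For reference, a rough sketch of the caller-managed renewal the description asks 
for (the JAAS entry name, {{jaasConfig}} and the refresh scheduling are 
illustrative only):
{code}
// Sketch only: the caller owns the Subject, so the caller drives the relogin.
Subject subject = new Subject();
LoginContext lc = new LoginContext("MyAppKerberos", subject, null, jaasConfig);
lc.login();                                          // the caller obtains the TGT
UserGroupInformation.loginUserFromSubject(subject);

// ... later, on the caller's own schedule rather than UGI's:
lc.logout();
lc.login();                                          // the caller refreshes the ticket itself
{code}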

> UserGroupInformation created from a Subject incorrectly tries to renew the 
> Kerberos ticket
> -
>
> Key: HADOOP-13558
> URL: https://issues.apache.org/jira/browse/HADOOP-13558
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2, 2.6.4, 3.0.0-alpha2
>Reporter: Alejandro Abdelnur
>
> The UGI {{checkTGTAndReloginFromKeytab()}} method checks certain conditions 
> and if they are met it invokes the {{reloginFromKeytab()}}. The 
> {{reloginFromKeytab()}} method then fails with an {{IOException}} 
> "loginUserFromKeyTab must be done first" because there is no keytab 
> associated with the UGI.
> The {{checkTGTAndReloginFromKeytab()}} method checks if there is a keytab 
> ({{isKeytab}} UGI instance variable) associated with the UGI, if there is one 
> it triggers a call to {{reloginFromKeytab()}}. The problem is that the 
> {{keytabFile}} UGI instance variable is NULL, and that triggers the mentioned 
> {{IOException}}.
> The root of the problem seems to be when creating a UGI via the 
> {{UGI.loginUserFromSubject(Subject)}} method, this method uses the 
> {{UserGroupInformation(Subject)}} constructor, and this constructor does the 
> following to determine if there is a keytab or not.
> {code}
>   this.isKeytab = KerberosUtil.hasKerberosKeyTab(subject);
> {code}
> If the {{Subject}} given had a keytab, then the UGI instance will have the 
> {{isKeytab}} set to TRUE.
> It sets the UGI instance as it would have a keytab because the Subject has a 
> keytab. This has 2 problems:
> First, it does not set the keytab file (and this, having the {{isKeytab}} set 
> to TRUE and the {{keytabFile}} set to NULL) is what triggers the 
> {{IOException}} in the method {{reloginFromKeytab()}}.
> Second (and even if the first problem is fixed, this still is a problem), it 
> assumes that because the subject has a keytab it is up to UGI to do the 
> relogin using the keytab. This is incorrect if the UGI was created using the 
> {{UGI.loginUserFromSubject(Subject)}} method. In such case, the owner of the 
> Subject is not the UGI, but the caller, so the caller is responsible for 
> renewing the Kerberos tickets and the UGI should not try to do so.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13535) Add jetty6 acceptor startup issue workaround to branch-2

2016-08-29 Thread Min Shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446819#comment-15446819
 ] 

Min Shen commented on HADOOP-13535:
---

[~zhz] [~jojochuang],

Do you think a unit test is needed in this case?

> Add jetty6 acceptor startup issue workaround to branch-2
> 
>
> Key: HADOOP-13535
> URL: https://issues.apache.org/jira/browse/HADOOP-13535
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Wei-Chiu Chuang
>Assignee: Min Shen
> Attachments: HADOOP-13535.001.patch
>
>
> After HADOOP-12765 is committed to branch-2, the handling of SSL connection 
> by HttpServer2 may suffer the same Jetty bug described in HADOOP-10588. We 
> should consider adding the same workaround for SSL connection.
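
For context only, the HADOOP-10588-style guard can be sketched roughly as 
follows, assuming (as in the non-SSL case) that a failed acceptor startup shows 
up as the connector not reporting a valid local port; this is purely 
illustrative, not the actual HttpServer2 code:
{code}
// Illustrative sketch; MAX_START_RETRIES and listener (a Jetty 6 Connector) are placeholders.
for (int attempt = 0; attempt < MAX_START_RETRIES; attempt++) {
  listener.start();
  if (listener.getLocalPort() > 0) {
    break;                  // acceptor came up normally
  }
  listener.stop();          // hit the Jetty 6 startup bug; close and retry
}
{code}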



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13556) Change Configuration.getPropsWithPrefix to use getProps instead of iterator

2016-08-29 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HADOOP-13556:
-
Status: Patch Available  (was: Open)

Resubmitting the v002 version of this patch since I am unable to reproduce the 
unit test failure and don't think it is related to the patch. The checkstyle 
warnings follow an existing pattern and should probably be allowed.

> Change Configuration.getPropsWithPrefix to use getProps instead of iterator
> ---
>
> Key: HADOOP-13556
> URL: https://issues.apache.org/jira/browse/HADOOP-13556
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Larry McCay
>Assignee: Larry McCay
> Fix For: 2.8.0
>
> Attachments: HADOOP-13556-001.patch, HADOOP-13556-002.patch
>
>
> The current implementation of getPropsWithPrefix uses the 
> Configuration.iterator() method. This method is not thread-safe.
> This patch will reimplement the gathering of properties that begin with a 
> prefix by using the thread-safe getProps() method.
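
As a rough illustration of the proposed direction (not the attached patch; since 
getProps() is protected, this sketch only makes sense inside Configuration 
itself):
{code}
// Sketch: walk a snapshot of the underlying Properties instead of using
// Configuration.iterator(), which is not thread-safe.
public Map<String, String> getPropsWithPrefix(String confPrefix) {
  Properties props = getProps();                  // snapshot of the loaded properties
  Map<String, String> configMap = new HashMap<>();
  for (String name : props.stringPropertyNames()) {
    if (name.startsWith(confPrefix)) {
      String value = get(name);                   // resolves variable substitution
      configMap.put(name.substring(confPrefix.length()), value);
    }
  }
  return configMap;
}
{code}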



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13556) Change Configuration.getPropsWithPrefix to use getProps instead of iterator

2016-08-29 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HADOOP-13556:
-
Status: Open  (was: Patch Available)

> Change Configuration.getPropsWithPrefix to use getProps instead of iterator
> ---
>
> Key: HADOOP-13556
> URL: https://issues.apache.org/jira/browse/HADOOP-13556
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Larry McCay
>Assignee: Larry McCay
> Fix For: 2.8.0
>
> Attachments: HADOOP-13556-001.patch, HADOOP-13556-002.patch
>
>
> The current implementation of getPropsWithPrefix uses the 
> Configuration.iterator() method. This method is not thread-safe.
> This patch will reimplement the gathering of properties that begin with a 
> prefix by using the thread-safe getProps() method.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13535) Add jetty6 acceptor startup issue workaround to branch-2

2016-08-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446810#comment-15446810
 ] 

Hadoop QA commented on HADOOP-13535:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
22s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 24s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 1 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
14s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 43m  6s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13535 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12826053/HADOOP-13535.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c549ba1988b0 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 6742fb6 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10401/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10401/artifact/patchprocess/whitespace-tabs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10401/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10401/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add jetty6 acceptor startup issue workaround to branch-2
> 
>
>

[jira] [Commented] (HADOOP-7363) TestRawLocalFileSystemContract is needed

2016-08-29 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446763#comment-15446763
 ] 

Andras Bokor commented on HADOOP-7363:
--

Thanks a lot [~anu],

Should this patch also go to branch-2? It only adds some JUnit tests.
In addition, this ticket was blocked by one of my other tickets, HADOOP-13073. 
HADOOP-13073 was not merged into branch-2, so {{testMkdirsWithUmask}} will not 
pass there.

> TestRawLocalFileSystemContract is needed
> 
>
> Key: HADOOP-7363
> URL: https://issues.apache.org/jira/browse/HADOOP-7363
> Project: Hadoop Common
>  Issue Type: Test
>  Components: fs
>Affects Versions: 3.0.0-alpha2
>Reporter: Matt Foley
>Assignee: Andras Bokor
> Attachments: HADOOP-7363.01.patch, HADOOP-7363.02.patch, 
> HADOOP-7363.03.patch, HADOOP-7363.04.patch, HADOOP-7363.05.patch, 
> HADOOP-7363.06.patch
>
>
> FileSystemContractBaseTest is supposed to be run with each concrete 
> FileSystem implementation to ensure adherence to the "contract" for 
> FileSystem behavior.  However, currently only HDFS and S3 do so.  
> RawLocalFileSystem, at least, needs to be added. 
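
A minimal sketch of the wiring such a test needs (the attached patches are the 
authoritative version; this is only illustrative):
{code}
// Sketch: run the generic FileSystem contract tests against RawLocalFileSystem.
public class TestRawLocalFileSystemContract extends FileSystemContractBaseTest {
  @Override
  protected void setUp() throws Exception {
    Configuration conf = new Configuration();
    // LocalFileSystem wraps RawLocalFileSystem; unwrap to exercise the raw FS itself.
    fs = FileSystem.getLocal(conf).getRawFileSystem();
  }
}
{code}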



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13540) improve section on troubleshooting s3a auth problems

2016-08-29 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446744#comment-15446744
 ] 

Aaron Fabbri commented on HADOOP-13540:
---

Thanks for continuing to improve docs.  +1 (non-binding)

> improve section on troubleshooting s3a auth problems
> 
>
> Key: HADOOP-13540
> URL: https://issues.apache.org/jira/browse/HADOOP-13540
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation, fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13540-001.patch
>
>
> We should add more on how to go about diagnosing s3a auth problems. 
> When such a problem happens, the need to keep the credentials secret makes it 
> hard to automate diagnostics; we can at least provide a better runbook for users.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13559) Remove close() within try-with-resources

2016-08-29 Thread Aaron Fabbri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Fabbri updated HADOOP-13559:
--
Hadoop Flags: Reviewed
  Status: Patch Available  (was: Open)

> Remove close() within try-with-resources
> 
>
> Key: HADOOP-13559
> URL: https://issues.apache.org/jira/browse/HADOOP-13559
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.0, 2.9.0
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>Priority: Minor
> Attachments: HADOOP-13559-001.patch
>
>
> My colleague noticed that HADOOP-12994 introduced two places where close() was 
> still called manually within a try-with-resources block.
> I'll attach a patch to remove the manual close() calls. 
> These extra calls to close() are probably safe, as InputStream is a Closeable, 
> not just an AutoCloseable (the latter does not specify close() as idempotent).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13535) Add jetty6 acceptor startup issue workaround to branch-2

2016-08-29 Thread Min Shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Min Shen updated HADOOP-13535:
--
Status: Patch Available  (was: Open)

> Add jetty6 acceptor startup issue workaround to branch-2
> 
>
> Key: HADOOP-13535
> URL: https://issues.apache.org/jira/browse/HADOOP-13535
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Wei-Chiu Chuang
>Assignee: Min Shen
> Attachments: HADOOP-13535.001.patch
>
>
> After HADOOP-12765 is committed to branch-2, the handling of SSL connection 
> by HttpServer2 may suffer the same Jetty bug described in HADOOP-10588. We 
> should consider adding the same workaround for SSL connection.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13535) Add jetty6 acceptor startup issue workaround to branch-2

2016-08-29 Thread Min Shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Min Shen updated HADOOP-13535:
--
Attachment: HADOOP-13535.001.patch

> Add jetty6 acceptor startup issue workaround to branch-2
> 
>
> Key: HADOOP-13535
> URL: https://issues.apache.org/jira/browse/HADOOP-13535
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Wei-Chiu Chuang
>Assignee: Min Shen
> Attachments: HADOOP-13535.001.patch
>
>
> After HADOOP-12765 is committed to branch-2, the handling of SSL connection 
> by HttpServer2 may suffer the same Jetty bug described in HADOOP-10588. We 
> should consider adding the same workaround for SSL connection.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13535) Add jetty6 acceptor startup issue workaround to branch-2

2016-08-29 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446690#comment-15446690
 ] 

Zhe Zhang commented on HADOOP-13535:


Thanks Min for taking on this work. Please try again.

> Add jetty6 acceptor startup issue workaround to branch-2
> 
>
> Key: HADOOP-13535
> URL: https://issues.apache.org/jira/browse/HADOOP-13535
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Wei-Chiu Chuang
>Assignee: Min Shen
>
> After HADOOP-12765 is committed to branch-2, the handling of SSL connection 
> by HttpServer2 may suffer the same Jetty bug described in HADOOP-10588. We 
> should consider adding the same workaround for SSL connection.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13535) Add jetty6 acceptor startup issue workaround to branch-2

2016-08-29 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HADOOP-13535:
---
Assignee: Min Shen

> Add jetty6 acceptor startup issue workaround to branch-2
> 
>
> Key: HADOOP-13535
> URL: https://issues.apache.org/jira/browse/HADOOP-13535
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Wei-Chiu Chuang
>Assignee: Min Shen
>
> After HADOOP-12765 is committed to branch-2, the handling of SSL connection 
> by HttpServer2 may suffer the same Jetty bug described in HADOOP-10588. We 
> should consider adding the same workaround for SSL connection.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13535) Add jetty6 acceptor startup issue workaround to branch-2

2016-08-29 Thread Min Shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446660#comment-15446660
 ] 

Min Shen commented on HADOOP-13535:
---

I have a patch available for this issue.
However, I don't seem to be able to upload it.

> Add jetty6 acceptor startup issue workaround to branch-2
> 
>
> Key: HADOOP-13535
> URL: https://issues.apache.org/jira/browse/HADOOP-13535
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Wei-Chiu Chuang
>
> After HADOOP-12765 is committed to branch-2, the handling of SSL connection 
> by HttpServer2 may suffer the same Jetty bug described in HADOOP-10588. We 
> should consider adding the same workaround for SSL connection.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13559) Remove close() within try-with-resources

2016-08-29 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446648#comment-15446648
 ] 

Mingliang Liu commented on HADOOP-13559:


Nice catch! +1 pending on Jenkins.

> Remove close() within try-with-resources
> 
>
> Key: HADOOP-13559
> URL: https://issues.apache.org/jira/browse/HADOOP-13559
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.0, 2.9.0
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>Priority: Minor
> Attachments: HADOOP-13559-001.patch
>
>
> My colleague noticed that HADOOP-12994 introduced two places where close() was 
> still called manually within a try-with-resources block.
> I'll attach a patch to remove the manual close() calls. 
> These extra calls to close() are probably safe, as InputStream is a Closeable, 
> not just an AutoCloseable (the latter does not specify close() as idempotent).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13546) Override equals and hashCode to avoid connection leakage

2016-08-29 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446617#comment-15446617
 ] 

Jing Zhao commented on HADOOP-13546:


Thanks for the work, Xiaobing. Some comments:
# No need to add the new {{tryOnceThenFail}} method which is only used by unit 
tests.
# Any reason TestConnectionRetryPolicy needs to extend TestRpcBase? 
# Looks like this unit test should be moved to o.a.h.io.retry package.
# In WrapperRetryPolicy, the semantics of {{equals}} and {{hashCode}} do not 
consider remoteExceptionToRetry. Maybe you can add a Java comment to explain 
why.
# WrapperRetryPolicy's constructor does not need to be public.
# In the following code, let's also check if remoteExceptionToRetry is null (see 
the sketch after this list).
{code}
      } else if (e instanceof RemoteException) {
        final RemoteException re = (RemoteException) e;
        p = remoteExceptionToRetry.equals(re.getClassName()) ?
            multipleLinearRandomRetry : RetryPolicies.TRY_ONCE_THEN_FAIL;
{code}
# In the unit test, a lot of assertEquals(true/false, x.equals(y)) can be 
replaced by assertEquals(x, y) or assertNotEquals(x, y).
# "testAnonymousRetryPolicy..." should be "testDefaultRetryPolicy..."
# We can pass a RetryPolicy[] to {{verifyRetryPolicyEquivalence}} and use a 
simple "for" loop to compare each pair of retry policies.
# Let's also add some new unit tests for the cases where we call 
{{RetryUtils.getDefaultRetryPolicy}} but provide different configurations (to 
enable or disable the retry policy) and specifications (of 
multipleLinearRandomRetry).
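
For item 6, something along these lines would do (sketch only, using the 
variable names from the snippet above):
{code}
      } else if (e instanceof RemoteException) {
        final RemoteException re = (RemoteException) e;
        // Guard against a null remoteExceptionToRetry before comparing class names.
        p = remoteExceptionToRetry != null
            && remoteExceptionToRetry.equals(re.getClassName())
            ? multipleLinearRandomRetry : RetryPolicies.TRY_ONCE_THEN_FAIL;
{code}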

> Override equals and hashCode to avoid connection leakage
> 
>
> Key: HADOOP-13546
> URL: https://issues.apache.org/jira/browse/HADOOP-13546
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 2.7.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-13546-HADOOP-13436.000.patch, 
> HADOOP-13546-HADOOP-13436.001.patch, HADOOP-13546-HADOOP-13436.002.patch, 
> HADOOP-13546-HADOOP-13436.003.patch
>
>
> Override #equals and #hashCode to ensure that logically identical instances are 
> treated as equivalent. They will then eventually share the same RPC connection, 
> given that the other arguments used to construct the ConnectionId are the same.
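
As a rough sketch of the idea (the field names are illustrative, not the actual 
patch): two policy instances built from the same settings should compare equal, 
so that the ConnectionId built from them, which per this issue is what keys the 
shared RPC connection, maps both to one connection.
{code}
@Override
public boolean equals(Object obj) {
  if (this == obj) {
    return true;
  }
  if (obj == null || getClass() != obj.getClass()) {
    return false;
  }
  WrapperRetryPolicy other = (WrapperRetryPolicy) obj;
  return maxRetries == other.maxRetries && sleepTimeMs == other.sleepTimeMs;
}

@Override
public int hashCode() {
  return 31 * maxRetries + (int) sleepTimeMs;
}
{code}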



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13375) o.a.h.security.TestGroupsCaching.testBackgroundRefreshCounters seems flaky

2016-08-29 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446564#comment-15446564
 ] 

Mingliang Liu commented on HADOOP-13375:


The v3 patch looks good overall to me.

This turns out to be a non-trivial fix for the test. My last concern is about 
the following:
{code}
waitForGroupCounters(groups, 3, 2, 0, 0);

// After 120ms all should have completed running
waitForGroupCounters(groups, 0, 0, 5, 0);
{code}
If the background reload threads run really fast (and the main test thread is 
preempted somehow), is it possible that the main test thread misses the first 
checkpoint and fails? If so, we can:
# coordinate the main test thread and 
{{FakeGroupMapping#delayIfNecessary()}} using a latch or barrier (see the sketch 
after this list)
# or simply increase the delay interval 
{{FakeGroupMapping.setGetGroupsDelayMs(40);}} and the timeout in 
{{waitForGroupCounters()}} so that the chance of failure is largely reduced.
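
A rough sketch of option 1 ({{waitForGroupCounters}} is the helper the test 
already uses; the latch name and placement are illustrative):
{code}
// Hold the background reload threads until the test has inspected the
// running/queued counters, so the first assertion cannot be missed.
static final CountDownLatch HOLD_RELOADS = new CountDownLatch(1);  // java.util.concurrent

// FakeGroupMapping#delayIfNecessary():
HOLD_RELOADS.await();                      // park here until the test releases us

// test body:
waitForGroupCounters(groups, 3, 2, 0, 0);  // safe: no reload can complete yet
HOLD_RELOADS.countDown();                  // let the reloads finish
waitForGroupCounters(groups, 0, 0, 5, 0);
{code}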

> o.a.h.security.TestGroupsCaching.testBackgroundRefreshCounters seems flaky
> --
>
> Key: HADOOP-13375
> URL: https://issues.apache.org/jira/browse/HADOOP-13375
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security, test
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>Assignee: Weiwei Yang
> Attachments: HADOOP-13375.001.patch, HADOOP-13375.002.patch, 
> HADOOP-13375.003.patch
>
>
> h5. Error Message
> bq. expected:<1> but was:<0>
> h5. Stacktrace
> {quote}
> java.lang.AssertionError: expected:<1> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.security.TestGroupsCaching.testBackgroundRefreshCounters(TestGroupsCaching.java:638)
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13558) UserGroupInformation created from a Subject incorrectly tries to renew the Kerberos ticket

2016-08-29 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13558:
---
Description: 
The UGI {{checkTGTAndReloginFromKeytab()}} method checks certain conditions and 
if they are met it invokes the {{reloginFromKeytab()}}. The 
{{reloginFromKeytab()}} method then fails with an {{IOException}} 
"loginUserFromKeyTab must be done first" because there is no keytab associated 
with the UGI.

The {{checkTGTAndReloginFromKeytab()}} method checks if there is a keytab 
({{isKeytab}} UGI instance variable) associated with the UGI, if there is one 
it triggers a call to {{reloginFromKeytab()}}. The problem is that the 
{{keytabFile}} UGI instance variable is NULL, and that triggers the mentioned 
{{IOException}}.


The root of the problem seems to be when creating a UGI via the 
{{UGI.loginUserFromSubject(Subject)}} method, this method uses the 
{{UserGroupInformation(Subject)}} constructor, and this constructor does the 
following to determine if there is a keytab or not.

{code}
  this.isKeytab = KerberosUtil.hasKerberosKeyTab(subject);
{code}

If the {{Subject}} given had a keytab, then the UGI instance will have the 
{{isKeytab}} set to TRUE.

It sets the UGI instance as it would have a keytab because the Subject has a 
keytab. This has 2 problems:

First, it does not set the keytab file (and this, having the {{isKeytab}} set 
to TRUE and the {{keytabFile}} set to NULL) is what triggers the 
{{IOException}} in the method {{reloginFromKeytab()}}.

Second (and even if the first problem is fixed, this still is a problem), it 
assumes that because the subject has a keytab it is up to UGI to do the relogin 
using the keytab. This is incorrect if the UGI was created using the 
{{UGI.loginUserFromSubject(Subject)}} method. In such case, the owner of the 
Subject is not the UGI, but the caller, so the caller is responsible for 
renewing the Kerberos tickets and the UGI should not try to do so.


  was:
The UGI {{checkTGTAndReloginFromKeytab()}} method checks certain conditions and 
if they are met it invokes the {{reloginFromKeytab()}}. The 
{{reloginFromKeytab()}} method then fails with an {{IOException}} 
"loginUserFromKeyTab must be done first" because there is no keytab associated 
with the UGI.

The {{checkTGTAndReloginFromKeytab()}} method checks if there is a keytab 
({{isKeytab}} UGI instance variable) associated with the UGI, if there is one 
it triggers a call to {{reloginFromKeytab()}}. The problem is that the 
{{keytabFile}} UGI instance variable is NULL, and that triggers the mentioned 
{{IOException}}.


The root of the problem seems to be when creating a UGI via the 
{{UGI.loginUserFromSubject(Subject)}} method, this method uses the 
{{UserGroupInformation(Subject)}} constructor, and this constructor does the 
following to determine if there is a keytab or not.

{code}
  this.isKeytab = KerberosUtil.hasKerberosKeyTab(subject);
{code}

If the {{Subject}} given had a keytab, then the UGI instance will have the 
{{isKeytab}} set to TRUE.

It sets the UGI instance as it would have a keytab because the Subject has a 
keytab. This has 2 problems:

First, it does not set the keytab file (and this, having the {{isKeytab}} set 
to TRUE and the {{keytabFile}} set to NULL) is what triggers the 
{{IOException}} in the method {{reloginFromKeytab()}}.

Second (and even if the first problem is fixed, this still is a problem), it 
assumes that because the subject has a keytab it is up to UGI to to the relogin 
using the keytab. This is incorrect if the UGI was created using the 
{{UGI.loginUserFromSubject(Subject)}} method. In such case, the owner of the 
Subject is not the UGI, but the caller, so the caller is responsible for 
renewing the Kerberos tickets and the UGI should not try to do so.



> UserGroupInformation created from a Subject incorrectly tries to renew the 
> Kerberos ticket
> -
>
> Key: HADOOP-13558
> URL: https://issues.apache.org/jira/browse/HADOOP-13558
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2, 2.6.4, 3.0.0-alpha2
>Reporter: Alejandro Abdelnur
>
> The UGI {{checkTGTAndReloginFromKeytab()}} method checks certain conditions 
> and if they are met it invokes the {{reloginFromKeytab()}}. The 
> {{reloginFromKeytab()}} method then fails with an {{IOException}} 
> "loginUserFromKeyTab must be done first" because there is no keytab 
> associated with the UGI.
> The {{checkTGTAndReloginFromKeytab()}} method checks if there is a keytab 
> ({{isKeytab}} UGI instance variable) associated with the UGI, if there is one 
> it triggers a call to {{reloginFromKeytab()}}. The problem is that the 
> {{keytabFile}} UGI instance variable is NULL, and that triggers the mentioned 
> 

[jira] [Updated] (HADOOP-13558) UserGroupInformation created from a Subject incorrectly tries to renew the Kerberos ticket

2016-08-29 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HADOOP-13558:

Description: 
The UGI {{checkTGTAndReloginFromKeytab()}} method checks certain conditions and 
if they are met it invokes the {{reloginFromKeytab()}}. The 
{{reloginFromKeytab()}} method then fails with an {{IOException}} 
"loginUserFromKeyTab must be done first" because there is no keytab associated 
with the UGI.

The {{checkTGTAndReloginFromKeytab()}} method checks if there is a keytab 
({{isKeytab}} UGI instance variable) associated with the UGI, if there is one 
it triggers a call to {{reloginFromKeytab()}}. The problem is that the 
{{keytabFile}} UGI instance variable is NULL, and that triggers the mentioned 
{{IOException}}.


The root of the problem seems to be when creating a UGI via the 
{{UGI.loginUserFromSubject(Subject)}} method, this method uses the 
{{UserGroupInformation(Subject)}} constructor, and this constructor does the 
following to determine if there is a keytab or not.

{code}
  this.isKeytab = KerberosUtil.hasKerberosKeyTab(subject);
{code}

If the {{Subject}} given had a keytab, then the UGI instance will have the 
{{isKeytab}} set to TRUE.

It sets the UGI instance as it would have a keytab because the Subject has a 
keytab. This has 2 problems:

First, it does not set the keytab file (and this, having the {{isKeytab}} set 
to TRUE and the {{keytabFile}} set to NULL) is what triggers the 
{{IOException}} in the method {{reloginFromKeytab()}}.

Second (and even if the first problem is fixed, this still is a problem), it 
assumes that because the subject has a keytab it is up to UGI to to the relogin 
using the keytab. This is incorrect if the UGI was created using the 
{{UGI.loginUserFromSubject(Subject)}} method. In such case, the owner of the 
Subject is not the UGI, but the caller, so the caller is responsible for 
renewing the Kerberos tickets and the UGI should not try to do so.


  was:
The UGI {{checkTGTAndReloginFromKeytab()}} method checks certain conditions and 
if they are met it invokes the {{reloginFromKeytab()}}. The 
{{reloginFromKeytab()}} method then fails with an {{IOException}} 
"loginUserFromKeyTab must be done first" because there is no keytab associated 
with the UGI.

The {{checkTGTAndReloginFromKeytab()}} method checks if there is a keytab 
({{isKeytab}} UGI instance variable) associated with the UGI, if there is one 
it triggers a call to {{reloginFromKeytab()}}. The problem is that the 
{{keytabFile}} UGI instance variable is NULL, and that triggers the mentioned 
{{IOException}}.


The root of the problem seems to be when creating a UGI via the 
{{UGI.loginUserFromSubject(Subject)}} method, this method uses the 
{{UserGroupInformation(Subject)}} constructor, and this constructor does the 
following to determine if there is a keytab or not.

{code}
  this.isKeytab = KerberosUtil.hasKerberosKeyTab(subject);
{code}

If the {{Subject}} given had a keytab, then the UGI instance will have the 
{{isKeytab}} set to TRUE.

It sets the UGI instance as it would have a keytab because the Subject has a 
keytab. This has 2 problems:

First, it does not set the keytab file (and this, having the {{isKeytab}} set 
to TRUE and the {{keytabFile}} set to NULL is what triggers the {{IOException}} 
in the method {{reloginFromKeytab()}}.

Second (and even if the first problem is fixed, this still is a problem), it 
assumes that because the subject has a keytab it is up to UGI to to the relogin 
using the keytab. This is incorrect if the UGI was created using the 
{{UGI.loginUserFromSubject(Subject)}} method. In such case, the owner of the 
Subject is not the UGI, but the caller, so the caller is responsible for 
renewing the Kerberos tickets and the UGI should not try to do so.



> UserGroupInformation created from a Subject incorrectly tries to renew the 
> Kerberos ticket
> -
>
> Key: HADOOP-13558
> URL: https://issues.apache.org/jira/browse/HADOOP-13558
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2, 2.6.4, 3.0.0-alpha2
>Reporter: Alejandro Abdelnur
>
> The UGI {{checkTGTAndReloginFromKeytab()}} method checks certain conditions 
> and if they are met it invokes the {{reloginFromKeytab()}}. The 
> {{reloginFromKeytab()}} method then fails with an {{IOException}} 
> "loginUserFromKeyTab must be done first" because there is no keytab 
> associated with the UGI.
> The {{checkTGTAndReloginFromKeytab()}} method checks if there is a keytab 
> ({{isKeytab}} UGI instance variable) associated with the UGI, if there is one 
> it triggers a call to {{reloginFromKeytab()}}. The problem is that the 
> {{keytabFile}} UGI instance variable is NULL, and that triggers the mentioned 

[jira] [Updated] (HADOOP-13558) UserGroupInformation created from a Subject incorrectly tries to renew the Kerberos ticket

2016-08-29 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HADOOP-13558:

Description: 
The UGI {{checkTGTAndReloginFromKeytab()}} method checks certain conditions and 
if they are met it invokes the {{reloginFromKeytab()}}. The 
{{reloginFromKeytab()}} method then fails with an {{IOException}} 
"loginUserFromKeyTab must be done first" because there is no keytab associated 
with the UGI.

The {{checkTGTAndReloginFromKeytab()}} method checks if there is a keytab 
({{isKeytab}} UGI instance variable) associated with the UGI, if there is one 
it triggers a call to {{reloginFromKeytab()}}. The problem is that the 
{{keytabFile}} UGI instance variable is NULL, and that triggers the mentioned 
{{IOException}}.


The root of the problem seems to be when creating a UGI via the 
{{UGI.loginUserFromSubject(Subject)}} method, this method uses the 
{{UserGroupInformation(Subject)}} constructor, and this constructor does the 
following to determine if there is a keytab or not.

{code}
  this.isKeytab = KerberosUtil.hasKerberosKeyTab(subject);
{code}

If the {{Subject}} given had a keytab, then the UGI instance will have the 
{{isKeytab}} set to TRUE.

It sets the UGI instance as it would have a keytab because the Subject has a 
keytab. This has 2 problems:

First, it does not set the keytab file (and this, having the {{isKeytab}} set 
to TRUE and the {{keytabFile}} set to NULL is what triggers the {{IOException}} 
in the method {{reloginFromKeytab()}}.

Second (and even if the first problem is fixed, this still is a problem), it 
assumes that because the subject has a keytab it is up to UGI to to the relogin 
using the keytab. This is incorrect if the UGI was created using the 
{{UGI.loginUserFromSubject(Subject)}} method. In such case, the owner of the 
Subject is not the UGI, but the caller, so the caller is responsible for 
renewing the Kerberos tickets and the UGI should not try to do so.


  was:
The UGI {{checkTGTAndReloginFromKeytab()}} method checks certain conditions and 
if they are met it invokes the {{reloginFromKeytab()}}. The 
{{reloginFromKeytab()}} method then fails with an {{IOException}} 
"loginUserFromKeyTab must be done first" because there is no keytab associated 
with the UGI.

The {{checkTGTAndReloginFromKeytab()}} method checks if there is a keytab 
({{isKeytab}} UGI instance variable) associated with the UGI, if there is one 
it triggers a call to {{reloginFromKeytab()}}. The problem is that the 
{{keytabFile}} UGI instance variable is NULL, and that triggers the mentioned 
{{IOException}}.


The root of the problem seems to be when creating a UGI via the 
{{UGI.loginUserFromSubject(Subject)}} method, this method uses the 
{{UserGroupInformation(Subject)}} constructor, and this constructor does the 
following to determine if there is a keytab or not.

{code}
  this.isKeytab = KerberosUtil.hasKerberosKeyTab(subject);
{code}

If the {{Subject}} given had a keytab, then the UGI instance will have the 
{{isKeytab}} set to TRUE.

It sets the UGI instance as it would have a keytab because the Subject has a 
keytab. This has 2 problems:

First, it does not set the keytab file (and this, having the {{isKeytab}} set 
to TRUE and the {{keytabFile}) set to NULL is what triggers the {{IOException}} 
in the method {{reloginFromKeytab()}}.

Second (and even if the first problem is fixed, this still is a problem), it 
assumes that because the subject has a keytab it is up to UGI to to the relogin 
using the keytab. This is incorrect if the UGI was created using the 
{{UGI.loginUserFromSubject(Subject)}} method. In such case, the owner of the 
Subject is not the UGI, but the caller, so the caller is responsible for 
renewing the Kerberos tickets and the UGI should not try to do so.



> UserGroupInformation created from a Subject incorrectly tries to renew the 
> Kerberos ticket
> -
>
> Key: HADOOP-13558
> URL: https://issues.apache.org/jira/browse/HADOOP-13558
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2, 2.6.4, 3.0.0-alpha2
>Reporter: Alejandro Abdelnur
>
> The UGI {{checkTGTAndReloginFromKeytab()}} method checks certain conditions 
> and if they are met it invokes the {{reloginFromKeytab()}}. The 
> {{reloginFromKeytab()}} method then fails with an {{IOException}} 
> "loginUserFromKeyTab must be done first" because there is no keytab 
> associated with the UGI.
> The {{checkTGTAndReloginFromKeytab()}} method checks if there is a keytab 
> ({{isKeytab}} UGI instance variable) associated with the UGI, if there is one 
> it triggers a call to {{reloginFromKeytab()}}. The problem is that the 
> {{keytabFile}} UGI instance variable is NULL, and that triggers the mentioned 

[jira] [Resolved] (HADOOP-13557) UserGroupInformation created from a Subject incorrectly tries to renew the Kerberos ticket

2016-08-29 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen resolved HADOOP-13557.

Resolution: Duplicate

Seems like a duplicate of HADOOP-13558?
Given there are already some watchers there, I'm closing this one as a dup. 
Let's follow up over there. Thanks for creating this [~tucu00].

> UserGroupInformation created from a Subject incorrectly tries to renew the 
> Kerberos ticket
> -
>
> Key: HADOOP-13557
> URL: https://issues.apache.org/jira/browse/HADOOP-13557
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2, 2.6.4, 3.0.0-alpha2
>Reporter: Alejandro Abdelnur
>
> The UGI {{checkTGTAndReloginFromKeytab()}} method checks certain conditions 
> and if they are met it invokes the {{reloginFromKeytab()}}. The 
> {{reloginFromKeytab()}} method then fails with an {{IOException}} 
> "loginUserFromKeyTab must be done first" because there is no keytab 
> associated with the UGI.
> The {{checkTGTAndReloginFromKeytab()}} method checks if there is a keytab 
> ({{isKeytab}} UGI instance variable) associated with the UGI, if there is one 
> it triggers a call to {{reloginFromKeytab()}}. The problem is that the 
> {{keytabFile}} UGI instance variable is NULL, and that triggers the mentioned 
> {{IOException}}.
> The root of the problem seems to be when creating a UGI via the 
> {{UGI.loginUserFromSubject(Subject)}} method, this method uses the 
> {{UserGroupInformation(Subject)}} constructor, and this constructor does the 
> following to determine if there is a keytab or not.
> {code}
>   this.isKeytab = KerberosUtil.hasKerberosKeyTab(subject);
> {code}
> If the {{Subject}} given had a keytab, then the UGI instance will have the 
> {{isKeytab}} set to TRUE.
> It sets the UGI instance as it would have a keytab because the Subject has a 
> keytab. This has 2 problems:
> First, it does not set the keytab file (and this, having the {{isKeytab}} set 
> to TRUE and the {{keytabFile}) set to NULL is what triggers the 
> {{IOException}} in the method {{reloginFromKeytab()}}.
> Second (and even if the first problem is fixed, this still is a problem), it 
> assumes that because the subject has a keytab it is up to UGI to to the 
> relogin using the keytab. This is incorrect if the UGI was created using the 
> {{UGI.loginUserFromSubject(Subject)}} method. In such case, the owner of the 
> Subject is not the UGI, but the caller, so the caller is responsible for 
> renewing the Kerberos tickets and the UGI should not try to do so.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12765) HttpServer2 should switch to using the non-blocking SslSelectChannelConnector to prevent performance degradation when handling SSL connections

2016-08-29 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HADOOP-12765:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> HttpServer2 should switch to using the non-blocking SslSelectChannelConnector 
> to prevent performance degradation when handling SSL connections
> --
>
> Key: HADOOP-12765
> URL: https://issues.apache.org/jira/browse/HADOOP-12765
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.2, 2.6.3
>Reporter: Min Shen
>Assignee: Min Shen
> Fix For: 2.8.0, 2.9.0, 2.7.4, 3.0.0-alpha2
>
> Attachments: HADOOP-12765-branch-2.patch, HADOOP-12765.001.patch, 
> HADOOP-12765.001.patch, HADOOP-12765.002.patch, HADOOP-12765.003.patch, 
> HADOOP-12765.004.patch, HADOOP-12765.005.patch, blocking_1.png, 
> blocking_2.png, unblocking.png
>
>
> The current implementation uses the blocking SslSocketConnector which takes 
> the default maxIdleTime as 200 seconds. We noticed in our cluster that when 
> users use a custom client that accesses the WebHDFS REST APIs through https, 
> it could block all the 250 handler threads in NN jetty server, causing severe 
> performance degradation for accessing WebHDFS and NN web UI. Attached 
> screenshots (blocking_1.png and blocking_2.png) illustrate that when using 
> SslSocketConnector, the jetty handler threads are not released until the 200 
> seconds maxIdleTime has passed. With a sufficient number of SSL connections, 
> this issue could render the NN HttpServer entirely unresponsive.
> We propose to use the non-blocking SslSelectChannelConnector as a fix. We 
> have deployed the attached patch within our cluster, and have seen 
> significant improvement. The attached screenshot (unblocking.png) further 
> illustrates the behavior of NN jetty server after switching to using 
> SslSelectChannelConnector.
> The patch further disables SSLv3 protocol on server side to preserve the 
> spirit of HADOOP-11260.
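
The SSLv3 part is plain JSSE; conceptually it boils down to something like the 
following (illustrative fragment, not the patch itself):
{code}
// sslEngine: the javax.net.ssl.SSLEngine used by the non-blocking connector.
List<String> protocols =
    new ArrayList<>(Arrays.asList(sslEngine.getEnabledProtocols()));
protocols.remove("SSLv3");
sslEngine.setEnabledProtocols(protocols.toArray(new String[0]));
{code}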



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12765) HttpServer2 should switch to using the non-blocking SslSelectChannelConnector to prevent performance degradation when handling SSL connections

2016-08-29 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446455#comment-15446455
 ] 

Zhe Zhang commented on HADOOP-12765:


Thanks for the feedback [~jojochuang]. I resolved both conflicts and backported 
this change to branch-2.7. Agreed HADOOP-12688 would be a nice improvement. I 
tried backporting but it was not quite clean.

> HttpServer2 should switch to using the non-blocking SslSelectChannelConnector 
> to prevent performance degradation when handling SSL connections
> --
>
> Key: HADOOP-12765
> URL: https://issues.apache.org/jira/browse/HADOOP-12765
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.2, 2.6.3
>Reporter: Min Shen
>Assignee: Min Shen
> Fix For: 2.8.0, 2.9.0, 2.7.4, 3.0.0-alpha2
>
> Attachments: HADOOP-12765-branch-2.patch, HADOOP-12765.001.patch, 
> HADOOP-12765.001.patch, HADOOP-12765.002.patch, HADOOP-12765.003.patch, 
> HADOOP-12765.004.patch, HADOOP-12765.005.patch, blocking_1.png, 
> blocking_2.png, unblocking.png
>
>
> The current implementation uses the blocking SslSocketConnector which takes 
> the default maxIdleTime as 200 seconds. We noticed in our cluster that when 
> users use a custom client that accesses the WebHDFS REST APIs through https, 
> it could block all the 250 handler threads in NN jetty server, causing severe 
> performance degradation for accessing WebHDFS and NN web UI. Attached 
> screenshots (blocking_1.png and blocking_2.png) illustrate that when using 
> SslSocketConnector, the jetty handler threads are not released until the 200 
> seconds maxIdleTime has passed. With a sufficient number of SSL connections, 
> this issue could render the NN HttpServer entirely unresponsive.
> We propose to use the non-blocking SslSelectChannelConnector as a fix. We 
> have deployed the attached patch within our cluster, and have seen 
> significant improvement. The attached screenshot (unblocking.png) further 
> illustrates the behavior of NN jetty server after switching to using 
> SslSelectChannelConnector.
> The patch further disables SSLv3 protocol on server side to preserve the 
> spirit of HADOOP-11260.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13559) Remove close() within try-with-resources

2016-08-29 Thread Aaron Fabbri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Fabbri updated HADOOP-13559:
--
Attachment: HADOOP-13559-001.patch

Attaching v1 patch.  Hat tip to [~mackrorysd] for catching this.

> Remove close() within try-with-resources
> 
>
> Key: HADOOP-13559
> URL: https://issues.apache.org/jira/browse/HADOOP-13559
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.0, 2.9.0
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>Priority: Minor
> Attachments: HADOOP-13559-001.patch
>
>
> My colleague noticed that HADOOP-12994 introduced two places where close() was 
> still called manually within a try-with-resources block.
> I'll attach a patch to remove the manual close() calls. 
> These extra calls to close() are probably safe, as InputStream is a Closeable, 
> not just an AutoCloseable (the latter does not specify close() as idempotent).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12765) HttpServer2 should switch to using the non-blocking SslSelectChannelConnector to prevent performance degradation when handling SSL connections

2016-08-29 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HADOOP-12765:
---
Fix Version/s: 2.7.4

> HttpServer2 should switch to using the non-blocking SslSelectChannelConnector 
> to prevent performance degradation when handling SSL connections
> --
>
> Key: HADOOP-12765
> URL: https://issues.apache.org/jira/browse/HADOOP-12765
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.2, 2.6.3
>Reporter: Min Shen
>Assignee: Min Shen
> Fix For: 2.8.0, 2.9.0, 2.7.4, 3.0.0-alpha2
>
> Attachments: HADOOP-12765-branch-2.patch, HADOOP-12765.001.patch, 
> HADOOP-12765.001.patch, HADOOP-12765.002.patch, HADOOP-12765.003.patch, 
> HADOOP-12765.004.patch, HADOOP-12765.005.patch, blocking_1.png, 
> blocking_2.png, unblocking.png
>
>
> The current implementation uses the blocking SslSocketConnector which takes 
> the default maxIdleTime as 200 seconds. We noticed in our cluster that when 
> users use a custom client that accesses the WebHDFS REST APIs through https, 
> it could block all the 250 handler threads in NN jetty server, causing severe 
> performance degradation for accessing WebHDFS and NN web UI. Attached 
> screenshots (blocking_1.png and blocking_2.png) illustrate that when using 
> SslSocketConnector, the jetty handler threads are not released until the 200 
> seconds maxIdleTime has passed. With a sufficient number of SSL connections, 
> this issue could render the NN HttpServer entirely unresponsive.
> We propose to use the non-blocking SslSelectChannelConnector as a fix. We 
> have deployed the attached patch within our cluster, and have seen 
> significant improvement. The attached screenshot (unblocking.png) further 
> illustrates the behavior of NN jetty server after switching to using 
> SslSelectChannelConnector.
> The patch further disables SSLv3 protocol on server side to preserve the 
> spirit of HADOOP-11260.
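
For illustration, a hedged sketch of the approach described above, assuming the Jetty 6 (org.mortbay) SslSelectChannelConnector used by Hadoop 2.x at the time and assuming createSSLEngine() as the override point; the class name is illustrative and this is not the committed patch.

{code}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import javax.net.ssl.SSLEngine;
import org.mortbay.jetty.security.SslSelectChannelConnector;

/** Non-blocking SSL connector that also drops SSLv3, in the spirit of HADOOP-11260. */
public class SslSelectChannelConnectorSecureSketch extends SslSelectChannelConnector {
  @Override
  protected SSLEngine createSSLEngine() throws IOException {
    SSLEngine engine = super.createSSLEngine();
    // Filter SSLv3 out of the enabled protocols before the engine is used.
    List<String> enabled = new ArrayList<String>();
    for (String protocol : engine.getEnabledProtocols()) {
      if (!"SSLv3".equalsIgnoreCase(protocol)) {
        enabled.add(protocol);
      }
    }
    engine.setEnabledProtocols(enabled.toArray(new String[enabled.size()]));
    return engine;
  }
}
{code}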



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13559) Remove close() within try-with-resources

2016-08-29 Thread Aaron Fabbri (JIRA)
Aaron Fabbri created HADOOP-13559:
-

 Summary: Remove close() within try-with-resources
 Key: HADOOP-13559
 URL: https://issues.apache.org/jira/browse/HADOOP-13559
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.8.0, 2.9.0
Reporter: Aaron Fabbri
Assignee: Aaron Fabbri
Priority: Minor


My colleague noticed that HADOOP-12994 introduced two places where close() was 
still called manually within a try-with-resources block.

I'll attach a patch to remove the manual close() calls. 

These extra calls to close() are probably safe, as InputStream is a Closeable, 
not merely an AutoCloseable (the latter does not specify close() as idempotent).




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13496) Include file lengths in Mismatch in length error for distcp

2016-08-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446429#comment-15446429
 ] 

Hadoop QA commented on HADOOP-13496:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 10s{color} | {color:orange} hadoop-tools/hadoop-distcp: The patch generated 
2 new + 10 unchanged - 1 fixed = 12 total (was 11) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m  
6s{color} | {color:green} hadoop-distcp in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 20m 58s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13496 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12823506/HADOOP-13496.v1.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 58f6a6bdd055 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e1ad598 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10400/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-distcp.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10400/testReport/ |
| modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10400/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Include file lengths in Mismatch in length error for distcp
> ---
>
> Key: HADOOP-13496
> URL: https://issues.apache.org/jira/browse/HADOOP-13496
> Project: Hadoop Common
> 

[jira] [Commented] (HADOOP-7363) TestRawLocalFileSystemContract is needed

2016-08-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446384#comment-15446384
 ] 

Hudson commented on HADOOP-7363:


SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10365 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10365/])
HADOOP-7363. TestRawLocalFileSystemContract is needed. Contributed by 
(aengineer: rev e1ad598cef61cbd3a6f505f40221c8140a36b7e4)
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileSystemContractBaseTest.java
* (add) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestRawLocalFileSystemContract.java


> TestRawLocalFileSystemContract is needed
> 
>
> Key: HADOOP-7363
> URL: https://issues.apache.org/jira/browse/HADOOP-7363
> Project: Hadoop Common
>  Issue Type: Test
>  Components: fs
>Affects Versions: 3.0.0-alpha2
>Reporter: Matt Foley
>Assignee: Andras Bokor
> Attachments: HADOOP-7363.01.patch, HADOOP-7363.02.patch, 
> HADOOP-7363.03.patch, HADOOP-7363.04.patch, HADOOP-7363.05.patch, 
> HADOOP-7363.06.patch
>
>
> FileSystemContractBaseTest is supposed to be run with each concrete 
> FileSystem implementation to ensure adherence to the "contract" for 
> FileSystem behavior.  However, currently only HDFS and S3 do so.  
> RawLocalFileSystem, at least, needs to be added. 
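
For illustration, a minimal sketch of what such a contract test could look like, assuming the JUnit 3-style FileSystemContractBaseTest with a protected {{fs}} field as used in the Hadoop test tree of this era; the class name is illustrative and this is not the attached patch.

{code}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystemContractBaseTest;
import org.apache.hadoop.fs.RawLocalFileSystem;

public class TestRawLocalFileSystemContractSketch extends FileSystemContractBaseTest {
  @Override
  protected void setUp() throws Exception {
    // Bind the shared contract tests to the local (non-checksummed) file system.
    RawLocalFileSystem rawFs = new RawLocalFileSystem();
    rawFs.initialize(URI.create("file:///"), new Configuration());
    fs = rawFs;
    super.setUp();
  }
}
{code}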



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13496) Include file lengths in Mismatch in length error for distcp

2016-08-29 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HADOOP-13496:

Status: Patch Available  (was: Open)

> Include file lengths in Mismatch in length error for distcp
> ---
>
> Key: HADOOP-13496
> URL: https://issues.apache.org/jira/browse/HADOOP-13496
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ted Yu
>Priority: Minor
>  Labels: distcp
> Attachments: HADOOP-13496.v1.patch
>
>
> Currently RetriableFileCopyCommand doesn't show the perceived lengths in the 
> "Mismatch in length" error:
> {code}
> 2016-08-12 10:23:14,231 ERROR [LocalJobRunner Map Task Executor #0] 
> util.RetriableCommand(89): Failure in Retriable command: Copying 
> hdfs://localhost:53941/user/tyu/test-data/dc7c674a-c463-4798-8260-   
> c5d1e3440a4b/WALs/10.22.9.171,53952,1471022508087/10.22.9.171%2C53952%2C1471022508087.regiongroup-1.1471022510182
>  to hdfs://localhost:53941/backupUT/backup_1471022580616/WALs/10.22.9.
>171%2C53952%2C1471022508087.regiongroup-1.1471022510182
> java.io.IOException: Mismatch in length of 
> source:hdfs://localhost:53941/user/tyu/test-data/dc7c674a-c463-4798-8260-c5d1e3440a4b/WALs/10.22.9.171,53952,1471022508087/10.22.9.171%2C53952%2C1471022508087.regiongroup-1.1471022510182
>  and 
> target:hdfs://localhost:53941/backupUT/backup_1471022580616/WALs/.distcp.tmp.attempt_local344329843_0006_m_00_0
>   at 
> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.compareFileLengths(RetriableFileCopyCommand.java:193)
>   at 
> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:126)
>   at 
> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:99)
>   at 
> org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87)
>   at 
> org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:281)
>   at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:253)
>   at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:50)
> {code}
> It would be helpful to include both the expected length and the actual length.
> Thanks to [~yzhangal] for offline discussion.
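
For illustration, a hedged sketch of the kind of message the improvement asks for (hypothetical helper method, not the attached HADOOP-13496.v1.patch):

{code}
import java.io.IOException;

public final class LengthMismatchMessageSketch {
  /** Fail with an error that reports both the source and target lengths. */
  static void compareFileLengths(String source, long sourceLen,
                                 String target, long targetLen) throws IOException {
    if (sourceLen != targetLen) {
      throw new IOException("Mismatch in length of source:" + source
          + " (" + sourceLen + ") and target:" + target + " (" + targetLen + ")");
    }
  }
}
{code}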



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-7363) TestRawLocalFileSystemContract is needed

2016-08-29 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446357#comment-15446357
 ] 

Anu Engineer commented on HADOOP-7363:
--

[~boky01] Thanks for taking care of this. I have committed this to trunk. I am 
not resolving this yet, since there are some conflicts when applying the patch 
to branch-2. Could you please post a patch for branch-2 as well? I will use 
this same JIRA to commit to branch-2.



> TestRawLocalFileSystemContract is needed
> 
>
> Key: HADOOP-7363
> URL: https://issues.apache.org/jira/browse/HADOOP-7363
> Project: Hadoop Common
>  Issue Type: Test
>  Components: fs
>Affects Versions: 3.0.0-alpha2
>Reporter: Matt Foley
>Assignee: Andras Bokor
> Attachments: HADOOP-7363.01.patch, HADOOP-7363.02.patch, 
> HADOOP-7363.03.patch, HADOOP-7363.04.patch, HADOOP-7363.05.patch, 
> HADOOP-7363.06.patch
>
>
> FileSystemContractBaseTest is supposed to be run with each concrete 
> FileSystem implementation to ensure adherence to the "contract" for 
> FileSystem behavior.  However, currently only HDFS and S3 do so.  
> RawLocalFileSystem, at least, needs to be added. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13552) RetryInvocationHandler logs all remote exceptions

2016-08-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446329#comment-15446329
 ] 

Hudson commented on HADOOP-13552:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10364 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10364/])
HADOOP-13552. RetryInvocationHandler logs all remote exceptions. (jlowe: rev 
92d8f371553b88e5b3a9d3354e93f75d60d81368)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/RetryInvocationHandler.java


> RetryInvocationHandler logs all remote exceptions
> -
>
> Key: HADOOP-13552
> URL: https://issues.apache.org/jira/browse/HADOOP-13552
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.8.0
>Reporter: Jason Lowe
>Assignee: Jason Lowe
>Priority: Blocker
> Fix For: 2.8.0
>
> Attachments: HADOOP-13552.001.patch
>
>
> RetryInvocationHandler logs a warning for any exception that it does not 
> retry.  There are many exceptions that the client can automatically handle, 
> like FileNotFoundException, UnresolvedPathException, etc., so now every one 
> of these generates a scary-looking stack trace as a warning, and then the 
> program continues normally.
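
For illustration, a hedged sketch of the behavior the fix aims for (hypothetical helper using commons-logging, not the attached patch): keep non-retried exceptions out of the WARN log and let the caller handle them.

{code}
import java.io.FileNotFoundException;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public final class RetryLoggingSketch {
  private static final Log LOG = LogFactory.getLog(RetryLoggingSketch.class);

  /** Log loudly only when the call will actually be retried or failed over. */
  static void logInvocationException(Exception e, boolean willRetry) {
    if (willRetry) {
      LOG.warn("Exception while invoking call; retrying", e);
    } else {
      // Exceptions such as FileNotFoundException are routinely handled by the
      // caller, so avoid an alarming WARN-level stack trace here.
      LOG.debug("Exception while invoking call; not retrying", e);
    }
  }

  public static void main(String[] args) {
    logInvocationException(new FileNotFoundException("/missing/path"), false);
  }
}
{code}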



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13552) RetryInvocationHandler logs all remote exceptions

2016-08-29 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated HADOOP-13552:

   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

I committed this to trunk, branch-2, and branch-2.8.

> RetryInvocationHandler logs all remote exceptions
> -
>
> Key: HADOOP-13552
> URL: https://issues.apache.org/jira/browse/HADOOP-13552
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.8.0
>Reporter: Jason Lowe
>Assignee: Jason Lowe
>Priority: Blocker
> Fix For: 2.8.0
>
> Attachments: HADOOP-13552.001.patch
>
>
> RetryInvocationHandler logs a warning for any exception that it does not 
> retry.  There are many exceptions that the client can automatically handle, 
> like FileNotFoundException, UnresolvedPathException, etc., so now every one 
> of these generates a scary-looking stack trace as a warning, and then the 
> program continues normally.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13552) RetryInvocationHandler logs all remote exceptions

2016-08-29 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446249#comment-15446249
 ] 

Jason Lowe commented on HADOOP-13552:
-

Thanks for the review, Jing!  Committing this.

> RetryInvocationHandler logs all remote exceptions
> -
>
> Key: HADOOP-13552
> URL: https://issues.apache.org/jira/browse/HADOOP-13552
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.8.0
>Reporter: Jason Lowe
>Assignee: Jason Lowe
>Priority: Blocker
> Attachments: HADOOP-13552.001.patch
>
>
> RetryInvocationHandler logs a warning for any exception that it does not 
> retry.  There are many exceptions that the client can automatically handle, 
> like FileNotFoundException, UnresolvedPathException, etc., so now every one 
> of these generates a scary-looking stack trace as a warning, and then the 
> program continues normally.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-7363) TestRawLocalFileSystemContract is needed

2016-08-29 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446230#comment-15446230
 ] 

Anu Engineer commented on HADOOP-7363:
--

+1, Sorry I missed that. I will commit this shortly.

> TestRawLocalFileSystemContract is needed
> 
>
> Key: HADOOP-7363
> URL: https://issues.apache.org/jira/browse/HADOOP-7363
> Project: Hadoop Common
>  Issue Type: Test
>  Components: fs
>Affects Versions: 3.0.0-alpha2
>Reporter: Matt Foley
>Assignee: Andras Bokor
> Attachments: HADOOP-7363.01.patch, HADOOP-7363.02.patch, 
> HADOOP-7363.03.patch, HADOOP-7363.04.patch, HADOOP-7363.05.patch, 
> HADOOP-7363.06.patch
>
>
> FileSystemContractBaseTest is supposed to be run with each concrete 
> FileSystem implementation to ensure adherence to the "contract" for 
> FileSystem behavior.  However, currently only HDFS and S3 do so.  
> RawLocalFileSystem, at least, needs to be added. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-11086) Upgrade jets3t to 0.9.2

2016-08-29 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-11086:
-
Assignee: (was: Sean Busbey)
  Status: Open  (was: Patch Available)

The odds that I'll find the time to test updating this in time for 2.8 are very 
low. If I'm wrong I'll reassign to myself, but for now it's best if someone 
else looks after this.

> Upgrade jets3t to 0.9.2
> ---
>
> Key: HADOOP-11086
> URL: https://issues.apache.org/jira/browse/HADOOP-11086
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.6.0
>Reporter: Matteo Bertozzi
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-11086-v0.patch, HADOOP-11086.2.patch
>
>
> jets3t 0.9.2 contains a fix for a bug that caused failures of multi-part 
> uploads with server-side encryption.
> http://jets3t.s3.amazonaws.com/RELEASE_NOTES.html
> (it also removes an exception thrown from the RestS3Service constructor which 
> requires removing the try/catch around that code)
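
For illustration, a hedged sketch of the constructor-signature consequence mentioned above, assuming jets3t's RestS3Service(ProviderCredentials) constructor; the helper class is illustrative only.

{code}
import org.jets3t.service.impl.rest.httpclient.RestS3Service;
import org.jets3t.service.security.AWSCredentials;

public final class Jets3tConstructorSketch {
  static RestS3Service create(String accessKey, String secretKey) {
    // Before 0.9.2 this constructor declared a checked exception, so callers
    // wrapped it in try/catch; per the release notes cited above, 0.9.2 removes
    // that exception and the wrapper can be dropped.
    return new RestS3Service(new AWSCredentials(accessKey, secretKey));
  }
}
{code}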



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13557) UserGroupInformation created from a Subject incorrectly tries to renew the Kerberos ticket

2016-08-29 Thread Alejandro Abdelnur (JIRA)
Alejandro Abdelnur created HADOOP-13557:
---

 Summary: UserGroupInformation created from a Subject incorrectly 
tries to renew the Kerberos ticket
 Key: HADOOP-13557
 URL: https://issues.apache.org/jira/browse/HADOOP-13557
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.4, 2.7.2, 3.0.0-alpha2
Reporter: Alejandro Abdelnur


The UGI {{checkTGTAndReloginFromKeytab()}} method checks certain conditions and 
if they are met it invokes the {{reloginFromKeytab()}}. The 
{{reloginFromKeytab()}} method then fails with an {{IOException}} 
"loginUserFromKeyTab must be done first" because there is no keytab associated 
with the UGI.

The {{checkTGTAndReloginFromKeytab()}} method checks if there is a keytab 
({{isKeytab}} UGI instance variable) associated with the UGI, if there is one 
it triggers a call to {{reloginFromKeytab()}}. The problem is that the 
{{keytabFile}} UGI instance variable is NULL, and that triggers the mentioned 
{{IOException}}.


The root of the problem seems to be in creating a UGI via the 
{{UGI.loginUserFromSubject(Subject)}} method: this method uses the 
{{UserGroupInformation(Subject)}} constructor, and that constructor does the 
following to determine whether there is a keytab or not.

{code}
  this.isKeytab = KerberosUtil.hasKerberosKeyTab(subject);
{code}

If the {{Subject}} given had a keytab, then the UGI instance will have the 
{{isKeytab}} set to TRUE.

It sets the UGI instance as if it had a keytab because the Subject has a 
keytab. This has two problems:

First, it does not set the keytab file (and this, having {{isKeytab}} set to 
TRUE and {{keytabFile}} set to NULL, is what triggers the {{IOException}} in 
the method {{reloginFromKeytab()}}).

Second (and even if the first problem is fixed, this is still a problem), it 
assumes that because the Subject has a keytab it is up to the UGI to do the 
relogin using the keytab. This is incorrect if the UGI was created using the 
{{UGI.loginUserFromSubject(Subject)}} method. In that case, the owner of the 
Subject is not the UGI but the caller, so the caller is responsible for 
renewing the Kerberos tickets and the UGI should not try to do so.
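
For illustration, a minimal caller-side sketch of the scenario described above; the Subject is assumed to come from the caller's own JAAS login, and the class and method names are illustrative.

{code}
import java.io.IOException;
import javax.security.auth.Subject;
import org.apache.hadoop.security.UserGroupInformation;

public final class SubjectLoginSketch {
  /** The caller owns the Subject and only lends it to UGI. */
  static void loginFromCallerSubject(Subject callerOwnedSubject) throws IOException {
    UserGroupInformation.loginUserFromSubject(callerOwnedSubject);
    UserGroupInformation ugi = UserGroupInformation.getLoginUser();

    // If the Subject carries keytab credentials, isKeytab becomes TRUE while no
    // keytab file is recorded, so this later call ends up in reloginFromKeytab()
    // and fails with "loginUserFromKeyTab must be done first".
    ugi.checkTGTAndReloginFromKeytab();
  }
}
{code}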




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13558) UserGroupInformation created from a Subject incorrectly tries to renew the Kerberos ticket

2016-08-29 Thread Alejandro Abdelnur (JIRA)
Alejandro Abdelnur created HADOOP-13558:
---

 Summary: UserGroupInformation created from a Subject incorrectly 
tries to renew the Kerberos ticket
 Key: HADOOP-13558
 URL: https://issues.apache.org/jira/browse/HADOOP-13558
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.4, 2.7.2, 3.0.0-alpha2
Reporter: Alejandro Abdelnur


The UGI {{checkTGTAndReloginFromKeytab()}} method checks certain conditions and 
if they are met it invokes the {{reloginFromKeytab()}}. The 
{{reloginFromKeytab()}} method then fails with an {{IOException}} 
"loginUserFromKeyTab must be done first" because there is no keytab associated 
with the UGI.

The {{checkTGTAndReloginFromKeytab()}} method checks if there is a keytab 
({{isKeytab}} UGI instance variable) associated with the UGI, if there is one 
it triggers a call to {{reloginFromKeytab()}}. The problem is that the 
{{keytabFile}} UGI instance variable is NULL, and that triggers the mentioned 
{{IOException}}.


The root of the problem seems to be in creating a UGI via the 
{{UGI.loginUserFromSubject(Subject)}} method: this method uses the 
{{UserGroupInformation(Subject)}} constructor, and that constructor does the 
following to determine whether there is a keytab or not.

{code}
  this.isKeytab = KerberosUtil.hasKerberosKeyTab(subject);
{code}

If the {{Subject}} given had a keytab, then the UGI instance will have the 
{{isKeytab}} set to TRUE.

It sets the UGI instance as if it had a keytab because the Subject has a 
keytab. This has two problems:

First, it does not set the keytab file (and this, having {{isKeytab}} set to 
TRUE and {{keytabFile}} set to NULL, is what triggers the {{IOException}} in 
the method {{reloginFromKeytab()}}).

Second (and even if the first problem is fixed, this is still a problem), it 
assumes that because the Subject has a keytab it is up to the UGI to do the 
relogin using the keytab. This is incorrect if the UGI was created using the 
{{UGI.loginUserFromSubject(Subject)}} method. In that case, the owner of the 
Subject is not the UGI but the caller, so the caller is responsible for 
renewing the Kerberos tickets and the UGI should not try to do so.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


