[jira] [Commented] (HADOOP-12794) Support additional compression levels for GzipCodec

2016-02-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15142478#comment-15142478
 ] 

Hadoop QA commented on HADOOP-12794:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
53s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 38s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 0s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 11s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 11s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 13s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 45s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 45s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 22s 
{color} | {color:red} hadoop-common-project/hadoop-common: patch generated 7 
new + 13 unchanged - 0 fixed = 20 total (was 13) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 8s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
57s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 35s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 39s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 69m 47s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12787439/HADOOP-12794.0001.patch
 |
| JIRA Issue | HADOOP-12794 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux e38242a11dc2 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | 

[jira] [Updated] (HADOOP-12794) Support additional compression levels for GzipCodec

2016-02-11 Thread Ravi Mutyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Mutyala updated HADOOP-12794:
--
Attachment: HADOOP-12794.0001.patch

> Support additional compression levels for GzipCodec
> ---
>
> Key: HADOOP-12794
> URL: https://issues.apache.org/jira/browse/HADOOP-12794
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 2.7.2
>Reporter: Ravi Mutyala
>Assignee: Ravi Mutyala
> Fix For: 2.7.3
>
> Attachments: HADOOP-12794.0001.patch
>
>
> gzip supports compression levels 1-9. Compression level 4 seems to give the 
> best compression per unit of CPU time in some of our tests. Right now the 
> ZlibCompressor used by GzipCodec supports only levels 1, 9, and 6 (the default).
> Exposing all compression levels supported by the native ZlibCompressor would 
> give users more options to tune the compression/CPU trade-off. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12794) Support additional compression levels for GzipCodec

2016-02-11 Thread Ravi Mutyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Mutyala updated HADOOP-12794:
--
Release Note: Added new compression levels for GzipCodec that can be set via 
zlib.compress.level
  Status: Patch Available  (was: Open)

The new compression levels can be used by setting zlib.compress.level, e.g.:

-D zlib.compress.level=FOUR -D 
mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.GzipCodec
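For readers unfamiliar with the trade-off this issue targets, the same zlib levels 1-9 are exposed by the JDK's own zlib binding. The following standalone sketch uses plain java.util.zip (not Hadoop's ZlibCompressor, so it only illustrates the level/size trade-off, not the patch itself) to print the compressed size of a sample buffer at each level:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.Deflater;
import java.util.zip.DeflaterOutputStream;

public class GzipLevelDemo {

    /** Deflates data at the given zlib level (1-9) and returns the output size. */
    static int compressedSize(byte[] data, int level) {
        Deflater deflater = new Deflater(level);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (DeflaterOutputStream dos = new DeflaterOutputStream(out, deflater)) {
            dos.write(data);
        } catch (IOException e) {
            throw new RuntimeException(e); // cannot happen with an in-memory sink
        } finally {
            deflater.end();
        }
        return out.size();
    }

    public static void main(String[] args) {
        // Repetitive input so every level compresses well.
        byte[] data = new byte[1 << 16];
        for (int i = 0; i < data.length; i++) {
            data[i] = (byte) (i % 17);
        }
        for (int level = 1; level <= 9; level++) {
            System.out.println("level " + level + ": "
                    + compressedSize(data, level) + " bytes");
        }
    }
}
```

On real workloads the interesting metric is the ratio of CPU time to size reduction, which is what motivated exposing level 4.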


> Support additional compression levels for GzipCodec
> ---
>
> Key: HADOOP-12794
> URL: https://issues.apache.org/jira/browse/HADOOP-12794
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 2.7.2
>Reporter: Ravi Mutyala
>Assignee: Ravi Mutyala
> Fix For: 2.7.3
>
>
> gzip supports compression levels 1-9. Compression level 4 seems to give the 
> best compression per unit of CPU time in some of our tests. Right now the 
> ZlibCompressor used by GzipCodec supports only levels 1, 9, and 6 (the default).
> Exposing all compression levels supported by the native ZlibCompressor would 
> give users more options to tune the compression/CPU trade-off. 





[jira] [Updated] (HADOOP-12746) ReconfigurableBase should update the cached configuration

2016-02-11 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-12746:
---
Attachment: HADOOP-12746.03.patch

The v03 patch adds more unit tests.

> ReconfigurableBase should update the cached configuration
> -
>
> Key: HADOOP-12746
> URL: https://issues.apache.org/jira/browse/HADOOP-12746
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HADOOP-12476.02.patch, HADOOP-12746.01.patch, 
> HADOOP-12746.03.patch
>
>
> {{ReconfigurableBase}} does not always update the cached configuration after 
> a property is reconfigured.
> The older {{#reconfigureProperty}} does, but {{ReconfigurationThread}} does not.
> See discussion on HDFS-7035 for more background.





[jira] [Assigned] (HADOOP-12692) Maven's DependencyConvergence rule failed

2016-02-11 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HADOOP-12692:


Assignee: Wei-Chiu Chuang

> Maven's DependencyConvergence rule failed
> -
>
> Key: HADOOP-12692
> URL: https://issues.apache.org/jira/browse/HADOOP-12692
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
> Environment: Jenkins
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12692.001.patch
>
>
> I am seeing a Maven warning in Jenkins:
> https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/761/console
> This nightly job failed because a Maven enforcer rule failed:
> {noformat}
> [WARNING] Rule 0: org.apache.maven.plugins.enforcer.DependencyConvergence 
> failed with message:
> Failed while enforcing releasability the error(s) are [
> Dependency convergence error for org.apache.hadoop:hadoop-auth:3.0.0-SNAPSHOT 
> paths to dependency are:
> +-org.apache.hadoop:hadoop-hdfs-httpfs:3.0.0-SNAPSHOT
>   +-org.apache.hadoop:hadoop-auth:3.0.0-SNAPSHOT
> and
> +-org.apache.hadoop:hadoop-hdfs-httpfs:3.0.0-SNAPSHOT
>   +-org.apache.hadoop:hadoop-common:3.0.0-SNAPSHOT
> +-org.apache.hadoop:hadoop-auth:3.0.0-SNAPSHOT
> and
> +-org.apache.hadoop:hadoop-hdfs-httpfs:3.0.0-SNAPSHOT
>   +-org.apache.hadoop:hadoop-common:3.0.0-SNAPSHOT
> +-org.apache.hadoop:hadoop-auth:3.0.0-SNAPSHOT
> and
> +-org.apache.hadoop:hadoop-hdfs-httpfs:3.0.0-SNAPSHOT
>   +-org.apache.hadoop:hadoop-auth:3.0.0-20160107.005725-7960
> ]
> {noformat}
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-enforcer-plugin:1.3.1:enforce (depcheck) on 
> project hadoop-hdfs-httpfs: Some Enforcer rules have failed. Look above for 
> specific messages explaining why the rule failed. -> [Help 1]
> {noformat}
> Looks like httpfs depends on two versions of hadoop-auth: 3.0.0-SNAPSHOT and 
> a timestamp-based one.
> I think this can be fixed by updating one of the pom.xml files. But I am not 
> exactly sure how to do it. Need a Maven expert here.
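One conventional way to make the enforcer's DependencyConvergence rule pass is to pin the conflicting artifact in the module's dependencyManagement. This is only a sketch of that technique, not the committed fix; whether it applies depends on where the timestamped hadoop-auth version actually enters the tree:

```xml
<!-- Hypothetical fragment for hadoop-hdfs-httpfs/pom.xml: pins hadoop-auth
     to the reactor version so the enforcer sees a single converged version. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-auth</artifactId>
      <version>${project.version}</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```

`mvn dependency:tree -Dverbose` on the httpfs module would show which path pulls in the timestamped snapshot.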





[jira] [Updated] (HADOOP-12746) ReconfigurableBase should update the cached configuration

2016-02-11 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-12746:
---
Description: 
{{ReconfigurableBase#startReconfigurationTask}} does not update its cached 
configuration after a property is reconfigured. This means that configuration 
values queried via {{getConf().get(...)}} can be outdated. One way to fix this 
is to have {{ReconfigurableBase#reconfigurePropertyImpl}} return the new 
effective value of the setting, so that the caller (i.e. ReconfigurableBase) 
can use it to update the configuration.

See discussion on HDFS-7035 for more background.

  was:
{{ReconfigurableBase}} does not always update the cached configuration after a 
property is reconfigured.

The older {{#reconfigureProperty}} does, but {{ReconfigurationThread}} does not.

See discussion on HDFS-7035 for more background.


> ReconfigurableBase should update the cached configuration
> -
>
> Key: HADOOP-12746
> URL: https://issues.apache.org/jira/browse/HADOOP-12746
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HADOOP-12476.02.patch, HADOOP-12746.01.patch, 
> HADOOP-12746.03.patch
>
>
> {{ReconfigurableBase#startReconfigurationTask}} does not update its cached 
> configuration after a property is reconfigured. This means that configuration 
> values queried via {{getConf().get(...)}} can be outdated. One way to fix this 
> is to have {{ReconfigurableBase#reconfigurePropertyImpl}} return the new 
> effective value of the setting, so that the caller (i.e. ReconfigurableBase) 
> can use it to update the configuration.
> See discussion on HDFS-7035 for more background.
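The contract proposed above can be sketched outside Hadoop. The names below mirror the JIRA discussion, but this is an illustrative sketch of the pattern (subclass hook returns the effective value; the base class owns the cache update), not the patch itself:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch (not Hadoop's actual code): the subclass hook returns
// the value that actually took effect, and the base class, not the subclass,
// updates the cached configuration.
abstract class ReconfigurableSketch {
    private final Map<String, String> cachedConf = new HashMap<>();

    /** Applies the change; returns the effective value (may differ from newVal). */
    protected abstract String reconfigurePropertyImpl(String property, String newVal);

    public final void reconfigureProperty(String property, String newVal) {
        String effective = reconfigurePropertyImpl(property, newVal);
        if (effective == null) {
            cachedConf.remove(property);      // property reset to its default
        } else {
            cachedConf.put(property, effective);
        }
    }

    public String get(String property) {
        return cachedConf.get(property);      // now always reflects reality
    }
}

public class ReconfigDemo extends ReconfigurableSketch {
    @Override
    protected String reconfigurePropertyImpl(String property, String newVal) {
        // Imagine normalizing the input: what we return is the effective value.
        return newVal == null ? null : newVal.trim();
    }

    public static void main(String[] args) {
        ReconfigDemo d = new ReconfigDemo();
        d.reconfigureProperty("dfs.heartbeat.interval", " 3 ");
        if (!"3".equals(d.get("dfs.heartbeat.interval"))) {
            throw new AssertionError("cached configuration not updated");
        }
        System.out.println("cached value: " + d.get("dfs.heartbeat.interval"));
    }
}
```

The key design point is that only the base class touches the cache, so a reconfiguration thread cannot skip the update the way {{ReconfigurationThread}} did.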





[jira] [Updated] (HADOOP-12746) ReconfigurableBase should update the cached configuration

2016-02-11 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-12746:
---
Attachment: HADOOP-12746.04.patch

> ReconfigurableBase should update the cached configuration
> -
>
> Key: HADOOP-12746
> URL: https://issues.apache.org/jira/browse/HADOOP-12746
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HADOOP-12476.02.patch, HADOOP-12746.01.patch, 
> HADOOP-12746.03.patch, HADOOP-12746.04.patch
>
>
> {{ReconfigurableBase#startReconfigurationTask}} does not update its cached 
> configuration after a property is reconfigured. This means that configuration 
> values queried via {{getConf().get(...)}} can be outdated. One way to fix this 
> is to have {{ReconfigurableBase#reconfigurePropertyImpl}} return the new 
> effective value of the setting, so that the caller (i.e. ReconfigurableBase) 
> can use it to update the configuration.
> See discussion on HDFS-7035 for more background.





[jira] [Commented] (HADOOP-12776) Remove getaclstatus call for non-acl commands in getfacl.

2016-02-11 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15144123#comment-15144123
 ] 

Vinayakumar B commented on HADOOP-12776:


+1 for the patch. 
Will commit shortly.


> Remove getaclstatus call for non-acl commands in getfacl.
> -
>
> Key: HADOOP-12776
> URL: https://issues.apache.org/jira/browse/HADOOP-12776
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HADOOP-12776.patch
>
>
> Remove getaclstatus call for non-acl commands in getfacl.





[jira] [Updated] (HADOOP-12776) Remove getaclstatus call for non-acl commands in getfacl.

2016-02-11 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HADOOP-12776:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Committed to trunk, branch-2 and branch-2.8
Thanks [~brahmareddy]


> Remove getaclstatus call for non-acl commands in getfacl.
> -
>
> Key: HADOOP-12776
> URL: https://issues.apache.org/jira/browse/HADOOP-12776
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0
>
> Attachments: HADOOP-12776.patch
>
>
> Remove getaclstatus call for non-acl commands in getfacl.





[jira] [Commented] (HADOOP-12776) Remove getaclstatus call for non-acl commands in getfacl.

2016-02-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15144132#comment-15144132
 ] 

Hudson commented on HADOOP-12776:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9290 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9290/])
HADOOP-12776. Remove getaclstatus call for non-acl commands in getfacl. 
(vinayakumarb: rev c78740a979c1b434c6595b302bd376fc3d432509)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/AclCommands.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> Remove getaclstatus call for non-acl commands in getfacl.
> -
>
> Key: HADOOP-12776
> URL: https://issues.apache.org/jira/browse/HADOOP-12776
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0
>
> Attachments: HADOOP-12776.patch
>
>
> Remove getaclstatus call for non-acl commands in getfacl.





[jira] [Commented] (HADOOP-12776) Remove getaclstatus call for non-acl commands in getfacl.

2016-02-11 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15144133#comment-15144133
 ] 

Brahma Reddy Battula commented on HADOOP-12776:
---

Thanks a lot [~vinayrpet]

> Remove getaclstatus call for non-acl commands in getfacl.
> -
>
> Key: HADOOP-12776
> URL: https://issues.apache.org/jira/browse/HADOOP-12776
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0
>
> Attachments: HADOOP-12776.patch
>
>
> Remove getaclstatus call for non-acl commands in getfacl.





[jira] [Commented] (HADOOP-12589) Fix intermittent test failure of TestCopyPreserveFlag

2016-02-11 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15144191#comment-15144191
 ] 

Akira AJISAKA commented on HADOOP-12589:


LGTM, +1.

> Fix intermittent test failure of TestCopyPreserveFlag 
> --
>
> Key: HADOOP-12589
> URL: https://issues.apache.org/jira/browse/HADOOP-12589
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
> Environment: jenkins
>Reporter: Tsuyoshi Ozawa
>Assignee: Masatake Iwasaki
> Attachments: HADOOP-12589.001.patch, HADOOP-12589.002.patch
>
>
> Found this issue on HADOOP-11149.
> {quote}
> Tests run: 8, Failures: 0, Errors: 8, Skipped: 0, Time elapsed: 0.949 sec <<< 
> FAILURE! - in org.apache.hadoop.fs.shell.TestCopyPreserveFlag
> testDirectoryCpWithP(org.apache.hadoop.fs.shell.TestCopyPreserveFlag)  Time 
> elapsed: 0.616 sec  <<< ERROR!
> java.io.IOException: Mkdirs failed to create d0 (exists=false, 
> cwd=/testptch/hadoop/hadoop-common-project/hadoop-common/target/test/data/2/testStat)
>   at 
> org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:449)
>   at 
> org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:435)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:913)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:894)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:856)
>   at org.apache.hadoop.fs.FileSystem.createNewFile(FileSystem.java:1150)
>   at 
> org.apache.hadoop.fs.shell.TestCopyPreserveFlag.initialize(TestCopyPreserveFlag.java:72)
> {quote}





[jira] [Commented] (HADOOP-12787) KMS SPNEGO sequence does not work with WEBHDFS

2016-02-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15144204#comment-15144204
 ] 

Hadoop QA commented on HADOOP-12787:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 16s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 21s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 24s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 26s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 11s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 11s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 11s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 17s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 17s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 24s 
{color} | {color:red} hadoop-hdfs-project: patch generated 1 new + 68 unchanged 
- 0 fixed = 69 total (was 68) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 17s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 3s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 48s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_72. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 52m 19s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_72. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 55s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 49m 14s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} Patch does not generate ASF License 

[jira] [Assigned] (HADOOP-12786) "hadoop key" command usage is not documented

2016-02-11 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen reassigned HADOOP-12786:
--

Assignee: Xiao Chen

> "hadoop key" command usage is not documented
> 
>
> Key: HADOOP-12786
> URL: https://issues.apache.org/jira/browse/HADOOP-12786
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Akira AJISAKA
>Assignee: Xiao Chen
>  Labels: newbie
>
> I found "hadoop key" command usage is not documented when reviewing HDFS-9784.
> In addition, we should document that uppercase is not allowed for key name.





[jira] [Commented] (HADOOP-12786) "hadoop key" command usage is not documented

2016-02-11 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15144217#comment-15144217
 ] 

Xiao Chen commented on HADOOP-12786:


Thanks [~ajisakaa] for filing this. I'd like to work on this one, and gain more 
knowledge.

> "hadoop key" command usage is not documented
> 
>
> Key: HADOOP-12786
> URL: https://issues.apache.org/jira/browse/HADOOP-12786
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Akira AJISAKA
>  Labels: newbie
>
> I found "hadoop key" command usage is not documented when reviewing HDFS-9784.
> In addition, we should document that uppercase is not allowed for key name.





[jira] [Commented] (HADOOP-12692) Maven's DependencyConvergence rule failed

2016-02-11 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15144087#comment-15144087
 ] 

Wei-Chiu Chuang commented on HADOOP-12692:
--

Assigning this jira to me. I have been seeing this Maven failure much more 
frequently in the past month.
I googled a bit, and similar failures happened as early as Aug 22, 2015, in 
Jenkins build Hadoop-Hdfs-trunk #, after HADOOP-12347 (Fix mismatched 
parameter name in javadocs of AuthToken#setMaxInactives). Maybe that commit is 
not relevant, but it offers an anchor to start looking.

> Maven's DependencyConvergence rule failed
> -
>
> Key: HADOOP-12692
> URL: https://issues.apache.org/jira/browse/HADOOP-12692
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
> Environment: Jenkins
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12692.001.patch
>
>
> I am seeing a Maven warning in Jenkins:
> https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/761/console
> This nightly job failed because a Maven enforcer rule failed:
> {noformat}
> [WARNING] Rule 0: org.apache.maven.plugins.enforcer.DependencyConvergence 
> failed with message:
> Failed while enforcing releasability the error(s) are [
> Dependency convergence error for org.apache.hadoop:hadoop-auth:3.0.0-SNAPSHOT 
> paths to dependency are:
> +-org.apache.hadoop:hadoop-hdfs-httpfs:3.0.0-SNAPSHOT
>   +-org.apache.hadoop:hadoop-auth:3.0.0-SNAPSHOT
> and
> +-org.apache.hadoop:hadoop-hdfs-httpfs:3.0.0-SNAPSHOT
>   +-org.apache.hadoop:hadoop-common:3.0.0-SNAPSHOT
> +-org.apache.hadoop:hadoop-auth:3.0.0-SNAPSHOT
> and
> +-org.apache.hadoop:hadoop-hdfs-httpfs:3.0.0-SNAPSHOT
>   +-org.apache.hadoop:hadoop-common:3.0.0-SNAPSHOT
> +-org.apache.hadoop:hadoop-auth:3.0.0-SNAPSHOT
> and
> +-org.apache.hadoop:hadoop-hdfs-httpfs:3.0.0-SNAPSHOT
>   +-org.apache.hadoop:hadoop-auth:3.0.0-20160107.005725-7960
> ]
> {noformat}
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-enforcer-plugin:1.3.1:enforce (depcheck) on 
> project hadoop-hdfs-httpfs: Some Enforcer rules have failed. Look above for 
> specific messages explaining why the rule failed. -> [Help 1]
> {noformat}
> Looks like httpfs depends on two versions of hadoop-auth: 3.0.0-SNAPSHOT and 
> a timestamp-based one.
> I think this can be fixed by updating one of the pom.xml files. But I am not 
> exactly sure how to do it. Need a Maven expert here.





[jira] [Updated] (HADOOP-12746) ReconfigurableBase should update the cached configuration

2016-02-11 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-12746:
---
Attachment: HADOOP-12746.04.patch

The v04 patch adds two more unit tests.

> ReconfigurableBase should update the cached configuration
> -
>
> Key: HADOOP-12746
> URL: https://issues.apache.org/jira/browse/HADOOP-12746
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HADOOP-12476.02.patch, HADOOP-12746.01.patch, 
> HADOOP-12746.03.patch, HADOOP-12746.04.patch
>
>
> {{ReconfigurableBase#startReconfigurationTask}} does not update its cached 
> configuration after a property is reconfigured. This means that configuration 
> values queried via {{getConf().get(...)}} can be outdated. One way to fix this 
> is to have {{ReconfigurableBase#reconfigurePropertyImpl}} return the new 
> effective value of the setting, so that the caller (i.e. ReconfigurableBase) 
> can use it to update the configuration.
> See discussion on HDFS-7035 for more background.





[jira] [Updated] (HADOOP-12746) ReconfigurableBase should update the cached configuration

2016-02-11 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-12746:
---
Attachment: (was: HADOOP-12746.04.patch)

> ReconfigurableBase should update the cached configuration
> -
>
> Key: HADOOP-12746
> URL: https://issues.apache.org/jira/browse/HADOOP-12746
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HADOOP-12476.02.patch, HADOOP-12746.01.patch, 
> HADOOP-12746.03.patch
>
>
> {{ReconfigurableBase#startReconfigurationTask}} does not update its cached 
> configuration after a property is reconfigured. This means that configuration 
> values queried via {{getConf().get(...)}} can be outdated. One way to fix this 
> is to have {{ReconfigurableBase#reconfigurePropertyImpl}} return the new 
> effective value of the setting, so that the caller (i.e. ReconfigurableBase) 
> can use it to update the configuration.
> See discussion on HDFS-7035 for more background.





[jira] [Updated] (HADOOP-12747) support wildcard in libjars argument

2016-02-11 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated HADOOP-12747:
-
Attachment: HADOOP-12747.03.patch

Posted patch v.3.

Added a check for a non-local wildcard path, and a warning log for an empty 
directory.

> support wildcard in libjars argument
> 
>
> Key: HADOOP-12747
> URL: https://issues.apache.org/jira/browse/HADOOP-12747
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: util
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
> Attachments: HADOOP-12747.01.patch, HADOOP-12747.02.patch, 
> HADOOP-12747.03.patch
>
>
> There is a problem when a user job adds too many dependency jars in their 
> command line. The HADOOP_CLASSPATH part can be addressed, including using 
> wildcards (\*). But the same cannot be done with the -libjars argument. Today 
> it takes only fully specified file paths.
> We may want to consider supporting wildcards as a way to help users in this 
> situation. The idea is to handle it the same way the JVM does it: \* expands 
> to the list of jars in that directory. It does not traverse into any child 
> directory.
> Also, it probably would be a good idea to do it only for libjars (i.e. don't 
> do it for -files and -archives).
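The JVM-style expansion described above can be sketched with a hypothetical helper (this is not the patch's actual code): "dir/*" expands to the jars directly under dir, with no recursion into child directories, and fully specified paths pass through unchanged.

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of JVM-classpath-style wildcard expansion for libjars.
// "dir/*" expands to the jars directly under dir; no recursion into children.
public class LibJarsWildcard {
    static List<String> expand(String entry) {
        if (!entry.endsWith("/*")) {
            return List.of(entry);   // fully specified path: pass through
        }
        File dir = new File(entry.substring(0, entry.length() - 2));
        List<String> jars = new ArrayList<>();
        File[] children = dir.listFiles();   // direct children only
        if (children != null) {
            for (File f : children) {
                if (f.isFile() && f.getName().endsWith(".jar")) {
                    jars.add(f.getPath());
                }
            }
        }
        return jars;
    }
}
```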





[jira] [Updated] (HADOOP-1822) Allow SOCKS proxy configuration to remotely access the DFS and submit Jobs

2016-02-11 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-1822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-1822:
-
Priority: Minor  (was: Critical)

> Allow SOCKS proxy configuration to remotely access the DFS and submit Jobs
> --
>
> Key: HADOOP-1822
> URL: https://issues.apache.org/jira/browse/HADOOP-1822
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: ipc
>Reporter: Christophe Taton
>Assignee: Christophe Taton
>Priority: Minor
> Fix For: 0.15.0
>
> Attachments: 1822_2007-10-02_2.patch, 1822_2007-10-03_2.patch
>
>
> The purpose of this issue is to introduce a new configuration entry to setup 
> SOCKS proxy for DFS and JobTracker clients.
> This enables users to remotely access the DFS and submit Jobs as if they were 
> directly connected to the cluster Hadoop runs on.





[jira] [Updated] (HADOOP-12795) KMS does not log detailed stack trace for unexpected errors.

2016-02-11 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-12795:
---
Status: Patch Available  (was: Open)

> KMS does not log detailed stack trace for unexpected errors.
> 
>
> Key: HADOOP-12795
> URL: https://issues.apache.org/jira/browse/HADOOP-12795
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Chris Nauroth
> Attachments: HADOOP-12795.001.patch
>
>
> If the KMS server encounters an unexpected error resulting in an HTTP 500 
> response, it does not log the stack trace.  This makes it difficult to 
> troubleshoot.  The client side exception cannot provide further details.





[jira] [Updated] (HADOOP-12795) KMS does not log detailed stack trace for unexpected errors.

2016-02-11 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-12795:
---
Attachment: HADOOP-12795.001.patch

It appears there was some consideration of logging stack traces at some point.  
There is a {{KMSExceptionsProvider#log}} method that would log the stack trace, 
but I can't find any code that calls that method.  Maybe we just need to add a 
call to that method when there is an internal server error.  (See attached 
patch.)

[~asuresh], could you please comment?  Thank you.
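The general idea, logging the full stack trace when an unexpected error is mapped to an HTTP 500 response, can be illustrated with a minimal java.util.logging sketch (this is not the actual KMSExceptionsProvider code; names here are illustrative only):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Minimal illustration of the idea behind the patch: when an unexpected
// error becomes an HTTP 500, log the throwable so the stack trace is kept.
public class ServerErrorLogging {
    private static final Logger LOG = Logger.getLogger("kms");

    static int toResponseStatus(Throwable ex) {
        int status = 500;  // unexpected error: internal server error
        // Passing the throwable makes the log handler emit the full stack
        // trace, instead of only the (often uninformative) message string.
        LOG.log(Level.SEVERE, "Unexpected error, returning HTTP " + status, ex);
        return status;
    }
}
```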

> KMS does not log detailed stack trace for unexpected errors.
> 
>
> Key: HADOOP-12795
> URL: https://issues.apache.org/jira/browse/HADOOP-12795
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Chris Nauroth
> Attachments: HADOOP-12795.001.patch
>
>
> If the KMS server encounters an unexpected error resulting in an HTTP 500 
> response, it does not log the stack trace.  This makes it difficult to 
> troubleshoot.  The client side exception cannot provide further details.





[jira] [Commented] (HADOOP-9969) TGT expiration doesn't trigger Kerberos relogin

2016-02-11 Thread Greg Senia (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15143360#comment-15143360
 ] 

Greg Senia commented on HADOOP-9969:


[~acmurthy], can we have a quick discussion on this JIRA to find out what is 
going on with it? I think Dan or Beth will work to set something up.


> TGT expiration doesn't trigger Kerberos relogin
> ---
>
> Key: HADOOP-9969
> URL: https://issues.apache.org/jira/browse/HADOOP-9969
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc, security
>Affects Versions: 2.1.0-beta, 2.5.0, 2.5.2, 2.6.0, 2.6.1, 2.8.0, 2.7.1, 
> 2.6.2, 2.6.3
> Environment: IBM JDK7
>Reporter: Yu Gao
> Attachments: HADOOP-9969.patch, JobTracker.log
>
>
> In HADOOP-9698 & HADOOP-9850, RPC client and Sasl client have been changed to 
> respect the auth method advertised from server, instead of blindly attempting 
> the configured one at client side. However, when TGT has expired, an 
> exception will be thrown from SaslRpcClient#createSaslClient(SaslAuth 
> authType), and at this time the authMethod still holds the initial value 
> which is SIMPLE and never has a chance to be updated with the expected one 
> requested by server, so kerberos relogin will not happen.





[jira] [Updated] (HADOOP-3443) map outputs should not be renamed between partitions

2016-02-11 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-3443:
-
Assignee: Owen O'Malley

> map outputs should not be renamed between partitions
> 
>
> Key: HADOOP-3443
> URL: https://issues.apache.org/jira/browse/HADOOP-3443
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.17.0
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
>Priority: Critical
> Fix For: 0.18.0
>
> Attachments: hadoop-3443-1.patch, hadoop-3443-1v17.patch
>
>
> If a map finishes with out having to spill its data buffer, the map outputs 
> are sorted and written to disk. However, no care is taken to make sure that 
> the same partition is used to write it out before it is renamed. On nodes 
> with multiple disks assigned to the task trackers, this will likely cause an 
> addition read/write cycle to disk that is very expensive.





[jira] [Created] (HADOOP-12795) KMS does not log detailed stack trace for unexpected errors.

2016-02-11 Thread Chris Nauroth (JIRA)
Chris Nauroth created HADOOP-12795:
--

 Summary: KMS does not log detailed stack trace for unexpected 
errors.
 Key: HADOOP-12795
 URL: https://issues.apache.org/jira/browse/HADOOP-12795
 Project: Hadoop Common
  Issue Type: Bug
  Components: kms
Reporter: Chris Nauroth


If the KMS server encounters an unexpected error resulting in an HTTP 500 
response, it does not log the stack trace.  This makes it difficult to 
troubleshoot.  The client side exception cannot provide further details.





[jira] [Commented] (HADOOP-12787) KMS SPNEGO sequence does not work with WEBHDFS

2016-02-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15143286#comment-15143286
 ] 

Hadoop QA commented on HADOOP-12787:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 43s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 57s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 55s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 3s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 43s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
29s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 39s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 27s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 28s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 26s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 25s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 53s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 57m 1s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 59s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 55m 11s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} 

[jira] [Commented] (HADOOP-12795) KMS does not log detailed stack trace for unexpected errors.

2016-02-11 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15143439#comment-15143439
 ] 

Wei-Chiu Chuang commented on HADOOP-12795:
--

Hi [~cnauroth], I've recently encountered a similar issue with KMS returning 
code 500, and it would be great to add this log. We ended up looking at 
kms-catalina.log and found indirect evidence of the root cause. Have you 
checked that log too?

As for the patch, the log you added seems to be used in tests only, not in 
production.

> KMS does not log detailed stack trace for unexpected errors.
> 
>
> Key: HADOOP-12795
> URL: https://issues.apache.org/jira/browse/HADOOP-12795
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Chris Nauroth
> Attachments: HADOOP-12795.001.patch
>
>
> If the KMS server encounters an unexpected error resulting in an HTTP 500 
> response, it does not log the stack trace.  This makes it difficult to 
> troubleshoot.  The client side exception cannot provide further details.





[jira] [Commented] (HADOOP-12791) Convert tests to use JUnit4

2016-02-11 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15142520#comment-15142520
 ] 

Steve Loughran commented on HADOOP-12791:
-

If you are going to do this:

# it is better to use a @Rule to specify a timeout, rather than a per-test 
declaration
# if you have a base test class that has the rule, extends Assert, and has a 
{{@Before setup()}} & {{@After teardown()}}, then migration is very 
straightforward: just add @Test to each test* method.
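The base-class approach described above might look roughly like the following sketch (illustrative names only, assuming JUnit 4 on the classpath; this is not actual Hadoop test code):

```java
import org.junit.After;
import org.junit.Assert;
import org.junit.Before;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.Timeout;

// Illustrative base class: every subclass test inherits the timeout rule
// and the setup/teardown lifecycle, plus Assert's static methods.
public abstract class HadoopTestBase extends Assert {

  // One rule covers every test method; no per-@Test(timeout=...) needed.
  @Rule
  public Timeout globalTimeout = Timeout.seconds(60);

  @Before
  public void setup() throws Exception {
    // common initialization for subclasses
  }

  @After
  public void teardown() throws Exception {
    // common cleanup for subclasses
  }
}

// Migrating a JUnit 3 test then reduces to extending the base class and
// adding @Test to each test* method.
class ExampleTest extends HadoopTestBase {
  @Test
  public void testSomething() {
    assertEquals(2, 1 + 1);
  }
}
```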

> Convert tests to use JUnit4
> ---
>
> Key: HADOOP-12791
> URL: https://issues.apache.org/jira/browse/HADOOP-12791
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: supportability, unittest
>
> Similar to HDFS-3711 and HDFS-3583, convert Hadoop Common tests that use 
> JUnit3 to use JUnit4.
> JUnit4 is better over JUnit3 as it can specify additional properties such as 
> timeout.
> Currently, there are 34 test files that potentially use JUnit3:
> ./hadoop-common/src/test/java/org/apache/hadoop/net/TestScriptBasedMappingWithDependency.java:public
>  class TestScriptBasedMappingWithDependency extends TestCase {
> ./hadoop-common/src/test/java/org/apache/hadoop/net/TestScriptBasedMapping.java:public
>  class TestScriptBasedMapping extends TestCase {
> ./hadoop-common/src/test/java/org/apache/hadoop/util/TestGenericsUtil.java:public
>  class TestGenericsUtil extends TestCase {
> ./hadoop-common/src/test/java/org/apache/hadoop/util/TestRunJar.java:public 
> class TestRunJar extends TestCase {
> ./hadoop-common/src/test/java/org/apache/hadoop/util/TestFileBasedIPList.java:public
>  class TestFileBasedIPList extends TestCase {
> ./hadoop-common/src/test/java/org/apache/hadoop/util/TestCacheableIPList.java:public
>  class TestCacheableIPList extends TestCase {
> ./hadoop-common/src/test/java/org/apache/hadoop/util/TestIndexedSort.java:public
>  class TestIndexedSort extends TestCase {
> ./hadoop-common/src/test/java/org/apache/hadoop/util/TestAsyncDiskService.java:public
>  class TestAsyncDiskService extends TestCase {
> ./hadoop-common/src/test/java/org/apache/hadoop/util/TestNativeLibraryChecker.java:public
>  class TestNativeLibraryChecker extends TestCase {
> ./hadoop-common/src/test/java/org/apache/hadoop/util/TestGenericOptionsParser.java:public
>  class TestGenericOptionsParser extends TestCase {
> ./hadoop-common/src/test/java/org/apache/hadoop/metrics/TestMetricsServlet.java:public
>  class TestMetricsServlet extends TestCase {
> ./hadoop-common/src/test/java/org/apache/hadoop/metrics/spi/TestOutputRecord.java:public
>  class TestOutputRecord extends TestCase {
> ./hadoop-common/src/test/java/org/apache/hadoop/fs/TestTrash.java:public 
> class TestTrash extends TestCase {
> ./hadoop-common/src/test/java/org/apache/hadoop/fs/TestDU.java:public class 
> TestDU extends TestCase {
> ./hadoop-common/src/test/java/org/apache/hadoop/fs/TestPath.java:public class 
> TestPath extends TestCase {
> ./hadoop-common/src/test/java/org/apache/hadoop/fs/TestLocalFileSystemPermission.java:public
>  class TestLocalFileSystemPermission extends TestCase {
> ./hadoop-common/src/test/java/org/apache/hadoop/fs/TestTruncatedInputBug.java:public
>  class TestTruncatedInputBug extends TestCase {
> ./hadoop-common/src/test/java/org/apache/hadoop/fs/TestGetFileBlockLocations.java:public
>  class TestGetFileBlockLocations extends TestCase {
> ./hadoop-common/src/test/java/org/apache/hadoop/fs/permission/TestFsPermission.java:public
>  class TestFsPermission extends TestCase {
> ./hadoop-common/src/test/java/org/apache/hadoop/fs/TestAvroFSInput.java:public
>  class TestAvroFSInput extends TestCase {
> ./hadoop-common/src/test/java/org/apache/hadoop/fs/TestFilterFs.java:public 
> class TestFilterFs extends TestCase {
> ./hadoop-common/src/test/java/org/apache/hadoop/fs/TestGlobExpander.java:public
>  class TestGlobExpander extends TestCase {
> ./hadoop-common/src/test/java/org/apache/hadoop/fs/FileSystemContractBaseTest.java:public
>  abstract class FileSystemContractBaseTest extends TestCase {
> ./hadoop-common/src/test/java/org/apache/hadoop/ipc/TestFairCallQueue.java:public
>  class TestFairCallQueue extends TestCase {
> ./hadoop-common/src/test/java/org/apache/hadoop/security/token/TestToken.java:public
>  class TestToken extends TestCase {
> ./hadoop-common/src/test/java/org/apache/hadoop/security/TestWhitelistBasedResolver.java:public
>  class TestWhitelistBasedResolver extends TestCase {
> ./hadoop-common/src/test/java/org/apache/hadoop/security/TestAuthenticationFilter.java:public
>  class TestAuthenticationFilter extends TestCase {
> ./hadoop-common/src/test/java/org/apache/hadoop/log/TestLog4Json.java:public 
> class TestLog4Json extends TestCase {
> 

[jira] [Updated] (HADOOP-12787) KMS SPNEGO sequence does not work with WEBHDFS

2016-02-11 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-12787:

Attachment: HADOOP-12878.00.patch

Attaching an initial patch that creates a proxy user for the webhdfs dfsclient 
to get the KMS client provider. I've tested this manually with curl/webhdfs and 
will add a unit test later.

> KMS SPNEGO sequence does not work with WEBHDFS
> --
>
> Key: HADOOP-12787
> URL: https://issues.apache.org/jira/browse/HADOOP-12787
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms, security
>Affects Versions: 2.6.3
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-12878.00.patch
>
>
> This was a follow-up to my 
> [comments|https://issues.apache.org/jira/browse/HADOOP-12559?focusedCommentId=15059045=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15059045]
>  for HADOOP-10698.
> It blocks a delegation-token-based user (MR) using WEBHDFS from accessing the 
> KMS server for encrypted files. This might have worked in many cases before, 
> as JDK 7 aggressively does SPNEGO implicitly. However, this is not the case 
> in JDK 8, where we have seen many failures when using WEBHDFS with KMS and 
> HDFS encryption zones.
>  





[jira] [Updated] (HADOOP-12787) KMS SPNEGO sequence does not work with WEBHDFS

2016-02-11 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-12787:

Status: Patch Available  (was: Open)

> KMS SPNEGO sequence does not work with WEBHDFS
> --
>
> Key: HADOOP-12787
> URL: https://issues.apache.org/jira/browse/HADOOP-12787
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms, security
>Affects Versions: 2.6.3
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-12878.00.patch
>
>
> This was a follow-up to my 
> [comments|https://issues.apache.org/jira/browse/HADOOP-12559?focusedCommentId=15059045=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15059045]
>  for HADOOP-10698.
> It blocks a delegation-token-based user (MR) using WEBHDFS from accessing the 
> KMS server for encrypted files. This might have worked in many cases before, 
> as JDK 7 aggressively does SPNEGO implicitly. However, this is not the case 
> in JDK 8, where we have seen many failures when using WEBHDFS with KMS and 
> HDFS encryption zones.
>  





[jira] [Updated] (HADOOP-12666) Support Microsoft Azure Data Lake - as a file system in Hadoop

2016-02-11 Thread Vishwajeet Dusane (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vishwajeet Dusane updated HADOOP-12666:
---
Status: In Progress  (was: Patch Available)

> Support Microsoft Azure Data Lake - as a file system in Hadoop
> --
>
> Key: HADOOP-12666
> URL: https://issues.apache.org/jira/browse/HADOOP-12666
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, fs/azure, tools
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
> Attachments: HADOOP-12666-002.patch, HADOOP-12666-003.patch, 
> HADOOP-12666-004.patch, HADOOP-12666-005.patch, HADOOP-12666-1.patch
>
>   Original Estimate: 336h
>  Time Spent: 336h
>  Remaining Estimate: 0h
>
> h2. Description
> This JIRA describes a new file system implementation for accessing Microsoft 
> Azure Data Lake Store (ADL) from within Hadoop. This would enable existing 
> Hadoop applications such as MR, Hive, HBase, etc., to use the ADL store as 
> input or output.
>  
> ADL is an ultra-high-capacity store, optimized for massive throughput, with 
> rich management and security features. More details are available at 
> https://azure.microsoft.com/en-us/services/data-lake-store/





[jira] [Commented] (HADOOP-12548) read s3 creds from a Credential Provider

2016-02-11 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15143620#comment-15143620
 ] 

Chris Nauroth commented on HADOOP-12548:


# Instead of catching and logging {{IOException}} from the 
{{Configuration#getPassword}} calls, I'm wondering if it makes more sense to 
just let the exceptions propagate up through {{S3AFileSystem#initialize}} and 
let them abort initialization.  Catching and proceeding might put the process 
into unusual states that are difficult for an operator to reason about.  For 
example, suppose the credential provider is a keystore file saved to an HDFS 
URL, and HDFS goes down after successful retrieval of the access key but before 
retrieval of the secret key.  That would leave the process running in a state 
where initialization succeeded, but it doesn't really have complete 
credentials, and access to S3 will fail.
# In {{S3AFileSystem#getAWSAccessKeys}}:
{code}
if (accessKey == null || secretKey == null) {
  throw new IOException("Cannot find AWS access or secret key. required!");
}
{code}
I don't think we can throw an exception here if there is no access key/secret 
key in configuration.  This would break environments that don't configure 
credentials in Hadoop configuration and instead rely on one of the other 
providers in the chain, like {{InstanceProfileCredentialsProvider}}.  It's OK 
to construct an instance of {{BasicAWSCredentialsProvider}} using null values.  
It will throw an {{AmazonClientException}} later when anything tries to get 
credentials from it.  The logic of 
{{AWSCredentialsProviderChain#getCredentials}} is to iterate through each 
provider in the chain, try to get credentials from it, and ignore exceptions.  
The first provider that doesn't throw an exception and returns non-null 
credentials will be used.
# Just a minor nit-pick: please use lower-case "test" for the test method names.
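The chain behavior described in point 2 can be sketched with hypothetical stand-in types (these are not the real AWS SDK classes): each provider in the chain is tried in order, exceptions are ignored, and the first non-null result wins.

```java
import java.util.List;

// Hypothetical stand-ins for the AWS SDK types, illustrating the chain logic
// described above: try each provider, ignore failures, use the first success.
interface CredProvider {
    String[] getCredentials();  // returns {accessKey, secretKey} or throws
}

class CredProviderChain {
    private final List<CredProvider> providers;

    CredProviderChain(List<CredProvider> providers) {
        this.providers = providers;
    }

    String[] getCredentials() {
        for (CredProvider p : providers) {
            try {
                String[] creds = p.getCredentials();
                if (creds != null) {
                    return creds;  // first provider that succeeds wins
                }
            } catch (RuntimeException ignored) {
                // A provider constructed with missing keys throws only when
                // asked for credentials; the chain simply moves on.
            }
        }
        throw new RuntimeException("No provider in the chain supplied credentials");
    }
}
```

This is why constructing a provider with null keys is harmless: it only throws when queried, at which point the chain falls through to the next provider (e.g. an instance-profile provider).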

> read s3 creds from a Credential Provider
> 
>
> Key: HADOOP-12548
> URL: https://issues.apache.org/jira/browse/HADOOP-12548
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Reporter: Allen Wittenauer
>Assignee: Larry McCay
> Attachments: CredentialProviderAPIforS3FS-002.pdf, 
> HADOOP-12548-01.patch, HADOOP-12548-02.patch, HADOOP-12548-03.patch, 
> HADOOP-12548-04.patch, HADOOP-12548-05.patch, HADOOP-12548-06.patch
>
>
> It would be good if we could read s3 creds from a source other than via a 
> java property/Hadoop configuration option





[jira] [Commented] (HADOOP-12747) support wildcard in libjars argument

2016-02-11 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15143798#comment-15143798
 ] 

Sangjin Lee commented on HADOOP-12747:
--

This is a question for [~cnauroth] (or anyone who's intimately familiar with 
how we support the libjars argument). You mentioned earlier that libjars don't 
support non-local paths, but strictly speaking HADOOP-7112 addresses only the 
aspect of adding libjars back to the client classpath. And I know for a fact 
today one can use hdfs URLs in libjars successfully (minus putting them in the 
client classpath). Is it an accidental behavior while the official position is 
that we don't support them? If we do support them in libjars, then we probably 
need to support wildcards for them as well. Any feedback is greatly appreciated.

> support wildcard in libjars argument
> 
>
> Key: HADOOP-12747
> URL: https://issues.apache.org/jira/browse/HADOOP-12747
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: util
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
> Attachments: HADOOP-12747.01.patch, HADOOP-12747.02.patch, 
> HADOOP-12747.03.patch
>
>
> There is a problem when a user job adds too many dependency jars in their 
> command line. The HADOOP_CLASSPATH part can be addressed, including using 
> wildcards (\*). But the same cannot be done with the -libjars argument. Today 
> it takes only fully specified file paths.
> We may want to consider supporting wildcards as a way to help users in this 
> situation. The idea is to handle it the same way the JVM does it: \* expands 
> to the list of jars in that directory. It does not traverse into any child 
> directory.
> Also, it probably would be a good idea to do it only for libjars (i.e. don't 
> do it for -files and -archives).





[jira] [Updated] (HADOOP-12699) TestKMS#testKMSProvider intermittently fails during 'test rollover draining'

2016-02-11 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-12699:
-
   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Thanks for sticking with this one, Xiao. Committed to trunk, branch-2, and 
branch-2.8. Thanks to all the reviewers who helped out here too.

> TestKMS#testKMSProvider intermittently fails during 'test rollover draining'
> 
>
> Key: HADOOP-12699
> URL: https://issues.apache.org/jira/browse/HADOOP-12699
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Fix For: 2.8.0
>
> Attachments: HADOOP-12699.01.patch, HADOOP-12699.02.patch, 
> HADOOP-12699.03.patch, HADOOP-12699.04.patch, HADOOP-12699.06.patch, 
> HADOOP-12699.07.patch, HADOOP-12699.08.patch, HADOOP-12699.09.patch, 
> HADOOP-12699.10.1.patch, HADOOP-12699.10.patch, HADOOP-12699.repro.2, 
> HADOOP-12699.repro.patch, generated.10.html
>
>
> I've seen several failures of testKMSProvider, all failed in the following 
> snippet:
> {code}
> // test rollover draining
> KeyProviderCryptoExtension kpce = KeyProviderCryptoExtension.
> createKeyProviderCryptoExtension(kp);
> .
> EncryptedKeyVersion ekv1 = kpce.generateEncryptedKey("k6");
> kpce.rollNewVersion("k6");
> EncryptedKeyVersion ekv2 = kpce.generateEncryptedKey("k6");
> Assert.assertNotEquals(ekv1.getEncryptionKeyVersionName(),
> ekv2.getEncryptionKeyVersionName());
> {code}
> with error message
> {quote}Values should be different. Actual: k6@0{quote}





[jira] [Commented] (HADOOP-12787) KMS SPNEGO sequence does not work with WEBHDFS

2016-02-11 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15143745#comment-15143745
 ] 

Jitendra Nath Pandey commented on HADOOP-12787:
---

[~xyao], thanks for the patch. I have one suggestion:
{{public DFSClient(URI nameNodeUri, Configuration conf, boolean proxyUser)}}
Instead of passing a boolean, please pass the ugi. The WebHdfsHandler should 
have the logic to construct the right UGI.
This will make the constructor more generic.

> KMS SPNEGO sequence does not work with WEBHDFS
> --
>
> Key: HADOOP-12787
> URL: https://issues.apache.org/jira/browse/HADOOP-12787
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms, security
>Affects Versions: 2.6.3
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-12878.00.patch
>
>
> This was a follow-up to my 
> [comments|https://issues.apache.org/jira/browse/HADOOP-12559?focusedCommentId=15059045=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15059045]
>  for HADOOP-10698.
> It blocks a delegation-token-based user (MR) using WEBHDFS from accessing the 
> KMS server for encrypted files. This might have worked in many cases before, 
> as JDK 7 aggressively does SPNEGO implicitly. However, this is not the case 
> in JDK 8, where we have seen many failures when using WEBHDFS with KMS and 
> HDFS encryption zones.
>  





[jira] [Updated] (HADOOP-12795) KMS does not log detailed stack trace for unexpected errors.

2016-02-11 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-12795:
---
   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

I have committed this to trunk, branch-2 and branch-2.8.  [~jojochuang] and 
[~asuresh], thank you for reviewing.

> KMS does not log detailed stack trace for unexpected errors.
> 
>
> Key: HADOOP-12795
> URL: https://issues.apache.org/jira/browse/HADOOP-12795
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>  Labels: supportability
> Fix For: 2.8.0
>
> Attachments: HADOOP-12795.001.patch, HADOOP-12795.002.patch
>
>
> If the KMS server encounters an unexpected error resulting in an HTTP 500 
> response, it does not log the stack trace.  This makes it difficult to 
> troubleshoot.  The client side exception cannot provide further details.





[jira] [Commented] (HADOOP-12795) KMS does not log detailed stack trace for unexpected errors.

2016-02-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15143823#comment-15143823
 ] 

Hudson commented on HADOOP-12795:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #9286 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9286/])
HADOOP-12795. KMS does not log detailed stack trace for unexpected (cnauroth: 
rev 70c756d35e6ed5608ce82d1a6fbfb02e19af5ecf)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSExceptionsProvider.java


> KMS does not log detailed stack trace for unexpected errors.
> 
>
> Key: HADOOP-12795
> URL: https://issues.apache.org/jira/browse/HADOOP-12795
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>  Labels: supportability
> Fix For: 2.8.0
>
> Attachments: HADOOP-12795.001.patch, HADOOP-12795.002.patch
>
>
> If the KMS server encounters an unexpected error resulting in an HTTP 500 
> response, it does not log the stack trace.  This makes it difficult to 
> troubleshoot.  The client side exception cannot provide further details.





[jira] [Commented] (HADOOP-12747) support wildcard in libjars argument

2016-02-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15143515#comment-15143515
 ] 

Hadoop QA commented on HADOOP-12747:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 50s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 38s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 49s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 38s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} hadoop-common-project/hadoop-common: patch 
generated 0 new + 103 unchanged - 7 fixed = 103 total (was 110) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 32s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_72. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 17s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 57m 0s {color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.7.0_95 Failed junit tests | 
hadoop.security.ssl.TestReloadingX509TrustManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12787520/HADOOP-12747.03.patch 
|
| JIRA Issue | HADOOP-12747 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 1ba7dbcb723e 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| 

[jira] [Updated] (HADOOP-12795) KMS does not log detailed stack trace for unexpected errors.

2016-02-11 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-12795:
---
Labels: supportability  (was: )

> KMS does not log detailed stack trace for unexpected errors.
> 
>
> Key: HADOOP-12795
> URL: https://issues.apache.org/jira/browse/HADOOP-12795
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>  Labels: supportability
> Attachments: HADOOP-12795.001.patch, HADOOP-12795.002.patch
>
>
> If the KMS server encounters an unexpected error resulting in an HTTP 500 
> response, it does not log the stack trace.  This makes it difficult to 
> troubleshoot.  The client side exception cannot provide further details.





[jira] [Updated] (HADOOP-12795) KMS does not log detailed stack trace for unexpected errors.

2016-02-11 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-12795:
---
Attachment: HADOOP-12795.002.patch

bq. maybe you can also add the log to the other INTERNAL_SERVER_ERROR clause as 
well ?

[~asuresh], thanks for the review, and good catch.  Here is patch v002.

> KMS does not log detailed stack trace for unexpected errors.
> 
>
> Key: HADOOP-12795
> URL: https://issues.apache.org/jira/browse/HADOOP-12795
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>  Labels: supportability
> Attachments: HADOOP-12795.001.patch, HADOOP-12795.002.patch
>
>
> If the KMS server encounters an unexpected error resulting in an HTTP 500 
> response, it does not log the stack trace.  This makes it difficult to 
> troubleshoot.  The client side exception cannot provide further details.





[jira] [Updated] (HADOOP-12795) KMS does not log detailed stack trace for unexpected errors.

2016-02-11 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-12795:
---
Target Version/s: 2.8.0

> KMS does not log detailed stack trace for unexpected errors.
> 
>
> Key: HADOOP-12795
> URL: https://issues.apache.org/jira/browse/HADOOP-12795
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>  Labels: supportability
> Attachments: HADOOP-12795.001.patch, HADOOP-12795.002.patch
>
>
> If the KMS server encounters an unexpected error resulting in an HTTP 500 
> response, it does not log the stack trace.  This makes it difficult to 
> troubleshoot.  The client side exception cannot provide further details.





[jira] [Commented] (HADOOP-12795) KMS does not log detailed stack trace for unexpected errors.

2016-02-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15143591#comment-15143591
 ] 

Hadoop QA commented on HADOOP-12795:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
58s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 56s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 41s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 21s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 39s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 36s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 30s 
{color} | {color:green} hadoop-kms in the patch passed with JDK v1.8.0_72. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 37s 
{color} | {color:green} hadoop-kms in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 40m 31s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12787530/HADOOP-12795.002.patch
 |
| JIRA Issue | HADOOP-12795 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d263e2bde355 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Updated] (HADOOP-11492) Bump up curator version to 2.7.1

2016-02-11 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11492:
--
Release Note: 

Apache Curator version change: Apache Hadoop has updated the version of Apache 
Curator used from 2.6.0 to 2.7.1. This change should be binary and source 
compatible for the majority of downstream users. Notable exceptions:
* Binary incompatible change: 
org.apache.curator.utils.PathUtils.validatePath(String) changed return types. 
Downstream users of this method will need to recompile.
* Source incompatible change: 
org.apache.curator.framework.recipes.shared.SharedCountReader added a method to 
its interface definition. Downstream users with custom implementations of this 
interface can continue without binary compatibility problems but will need to 
modify their source code to recompile.
* Source incompatible change: 
org.apache.curator.framework.recipes.shared.SharedValueReader added a method to 
its interface definition. Downstream users with custom implementations of this 
interface can continue without binary compatibility problems but will need to 
modify their source code to recompile.

Downstream users are reminded that while the Hadoop community will attempt to 
avoid egregious incompatible dependency changes, there is currently no policy 
around when Hadoop's exposed dependencies will change across versions (ref 
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/Compatibility.html#Java_Classpath).
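
The distinction between the two kinds of incompatibility can be illustrated with a minimal Java sketch (type names hypothetical, not the actual Curator interfaces):

```java
// Hypothetical mirror of the interface change described above. Adding a
// method to an interface is SOURCE-incompatible: implementations written
// against the old shape no longer compile until they add the method. It is
// typically BINARY-compatible: already-compiled implementation classes
// still link, because the JVM only resolves methods that are actually invoked.
interface CountReader {
    int getCount();
    String getName(); // the newly added method (illustrative)
}

class MyCountReader implements CountReader {
    public int getCount() { return 42; }
    // Without this method the class would no longer compile against the
    // new interface, even though an old .class file would still load.
    public String getName() { return "my-reader"; }
}

public class CompatSketch {
    public static void main(String[] args) {
        CountReader r = new MyCountReader();
        System.out.println(r.getCount() + " " + r.getName());
    }
}
```

By contrast, changing a method's return type (as with PathUtils.validatePath) alters the method descriptor the JVM links against, so even already-compiled callers break at link time.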

  was:
Apache Curator version change: Apache Hadoop has updated the version of Apache 
Curator used from 2.6.0 to 2.7.1. This change should be binary and source 
compatible for the majority of downstream users. Notable exceptions:
# Binary incompatible change: 
org.apache.curator.utils.PathUtils.validatePath(String) changed return types. 
Downstream users of this method will need to recompile.
# Source incompatible change: 
org.apache.curator.framework.recipes.shared.SharedCountReader added a method to 
its interface definition. Downstream users with custom implementations of this 
interface can continue without binary compatibility problems but will need to 
modify their source code to recompile.
# Source incompatible change: 
org.apache.curator.framework.recipes.shared.SharedValueReader added a method to 
its interface definition. Downstream users with custom implementations of this 
interface can continue without binary compatibility problems but will need to 
modify their source code to recompile.

Downstream users are reminded that while the Hadoop community will attempt to 
avoid egregious incompatible dependency changes, there is currently no policy 
around when Hadoop's exposed dependencies will change across versions (ref 
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/Compatibility.html#Java_Classpath).


> Bump up curator version to 2.7.1
> 
>
> Key: HADOOP-11492
> URL: https://issues.apache.org/jira/browse/HADOOP-11492
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 2.6.0
>Reporter: Karthik Kambatla
>Assignee: Arun Suresh
> Fix For: 2.7.0
>
> Attachments: hadoop-11492-1.patch, hadoop-11492-2.patch, 
> hadoop-11492-3.patch, hadoop-11492-3.patch
>
>
> Curator 2.7.1 got released recently and contains CURATOR-111 that YARN-2716 
> requires. 
> PS: Filing a common JIRA so folks from other sub-projects also notice this 
> change and shout out if there are any reservations. 





[jira] [Updated] (HADOOP-10787) Rename/remove non-HADOOP_*, etc from the shell scripts

2016-02-11 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-10787:
--
Release Note: 

The following shell environment variables have been deprecated:

| Old | New |
|: |: |
| DEFAULT_LIBEXEC_DIR | HADOOP_DEFAULT_LIBEXEC_DIR |
| SLAVE_NAMES | HADOOP_SLAVE_NAMES |
| TOOL_PATH | HADOOP_TOOLS_PATH |

In addition:

* DEFAULT_LIBEXEC_DIR will NOT be automatically transitioned to 
HADOOP_DEFAULT_LIBEXEC_DIR and will require changes to any scripts setting that 
value.  A warning will be printed to the screen if DEFAULT_LIBEXEC_DIR has been 
configured.
* HADOOP_TOOLS_PATH is now properly handled as a multi-valued, Java 
classpath-style variable.  Previously, multiple values assigned to TOOL_PATH 
would not work in a predictable manner.
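
What "multi-valued, Java classpath-style" means can be sketched in Java (helper names hypothetical): entries are separated by the platform path separator and handled individually, rather than treated as one opaque string.

```java
// Sketch: parsing a classpath-style variable such as HADOOP_TOOLS_PATH.
// Each entry between path separators (':' on Unix, ';' on Windows) is a
// distinct classpath element; multiple entries previously did not behave
// predictably.
import java.io.File;

public class ToolsPathSketch {
    static String[] split(String toolsPath) {
        return toolsPath.split(File.pathSeparator);
    }

    public static void main(String[] args) {
        String toolsPath = "/opt/tools/a.jar" + File.pathSeparator + "/opt/tools/lib";
        for (String entry : split(toolsPath)) {
            System.out.println(entry); // each entry handled on its own
        }
    }
}
```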


  was:

The following shell environment variables have been deprecated:

| Old | New |
|: |: |
| DEFAULT_LIBEXEC_DIR | HADOOP_DEFAULT_LIBEXEC_DIR |
| SLAVE_NAMES | HADOOP_SLAVE_NAMES |
| TOOL_PATH | HADOOP_TOOLS_PATH |

In addition:
* DEFAULT_LIBEXEC_DIR will NOT be automatically transitioned to 
HADOOP_DEFAULT_LIBEXEC_DIR and will require changes to any scripts setting that 
value.  A warning will be printed to the screen if DEFAULT_LIBEXEC_DIR has been 
configured.
* HADOOP_TOOLS_PATH is now properly handled as a multi-valued, Java 
classpath-style variable.  Prior, multiple values assigned to TOOL_PATH would 
not work a predictable manner.



> Rename/remove non-HADOOP_*, etc from the shell scripts
> --
>
> Key: HADOOP-10787
> URL: https://issues.apache.org/jira/browse/HADOOP-10787
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
>  Labels: scripts
> Fix For: 3.0.0
>
> Attachments: HADOOP-10787.00.patch, HADOOP-10787.01.patch, 
> HADOOP-10787.02.patch, HADOOP-10787.03.patch, HADOOP-10787.04.patch, 
> HADOOP-10787.05.patch
>
>
> We should make an effort to clean up the shell env var name space by removing 
> unsafe variables.  See comments for list.





[jira] [Assigned] (HADOOP-12795) KMS does not log detailed stack trace for unexpected errors.

2016-02-11 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth reassigned HADOOP-12795:
--

Assignee: Chris Nauroth

> KMS does not log detailed stack trace for unexpected errors.
> 
>
> Key: HADOOP-12795
> URL: https://issues.apache.org/jira/browse/HADOOP-12795
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-12795.001.patch
>
>
> If the KMS server encounters an unexpected error resulting in an HTTP 500 
> response, it does not log the stack trace.  This makes it difficult to 
> troubleshoot.  The client side exception cannot provide further details.





[jira] [Commented] (HADOOP-12795) KMS does not log detailed stack trace for unexpected errors.

2016-02-11 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15143636#comment-15143636
 ] 

Chris Nauroth commented on HADOOP-12795:


bq. The patch doesn't appear to include any new or modified tests.

Once again, it's a logging change only, and I verified it manually.

I plan to commit this later today, based on Arun's prior "+1 pending one more 
change".

> KMS does not log detailed stack trace for unexpected errors.
> 
>
> Key: HADOOP-12795
> URL: https://issues.apache.org/jira/browse/HADOOP-12795
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>  Labels: supportability
> Attachments: HADOOP-12795.001.patch, HADOOP-12795.002.patch
>
>
> If the KMS server encounters an unexpected error resulting in an HTTP 500 
> response, it does not log the stack trace.  This makes it difficult to 
> troubleshoot.  The client side exception cannot provide further details.





[jira] [Commented] (HADOOP-12666) Support Microsoft Azure Data Lake - as a file system in Hadoop

2016-02-11 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15143487#comment-15143487
 ] 

Sean Mackrory commented on HADOOP-12666:


One caveat to consider is overlap between configuration properties for 
different file systems. Generally the configuration for cloud storage uses a 
unique set of properties, which makes it safer / easier to have HDFS or other 
webhdfs services and cloud storage configured in the same cluster. This enables 
more diverse workloads and allows you to copy between filesystems easily. Since 
this implementation extends WebHDFS, there appears to be a need to overload 
configuration properties to use it. Should we modify this to use unique 
properties instead?
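
The "unique set of properties" idea can be sketched as follows. All key names below are hypothetical illustrations, not actual Hadoop configuration keys; the point is only that a per-scheme prefix lets webhdfs:// and an adl:// implementation coexist in one cluster configuration without overloading shared keys.

```java
// Sketch: each filesystem scheme reads only its own configuration
// namespace, so configuring one does not change the behavior of the other.
import java.util.Properties;

public class FsConfSketch {
    static Properties exampleConf() {
        Properties conf = new Properties();
        conf.setProperty("fs.webhdfs.example.option", "for-webhdfs-only");
        conf.setProperty("fs.adl.example.option", "for-adl-only");
        return conf;
    }

    public static void main(String[] args) {
        Properties conf = exampleConf();
        // The hypothetical adl:// implementation consults only its prefix.
        System.out.println(conf.getProperty("fs.adl.example.option"));
    }
}
```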

> Support Microsoft Azure Data Lake - as a file system in Hadoop
> --
>
> Key: HADOOP-12666
> URL: https://issues.apache.org/jira/browse/HADOOP-12666
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, fs/azure, tools
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
> Attachments: HADOOP-12666-002.patch, HADOOP-12666-003.patch, 
> HADOOP-12666-004.patch, HADOOP-12666-005.patch, HADOOP-12666-1.patch
>
>   Original Estimate: 336h
>  Time Spent: 336h
>  Remaining Estimate: 0h
>
> h2. Description
> This JIRA describes a new file system implementation for accessing Microsoft 
> Azure Data Lake Store (ADL) from within Hadoop. This would enable existing 
> Hadoop applications such as MR, Hive, HBase, etc. to use the ADL store as 
> input or output.
>  
> ADL is an ultra-high-capacity store, optimized for massive throughput, with 
> rich management and security features. More details are available at 
> https://azure.microsoft.com/en-us/services/data-lake-store/





[jira] [Commented] (HADOOP-12795) KMS does not log detailed stack trace for unexpected errors.

2016-02-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15143514#comment-15143514
 ] 

Hadoop QA commented on HADOOP-12795:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 33s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 3s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 14s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 20s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
26s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 57s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 11s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 11s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
36s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 33s 
{color} | {color:green} hadoop-kms in the patch passed with JDK v1.8.0_72. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 35s 
{color} | {color:green} hadoop-kms in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 46m 25s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12787524/HADOOP-12795.001.patch
 |
| JIRA Issue | HADOOP-12795 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 5e83586de1e6 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HADOOP-12795) KMS does not log detailed stack trace for unexpected errors.

2016-02-11 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15143470#comment-15143470
 ] 

Chris Nauroth commented on HADOOP-12795:


[~jojochuang], thank you for your reply.

bq. We ended up looking at kms-catalina.log and found indirect evidence to the 
root cause. Have you checked that log too?

Thanks for the suggestion, but in my case, the catalina logs were not 
sufficient either.  There is a record of the error in kms-audit.log, and you 
can also enable request/response logging in kms.log by editing web.xml.  Both 
of these show the presence of an error and the short message, but nothing 
provides the full stack trace for a deeper look.

bq. As for the patch, the log you added seem to be used in test only, not in 
production.

I applied my patch and simulated an error by hard-coding a method to throw an 
unchecked exception.  With that, I could see the full stack trace in kms.log, 
so I think the patch is working for the production code.

> KMS does not log detailed stack trace for unexpected errors.
> 
>
> Key: HADOOP-12795
> URL: https://issues.apache.org/jira/browse/HADOOP-12795
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Chris Nauroth
> Attachments: HADOOP-12795.001.patch
>
>
> If the KMS server encounters an unexpected error resulting in an HTTP 500 
> response, it does not log the stack trace.  This makes it difficult to 
> troubleshoot.  The client side exception cannot provide further details.





[jira] [Commented] (HADOOP-12795) KMS does not log detailed stack trace for unexpected errors.

2016-02-11 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15143519#comment-15143519
 ] 

Chris Nauroth commented on HADOOP-12795:


bq. The patch doesn't appear to include any new or modified tests.

This is a change in logging only, which I have verified manually as per my last 
comment.

> KMS does not log detailed stack trace for unexpected errors.
> 
>
> Key: HADOOP-12795
> URL: https://issues.apache.org/jira/browse/HADOOP-12795
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Chris Nauroth
> Attachments: HADOOP-12795.001.patch
>
>
> If the KMS server encounters an unexpected error resulting in an HTTP 500 
> response, it does not log the stack trace.  This makes it difficult to 
> troubleshoot.  The client side exception cannot provide further details.





[jira] [Commented] (HADOOP-12795) KMS does not log detailed stack trace for unexpected errors.

2016-02-11 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15143518#comment-15143518
 ] 

Arun Suresh commented on HADOOP-12795:
--

[~cnauroth], Thanks for raising this..

Yup, the {{KMSExceptionsProvider#log}} method was written specifically for that 
purpose.. must've missed wiring it in..
Your patch should work.. maybe you can also add the log to the other 
INTERNAL_SERVER_ERROR clause as well ?
+1 pending that

> KMS does not log detailed stack trace for unexpected errors.
> 
>
> Key: HADOOP-12795
> URL: https://issues.apache.org/jira/browse/HADOOP-12795
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Chris Nauroth
> Attachments: HADOOP-12795.001.patch
>
>
> If the KMS server encounters an unexpected error resulting in an HTTP 500 
> response, it does not log the stack trace.  This makes it difficult to 
> troubleshoot.  The client side exception cannot provide further details.





[jira] [Commented] (HADOOP-12699) TestKMS#testKMSProvider intermittently fails during 'test rollover draining'

2016-02-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15143859#comment-15143859
 ] 

Hudson commented on HADOOP-12699:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9287 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9287/])
HADOOP-12699. TestKMS#testKMSProvider intermittently fails during 'test (wang: 
rev 8fdef0bd9d1ece560ab4e1a1ec7fc77c46a034bb)
* 
hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java
* hadoop-common-project/hadoop-kms/src/site/markdown/index.md.vm
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/EagerKeyGeneratorKeyProviderCryptoExtension.java


> TestKMS#testKMSProvider intermittently fails during 'test rollover draining'
> 
>
> Key: HADOOP-12699
> URL: https://issues.apache.org/jira/browse/HADOOP-12699
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Fix For: 2.8.0
>
> Attachments: HADOOP-12699.01.patch, HADOOP-12699.02.patch, 
> HADOOP-12699.03.patch, HADOOP-12699.04.patch, HADOOP-12699.06.patch, 
> HADOOP-12699.07.patch, HADOOP-12699.08.patch, HADOOP-12699.09.patch, 
> HADOOP-12699.10.1.patch, HADOOP-12699.10.patch, HADOOP-12699.repro.2, 
> HADOOP-12699.repro.patch, generated.10.html
>
>
> I've seen several failures of testKMSProvider, all failed in the following 
> snippet:
> {code}
> // test rollover draining
> KeyProviderCryptoExtension kpce = KeyProviderCryptoExtension.
> createKeyProviderCryptoExtension(kp);
> .
> EncryptedKeyVersion ekv1 = kpce.generateEncryptedKey("k6");
> kpce.rollNewVersion("k6");
> EncryptedKeyVersion ekv2 = kpce.generateEncryptedKey("k6");
> Assert.assertNotEquals(ekv1.getEncryptionKeyVersionName(),
> ekv2.getEncryptionKeyVersionName());
> {code}
> with error message
> {quote}Values should be different. Actual: k6@0{quote}





[jira] [Updated] (HADOOP-12313) NPE in JvmPauseMonitor when calling stop() before start()

2016-02-11 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12313:
--
Release Note: Allow stop() before start() completed in JvmPauseMonitor  
(was: HADOOP-12313 Allow stop() before start() completed in JvmPauseMonitor)

> NPE in JvmPauseMonitor when calling stop() before start()
> -
>
> Key: HADOOP-12313
> URL: https://issues.apache.org/jira/browse/HADOOP-12313
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Gabor Liptak
>Priority: Critical
> Fix For: 2.8.0
>
> Attachments: HADOOP-12313.2.patch, HADOOP-12313.3.patch, 
> YARN-4035.1.patch
>
>
> It is observed that after YARN-4019 some tests are failing in 
> TestRMAdminService with null pointer exceptions in build [build failure 
> |https://builds.apache.org/job/PreCommit-YARN-Build/8792/artifact/patchprocess/testrun_hadoop-yarn-server-resourcemanager.txt]
> {noformat}
> Running org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService
> Tests run: 19, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 11.541 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService
> testModifyLabelsOnNodesWithDistributedConfigurationDisabled(org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService)
>   Time elapsed: 0.132 sec  <<< ERROR!
> java.lang.NullPointerException: null
>   at org.apache.hadoop.util.JvmPauseMonitor.stop(JvmPauseMonitor.java:86)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStop(ResourceManager.java:601)
>   at 
> org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.stopActiveServices(ResourceManager.java:983)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.transitionToStandby(ResourceManager.java:1038)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStop(ResourceManager.java:1085)
>   at 
> org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221)
>   at 
> org.apache.hadoop.service.AbstractService.close(AbstractService.java:250)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService.testModifyLabelsOnNodesWithDistributedConfigurationDisabled(TestRMAdminService.java:824)
> testRemoveClusterNodeLabelsWithDistributedConfigurationEnabled(org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService)
>   Time elapsed: 0.121 sec  <<< ERROR!
> java.lang.NullPointerException: null
>   at org.apache.hadoop.util.JvmPauseMonitor.stop(JvmPauseMonitor.java:86)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStop(ResourceManager.java:601)
>   at 
> org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.stopActiveServices(ResourceManager.java:983)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.transitionToStandby(ResourceManager.java:1038)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStop(ResourceManager.java:1085)
>   at 
> org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221)
>   at 
> org.apache.hadoop.service.AbstractService.close(AbstractService.java:250)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService.testRemoveClusterNodeLabelsWithDistributedConfigurationEnabled(TestRMAdminService.java:867)
> {noformat}
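The NPE pattern above, stop() running before start() has initialized internal state, is commonly fixed with a null guard. A minimal sketch under that assumption; the class and field names are illustrative, not the actual JvmPauseMonitor code:

```java
// Illustrative sketch: make stop() safe to call before start().
public class MonitorSketch {
    private Thread worker; // remains null until start() is called

    public synchronized void start() {
        worker = new Thread(() -> { /* monitoring loop would go here */ });
        worker.setDaemon(true);
        worker.start();
    }

    public synchronized void stop() {
        // Without this null check, stop() before start() throws an NPE,
        // as in the TestRMAdminService stack traces above.
        if (worker != null) {
            worker.interrupt();
            worker = null;
        }
    }
}
```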





[jira] [Updated] (HADOOP-10201) Add Listing Support to Key Management APIs

2016-02-11 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-10201:
--
Release Note:   (was: I just committed this. Thanks, Larry!)

> Add Listing Support to Key Management APIs
> --
>
> Key: HADOOP-10201
> URL: https://issues.apache.org/jira/browse/HADOOP-10201
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Reporter: Larry McCay
>Assignee: Larry McCay
> Fix For: 2.6.0
>
> Attachments: 10201-2.patch, 10201-3.patch, 10201-4.patch, 
> 10201-5.patch, 10201.patch
>
>
> Extend the key management APIs from HADOOP-10141 to include the ability to 
> list the available keys.





[jira] [Updated] (HADOOP-10620) /docs/current doesn't point to the latest version 2.4.0

2016-02-11 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-10620:
--
Fix Version/s: (was: 2.6.0)

> /docs/current doesn't point to the latest version 2.4.0
> ---
>
> Key: HADOOP-10620
> URL: https://issues.apache.org/jira/browse/HADOOP-10620
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.4.0
>Reporter: Jacek Laskowski
>
> http://hadoop.apache.org/docs/current/ points to 2.3.0 while 2.4.0's out.





[jira] [Updated] (HADOOP-10620) /docs/current doesn't point to the latest version 2.4.0

2016-02-11 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-10620:
--
Release Note:   (was: Verified http://hadoop.apache.org/docs/current/ link 
now point to current release (v2.6.0).)

> /docs/current doesn't point to the latest version 2.4.0
> ---
>
> Key: HADOOP-10620
> URL: https://issues.apache.org/jira/browse/HADOOP-10620
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.4.0
>Reporter: Jacek Laskowski
> Fix For: 2.6.0
>
>
> http://hadoop.apache.org/docs/current/ points to 2.3.0 while 2.4.0's out.





[jira] [Commented] (HADOOP-12699) TestKMS#testKMSProvider intermittently fails during 'test rollover draining'

2016-02-11 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15143907#comment-15143907
 ] 

Xiao Chen commented on HADOOP-12699:


Thanks all for the reviews and discussions, and Andrew for committing! I 
definitely learned a lot in this. :)

> TestKMS#testKMSProvider intermittently fails during 'test rollover draining'
> 
>
> Key: HADOOP-12699
> URL: https://issues.apache.org/jira/browse/HADOOP-12699
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Fix For: 2.8.0
>
> Attachments: HADOOP-12699.01.patch, HADOOP-12699.02.patch, 
> HADOOP-12699.03.patch, HADOOP-12699.04.patch, HADOOP-12699.06.patch, 
> HADOOP-12699.07.patch, HADOOP-12699.08.patch, HADOOP-12699.09.patch, 
> HADOOP-12699.10.1.patch, HADOOP-12699.10.patch, HADOOP-12699.repro.2, 
> HADOOP-12699.repro.patch, generated.10.html
>
>
> I've seen several failures of testKMSProvider, all failed in the following 
> snippet:
> {code}
> // test rollover draining
> KeyProviderCryptoExtension kpce = KeyProviderCryptoExtension.
> createKeyProviderCryptoExtension(kp);
> .
> EncryptedKeyVersion ekv1 = kpce.generateEncryptedKey("k6");
> kpce.rollNewVersion("k6");
> EncryptedKeyVersion ekv2 = kpce.generateEncryptedKey("k6");
> Assert.assertNotEquals(ekv1.getEncryptionKeyVersionName(),
> ekv2.getEncryptionKeyVersionName());
> {code}
> with error message
> {quote}Values should be different. Actual: k6@0{quote}





[jira] [Updated] (HADOOP-11492) Bump up curator version to 2.7.1

2016-02-11 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11492:
--
Release Note: 

Apache Curator version change: Apache Hadoop has updated the version of Apache 
Curator used from 2.6.0 to 2.7.1. This change should be binary and source 
compatible for the majority of downstream users. Notable exceptions:

* Binary incompatible change: 
org.apache.curator.utils.PathUtils.validatePath(String) changed return types. 
Downstream users of this method will need to recompile.
* Source incompatible change: 
org.apache.curator.framework.recipes.shared.SharedCountReader added a method to 
its interface definition. Downstream users with custom implementations of this 
interface can continue without binary compatibility problems but will need to 
modify their source code to recompile.
* Source incompatible change: 
org.apache.curator.framework.recipes.shared.SharedValueReader added a method to 
its interface definition. Downstream users with custom implementations of this 
interface can continue without binary compatibility problems but will need to 
modify their source code to recompile.

Downstream users are reminded that while the Hadoop community will attempt to 
avoid egregious incompatible dependency changes, there is currently no policy 
around when Hadoop's exposed dependencies will change across versions (ref 
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/Compatibility.html#Java_Classpath).

  was:

Apache Curator version change: Apache Hadoop has updated the version of Apache 
Curator used from 2.6.0 to 2.7.1. This change should be binary and source 
compatible for the majority of downstream users. Notable exceptions:
* Binary incompatible change: 
org.apache.curator.utils.PathUtils.validatePath(String) changed return types. 
Downstream users of this method will need to recompile.
* Source incompatible change: 
org.apache.curator.framework.recipes.shared.SharedCountReader added a method to 
its interface definition. Downstream users with custom implementations of this 
interface can continue without binary compatibility problems but will need to 
modify their source code to recompile.
* Source incompatible change: 
org.apache.curator.framework.recipes.shared.SharedValueReader added a method to 
its interface definition. Downstream users with custom implementations of this 
interface can continue without binary compatibility problems but will need to 
modify their source code to recompile.

Downstream users are reminded that while the Hadoop community will attempt to 
avoid egregious incompatible dependency changes, there is currently no policy 
around when Hadoop's exposed dependencies will change across versions (ref 
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/Compatibility.html#Java_Classpath).


> Bump up curator version to 2.7.1
> 
>
> Key: HADOOP-11492
> URL: https://issues.apache.org/jira/browse/HADOOP-11492
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 2.6.0
>Reporter: Karthik Kambatla
>Assignee: Arun Suresh
> Fix For: 2.7.0
>
> Attachments: hadoop-11492-1.patch, hadoop-11492-2.patch, 
> hadoop-11492-3.patch, hadoop-11492-3.patch
>
>
> Curator 2.7.1 got released recently and contains CURATOR-111 that YARN-2716 
> requires. 
> PS: Filing a common JIRA so folks from other sub-projects also notice this 
> change and shout out if there are any reservations. 





[jira] [Updated] (HADOOP-11848) Incorrect arguments to sizeof in DomainSocket.c

2016-02-11 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11848:
--
Release Note:   (was: Small one-line bug fix)

> Incorrect arguments to sizeof in DomainSocket.c
> ---
>
> Key: HADOOP-11848
> URL: https://issues.apache.org/jira/browse/HADOOP-11848
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 2.6.0
>Reporter: Malcolm Kavalsky
>Assignee: Malcolm Kavalsky
>  Labels: native
> Fix For: 2.8.0
>
> Attachments: HADOOP-11848.001.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> The length of the buffer to be zeroed with sizeof should be the structure 
> itself, not the address of the structure.
> DomainSocket.c line 156
> Replace the current:
> memset(&addr, 0, sizeof(&addr));
> With:
> memset(&addr, 0, sizeof(addr));





[jira] [Updated] (HADOOP-11729) Fix link to cgroups doc in site.xml

2016-02-11 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11729:
--
Release Note:   (was: Committed this to trunk, branch-2, and branch-2.7. 
Thanks Masatake for your contribution!)

> Fix link to cgroups doc in site.xml
> ---
>
> Key: HADOOP-11729
> URL: https://issues.apache.org/jira/browse/HADOOP-11729
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: HADOOP-11729.001.patch
>
>
> s/NodeManagerCGroups/NodeManagerCgroups/





[jira] [Updated] (HADOOP-11348) Remove unused variable from CMake error message for finding openssl

2016-02-11 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11348:
--
Release Note:   (was: Test failure is unrelated.  Committed to 2.7.  
Thanks, Dian.)

> Remove unused variable from CMake error message for finding openssl
> ---
>
> Key: HADOOP-11348
> URL: https://issues.apache.org/jira/browse/HADOOP-11348
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Dian Fu
>Assignee: Dian Fu
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: HADOOP-11348.patch
>
>
> The ERROR message for finding openssl should not print 
> CUSTOM_OPENSSL_INCLUDE_DIR because this variable doesn't exist.





[jira] [Updated] (HADOOP-12787) KMS SPNEGO sequence does not work with WEBHDFS

2016-02-11 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-12787:

Attachment: HADOOP-12878.01.patch

> KMS SPNEGO sequence does not work with WEBHDFS
> --
>
> Key: HADOOP-12787
> URL: https://issues.apache.org/jira/browse/HADOOP-12787
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms, security
>Affects Versions: 2.6.3
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-12878.00.patch, HADOOP-12878.01.patch
>
>
> This was a follow up of my 
> [comments|https://issues.apache.org/jira/browse/HADOOP-12559?focusedCommentId=15059045=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15059045]
>  for HADOOP-10698.
> It blocks a delegation-token-based user (MR) using WEBHDFS from accessing 
> the KMS server for encrypted files. This may have worked in many cases 
> before, as JDK 7 aggressively performs SPNEGO implicitly. However, this is 
> not the case in JDK 8, where we have seen many failures when using WEBHDFS 
> with KMS and HDFS encryption zones.
>  





[jira] [Commented] (HADOOP-12787) KMS SPNEGO sequence does not work with WEBHDFS

2016-02-11 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15144059#comment-15144059
 ] 

Xiaoyu Yao commented on HADOOP-12787:
-

Thanks [~jnp] for the review and the helpful suggestions. I've attached a new 
patch based on your feedback.

> KMS SPNEGO sequence does not work with WEBHDFS
> --
>
> Key: HADOOP-12787
> URL: https://issues.apache.org/jira/browse/HADOOP-12787
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms, security
>Affects Versions: 2.6.3
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-12878.00.patch, HADOOP-12878.01.patch
>
>
> This was a follow up of my 
> [comments|https://issues.apache.org/jira/browse/HADOOP-12559?focusedCommentId=15059045=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15059045]
>  for HADOOP-10698.
> It blocks a delegation-token-based user (MR) using WEBHDFS from accessing 
> the KMS server for encrypted files. This may have worked in many cases 
> before, as JDK 7 aggressively performs SPNEGO implicitly. However, this is 
> not the case in JDK 8, where we have seen many failures when using WEBHDFS 
> with KMS and HDFS encryption zones.
>  





[jira] [Updated] (HADOOP-12764) Increase default value of KMS maxHttpHeaderSize and make it configurable

2016-02-11 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HADOOP-12764:
---
   Resolution: Fixed
Fix Version/s: (was: 3.0.0)
   2.8.0
   Status: Resolved  (was: Patch Available)

Thanks [~atm] for reviewing again! I verified the Jenkins-reported failures; 
both pass fine locally.

I just committed the patch to branch-2 and branch-2.8.

> Increase default value of KMS maxHttpHeaderSize and make it configurable
> 
>
> Key: HADOOP-12764
> URL: https://issues.apache.org/jira/browse/HADOOP-12764
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12764-branch-2.00.patch, HADOOP-12764.00.patch
>
>
> The Tomcat default value of {{maxHttpHeaderSize}} is 4096, which is too low 
> for certain Hadoop workloads. This JIRA proposes to change it to 65536 in 
> {{server.xml}} and make it configurable.
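As a hedged illustration of the change described above, a Tomcat HTTP Connector with the {{maxHttpHeaderSize}} attribute raised from the 4096-byte default might look like the fragment below; the port and other attribute values are placeholders, not the actual KMS configuration.

```xml
<!-- Illustrative server.xml fragment only; values are placeholders. -->
<Connector port="16000" protocol="HTTP/1.1"
           maxHttpHeaderSize="65536"
           connectionTimeout="20000" />
```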


