[jira] [Updated] (HADOOP-12537) s3a: Add flag for session ID to allow Amazon STS temporary credentials

2016-05-10 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-12537:

Assignee: (was: John Zhuge)

> s3a: Add flag for session ID to allow Amazon STS temporary credentials
> --
>
> Key: HADOOP-12537
> URL: https://issues.apache.org/jira/browse/HADOOP-12537
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: 2.7.1
>Reporter: Sean Mackrory
>Priority: Minor
> Attachments: HADOOP-12537.001.patch, HADOOP-12537.002.patch, 
> HADOOP-12537.diff, HADOOP-12537.diff
>
>
> Amazon STS allows you to issue temporary access key ID / secret key pairs for 
> a user / role. However, using these credentials also requires specifying a 
> session ID. There is currently no such configuration property, nor the code 
> required to pass it through to the API (at least not that I can find), in any 
> of the S3 connectors.
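
For context, a minimal sketch of how temporary credentials are obtained from 
STS with the AWS SDK for Java follows. The SDK classes are real; wiring the 
resulting session token into s3a is exactly the missing configuration this 
issue asks for:

{noformat}
import com.amazonaws.auth.BasicSessionCredentials;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient;
import com.amazonaws.services.securitytoken.model.Credentials;
import com.amazonaws.services.securitytoken.model.GetSessionTokenRequest;

public class StsSessionSketch {
  public static void main(String[] args) {
    // Ask STS for temporary credentials: access key ID, secret key, session token.
    AWSSecurityTokenServiceClient sts = new AWSSecurityTokenServiceClient();
    Credentials tmp = sts.getSessionToken(
        new GetSessionTokenRequest().withDurationSeconds(3600)).getCredentials();

    // All three values must be supplied together; the session token is the
    // piece the S3 connectors currently have no configuration property for.
    BasicSessionCredentials creds = new BasicSessionCredentials(
        tmp.getAccessKeyId(), tmp.getSecretAccessKey(), tmp.getSessionToken());
    AmazonS3Client s3 = new AmazonS3Client(creds);
    System.out.println(s3.listBuckets());
  }
}
{noformat}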






[jira] [Assigned] (HADOOP-12537) s3a: Add flag for session ID to allow Amazon STS temporary credentials

2016-05-10 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge reassigned HADOOP-12537:
---

Assignee: John Zhuge

> s3a: Add flag for session ID to allow Amazon STS temporary credentials
> --
>
> Key: HADOOP-12537
> URL: https://issues.apache.org/jira/browse/HADOOP-12537
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: 2.7.1
>Reporter: Sean Mackrory
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-12537.001.patch, HADOOP-12537.002.patch, 
> HADOOP-12537.diff, HADOOP-12537.diff
>
>
> Amazon STS allows you to issue temporary access key ID / secret key pairs for 
> a user / role. However, using these credentials also requires specifying a 
> session ID. There is currently no such configuration property, nor the code 
> required to pass it through to the API (at least not that I can find), in any 
> of the S3 connectors.






[jira] [Commented] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-05-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15279565#comment-15279565
 ] 

Hadoop QA commented on HADOOP-12942:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
45s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 53s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 43s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
36s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 52s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 50s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
25s {color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 111 unchanged - 78 fixed = 111 total (was 189) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 44s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 54s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 60m 55s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:cf2ee45 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12803345/HADOOP-12942.008.patch
 |
| JIRA Issue | HADOOP-12942 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 54186218ac46 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git 

[jira] [Updated] (HADOOP-13128) Manage Hadoop RPC resource usage via resource coupon

2016-05-10 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-13128:

Description: 
HADOOP-9640 added the RPC Fair Call Queue and HADOOP-10597 added RPC backoff to 
ensure fair usage of HDFS namenode resources. YARN, the Hadoop cluster resource 
manager, currently manages CPU and memory resources for jobs/tasks, but it does 
not directly manage storage resources such as HDFS namenode and datanode usage. 
As a result, a high-priority YARN job may send too many RPC requests to the 
HDFS namenode and get demoted into low-priority call queues due to the lack of 
reservation/coordination. 

To better support multi-tenancy use cases like the above, we propose to manage 
RPC server resource usage via a coupon mechanism integrated with YARN. The idea 
is to allow YARN to request HDFS storage resource coupons (e.g., namenode RPC 
calls, datanode I/O bandwidth) from the namenode on behalf of a job at 
submission time. Once granted, the tasks include the coupon identifier in the 
RPC header of subsequent calls. The HDFS namenode RPC scheduler maintains the 
state of coupon usage based on its scheduling policy (fairness or priority) to 
match RPC priority with YARN scheduling priority. 



  was:
HADOOP-9640 added the RPC Fair Call Queue and HADOOP-10597 added RPC backoff to 
ensure fair usage of HDFS namenode resources. YARN, the Hadoop cluster resource 
manager, currently manages CPU and memory resources for jobs/tasks, but it does 
not directly manage storage resources such as HDFS namenode and datanode usage. 
As a result, a high-priority YARN job may send too many RPC requests to the 
HDFS namenode call queue and get demoted into the low-priority namenode call 
queue due to the lack of coordination. 

To better support multi-tenancy use cases like the above, we propose to manage 
RPC server resource usage via a coupon mechanism integrated with YARN. The idea 
is to allow YARN to request HDFS storage resource coupons (e.g., namenode RPC 
calls, datanode I/O bandwidth) from the namenode on behalf of a job at 
submission time. Once granted, the tasks include the coupon identifier in the 
RPC header of subsequent calls. The HDFS namenode RPC scheduler maintains the 
state of coupon usage based on its scheduling policy (fairness or priority) to 
match RPC priority with YARN scheduling priority. 




> Manage Hadoop RPC resource usage via resource coupon
> 
>
> Key: HADOOP-13128
> URL: https://issues.apache.org/jira/browse/HADOOP-13128
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>
> HADOOP-9640 added the RPC Fair Call Queue and HADOOP-10597 added RPC backoff 
> to ensure fair usage of HDFS namenode resources. YARN, the Hadoop cluster 
> resource manager, currently manages CPU and memory resources for jobs/tasks, 
> but it does not directly manage storage resources such as HDFS namenode and 
> datanode usage. As a result, a high-priority YARN job may send too many RPC 
> requests to the HDFS namenode and get demoted into low-priority call queues 
> due to the lack of reservation/coordination. 
> To better support multi-tenancy use cases like the above, we propose to 
> manage RPC server resource usage via a coupon mechanism integrated with YARN. 
> The idea is to allow YARN to request HDFS storage resource coupons (e.g., 
> namenode RPC calls, datanode I/O bandwidth) from the namenode on behalf of a 
> job at submission time. Once granted, the tasks include the coupon identifier 
> in the RPC header of subsequent calls. The HDFS namenode RPC scheduler 
> maintains the state of coupon usage based on its scheduling policy (fairness 
> or priority) to match RPC priority with YARN scheduling priority.
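
To make the proposal concrete, here is a purely hypothetical sketch of a 
coupon-aware priority decision. Hadoop has no such API today; every name below 
is invented for illustration:

{noformat}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch only: illustrates the coupon idea, not a real Hadoop API.
public class CouponAwareScheduler {
  // coupon id -> remaining RPC budget granted to the job at submission time
  private final ConcurrentHashMap<String, AtomicLong> budgets =
      new ConcurrentHashMap<>();

  /** Called when YARN requests a coupon from the namenode on behalf of a job. */
  public void grant(String couponId, long rpcBudget) {
    budgets.put(couponId, new AtomicLong(rpcBudget));
  }

  /** Called per RPC; tasks carry the coupon id in the RPC header. */
  public int priorityFor(String couponId) {
    AtomicLong remaining = (couponId == null) ? null : budgets.get(couponId);
    if (remaining != null && remaining.decrementAndGet() >= 0) {
      return 0;  // coupon still has budget: stay in the high-priority queue
    }
    return 3;    // no coupon, or budget exhausted: demote
  }
}
{noformat}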






[jira] [Commented] (HADOOP-13125) FS Contract tests don't report FS initialization errors well

2016-05-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15279395#comment-15279395
 ] 

Hadoop QA commented on HADOOP-13125:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
45s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 59s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 42s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 57s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 46s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 46s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 13s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 48s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 60m 8s {color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_91 Failed junit tests | hadoop.net.TestDNS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:cf2ee45 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12803282/HADOOP-13125-001.patch
 |
| JIRA Issue | HADOOP-13125 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux cd5961d77379 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 6e56578 |

[jira] [Commented] (HADOOP-13127) Correctly cache delegation tokens in KMSClientProvider

2016-05-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15279378#comment-15279378
 ] 

Hadoop QA commented on HADOOP-13127:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 25s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 11s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 44s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 39s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
31s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 26s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 36s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 6s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 9m 6s {color} 
| {color:red} root-jdk1.8.0_91 with JDK v1.8.0_91 generated 1 new + 661 
unchanged - 1 fixed = 662 total (was 662) {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 48s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
36s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
32s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
58s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 27s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 36s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 16s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 54s 
{color} | {color:green} hadoop-kms in the patch passed with JDK v1.8.0_91. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 11m 18s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 53s 
{color} | {color:green} hadoop-kms in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
29s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 91m 55s {color} 
| 

[jira] [Commented] (HADOOP-10694) Remove synchronized input streams from Writable deserialization

2016-05-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15279374#comment-15279374
 ] 

Hudson commented on HADOOP-10694:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9743 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9743/])
HADOOP-10694. Remove synchronized input streams from Writable (ozawa: rev 
6e565780315469584c47515be6bd189f07840f1b)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/DataInputBuffer.java


> Remove synchronized input streams from Writable deserialization
> ---
>
> Key: HADOOP-10694
> URL: https://issues.apache.org/jira/browse/HADOOP-10694
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Gopal V
>Assignee: Gopal V
>  Labels: BB2015-05-TBR
> Fix For: 3.0.0, 2.9.0
>
> Attachments: HADOOP-10694.1.patch, HADOOP-10694.2.patch, 
> writable-read-sync.png
>
>
> Writable deserialization is slowed down by a synchronized block within 
> DataInputBuffer$Buffer.
> ByteArrayInputStream::read() is synchronized, and this shows up as a slow, 
> uncontested lock.
> Hive ships its own faster, thread-unsafe alternative, 
> hive.common.io.NonSyncByteArrayInputStream.
> !writable-read-sync.png!
> DataInputBuffer and Writable deserialization should not require a lock per 
> readInt()/read().
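
The pattern Hive uses amounts to a byte-array stream whose read() methods drop 
the synchronized keyword. A minimal sketch in that spirit (illustrative only, 
not the actual Hive or Hadoop code):

{noformat}
import java.io.InputStream;

// Unsynchronized byte-array stream, in the spirit of
// hive.common.io.NonSyncByteArrayInputStream; illustrative only.
public class NonSyncByteArrayInputStream extends InputStream {
  private final byte[] buf;
  private int pos;

  public NonSyncByteArrayInputStream(byte[] buf) {
    this.buf = buf;
  }

  @Override
  public int read() {
    // No "synchronized": single-threaded deserialization needs no lock per read.
    return (pos < buf.length) ? (buf[pos++] & 0xff) : -1;
  }

  @Override
  public int read(byte[] b, int off, int len) {
    if (pos >= buf.length) {
      return -1;
    }
    int n = Math.min(len, buf.length - pos);
    System.arraycopy(buf, pos, b, off, n);
    pos += n;
    return n;
  }
}
{noformat}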






[jira] [Commented] (HADOOP-13065) Add a new interface for retrieving FS and FC Statistics

2016-05-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15279363#comment-15279363
 ] 

Hadoop QA commented on HADOOP-13065:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 25s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 46s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 26s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 35s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
41s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 24s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 16s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
59s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 17s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 6m 17s {color} 
| {color:red} root-jdk1.8.0_91 with JDK v1.8.0_91 generated 8 new + 663 
unchanged - 0 fixed = 671 total (was 663) {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 50s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 6m 50s {color} 
| {color:red} root-jdk1.7.0_95 with JDK v1.7.0_95 generated 8 new + 672 
unchanged - 0 fixed = 680 total (was 672) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 28s 
{color} | {color:red} root: The patch generated 3 new + 435 unchanged - 2 fixed 
= 438 total (was 437) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
58s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 32s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 21s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 1s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 9s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 64m 10s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 45s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 1s 
{color} | {color:green} hadoop-hdfs-client in the 

[jira] [Commented] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-05-10 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15279347#comment-15279347
 ] 

Larry McCay commented on HADOOP-12942:
--

Same thing happened to me yesterday.
I think that reporting on checkstyle errors in test classes must have just been 
turned back on or something.

> hadoop credential commands non-obviously use password of "none"
> ---
>
> Key: HADOOP-12942
> URL: https://issues.apache.org/jira/browse/HADOOP-12942
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-12942.001.patch, HADOOP-12942.002.patch, 
> HADOOP-12942.003.patch, HADOOP-12942.004.patch, HADOOP-12942.005.patch, 
> HADOOP-12942.006.patch, HADOOP-12942.007.patch, HADOOP-12942.008.patch
>
>
> The "hadoop credential create" command, when using a jceks provider, defaults 
> to using the value of "none" for the password that protects the jceks file.  
> This is not obvious in the command or in documentation - to users or to other 
> hadoop developers - and leads to jceks files that essentially are not 
> protected.
> In this example, I'm adding a credential entry with name of "foo" and a value 
> specified by the password entered:
> {noformat}
> # hadoop credential create foo -provider localjceks://file/bar.jceks
> Enter password: 
> Enter password again: 
> foo has been successfully created.
> org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider has been updated.
> {noformat}
> However, the password that protects the file bar.jceks is "none", and there 
> is no obvious way to change that. The practical way of supplying the password 
> at this time is something akin to
> {noformat}
> HADOOP_CREDSTORE_PASSWORD=credpass hadoop credential create --provider ...
> {noformat}
> That is, stuffing HADOOP_CREDSTORE_PASSWORD into the environment of the 
> command. 
> This is more than a documentation issue. I believe that the password ought to 
> be _required_.  We have three implementations at this point, the two 
> JavaKeystore ones and the UserCredential. The latter is "transient" which 
> does not make sense to use in this context. The former need some sort of 
> password, and it's relatively easy to envision that any non-transient 
> implementation would need a mechanism by which to protect the store that it's 
> creating.  
> The implementation gets interesting because the password in the 
> AbstractJavaKeyStoreProvider is determined in the constructor, and changing 
> it after the fact would get messy. So this probably means that the 
> CredentialProviderFactory should have another factory method like the first 
> that additionally takes the password, and an additional constructor that 
> takes the password would exist in all the implementations. 
> Then we just ask for the password in getCredentialProvider() and it gets 
> passed down via the factory to the implementation. The code does have logic 
> in the factory to try multiple providers, but I don't really see how multiple 
> providers would rationally be used in the command-shell context.
> This issue was brought to light when a user stored credentials for a Sqoop 
> action in Oozie; upon trying to figure out where the password was coming from 
> we discovered it to be the default value of "none".
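
A sketch of the proposed shape, for discussion only. The overload and names 
below are assumptions, not the committed patch and not the existing 
CredentialProviderFactory API:

{noformat}
import java.io.Console;
import java.io.IOException;
import java.net.URI;
import java.util.Arrays;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.alias.CredentialProvider;

public final class PasswordedProviderSketch {

  // Hypothetical parallel to the existing factory method, taking the keystore
  // password explicitly so implementations never fall back to "none".
  interface PasswordAwareFactory {
    CredentialProvider createProvider(URI uri, Configuration conf,
        char[] password) throws IOException;
  }

  // How the shell command could prompt for the now-required password before
  // handing it down through the factory.
  static char[] askForPassword() {
    Console console = System.console();
    char[] first = console.readPassword("Enter keystore password: ");
    char[] again = console.readPassword("Enter keystore password again: ");
    if (!Arrays.equals(first, again)) {
      throw new IllegalArgumentException("Passwords do not match");
    }
    return first;
  }
}
{noformat}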






[jira] [Updated] (HADOOP-10694) Remove synchronized input streams from Writable deserialization

2016-05-10 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-10694:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.9.0
   3.0.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk and branch-2. Thanks [~gopalv] and [~rajesh.balamohan] 
for your contributions!

> Remove synchronized input streams from Writable deserialization
> ---
>
> Key: HADOOP-10694
> URL: https://issues.apache.org/jira/browse/HADOOP-10694
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Gopal V
>Assignee: Gopal V
>  Labels: BB2015-05-TBR
> Fix For: 3.0.0, 2.9.0
>
> Attachments: HADOOP-10694.1.patch, HADOOP-10694.2.patch, 
> writable-read-sync.png
>
>
> Writable deserialization is slowed down by a synchronized block within 
> DataInputBuffer$Buffer.
> ByteArrayInputStream::read() is synchronized, and this shows up as a slow, 
> uncontested lock.
> Hive ships its own faster, thread-unsafe alternative, 
> hive.common.io.NonSyncByteArrayInputStream.
> !writable-read-sync.png!
> DataInputBuffer and Writable deserialization should not require a lock per 
> readInt()/read().






[jira] [Commented] (HADOOP-10694) Remove synchronized input streams from Writable deserialization

2016-05-10 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15279298#comment-15279298
 ] 

Tsuyoshi Ozawa commented on HADOOP-10694:
-

The test failures look unrelated, since TestDNS and 
TestReloadingX509TrustManager fail due to a DNS-related problem.

> Remove synchronized input streams from Writable deserialization
> ---
>
> Key: HADOOP-10694
> URL: https://issues.apache.org/jira/browse/HADOOP-10694
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Gopal V
>Assignee: Gopal V
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10694.1.patch, HADOOP-10694.2.patch, 
> writable-read-sync.png
>
>
> Writable deserialization is slowed down by a synchronized block within 
> DataInputBuffer$Buffer.
> ByteArrayInputStream::read() is synchronized, and this shows up as a slow, 
> uncontested lock.
> Hive ships its own faster, thread-unsafe alternative, 
> hive.common.io.NonSyncByteArrayInputStream.
> !writable-read-sync.png!
> DataInputBuffer and Writable deserialization should not require a lock per 
> readInt()/read().






[jira] [Commented] (HADOOP-10694) Remove synchronized input streams from Writable deserialization

2016-05-10 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15279295#comment-15279295
 ] 

Tsuyoshi Ozawa commented on HADOOP-10694:
-

+1, checking this in.

> Remove synchronized input streams from Writable deserialization
> ---
>
> Key: HADOOP-10694
> URL: https://issues.apache.org/jira/browse/HADOOP-10694
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Gopal V
>Assignee: Gopal V
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10694.1.patch, HADOOP-10694.2.patch, 
> writable-read-sync.png
>
>
> Writable deserialization is slowed down by a synchronized block within 
> DataInputBuffer$Buffer.
> ByteArrayInputStream::read() is synchronized, and this shows up as a slow, 
> uncontested lock.
> Hive ships its own faster, thread-unsafe alternative, 
> hive.common.io.NonSyncByteArrayInputStream.
> !writable-read-sync.png!
> DataInputBuffer and Writable deserialization should not require a lock per 
> readInt()/read().






[jira] [Created] (HADOOP-13128) Manage Hadoop RPC resource usage via resource coupon

2016-05-10 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HADOOP-13128:
---

 Summary: Manage Hadoop RPC resource usage via resource coupon
 Key: HADOOP-13128
 URL: https://issues.apache.org/jira/browse/HADOOP-13128
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


HADOOP-9640 added the RPC Fair Call Queue and HADOOP-10597 added RPC backoff to 
ensure fair usage of HDFS namenode resources. YARN, the Hadoop cluster resource 
manager, currently manages CPU and memory resources for jobs/tasks, but it does 
not directly manage storage resources such as HDFS namenode and datanode usage. 
As a result, a high-priority YARN job may send too many RPC requests to the 
HDFS namenode call queue and get demoted into the low-priority namenode call 
queue due to the lack of coordination. 

To better support multi-tenancy use cases like the above, we propose to manage 
RPC server resource usage via a coupon mechanism integrated with YARN. The idea 
is to allow YARN to request HDFS storage resource coupons (e.g., namenode RPC 
calls, datanode I/O bandwidth) from the namenode on behalf of a job at 
submission time. Once granted, the tasks include the coupon identifier in the 
RPC header of subsequent calls. The HDFS namenode RPC scheduler maintains the 
state of coupon usage based on its scheduling policy (fairness or priority) to 
match RPC priority with YARN scheduling priority. 








[jira] [Commented] (HADOOP-12801) Suppress obsolete S3FileSystem tests.

2016-05-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15279267#comment-15279267
 ] 

Hudson commented on HADOOP-12801:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9742 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9742/])
HADOOP-12801. Suppress obsolete S3FileSystem tests. Contributed by Chris 
(cnauroth: rev 27242f211e83079dfb6a75f2b1c8ba4a25751e59)
* 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3/TestS3ContractRootDir.java
* 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3/TestS3ContractSeek.java
* 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3/TestS3Credentials.java


> Suppress obsolete S3FileSystem tests.
> -
>
> Key: HADOOP-12801
> URL: https://issues.apache.org/jira/browse/HADOOP-12801
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
> Attachments: HADOOP-12801.001.patch
>
>
> There are several failures in the {{S3FileSystem}} tests that integrate with 
> the S3 back-end.  With {{S3FileSystem}} deprecation in progress, these tests 
> do not provide valuable feedback and cause noise in test runs for S3N bug 
> fixes and ongoing development of S3A.  We can suppress these obsolete tests 
> as a precursor to the deprecation and removal of {{S3FileSystem}} tracked in 
> HADOOP-12709.
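
For reference, one common way to suppress a whole test class, assuming JUnit 4 
(the actual patch may use a different mechanism):

{noformat}
import org.junit.Ignore;
import org.junit.Test;

// Assuming JUnit 4: a class-level @Ignore skips every test in the class while
// keeping it compiling; illustrative only, not the HADOOP-12801 patch itself.
@Ignore("S3FileSystem is deprecated (HADOOP-12709); obsolete tests suppressed")
public class TestS3ObsoleteSuppressed {
  @Test
  public void testAgainstS3BackEnd() {
    // would exercise the deprecated S3FileSystem back-end
  }
}
{noformat}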






[jira] [Updated] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-05-10 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-12942:

Status: Patch Available  (was: Open)

> hadoop credential commands non-obviously use password of "none"
> ---
>
> Key: HADOOP-12942
> URL: https://issues.apache.org/jira/browse/HADOOP-12942
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-12942.001.patch, HADOOP-12942.002.patch, 
> HADOOP-12942.003.patch, HADOOP-12942.004.patch, HADOOP-12942.005.patch, 
> HADOOP-12942.006.patch, HADOOP-12942.007.patch, HADOOP-12942.008.patch
>
>
> The "hadoop credential create" command, when using a jceks provider, defaults 
> to using the value of "none" for the password that protects the jceks file.  
> This is not obvious in the command or in documentation - to users or to other 
> hadoop developers - and leads to jceks files that essentially are not 
> protected.
> In this example, I'm adding a credential entry with name of "foo" and a value 
> specified by the password entered:
> {noformat}
> # hadoop credential create foo -provider localjceks://file/bar.jceks
> Enter password: 
> Enter password again: 
> foo has been successfully created.
> org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider has been updated.
> {noformat}
> However, the password that protects the file bar.jceks is "none", and there 
> is no obvious way to change that. The practical way of supplying the password 
> at this time is something akin to
> {noformat}
> HADOOP_CREDSTORE_PASSWORD=credpass hadoop credential create --provider ...
> {noformat}
> That is, stuffing HADOOP_CREDSTORE_PASSWORD into the environment of the 
> command. 
> This is more than a documentation issue. I believe that the password ought to 
> be _required_.  We have three implementations at this point, the two 
> JavaKeystore ones and the UserCredential. The latter is "transient" which 
> does not make sense to use in this context. The former need some sort of 
> password, and it's relatively easy to envision that any non-transient 
> implementation would need a mechanism by which to protect the store that it's 
> creating.  
> The implementation gets interesting because the password in the 
> AbstractJavaKeyStoreProvider is determined in the constructor, and changing 
> it after the fact would get messy. So this probably means that the 
> CredentialProviderFactory should have another factory method like the first 
> that additionally takes the password, and an additional constructor that 
> takes the password would exist in all the implementations. 
> Then we just ask for the password in getCredentialProvider() and it gets 
> passed down via the factory to the implementation. The code does have logic 
> in the factory to try multiple providers, but I don't really see how multiple 
> providers would rationally be used in the command-shell context.
> This issue was brought to light when a user stored credentials for a Sqoop 
> action in Oozie; upon trying to figure out where the password was coming from 
> we discovered it to be the default value of "none".






[jira] [Updated] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-05-10 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-12942:

Attachment: HADOOP-12942.008.patch

Hopefully fixing checkstyle and whitespace issues in patch 8. I would have 
thought they'd have been detected in patch 6, but... oh well.

> hadoop credential commands non-obviously use password of "none"
> ---
>
> Key: HADOOP-12942
> URL: https://issues.apache.org/jira/browse/HADOOP-12942
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-12942.001.patch, HADOOP-12942.002.patch, 
> HADOOP-12942.003.patch, HADOOP-12942.004.patch, HADOOP-12942.005.patch, 
> HADOOP-12942.006.patch, HADOOP-12942.007.patch, HADOOP-12942.008.patch
>
>
> The "hadoop credential create" command, when using a jceks provider, defaults 
> to using the value of "none" for the password that protects the jceks file.  
> This is not obvious in the command or in documentation - to users or to other 
> hadoop developers - and leads to jceks files that essentially are not 
> protected.
> In this example, I'm adding a credential entry with name of "foo" and a value 
> specified by the password entered:
> {noformat}
> # hadoop credential create foo -provider localjceks://file/bar.jceks
> Enter password: 
> Enter password again: 
> foo has been successfully created.
> org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider has been updated.
> {noformat}
> However, the password that protects the file bar.jceks is "none", and there 
> is no obvious way to change that. The practical way of supplying the password 
> at this time is something akin to
> {noformat}
> HADOOP_CREDSTORE_PASSWORD=credpass hadoop credential create --provider ...
> {noformat}
> That is, stuffing HADOOP_CREDSTORE_PASSWORD into the environment of the 
> command. 
> This is more than a documentation issue. I believe that the password ought to 
> be _required_.  We have three implementations at this point, the two 
> JavaKeystore ones and the UserCredential. The latter is "transient" which 
> does not make sense to use in this context. The former need some sort of 
> password, and it's relatively easy to envision that any non-transient 
> implementation would need a mechanism by which to protect the store that it's 
> creating.  
> The implementation gets interesting because the password in the 
> AbstractJavaKeyStoreProvider is determined in the constructor, and changing 
> it after the fact would get messy. So this probably means that the 
> CredentialProviderFactory should have another factory method like the first 
> that additionally takes the password, and an additional constructor that 
> takes the password would exist in all the implementations. 
> Then we just ask for the password in getCredentialProvider() and it gets 
> passed down via the factory to the implementation. The code does have logic 
> in the factory to try multiple providers, but I don't really see how multiple 
> providers would rationally be used in the command-shell context.
> This issue was brought to light when a user stored credentials for a Sqoop 
> action in Oozie; upon trying to figure out where the password was coming from 
> we discovered it to be the default value of "none".






[jira] [Updated] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-05-10 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-12942:

Attachment: HADOOP-12942.007.patch

> hadoop credential commands non-obviously use password of "none"
> ---
>
> Key: HADOOP-12942
> URL: https://issues.apache.org/jira/browse/HADOOP-12942
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-12942.001.patch, HADOOP-12942.002.patch, 
> HADOOP-12942.003.patch, HADOOP-12942.004.patch, HADOOP-12942.005.patch, 
> HADOOP-12942.006.patch, HADOOP-12942.007.patch
>
>
> The "hadoop credential create" command, when using a jceks provider, defaults 
> to using the value of "none" for the password that protects the jceks file.  
> This is not obvious in the command or in documentation - to users or to other 
> hadoop developers - and leads to jceks files that essentially are not 
> protected.
> In this example, I'm adding a credential entry with name of "foo" and a value 
> specified by the password entered:
> {noformat}
> # hadoop credential create foo -provider localjceks://file/bar.jceks
> Enter password: 
> Enter password again: 
> foo has been successfully created.
> org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider has been updated.
> {noformat}
> However, the password that protects the file bar.jceks is "none", and there 
> is no obvious way to change that. The practical way of supplying the password 
> at this time is something akin to
> {noformat}
> HADOOP_CREDSTORE_PASSWORD=credpass hadoop credential create --provider ...
> {noformat}
> That is, stuffing HADOOP_CREDSTORE_PASSWORD into the environment of the 
> command. 
> This is more than a documentation issue. I believe that the password ought to 
> be _required_.  We have three implementations at this point, the two 
> JavaKeystore ones and the UserCredential. The latter is "transient" which 
> does not make sense to use in this context. The former need some sort of 
> password, and it's relatively easy to envision that any non-transient 
> implementation would need a mechanism by which to protect the store that it's 
> creating.  
> The implementation gets interesting because the password in the 
> AbstractJavaKeyStoreProvider is determined in the constructor, and changing 
> it after the fact would get messy. So this probably means that the 
> CredentialProviderFactory should have another factory method like the first 
> that additionally takes the password, and an additional constructor that 
> takes the password would exist in all the implementations. 
> Then we just ask for the password in getCredentialProvider() and it gets 
> passed down via the factory to the implementation. The code does have logic 
> in the factory to try multiple providers, but I don't really see how multiple 
> providers would rationally be used in the command-shell context.
> This issue was brought to light when a user stored credentials for a Sqoop 
> action in Oozie; upon trying to figure out where the password was coming from 
> we discovered it to be the default value of "none".






[jira] [Updated] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-05-10 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-12942:

Status: Patch Available  (was: Open)

> hadoop credential commands non-obviously use password of "none"
> ---
>
> Key: HADOOP-12942
> URL: https://issues.apache.org/jira/browse/HADOOP-12942
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-12942.001.patch, HADOOP-12942.002.patch, 
> HADOOP-12942.003.patch, HADOOP-12942.004.patch, HADOOP-12942.005.patch, 
> HADOOP-12942.006.patch, HADOOP-12942.007.patch
>
>
> The "hadoop credential create" command, when using a jceks provider, defaults 
> to using the value of "none" for the password that protects the jceks file.  
> This is not obvious in the command or in documentation - to users or to other 
> hadoop developers - and leads to jceks files that essentially are not 
> protected.
> In this example, I'm adding a credential entry with name of "foo" and a value 
> specified by the password entered:
> {noformat}
> # hadoop credential create foo -provider localjceks://file/bar.jceks
> Enter password: 
> Enter password again: 
> foo has been successfully created.
> org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider has been updated.
> {noformat}
> However, the password that protects the file bar.jceks is "none", and there 
> is no obvious way to change that. The practical way of supplying the password 
> at this time is something akin to
> {noformat}
> HADOOP_CREDSTORE_PASSWORD=credpass hadoop credential create --provider ...
> {noformat}
> That is, stuffing HADOOP_CREDSTORE_PASSWORD into the environment of the 
> command. 
> This is more than a documentation issue. I believe that the password ought to 
> be _required_.  We have three implementations at this point, the two 
> JavaKeystore ones and the UserCredential. The latter is "transient" which 
> does not make sense to use in this context. The former need some sort of 
> password, and it's relatively easy to envision that any non-transient 
> implementation would need a mechanism by which to protect the store that it's 
> creating.  
> The implementation gets interesting because the password in the 
> AbstractJavaKeyStoreProvider is determined in the constructor, and changing 
> it after the fact would get messy. So this probably means that the 
> CredentialProviderFactory should have another factory method like the first 
> that additionally takes the password, and an additional constructor that 
> takes the password would exist in all the implementations. 
> Then we just ask for the password in getCredentialProvider() and it gets 
> passed down via the factory to the implementation. The code does have logic 
> in the factory to try multiple providers, but I don't really see how multiple 
> providers would rationally be used in the command-shell context.
> This issue was brought to light when a user stored credentials for a Sqoop 
> action in Oozie; upon trying to figure out where the password was coming from 
> we discovered it to be the default value of "none".






[jira] [Updated] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-05-10 Thread Mike Yoder (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Yoder updated HADOOP-12942:

Status: Open  (was: Patch Available)

> hadoop credential commands non-obviously use password of "none"
> ---
>
> Key: HADOOP-12942
> URL: https://issues.apache.org/jira/browse/HADOOP-12942
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-12942.001.patch, HADOOP-12942.002.patch, 
> HADOOP-12942.003.patch, HADOOP-12942.004.patch, HADOOP-12942.005.patch, 
> HADOOP-12942.006.patch, HADOOP-12942.007.patch
>
>
> The "hadoop credential create" command, when using a jceks provider, defaults 
> to using the value of "none" for the password that protects the jceks file.  
> This is not obvious in the command or in documentation - to users or to other 
> hadoop developers - and leads to jceks files that essentially are not 
> protected.
> In this example, I'm adding a credential entry with name of "foo" and a value 
> specified by the password entered:
> {noformat}
> # hadoop credential create foo -provider localjceks://file/bar.jceks
> Enter password: 
> Enter password again: 
> foo has been successfully created.
> org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider has been updated.
> {noformat}
> However, the password that protects the file bar.jceks is "none", and there 
> is no obvious way to change that. The practical way of supplying the password 
> at this time is something akin to
> {noformat}
> HADOOP_CREDSTORE_PASSWORD=credpass hadoop credential create --provider ...
> {noformat}
> That is, stuffing HADOOP_CREDSTORE_PASSWORD into the environment of the 
> command. 
> This is more than a documentation issue. I believe that the password ought to 
> be _required_.  We have three implementations at this point, the two 
> JavaKeystore ones and the UserCredential. The latter is "transient" which 
> does not make sense to use in this context. The former need some sort of 
> password, and it's relatively easy to envision that any non-transient 
> implementation would need a mechanism by which to protect the store that it's 
> creating.  
> The implementation gets interesting because the password in the 
> AbstractJavaKeyStoreProvider is determined in the constructor, and changing 
> it after the fact would get messy. So this probably means that the 
> CredentialProviderFactory should have another factory method like the first 
> that additionally takes the password, and an additional constructor exist in 
> all the implementations that takes the password. 
> Then we just ask for the password in getCredentialProvider() and that gets 
> passed down via the factory to the implementation. The code does have 
> logic in the factory to try multiple providers, but I don't really see how 
> multiple providers would rationally be used in the command shell context.
> This issue was brought to light when a user stored credentials for a Sqoop 
> action in Oozie; upon trying to figure out where the password was coming from 
> we discovered it to be the default value of "none".
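To make the proposal concrete, a minimal sketch in the spirit of the description above; the class name, overload, and its signature are illustrative, not taken from any of the attached patches:

{code}
import java.io.IOException;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.alias.CredentialProvider;

public abstract class PasswordAwareProviderFactory {
  // Existing style of factory method: the provider derives its own password
  // (environment variable, or the "none" default discussed above).
  public abstract CredentialProvider createProvider(URI uri, Configuration conf)
      throws IOException;

  // Hypothetical overload: the shell prompts for a password once and passes
  // it down so the keystore is never protected by "none".
  public abstract CredentialProvider createProvider(URI uri, Configuration conf,
      char[] password) throws IOException;
}
{code}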



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-11505) Various native parts use bswap incorrectly and unportably

2016-05-10 Thread Amir Sanjar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amir Sanjar updated HADOOP-11505:
-
Priority: Critical  (was: Blocker)

> Various native parts use bswap incorrectly and unportably
> -
>
> Key: HADOOP-11505
> URL: https://issues.apache.org/jira/browse/HADOOP-11505
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Alan Burlison
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HADOOP-11505.001.patch, HADOOP-11505.003.patch, 
> HADOOP-11505.004.patch, HADOOP-11505.005.patch, HADOOP-11505.006.patch, 
> HADOOP-11505.007.patch, HADOOP-11505.008.patch
>
>
> hadoop-mapreduce-client-nativetask fails to use x86 optimizations in some 
> cases.  Also, on some alternate, non-x86, non-ARM architectures the generated 
> code is incorrect.  Thanks to Steve Loughran and Edward Nevill for finding 
> this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11505) Various native parts use bswap incorrectly and unportably

2016-05-10 Thread Amir Sanjar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15279224#comment-15279224
 ] 

Amir Sanjar commented on HADOOP-11505:
--

Hi all,
Colin, thanks for opening this Jira, and thanks also to Steve Loughran and 
Edward Nevill for finding it.
We in the OpenPOWER community and IBM, as part of our Hadoop 3.0 development 
plan, have unexpectedly run into this issue: a build break due to the embedded 
x86 asm code :( 
Please let me know if I can be of any assistance. Meanwhile, we consider the 
latest Hadoop source code broken for the Power architecture :( 
By the way, could the Hadoop community revert these new x86-only enhancements? 

> Various native parts use bswap incorrectly and unportably
> -
>
> Key: HADOOP-11505
> URL: https://issues.apache.org/jira/browse/HADOOP-11505
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Alan Burlison
> Fix For: 3.0.0
>
> Attachments: HADOOP-11505.001.patch, HADOOP-11505.003.patch, 
> HADOOP-11505.004.patch, HADOOP-11505.005.patch, HADOOP-11505.006.patch, 
> HADOOP-11505.007.patch, HADOOP-11505.008.patch
>
>
> hadoop-mapreduce-client-nativetask fails to use x86 optimizations in some 
> cases.  Also, on some alternate, non-x86, non-ARM architectures the generated 
> code is incorrect.  Thanks to Steve Loughran and Edward Nevill for finding 
> this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-11505) Various native parts use bswap incorrectly and unportably

2016-05-10 Thread Amir Sanjar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amir Sanjar updated HADOOP-11505:
-
Priority: Blocker  (was: Major)

> Various native parts use bswap incorrectly and unportably
> -
>
> Key: HADOOP-11505
> URL: https://issues.apache.org/jira/browse/HADOOP-11505
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Alan Burlison
>Priority: Blocker
> Fix For: 3.0.0
>
> Attachments: HADOOP-11505.001.patch, HADOOP-11505.003.patch, 
> HADOOP-11505.004.patch, HADOOP-11505.005.patch, HADOOP-11505.006.patch, 
> HADOOP-11505.007.patch, HADOOP-11505.008.patch
>
>
> hadoop-mapreduce-client-nativetask fails to use x86 optimizations in some 
> cases.  Also, on some alternate, non-x86, non-ARM architectures the generated 
> code is incorrect.  Thanks to Steve Loughran and Edward Nevill for finding 
> this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-11505) Various native parts use bswap incorrectly and unportably

2016-05-10 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-11505:
--
Priority: Major  (was: Blocker)

This isn't a blocker, because the affected architectures can fall back on the 
non-native code for accomplishing the same things.
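For context, a hedged illustration (mine, not from the Hadoop sources) of why the pure-Java fallback sidesteps the portability problem: the JDK provides architecture-independent byte swapping, which the JIT maps to a native bswap instruction where one exists.

{code}
// Illustrative only: byte swapping in pure Java needs no inline asm.
int le32 = Integer.reverseBytes(0x11223344);          // == 0x44332211
long le64 = Long.reverseBytes(0x1122334455667788L);   // == 0x8877665544332211L
{code}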

> Various native parts use bswap incorrectly and unportably
> -
>
> Key: HADOOP-11505
> URL: https://issues.apache.org/jira/browse/HADOOP-11505
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Alan Burlison
> Fix For: 3.0.0
>
> Attachments: HADOOP-11505.001.patch, HADOOP-11505.003.patch, 
> HADOOP-11505.004.patch, HADOOP-11505.005.patch, HADOOP-11505.006.patch, 
> HADOOP-11505.007.patch, HADOOP-11505.008.patch
>
>
> hadoop-mapreduce-client-nativetask fails to use x86 optimizations in some 
> cases.  Also, on some alternate, non-x86, non-ARM architectures the generated 
> code is incorrect.  Thanks to Steve Loughran and Edward Nevill for finding 
> this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-11505) Various native parts use bswap incorrectly and unportably

2016-05-10 Thread Amir Sanjar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amir Sanjar updated HADOOP-11505:
-
Priority: Blocker  (was: Major)

> Various native parts use bswap incorrectly and unportably
> -
>
> Key: HADOOP-11505
> URL: https://issues.apache.org/jira/browse/HADOOP-11505
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Alan Burlison
>Priority: Blocker
> Fix For: 3.0.0
>
> Attachments: HADOOP-11505.001.patch, HADOOP-11505.003.patch, 
> HADOOP-11505.004.patch, HADOOP-11505.005.patch, HADOOP-11505.006.patch, 
> HADOOP-11505.007.patch, HADOOP-11505.008.patch
>
>
> hadoop-mapreduce-client-nativetask fails to use x86 optimizations in some 
> cases.  Also, on some alternate, non-x86, non-ARM architectures the generated 
> code is incorrect.  Thanks to Steve Loughran and Edward Nevill for finding 
> this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13127) Correctly cache delegation tokens in KMSClientProvider

2016-05-10 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15279166#comment-15279166
 ] 

Xiao Chen commented on HADOOP-13127:


[~asuresh], could you take a look? I attached patch 1, which I think fixes this.
Thanks in advance.

> Correctly cache delegation tokens in KMSClientProvider
> --
>
> Key: HADOOP-13127
> URL: https://issues.apache.org/jira/browse/HADOOP-13127
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.1
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13127.01.patch
>
>
> In the initial implementation of HADOOP-10770, the authToken is updated with 
> delegation tokens during {{KMSClientProvider#addDelegationTokens}} in the 
> following line:
> {code}
> Token token = authUrl.getDelegationToken(url, authToken, renewer);
> {code}
> HADOOP-11482 is a good fix to handle the UGI issue, but has a side effect in the 
> following code:
> {code}
> public Token run() throws Exception {
>   // Not using the cached token here.. Creating a new token here
>   // everytime.
>   return authUrl.getDelegationToken(url,
> new DelegationTokenAuthenticatedURL.Token(), renewer, doAsUser);
> }
> {code}
> IIUC, we should do {{setDelegationToken}} on the authToken here to cache it.
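A hedged sketch of the suggested fix, following the snippet quoted above (the actual patch may differ):

{code}
public Token<?> run() throws Exception {
  // Create the token once, then cache it on the shared authToken so that
  // subsequent calls reuse it instead of fetching a fresh one every time.
  Token<AbstractDelegationTokenIdentifier> token =
      authUrl.getDelegationToken(url,
          new DelegationTokenAuthenticatedURL.Token(), renewer, doAsUser);
  authToken.setDelegationToken(token);
  return token;
}
{code}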



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13127) Correctly cache delegation tokens in KMSClientProvider

2016-05-10 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13127:
---
Status: Patch Available  (was: Open)

> Correctly cache delegation tokens in KMSClientProvider
> --
>
> Key: HADOOP-13127
> URL: https://issues.apache.org/jira/browse/HADOOP-13127
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.1
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13127.01.patch
>
>
> In the initial implementation of HADOOP-10770, the authToken is updated with 
> delegation tokens during {{KMSClientProvider#addDelegationTokens}} in the 
> following line:
> {code}
> Token token = authUrl.getDelegationToken(url, authToken, renewer);
> {code}
> HADOOP-11482 is a good fix to handle the UGI issue, but has a side effect in the 
> following code:
> {code}
> public Token run() throws Exception {
>   // Not using the cached token here.. Creating a new token here
>   // everytime.
>   return authUrl.getDelegationToken(url,
> new DelegationTokenAuthenticatedURL.Token(), renewer, doAsUser);
> }
> {code}
> IIUC, we should do {{setDelegationToken}} on the authToken here to cache it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13127) Correctly cache delegation tokens in KMSClientProvider

2016-05-10 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13127:
---
Attachment: HADOOP-13127.01.patch

> Correctly cache delegation tokens in KMSClientProvider
> --
>
> Key: HADOOP-13127
> URL: https://issues.apache.org/jira/browse/HADOOP-13127
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.1
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13127.01.patch
>
>
> In the initial implementation of HADOOP-10770, the authToken is updated with 
> delegation tokens during {{KMSClientProvider#addDelegationTokens}} in the 
> following line:
> {code}
> Token token = authUrl.getDelegationToken(url, authToken, renewer);
> {code}
> HADOOP-11482 is a good fix to handle the UGI issue, but has a side effect in the 
> following code:
> {code}
> public Token run() throws Exception {
>   // Not using the cached token here.. Creating a new token here
>   // everytime.
>   return authUrl.getDelegationToken(url,
> new DelegationTokenAuthenticatedURL.Token(), renewer, doAsUser);
> }
> {code}
> IIUC, we should do {{setDelegationToken}} on the authToken here to cache it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13127) Correctly cache delegation tokens in KMSClientProvider

2016-05-10 Thread Xiao Chen (JIRA)
Xiao Chen created HADOOP-13127:
--

 Summary: Correctly cache delegation tokens in KMSClientProvider
 Key: HADOOP-13127
 URL: https://issues.apache.org/jira/browse/HADOOP-13127
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.1
Reporter: Xiao Chen
Assignee: Xiao Chen


In the initial implementation of HADOOP-10770, the authToken is updated with 
delegation tokens during {{KMSClientProvider#addDelegationTokens}} in the 
following line:
{code}
Token token = authUrl.getDelegationToken(url, authToken, renewer);
{code}

HADOOP-11482 is a good fix to handle the UGI issue, but has a side effect in the 
following code:
{code}
public Token run() throws Exception {
  // Not using the cached token here.. Creating a new token here
  // everytime.
  return authUrl.getDelegationToken(url,
new DelegationTokenAuthenticatedURL.Token(), renewer, doAsUser);
}
{code}

IIUC, we should do {{setDelegationToken}} on the authToken here to cache it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12801) Suppress obsolete S3FileSystem tests.

2016-05-10 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-12801:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

I have committed this to trunk, branch-2 and branch-2.8.  Mingliang and Steve, 
thank you for the code reviews.

> Suppress obsolete S3FileSystem tests.
> -
>
> Key: HADOOP-12801
> URL: https://issues.apache.org/jira/browse/HADOOP-12801
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
> Attachments: HADOOP-12801.001.patch
>
>
> There are several failures in the {{S3FileSystem}} tests that integrate with 
> the S3 back-end.  With {{S3FileSystem}} deprecation in progress, these tests 
> do not provide valuable feedback and cause noise in test runs for S3N bug 
> fixes and ongoing development of S3A.  We can suppress these obsolete tests 
> as a precursor to the deprecation and removal of {{S3FileSystem}} tracked in 
> HADOOP-12709.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-7738) Document incompatible API changes between 0.20.20x and 0.23.0 release

2016-05-10 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli resolved HADOOP-7738.
-
Resolution: Won't Fix

This is obviously too late to do. We keep moving this out between releases - 
there is little value in doing this now given how far behind 1.x is in the 
rear-view mirror.

I am going to close this one for now as 'Won't Fix' as part of the 2.8 JIRA 
cleanup. Please reopen if you disagree.

> Document incompatible API changes between 0.20.20x and 0.23.0 release
> -
>
> Key: HADOOP-7738
> URL: https://issues.apache.org/jira/browse/HADOOP-7738
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Tom White
>Assignee: Tom White
>Priority: Critical
> Attachments: apicheck-hadoop-0.20.204.0-0.24.0-SNAPSHOT.txt
>
>
> 0.20.20x to 0.23.0 will be a common upgrade path, so we should document any 
> incompatible API changes that will affect users.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13123) Permit the default hadoop delegation token file format to be configurable

2016-05-10 Thread Hitesh Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15279085#comment-15279085
 ] 

Hitesh Shah commented on HADOOP-13123:
--

+1 on reverting the change if it did go into branch-2. 

> Permit the default hadoop delegation token file format to be configurable
> -
>
> Key: HADOOP-13123
> URL: https://issues.apache.org/jira/browse/HADOOP-13123
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Matthew Paduano
>Assignee: Matthew Paduano
> Attachments: HADOOP-13123.01.patch
>
>
> If one environment updates to using the new dtutil code and accompanying 
> Credentials code, there is a backward compatibility issue with the default 
> file format being JAVA.  Older clients need to be updated to ask for a file 
> in the legacy format (FORMAT_JAVA).  
> As an aid to users in this trap, we can add a configuration property to set 
> the default file format.  When set to FORMAT_JAVA, the new server code will 
> serve up legacy files without being asked.  The default value for this 
> property will remain FORMAT_PB.  But affected users can add this config 
> option to the services using the newer code.
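To make that concrete, a hedged sketch of what such a property could look like in a site configuration file; the property name here is hypothetical, not taken from the patch:

{code}
<!-- Hypothetical property name, for illustration only: when set to
     FORMAT_JAVA, servers running the newer code serve legacy-format
     token files without being asked. -->
<property>
  <name>hadoop.security.token.file.default.format</name>
  <value>FORMAT_JAVA</value>
</property>
{code}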



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-9613) [JDK8] Update jersey version to latest 1.x release

2016-05-10 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-9613:
---
Attachment: HADOOP-9613.016.incompatible.patch

Some of the test failures are related to the patch, while others are not.
The v16 patch fixes the patch-related failures: TestRMWebServicesNodeLabels and 
TestRMWithCSRFFilter.

> [JDK8] Update jersey version to latest 1.x release
> --
>
> Key: HADOOP-9613
> URL: https://issues.apache.org/jira/browse/HADOOP-9613
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 2.4.0, 3.0.0
>Reporter: Timothy St. Clair
>Assignee: Tsuyoshi Ozawa
>  Labels: maven
> Attachments: HADOOP-2.2.0-9613.patch, 
> HADOOP-9613.004.incompatible.patch, HADOOP-9613.005.incompatible.patch, 
> HADOOP-9613.006.incompatible.patch, HADOOP-9613.007.incompatible.patch, 
> HADOOP-9613.008.incompatible.patch, HADOOP-9613.009.incompatible.patch, 
> HADOOP-9613.010.incompatible.patch, HADOOP-9613.011.incompatible.patch, 
> HADOOP-9613.012.incompatible.patch, HADOOP-9613.013.incompatible.patch, 
> HADOOP-9613.014.incompatible.patch, HADOOP-9613.014.incompatible.patch, 
> HADOOP-9613.015.incompatible.patch, HADOOP-9613.016.incompatible.patch, 
> HADOOP-9613.1.patch, HADOOP-9613.2.patch, HADOOP-9613.3.patch, 
> HADOOP-9613.patch
>
>
> Update pom.xml dependencies exposed when running a mvn-rpmbuild against 
> system dependencies on Fedora 18.  
> The existing version is 1.8 which is quite old. 
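For illustration, a hedged sketch of the kind of pom.xml bump involved; the artifact shown is one of the Jersey 1.x modules, and the version is an example of a late 1.x release rather than whatever the final patch pins:

{code}
<!-- Illustrative only; the exact artifacts and version depend on the patch. -->
<dependency>
  <groupId>com.sun.jersey</groupId>
  <artifactId>jersey-core</artifactId>
  <version>1.19</version>
</dependency>
{code}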



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12563) Updated utility to create/modify token files

2016-05-10 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15279065#comment-15279065
 ] 

Ravi Prakash commented on HADOOP-12563:
---

Haah! I stand corrected. It was never committed to branch-2. The Fix Version is 
correct.

> Updated utility to create/modify token files
> 
>
> Key: HADOOP-12563
> URL: https://issues.apache.org/jira/browse/HADOOP-12563
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Matthew Paduano
> Fix For: 3.0.0
>
> Attachments: HADOOP-12563.01.patch, HADOOP-12563.02.patch, 
> HADOOP-12563.03.patch, HADOOP-12563.04.patch, HADOOP-12563.05.patch, 
> HADOOP-12563.06.patch, HADOOP-12563.07.patch, HADOOP-12563.07.patch, 
> HADOOP-12563.08.patch, HADOOP-12563.09.patch, HADOOP-12563.10.patch, 
> HADOOP-12563.11.patch, HADOOP-12563.12.patch, HADOOP-12563.13.patch, 
> HADOOP-12563.14.patch, HADOOP-12563.15.patch, HADOOP-12563.16.patch, 
> dtutil-test-out, example_dtutil_commands_and_output.txt, 
> generalized_token_case.pdf
>
>
> hdfs fetchdt is missing some critical features and is geared almost 
> exclusively towards HDFS operations.  Additionally, the token files that are 
> created use Java serializations which are hard/impossible to deal with in 
> other languages. It should be replaced with a better utility in common that 
> can read/write protobuf-based token files, has enough flexibility to be used 
> with other services, and offers key functionality such as append and rename. 
> The old version file format should still be supported for backward 
> compatibility, but will be effectively deprecated.
> A follow-on JIRA will deprecate fetchdt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10980) TestActiveStandbyElector fails occasionally in trunk

2016-05-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15279059#comment-15279059
 ] 

Hadoop QA commented on HADOOP-10980:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
1s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 58s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 52s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 2s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 56s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 46s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 46s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 34s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 47s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 1s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:cf2ee45 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12803298/HADOOP-10980.001.patch
 |
| JIRA Issue | HADOOP-10980 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 92484a38f867 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 025219b |
| Default Java | 1.7.0_95 |
| Multi-JDK versions |  

[jira] [Commented] (HADOOP-12982) Document missing S3A and S3 properties

2016-05-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15279049#comment-15279049
 ] 

Hudson commented on HADOOP-12982:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9741 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9741/])
HADOOP-12982 Document missing S3A and S3 properties. (Wei-Chiu Chuang (stevel: 
rev 025219b12fca8b106a30e5504d1ae08c569fb30d)
* hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md
* hadoop-common-project/hadoop-common/src/main/resources/core-default.xml


> Document missing S3A and S3 properties
> --
>
> Key: HADOOP-12982
> URL: https://issues.apache.org/jira/browse/HADOOP-12982
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, fs/s3, tools
>Affects Versions: 2.8.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12982.001.patch, HADOOP-12982.002.patch, 
> HADOOP-12982.003.patch
>
>
> * S3: 
> ** {{fs.s3.buffer.dir}}, {{fs.s3.maxRetries}}, {{fs.s3.sleepTimeSeconds}}, 
> {{fs.s3.block.size}} are not in the documentation
> ** Note that {{fs.s3.buffer.dir}}, {{fs.s3.maxRetries}}, 
> {{fs.s3.sleepTimeSeconds}} are also used by S3N.
> * S3A:
> ** {{fs.s3a.server-side-encryption-algorithm}} and {{fs.s3a.block.size}} are 
> missing in core-default.xml and the documentation.
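For illustration, a hedged example of the kind of core-default.xml entry the patch adds; the description text below is mine, and the value is the commonly cited 32 MB default, so treat both as illustrative:

{code}
<property>
  <name>fs.s3a.block.size</name>
  <value>33554432</value>
  <description>Block size (in bytes) reported to callers for files stored
    via the s3a: filesystem.</description>
</property>
{code}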



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12751) While using kerberos Hadoop incorrectly assumes names with '@' to be non-simple

2016-05-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15279050#comment-15279050
 ] 

Hudson commented on HADOOP-12751:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9741 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9741/])
HADOOP-12751. While using kerberos Hadoop incorrectly assumes names with 
(stevel: rev 829a2e4d271f05afb209ddc834cd4a0e85492eda)
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosName.java
* hadoop-common-project/hadoop-common/src/site/markdown/SecureMode.md
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestKDiag.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUserGroupInformation.java
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestKerberosAuthenticationHandler.java
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestKerberosName.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/KDiag.java


> While using kerberos Hadoop incorrectly assumes names with '@' to be 
> non-simple
> ---
>
> Key: HADOOP-12751
> URL: https://issues.apache.org/jira/browse/HADOOP-12751
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2
> Environment: kerberos
>Reporter: Bolke de Bruin
>Assignee: Bolke de Bruin
>Priority: Critical
>  Labels: kerberos
> Fix For: 2.8.0
>
> Attachments: 0001-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0001-Remove-check-for-user-name-characters-and.patch, 
> 0002-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0003-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0004-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0005-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0006-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0007-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0007-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0008-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0008-HADOOP-12751-leave-user-validation-to-os.patch, HADOOP-12751-009.patch
>
>
> In the scenario of a trust between two directories, e.g. FreeIPA (ipa.local) 
> and Active Directory (ad.local), users can be made available at the OS level 
> by something like sssd. The trusted users will be of the form 'user@ad.local', 
> while other user names will not contain the domain. Executing 'id -Gn 
> user@ad.local' will successfully return the groups the user belongs to if 
> configured correctly. 
> However, Hadoop assumes that user names containing '@' cannot be correct. 
> This code is in KerberosName.java and seems to be a validator checking that 
> the 'auth_to_local' rules are applied correctly.
> In my opinion this should be removed or changed to a different kind of check 
> or maybe logged as a warning while still proceeding, as the current behavior 
> limits integration possibilities with other standard tools.
> Workarounds are difficult to apply (such as having system tools rewrite the 
> name to, for example, user_ad_local) due to downstream consequences.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-05-10 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15279047#comment-15279047
 ] 

Larry McCay commented on HADOOP-12942:
--

[~yoderme] - can you address the checkstyle and whitespace issues above?

> hadoop credential commands non-obviously use password of "none"
> ---
>
> Key: HADOOP-12942
> URL: https://issues.apache.org/jira/browse/HADOOP-12942
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-12942.001.patch, HADOOP-12942.002.patch, 
> HADOOP-12942.003.patch, HADOOP-12942.004.patch, HADOOP-12942.005.patch, 
> HADOOP-12942.006.patch
>
>
> The "hadoop credential create" command, when using a jceks provider, defaults 
> to using the value of "none" for the password that protects the jceks file.  
> This is not obvious in the command or in documentation - to users or to other 
> hadoop developers - and leads to jceks files that essentially are not 
> protected.
> In this example, I'm adding a credential entry with a name of "foo" and a value 
> specified by the password entered:
> {noformat}
> # hadoop credential create foo -provider localjceks://file/bar.jceks
> Enter password: 
> Enter password again: 
> foo has been successfully created.
> org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider has been updated.
> {noformat}
> However, the password that protects the file bar.jceks is "none", and there 
> is no obvious way to change that. The practical way of supplying the password 
> at this time is something akin to
> {noformat}
> HADOOP_CREDSTORE_PASSWORD=credpass hadoop credential create --provider ...
> {noformat}
> That is, stuffing HADOOP_CREDSTORE_PASSWORD into the environment of the 
> command. 
> This is more than a documentation issue. I believe that the password ought to 
> be _required_.  We have three implementations at this point, the two 
> JavaKeystore ones and the UserCredential. The latter is "transient" which 
> does not make sense to use in this context. The former need some sort of 
> password, and it's relatively easy to envision that any non-transient 
> implementation would need a mechanism by which to protect the store that it's 
> creating.  
> The implementation gets interesting because the password in the 
> AbstractJavaKeyStoreProvider is determined in the constructor, and changing 
> it after the fact would get messy. So this probably means that the 
> CredentialProviderFactory should have another factory method like the first 
> that additionally takes the password, and an additional constructor exist in 
> all the implementations that takes the password. 
> Then we just ask for the password in getCredentialProvider() and that gets 
> passed down via the factory to the implementation. The code does have 
> logic in the factory to try multiple providers, but I don't really see how 
> multiple providers would rationally be used in the command shell context.
> This issue was brought to light when a user stored credentials for a Sqoop 
> action in Oozie; upon trying to figure out where the password was coming from 
> we discovered it to be the default value of "none".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-9613) [JDK8] Update jersey version to latest 1.x release

2016-05-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15279036#comment-15279036
 ] 

Hadoop QA commented on HADOOP-9613:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 31 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 22s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
23s {color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 8s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
56s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 5m 22s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 
19s {color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 23s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 16s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 31s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 30s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 3s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 9m 3s {color} 
| {color:red} root-jdk1.8.0_91 with JDK v1.8.0_91 generated 1 new + 662 
unchanged - 0 fixed = 663 total (was 662) {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 44s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 8m 44s {color} 
| {color:red} root-jdk1.7.0_95 with JDK v1.7.0_95 generated 1 new + 672 
unchanged - 0 fixed = 673 total (was 672) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 48s 
{color} | {color:red} root: The patch generated 4 new + 356 unchanged - 44 
fixed = 360 total (was 400) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 5m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 10m 
37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 13s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 38s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 9s 
{color} | {color:green} hadoop-project in the patch passed with JDK v1.8.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit 

[jira] [Commented] (HADOOP-13070) classloading isolation improvements for cleaner and stricter dependencies

2016-05-10 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15279034#comment-15279034
 ] 

Sangjin Lee commented on HADOOP-13070:
--

Got you. Thanks. I am comfortable with 3.0 being defined as primarily the Java 
8 release. Once we have something ready (along with HADOOP-11656), it would 
be good to get this in.

> classloading isolation improvements for cleaner and stricter dependencies
> -
>
> Key: HADOOP-13070
> URL: https://issues.apache.org/jira/browse/HADOOP-13070
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>Priority: Critical
> Attachments: classloading-improvements-ideas-v.3.pdf, 
> classloading-improvements-ideas.pdf, classloading-improvements-ideas.v.2.pdf
>
>
> Related to HADOOP-11656, we would like to make a number of improvements in 
> terms of classloading isolation so that user-code can run safely without 
> worrying about dependency collisions with the Hadoop dependencies.
> By the same token, it should raise the quality of the user code and its 
> specified classpath so that users get clear signals if they specify incorrect 
> classpaths.
> This will contain a proposal that will include several improvements, some of 
> which may not be backward compatible. As such, it should be targeted to the 
> next major revision of Hadoop.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13126) Add Brotli compression codec

2016-05-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15278998#comment-15278998
 ] 

Hadoop QA commented on HADOOP-13126:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 9s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
38s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 2s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 40s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 9m 11s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
42s {color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 26s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 9m 39s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 19s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 23s 
{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 45s 
{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 33s 
{color} | {color:red} root in the patch failed with JDK v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 33s {color} 
| {color:red} root in the patch failed with JDK v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 38s 
{color} | {color:red} root in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 38s {color} 
| {color:red} root in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 21s 
{color} | {color:red} root: The patch generated 28 new + 0 unchanged - 0 fixed 
= 28 total (was 0) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 1m 47s 
{color} | {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project . {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 12s 
{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 29s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 9m 32s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 4m 22s {color} 
| {color:red} root in the patch failed with JDK v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 4m 46s {color} 
| {color:red} 

[jira] [Updated] (HADOOP-13125) FS Contract tests don't report FS initialization errors well

2016-05-10 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13125:
---
Hadoop Flags: Reviewed

+1 pending pre-commit.

> FS Contract tests don't report FS initialization errors well
> 
>
> Key: HADOOP-13125
> URL: https://issues.apache.org/jira/browse/HADOOP-13125
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13125-001.patch
>
>
> If the {{FileSystem.initialize()}} method of an FS fails with an 
> IllegalArgumentException, the FS contract tests assume this is related to the 
> URI syntax and raise a new exception 'invalid URI' + fsURI, retaining the 
> real cause only as error text in the nested exception.
> The exception text should be included in the message thrown, and the cause 
> changed to indicate that it is initialization related.
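A hedged sketch of the improvement being described (names follow the text above; this is not the actual patch):

{code}
// Illustrative only: keep the underlying exception text in the new message
// and signal that the failure happened during initialization, not URI parsing.
try {
  fs.initialize(fsURI, conf);
} catch (IllegalArgumentException e) {
  throw new IllegalArgumentException(
      "Unable to initialize filesystem " + fsURI + ": " + e, e);
}
{code}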



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13125) FS Contract tests don't report FS initialization errors well

2016-05-10 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13125:
---
Status: Patch Available  (was: Open)

> FS Contract tests don't report FS initialization errors well
> 
>
> Key: HADOOP-13125
> URL: https://issues.apache.org/jira/browse/HADOOP-13125
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13125-001.patch
>
>
> If the {{FileSystem.initialize()}} method of an FS fails with an 
> IllegalArgumentException, the FS contract tests assume this is related to the 
> URI syntax and raise a new exception 'invalid URI' + fsURI, retaining the 
> real cause only as error text in the nested exception.
> The exception text should be included in the message thrown, and the cause 
> changed to indicate that it is initialization related.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12291) Add support for nested groups in LdapGroupsMapping

2016-05-10 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15278892#comment-15278892
 ] 

Wei-Chiu Chuang commented on HADOOP-12291:
--

Thanks. You're right about #3.

> Add support for nested groups in LdapGroupsMapping
> --
>
> Key: HADOOP-12291
> URL: https://issues.apache.org/jira/browse/HADOOP-12291
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Gautam Gopalakrishnan
>Assignee: Esther Kundin
>  Labels: features, patch
> Fix For: 2.8.0
>
> Attachments: HADOOP-12291.001.patch, HADOOP-12291.002.patch, 
> HADOOP-12291.003.patch, HADOOP-12291.004.patch
>
>
> When using {{LdapGroupsMapping}} with Hadoop, nested groups are not 
> supported. So for example if user {{jdoe}} is part of group A which is a 
> member of group B, the group mapping currently returns only group A.
> Currently this facility is available with {{ShellBasedUnixGroupsMapping}} and 
> SSSD (or similar tools), but it would be good to have this feature as part of 
> {{LdapGroupsMapping}} directly.
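For illustration, a hedged sketch (mine, not the patch) of the nested-group idea: starting from the direct groups, walk parent-group links breadth-first until no new groups appear, so that jdoe ends up in both group A and group B.

{code}
import java.util.*;

class NestedGroupsSketch {
  // parentsOf maps a group to the set of groups it is a member of.
  static Set<String> resolve(Map<String, Set<String>> parentsOf,
                             Set<String> directGroups) {
    Set<String> all = new LinkedHashSet<String>(directGroups);
    Deque<String> queue = new ArrayDeque<String>(directGroups);
    while (!queue.isEmpty()) {
      Set<String> parents = parentsOf.get(queue.poll());
      if (parents == null) {
        continue;  // this group has no parent groups
      }
      for (String parent : parents) {
        if (all.add(parent)) {   // enqueue only groups we have not seen yet
          queue.add(parent);
        }
      }
    }
    return all;
  }
}
{code}

The visited-set check also guards against cyclic group memberships in the directory, which real LDAP trees can contain.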



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12291) Add support for nested groups in LdapGroupsMapping

2016-05-10 Thread Esther Kundin (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15278878#comment-15278878
 ] 

Esther Kundin commented on HADOOP-12291:


Hi Wei.
1. I will add it in.
2. No, this is not compatible with posixGroup.
3. The context is actually cached; the first line of 
{code}getDirContext(){code} is {code}if (ctx == null) {code}. So I think it's 
fine the way it is.

> Add support for nested groups in LdapGroupsMapping
> --
>
> Key: HADOOP-12291
> URL: https://issues.apache.org/jira/browse/HADOOP-12291
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Gautam Gopalakrishnan
>Assignee: Esther Kundin
>  Labels: features, patch
> Fix For: 2.8.0
>
> Attachments: HADOOP-12291.001.patch, HADOOP-12291.002.patch, 
> HADOOP-12291.003.patch, HADOOP-12291.004.patch
>
>
> When using {{LdapGroupsMapping}} with Hadoop, nested groups are not 
> supported. So for example if user {{jdoe}} is part of group A which is a 
> member of group B, the group mapping currently returns only group A.
> Currently this facility is available with {{ShellBasedUnixGroupsMapping}} and 
> SSSD (or similar tools), but it would be good to have this feature as part of 
> {{LdapGroupsMapping}} directly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13065) Add a new interface for retrieving FS and FC Statistics

2016-05-10 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13065:
---
Attachment: HADOOP-13065.013.patch

Thanks [~cmccabe] for the review.

The v13 patch addresses the null return value problem. Yes, we should return 
null in this case, indicating the operation is not tracked. A zero value is for 
the case where the operation is tracked but the counter has not yet been 
incremented. This is a good catch.

For this use case, the per-operation stats are per job and thus shared among 
different filesystem instances. Based on the current stats design, we can 
further support per-instance stats in follow-on JIRAs if new use cases appear.
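In code terms, the v13 change amounts to something like this (based on the 
{{getLong}} snippet quoted elsewhere in this thread):
{code}
@Override
public Long getLong(String key) {
  final OpType type = OpType.fromSymbol(key);
  if (type == null) {
    return null;                       // statistic is not tracked at all
  }
  return opsCount.get(type).get();     // tracked; stays 0 until incremented
}
{code}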

> Add a new interface for retrieving FS and FC Statistics
> ---
>
> Key: HADOOP-13065
> URL: https://issues.apache.org/jira/browse/HADOOP-13065
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Ram Venkatesh
>Assignee: Mingliang Liu
> Attachments: HADOOP-13065-007.patch, HADOOP-13065.008.patch, 
> HADOOP-13065.009.patch, HADOOP-13065.010.patch, HADOOP-13065.011.patch, 
> HADOOP-13065.012.patch, HADOOP-13065.013.patch, HDFS-10175.000.patch, 
> HDFS-10175.001.patch, HDFS-10175.002.patch, HDFS-10175.003.patch, 
> HDFS-10175.004.patch, HDFS-10175.005.patch, HDFS-10175.006.patch, 
> TestStatisticsOverhead.java
>
>
> Currently FileSystem.Statistics exposes the following statistics:
> BytesRead
> BytesWritten
> ReadOps
> LargeReadOps
> WriteOps
> These are in turn exposed as job counters by MapReduce and other frameworks. 
> There is logic within DfsClient to map operations to these counters that can 
> be confusing, for instance, mkdirs counts as a writeOp.
> Proposed enhancement:
> Add a statistic for each DfsClient operation including create, append, 
> createSymlink, delete, exists, mkdirs, rename and expose them as new 
> properties on the Statistics object. The operation-specific counters can be 
> used for analyzing the load imposed by a particular job on HDFS. 
> For example, we can use them to identify jobs that end up creating a large 
> number of files.
> Once this information is available in the Statistics object, the app 
> frameworks like MapReduce can expose them as additional counters to be 
> aggregated and recorded as part of job summary.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13035) Add states INITING and STARTING to YARN Service model to cover in-transition states.

2016-05-10 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15278860#comment-15278860
 ] 

Steve Loughran commented on HADOOP-13035:
-

If we only care about a service saying whether it considers itself "live", then 
extending the Service interface with a ServiceWithLiveness one would let 
services set a bit when they consider themselves live. We could also have a 
standard "wait until live" mechanism (busy wait? Block on an object?) so that 
one thread could wait until another one felt that it was fully up and running...
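A minimal sketch of that idea (all names here are hypothetical, not existing 
Hadoop APIs):
{code}
public interface ServiceWithLiveness extends Service {
  boolean isLive();                        // true once fully up and running
  void waitUntilLive() throws InterruptedException;
}

public abstract class LiveService extends AbstractService
    implements ServiceWithLiveness {

  private final Object liveLock = new Object();
  private boolean live;

  protected LiveService(String name) {
    super(name);
  }

  // Called by a subclass at the end of its serviceStart().
  protected void markLive() {
    synchronized (liveLock) {
      live = true;
      liveLock.notifyAll();
    }
  }

  @Override
  public boolean isLive() {
    synchronized (liveLock) {
      return live;
    }
  }

  // Blocking on a monitor avoids the busy-wait alternative.
  @Override
  public void waitUntilLive() throws InterruptedException {
    synchronized (liveLock) {
      while (!live) {
        liveLock.wait();                   // woken by markLive()
      }
    }
  }
}
{code}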

> Add states INITING and STARTING to YARN Service model to cover in-transition 
> states.
> 
>
> Key: HADOOP-13035
> URL: https://issues.apache.org/jira/browse/HADOOP-13035
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
> Attachments: 0001-HADOOP-13035.patch, 0002-HADOOP-13035.patch, 
> 0003-HADOOP-13035.patch
>
>
> As per the discussion in YARN-3971, we should be setting the service state 
> to STARTED only after {{serviceStart()}}.
> Currently {{AbstractService#start()}} does:
> {noformat} 
> if (stateModel.enterState(STATE.STARTED) != STATE.STARTED) {
>   try {
>     startTime = System.currentTimeMillis();
>     serviceStart();
>     ..
> }
> {noformat}
> {{enterState}} sets the service state to the proposed state, so 
> {{service.getServiceState()}} called from within {{serviceStart()}} will 
> return STARTED.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12982) Document missing S3A and S3 properties

2016-05-10 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15278861#comment-15278861
 ] 

Wei-Chiu Chuang commented on HADOOP-12982:
--

Thank you [~ste...@apache.org]! Didn't realize it needed a rebase. Also thanks 
to Eddy for reviewing it.

> Document missing S3A and S3 properties
> --
>
> Key: HADOOP-12982
> URL: https://issues.apache.org/jira/browse/HADOOP-12982
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, fs/s3, tools
>Affects Versions: 2.8.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12982.001.patch, HADOOP-12982.002.patch, 
> HADOOP-12982.003.patch
>
>
> * S3: 
> ** {{fs.s3.buffer.dir}}, {{fs.s3.maxRetries}}, {{fs.s3.sleepTimeSeconds}}, 
> {{fs.s3.block.size}}  not in the documentation
> ** Note that {{fs.s3.buffer.dir}}, {{fs.s3.maxRetries}}, 
> {{fs.s3.sleepTimeSeconds}} are also used by S3N.
> * S3A:
> ** {{fs.s3a.server-side-encryption-algorithm}} and {{fs.s3a.block.size}} are 
> missing in core-default.xml and the documentation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13070) classloading isolation improvements for cleaner and stricter dependencies

2016-05-10 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15278858#comment-15278858
 ] 

Steve Loughran commented on HADOOP-13070:
-


When I said "flipping the maven switch", I meant "switching the build to being 
Java 8+ only".

> classloading isolation improvements for cleaner and stricter dependencies
> -
>
> Key: HADOOP-13070
> URL: https://issues.apache.org/jira/browse/HADOOP-13070
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>Priority: Critical
> Attachments: classloading-improvements-ideas-v.3.pdf, 
> classloading-improvements-ideas.pdf, classloading-improvements-ideas.v.2.pdf
>
>
> Related to HADOOP-11656, we would like to make a number of improvements in 
> terms of classloading isolation so that user-code can run safely without 
> worrying about dependency collisions with the Hadoop dependencies.
> By the same token, it should raise the quality of the user code and its 
> specified classpath so that users get clear signals if they specify incorrect 
> classpaths.
> This will contain a proposal that will include several improvements some of 
> which may not be backward compatible. As such, it should be targeted to the 
> next major revision of Hadoop.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12975) Add jitter to CachingGetSpaceUsed's thread

2016-05-10 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15278854#comment-15278854
 ] 

Colin Patrick McCabe commented on HADOOP-12975:
---

Thanks, [~eclark].

{code}
169     // add/subtract the jitter.
170     refreshInterval +=
171         ThreadLocalRandom.current()
172             .nextLong(jitter, jitter);
{code}
Hmm, is this a typo?  It seems like this is always going to return exactly 
'jitter' since the 'least' and the 'bound' arguments are the same?  That seems 
to defeat the point of randomization. 
https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/ThreadLocalRandom.html#nextLong(long,%20long)

{code}
126 if (configuration == null) {
127   return DEFAULT_JITTER;
128 }
{code}
Can we throw an exception in {{GetSpaceUsed#build}} if {{conf == null}}?  It's 
a weird special case to have no {{Configuration}} object, and I'm not sure why 
we'd ever want to do that.  Then this function could just be {{return 
this.conf.getLong(JITTER_KEY, DEFAULT_JITTER);}}.
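Putting those two points together, the fixed logic might look like this 
(sketch; {{JITTER_KEY}} and {{DEFAULT_JITTER}} are the patch's constants):
{code}
// In GetSpaceUsed#build(): fail fast rather than special-casing a null conf.
if (conf == null) {
  throw new IllegalArgumentException("Configuration must not be null");
}
long jitter = conf.getLong(JITTER_KEY, DEFAULT_JITTER);

// Symmetric jitter: draw from [-jitter, +jitter) rather than a constant.
if (jitter > 0) {
  refreshInterval += ThreadLocalRandom.current().nextLong(-jitter, jitter);
}
{code}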

> Add jitter to CachingGetSpaceUsed's thread
> --
>
> Key: HADOOP-12975
> URL: https://issues.apache.org/jira/browse/HADOOP-12975
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 2.9.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HADOOP-12975v0.patch, HADOOP-12975v1.patch, 
> HADOOP-12975v2.patch, HADOOP-12975v3.patch, HADOOP-12975v4.patch, 
> HADOOP-12975v5.patch
>
>
> Running DU across lots of disks is very expensive and running all of the 
> processes at the same time creates a noticeable IO spike. We should add some 
> jitter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12982) Document missing S3A and S3 properties

2016-05-10 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15278852#comment-15278852
 ] 

Steve Loughran commented on HADOOP-12982:
-

+1: committed to trunk as is, merged into branch-2 with a bit of editing

> Document missing S3A and S3 properties
> --
>
> Key: HADOOP-12982
> URL: https://issues.apache.org/jira/browse/HADOOP-12982
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, fs/s3, tools
>Affects Versions: 2.8.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12982.001.patch, HADOOP-12982.002.patch, 
> HADOOP-12982.003.patch
>
>
> * S3: 
> ** {{fs.s3.buffer.dir}}, {{fs.s3.maxRetries}}, {{fs.s3.sleepTimeSeconds}}, 
> {{fs.s3.block.size}}  not in the documentation
> ** Note that {{fs.s3.buffer.dir}}, {{fs.s3.maxRetries}}, 
> {{fs.s3.sleepTimeSeconds}} are also used by S3N.
> * S3A:
> ** {{fs.s3a.server-side-encryption-algorithm}} and {{fs.s3a.block.size}} are 
> missing in core-default.xml and the documentation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12982) Document missing S3A and S3 properties

2016-05-10 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12982:

   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

> Document missing S3A and S3 properties
> --
>
> Key: HADOOP-12982
> URL: https://issues.apache.org/jira/browse/HADOOP-12982
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, fs/s3, tools
>Affects Versions: 2.8.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12982.001.patch, HADOOP-12982.002.patch, 
> HADOOP-12982.003.patch
>
>
> * S3: 
> ** {{fs.s3.buffer.dir}}, {{fs.s3.maxRetries}}, {{fs.s3.sleepTimeSeconds}}, 
> {{fs.s3.block.size}}  not in the documentation
> ** Note that {{fs.s3.buffer.dir}}, {{fs.s3.maxRetries}}, 
> {{fs.s3.sleepTimeSeconds}} are also used by S3N.
> * S3A:
> ** {{fs.s3a.server-side-encryption-algorithm}} and {{fs.s3a.block.size}} are 
> missing in core-default.xml and the documentation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12982) Document missing S3A and S3 properties

2016-05-10 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15278837#comment-15278837
 ] 

Steve Loughran commented on HADOOP-12982:
-

The patch doesn't apply any more; could you bring it up to date?



> Document missing S3A and S3 properties
> --
>
> Key: HADOOP-12982
> URL: https://issues.apache.org/jira/browse/HADOOP-12982
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, fs/s3, tools
>Affects Versions: 2.8.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HADOOP-12982.001.patch, HADOOP-12982.002.patch, 
> HADOOP-12982.003.patch
>
>
> * S3: 
> ** {{fs.s3.buffer.dir}}, {{fs.s3.maxRetries}}, {{fs.s3.sleepTimeSeconds}}, 
> {{fs.s3.block.size}}  not in the documentation
> ** Note that {{fs.s3.buffer.dir}}, {{fs.s3.maxRetries}}, 
> {{fs.s3.sleepTimeSeconds}} are also used by S3N.
> * S3A:
> ** {{fs.s3a.server-side-encryption-algorithm}} and {{fs.s3a.block.size}} are 
> missing in core-default.xml and the documentation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12982) Document missing S3A and S3 properties

2016-05-10 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15278838#comment-15278838
 ] 

Steve Loughran commented on HADOOP-12982:
-

cancel that last comment. it applies to trunk but not branch-2

> Document missing S3A and S3 properties
> --
>
> Key: HADOOP-12982
> URL: https://issues.apache.org/jira/browse/HADOOP-12982
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, fs/s3, tools
>Affects Versions: 2.8.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HADOOP-12982.001.patch, HADOOP-12982.002.patch, 
> HADOOP-12982.003.patch
>
>
> * S3: 
> ** {{fs.s3.buffer.dir}}, {{fs.s3.maxRetries}}, {{fs.s3.sleepTimeSeconds}}, 
> {{fs.s3.block.size}}  not in the documentation
> ** Note that {{fs.s3.buffer.dir}}, {{fs.s3.maxRetries}}, 
> {{fs.s3.sleepTimeSeconds}} are also used by S3N.
> * S3A:
> ** {{fs.s3a.server-side-encryption-algorithm}} and {{fs.s3a.block.size}} are 
> missing in core-default.xml and the documentation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13065) Add a new interface for retrieving FS and FC Statistics

2016-05-10 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15278833#comment-15278833
 ] 

Colin Patrick McCabe edited comment on HADOOP-13065 at 5/10/16 8:34 PM:


Thanks, [~liuml07].  {{DFSOpsCountStatistics}} is a nice implementation.  It's 
nice to have this for webhdfs as well.

{code}
156   @Override
157   public Long getLong(String key) {
158     final OpType type = OpType.fromSymbol(key);
159     return type == null ? 0L : opsCount.get(type).get();
160   }
{code}
I think this should return null in the case where type == null, right?  
Indicating that there is no such statistic.

{code}
159     storageStatistics = (DFSOpsCountStatistics) GlobalStorageStatistics.INSTANCE
160         .put(DFSOpsCountStatistics.NAME,
161             new StorageStatisticsProvider() {
162               @Override
163               public StorageStatistics provide() {
164                 return new DFSOpsCountStatistics();
165               }
166             });
{code}
Hmm, I wonder if these StorageStatistics objects should be per-FS-instance 
rather than per-class?  I guess let's do that in a follow-on, though, after 
this gets committed.

+1 for HADOOP-13065.012.patch once the null thing is fixed


was (Author: cmccabe):
Thanks, [~liuml07].  {{DFSOpsCountStatistics}} is a nice implementation.  It's 
nice to have this for webhdfs as well.

{code}
156   @Override
157   public Long getLong(String key) {
158     final OpType type = OpType.fromSymbol(key);
159     return type == null ? 0L : opsCount.get(type).get();
160   }
{code}
I think this should return null in the case where type == null, right?  
Indicating that there is no such statistic.

{code}
159     storageStatistics = (DFSOpsCountStatistics) GlobalStorageStatistics.INSTANCE
160         .put(DFSOpsCountStatistics.NAME,
161             new StorageStatisticsProvider() {
162               @Override
163               public StorageStatistics provide() {
164                 return new DFSOpsCountStatistics();
165               }
166             });
{code}
Hmm, I wonder if these StorageStatistics objects should be per-FS-instance 
rather than per-class?  I guess let's do that in a follow-on, though, after 
this gets committed.

+1 once the null thing is fixed

> Add a new interface for retrieving FS and FC Statistics
> ---
>
> Key: HADOOP-13065
> URL: https://issues.apache.org/jira/browse/HADOOP-13065
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Ram Venkatesh
>Assignee: Mingliang Liu
> Attachments: HADOOP-13065-007.patch, HADOOP-13065.008.patch, 
> HADOOP-13065.009.patch, HADOOP-13065.010.patch, HADOOP-13065.011.patch, 
> HADOOP-13065.012.patch, HDFS-10175.000.patch, HDFS-10175.001.patch, 
> HDFS-10175.002.patch, HDFS-10175.003.patch, HDFS-10175.004.patch, 
> HDFS-10175.005.patch, HDFS-10175.006.patch, TestStatisticsOverhead.java
>
>
> Currently FileSystem.Statistics exposes the following statistics:
> BytesRead
> BytesWritten
> ReadOps
> LargeReadOps
> WriteOps
> These are in turn exposed as job counters by MapReduce and other frameworks. 
> There is logic within DfsClient to map operations to these counters that can 
> be confusing, for instance, mkdirs counts as a writeOp.
> Proposed enhancement:
> Add a statistic for each DfsClient operation including create, append, 
> createSymlink, delete, exists, mkdirs, rename and expose them as new 
> properties on the Statistics object. The operation-specific counters can be 
> used for analyzing the load imposed by a particular job on HDFS. 
> For example, we can use them to identify jobs that end up creating a large 
> number of files.
> Once this information is available in the Statistics object, the app 
> frameworks like MapReduce can expose them as additional counters to be 
> aggregated and recorded as part of job summary.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13065) Add a new interface for retrieving FS and FC Statistics

2016-05-10 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15278833#comment-15278833
 ] 

Colin Patrick McCabe commented on HADOOP-13065:
---

Thanks, [~liuml07].  {{DFSOpsCountStatistics}} is a nice implementation.  It's 
nice to have this for webhdfs as well.

{code}
156   @Override
157   public Long getLong(String key) {
158     final OpType type = OpType.fromSymbol(key);
159     return type == null ? 0L : opsCount.get(type).get();
160   }
{code}
I think this should return null in the case where type == null, right?  
Indicating that there is no such statistic.

{code}
159     storageStatistics = (DFSOpsCountStatistics) GlobalStorageStatistics.INSTANCE
160         .put(DFSOpsCountStatistics.NAME,
161             new StorageStatisticsProvider() {
162               @Override
163               public StorageStatistics provide() {
164                 return new DFSOpsCountStatistics();
165               }
166             });
{code}
Hmm, I wonder if these StorageStatistics objects should be per-FS-instance 
rather than per-class?  I guess let's do that in a follow-on, though, after 
this gets committed.

+1 once the null thing is fixed

> Add a new interface for retrieving FS and FC Statistics
> ---
>
> Key: HADOOP-13065
> URL: https://issues.apache.org/jira/browse/HADOOP-13065
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Ram Venkatesh
>Assignee: Mingliang Liu
> Attachments: HADOOP-13065-007.patch, HADOOP-13065.008.patch, 
> HADOOP-13065.009.patch, HADOOP-13065.010.patch, HADOOP-13065.011.patch, 
> HADOOP-13065.012.patch, HDFS-10175.000.patch, HDFS-10175.001.patch, 
> HDFS-10175.002.patch, HDFS-10175.003.patch, HDFS-10175.004.patch, 
> HDFS-10175.005.patch, HDFS-10175.006.patch, TestStatisticsOverhead.java
>
>
> Currently FileSystem.Statistics exposes the following statistics:
> BytesRead
> BytesWritten
> ReadOps
> LargeReadOps
> WriteOps
> These are in turn exposed as job counters by MapReduce and other frameworks. 
> There is logic within DfsClient to map operations to these counters that can 
> be confusing, for instance, mkdirs counts as a writeOp.
> Proposed enhancement:
> Add a statistic for each DfsClient operation including create, append, 
> createSymlink, delete, exists, mkdirs, rename and expose them as new 
> properties on the Statistics object. The operation-specific counters can be 
> used for analyzing the load imposed by a particular job on HDFS. 
> For example, we can use them to identify jobs that end up creating a large 
> number of files.
> Once this information is available in the Statistics object, the app 
> frameworks like MapReduce can expose them as additional counters to be 
> aggregated and recorded as part of job summary.
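> A sketch of how a framework could then read these counters through the new 
> interface (the statistics name below is illustrative):
> {code}
> StorageStatistics stats =
>     GlobalStorageStatistics.INSTANCE.get("DFSOpsCountStatistics");
> if (stats != null) {                      // null: no such statistics source
>   Iterator<StorageStatistics.LongStatistic> it = stats.getLongStatistics();
>   while (it.hasNext()) {
>     StorageStatistics.LongStatistic s = it.next();
>     System.out.println(s.getName() + " = " + s.getValue()); // e.g. mkdirs = 3
>   }
> }
> {code}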



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13118) Fix IOUtils#cleanup and IOUtils#closeStream javadoc

2016-05-10 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15278809#comment-15278809
 ] 

Wei-Chiu Chuang commented on HADOOP-13118:
--

Thanks [~ajisakaa] for the quick review and commit!

> Fix IOUtils#cleanup and IOUtils#closeStream javadoc
> ---
>
> Key: HADOOP-13118
> URL: https://issues.apache.org/jira/browse/HADOOP-13118
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Trivial
>  Labels: supportability
> Fix For: 2.9.0
>
> Attachments: HADOOP-13118.001.patch
>
>
> HADOOP-7256 ignored all {{Throwable}}s in IOUtils#cleanup but did not update 
> its javadoc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13028) add low level counter metrics for S3A; use in read performance tests

2016-05-10 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15278800#comment-15278800
 ] 

Colin Patrick McCabe commented on HADOOP-13028:
---

bq. Patrick: regarding fs.s3a.readahead.range versus calling it 
fs.s3a.readahead.default, I think "default" could be a bit confusing too. How 
about I make it clear that if setReadahead() is set, then it supersedes any 
previous value?

Sure.

bq. I absolutely need that printing in there, otherwise the value of this patch 
is significantly reduced. If you want me to add a line like "WARNING: UNSTABLE" 
or something to that string value, I'm happy to do so. Or the output is 
published in a way that is deliberately hard to parse by machine but which we 
humans can read. But without that information, we can't so easily tell which

Perhaps I'm missing something, but why not just do this in 
{{S3AInstrumentation#InputStreamStatistics#toString}}?  I don't see why this is 
"absolutely needed" in {{S3AInputStream#toString}}.

> add low level counter metrics for S3A; use in read performance tests
> 
>
> Key: HADOOP-13028
> URL: https://issues.apache.org/jira/browse/HADOOP-13028
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, metrics
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13028-001.patch, HADOOP-13028-002.patch, 
> HADOOP-13028-004.patch, HADOOP-13028-005.patch, HADOOP-13028-006.patch, 
> HADOOP-13028-007.patch, HADOOP-13028-008.patch, HADOOP-13028-009.patch, 
> HADOOP-13028-branch-2-008.patch, HADOOP-13028-branch-2-009.patch, 
> HADOOP-13028-branch-2-010.patch, HADOOP-13028-branch-2-011.patch, 
> org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance-output.txt, 
> org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance-output.txt
>
>
> against S3 (and other object stores), opening connections can be expensive, 
> closing connections may be expensive (a sign of a regression). 
> S3A FS and individual input streams should have counters of the # of 
> open/close/failure+reconnect operations, timers of how long things take. This 
> can be used downstream to measure efficiency of the code (how often 
> connections are being made), connection reliability, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12847) hadoop daemonlog should support https and SPNEGO for Kerberized cluster

2016-05-10 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15278784#comment-15278784
 ] 

Wei-Chiu Chuang commented on HADOOP-12847:
--

FYI: I tested the latest code on a CDH 5.7 cluster and it works as expected.

> hadoop daemonlog should support https and SPNEGO for Kerberized cluster
> ---
>
> Key: HADOOP-12847
> URL: https://issues.apache.org/jira/browse/HADOOP-12847
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12847.001.patch, HADOOP-12847.002.patch, 
> HADOOP-12847.003.patch, HADOOP-12847.004.patch
>
>
> {{hadoop daemonlog}} is a simple, yet useful tool for debugging.
> However, it does not support https, nor does it support a Kerberized Hadoop 
> cluster.
> Using {{AuthenticatedURL}}, it will be able to support SPNEGO negotiation 
> with a Kerberized NameNode web UI. It will also fall back to simple 
> authentication if the cluster is not Kerberized.
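> The client-side usage is roughly the following (sketch; host, port and log 
> name are placeholders):
> {code}
> AuthenticatedURL.Token token = new AuthenticatedURL.Token();
> URL url = new URL("https://namenode.example.com:50470/logLevel"
>     + "?log=org.apache.hadoop.hdfs&level=DEBUG");
> HttpURLConnection conn = new AuthenticatedURL().openConnection(url, token);
> // SPNEGO negotiation happens inside openConnection(); against a
> // non-Kerberized cluster the authenticator falls back to simple auth.
> System.out.println(conn.getResponseCode() + " " + conn.getResponseMessage());
> {code}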



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12847) hadoop daemonlog should support https and SPNEGO for Kerberized cluster

2016-05-10 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12847:
-
Attachment: HADOOP-12847.004.patch

Rev04: fixed two bugs: print usage if the command has no arguments, or if the 
first argument is neither {{-getlevel}} nor {{-setlevel}}.
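The checks themselves are straightforward (sketch; {{printUsage()}} stands in 
for the existing usage helper):
{code}
if (args.length == 0
    || (!"-getlevel".equals(args[0]) && !"-setlevel".equals(args[0]))) {
  printUsage();   // no arguments, or an unrecognized first argument
  return -1;
}
{code}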

> hadoop daemonlog should support https and SPNEGO for Kerberized cluster
> ---
>
> Key: HADOOP-12847
> URL: https://issues.apache.org/jira/browse/HADOOP-12847
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12847.001.patch, HADOOP-12847.002.patch, 
> HADOOP-12847.003.patch, HADOOP-12847.004.patch
>
>
> {{hadoop daemonlog}} is a simple, yet useful tool for debugging.
> However, it does not support https, nor does it support a Kerberized Hadoop 
> cluster.
> Using {{AuthenticatedURL}}, it will be able to support SPNEGO negotiation 
> with a Kerberized NameNode web UI. It will also fall back to simple 
> authentication if the cluster is not Kerberized.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12563) Updated utility to create/modify token files

2016-05-10 Thread Hitesh Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15278775#comment-15278775
 ] 

Hitesh Shah commented on HADOOP-12563:
--

[~raviprak] Mind updating the fix versions to denote which version of 2.x this 
commit went into?

> Updated utility to create/modify token files
> 
>
> Key: HADOOP-12563
> URL: https://issues.apache.org/jira/browse/HADOOP-12563
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Matthew Paduano
> Fix For: 3.0.0
>
> Attachments: HADOOP-12563.01.patch, HADOOP-12563.02.patch, 
> HADOOP-12563.03.patch, HADOOP-12563.04.patch, HADOOP-12563.05.patch, 
> HADOOP-12563.06.patch, HADOOP-12563.07.patch, HADOOP-12563.07.patch, 
> HADOOP-12563.08.patch, HADOOP-12563.09.patch, HADOOP-12563.10.patch, 
> HADOOP-12563.11.patch, HADOOP-12563.12.patch, HADOOP-12563.13.patch, 
> HADOOP-12563.14.patch, HADOOP-12563.15.patch, HADOOP-12563.16.patch, 
> dtutil-test-out, example_dtutil_commands_and_output.txt, 
> generalized_token_case.pdf
>
>
> hdfs fetchdt is missing some critical features and is geared almost 
> exclusively towards HDFS operations.  Additionally, the token files that are 
> created use Java serializations which are hard/impossible to deal with in 
> other languages. It should be replaced with a better utility in common that 
> can read/write protobuf-based token files, has enough flexibility to be used 
> with other services, and offers key functionality such as append and rename. 
> The old version file format should still be supported for backward 
> compatibility, but will be effectively deprecated.
> A follow-on JIRA will deprecate fetchdt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-10980) TestActiveStandbyElector fails occasionally in trunk

2016-05-10 Thread Eric Badger (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated HADOOP-10980:
-
Status: Patch Available  (was: Reopened)

> TestActiveStandbyElector fails occasionally in trunk
> 
>
> Key: HADOOP-10980
> URL: https://issues.apache.org/jira/browse/HADOOP-10980
> Project: Hadoop Common
>  Issue Type: Test
>Affects Versions: 3.0.0
>Reporter: Ted Yu
>Assignee: Eric Badger
>Priority: Minor
> Attachments: HADOOP-10980.001.patch
>
>
> From https://builds.apache.org/job/Hadoop-Common-trunk/1211/consoleFull :
> {code}
> Running org.apache.hadoop.ha.TestActiveStandbyElector
> Tests run: 23, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.7 sec <<< 
> FAILURE! - in org.apache.hadoop.ha.TestActiveStandbyElector
> testWithoutZKServer(org.apache.hadoop.ha.TestActiveStandbyElector)  Time 
> elapsed: 0.051 sec  <<< FAILURE!
> java.lang.AssertionError: Did not throw zookeeper connection loss exceptions!
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.ha.TestActiveStandbyElector.testWithoutZKServer(TestActiveStandbyElector.java:722)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-10980) TestActiveStandbyElector fails occasionally in trunk

2016-05-10 Thread Eric Badger (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated HADOOP-10980:
-
Attachment: HADOOP-10980.001.patch

Attaching a patch to change the default port, so that we do not connect to a 
ZooKeeper instance that happens to be running by coincidence. 

> TestActiveStandbyElector fails occasionally in trunk
> 
>
> Key: HADOOP-10980
> URL: https://issues.apache.org/jira/browse/HADOOP-10980
> Project: Hadoop Common
>  Issue Type: Test
>Affects Versions: 3.0.0
>Reporter: Ted Yu
>Assignee: Eric Badger
>Priority: Minor
> Attachments: HADOOP-10980.001.patch
>
>
> From https://builds.apache.org/job/Hadoop-Common-trunk/1211/consoleFull :
> {code}
> Running org.apache.hadoop.ha.TestActiveStandbyElector
> Tests run: 23, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.7 sec <<< 
> FAILURE! - in org.apache.hadoop.ha.TestActiveStandbyElector
> testWithoutZKServer(org.apache.hadoop.ha.TestActiveStandbyElector)  Time 
> elapsed: 0.051 sec  <<< FAILURE!
> java.lang.AssertionError: Did not throw zookeeper connection loss exceptions!
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.ha.TestActiveStandbyElector.testWithoutZKServer(TestActiveStandbyElector.java:722)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-10980) TestActiveStandbyElector fails occasionally in trunk

2016-05-10 Thread Eric Badger (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger reopened HADOOP-10980:
--
  Assignee: Eric Badger

This test failed for me locally when I was running a separate ZooKeeper 
instance. If we specify a port number, as [~ste...@apache.org] suggested, the 
test passes. For example, we can change the port to 22, since that will likely 
only ever be used by SSH. 

> TestActiveStandbyElector fails occasionally in trunk
> 
>
> Key: HADOOP-10980
> URL: https://issues.apache.org/jira/browse/HADOOP-10980
> Project: Hadoop Common
>  Issue Type: Test
>Affects Versions: 3.0.0
>Reporter: Ted Yu
>Assignee: Eric Badger
>Priority: Minor
>
> From https://builds.apache.org/job/Hadoop-Common-trunk/1211/consoleFull :
> {code}
> Running org.apache.hadoop.ha.TestActiveStandbyElector
> Tests run: 23, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.7 sec <<< 
> FAILURE! - in org.apache.hadoop.ha.TestActiveStandbyElector
> testWithoutZKServer(org.apache.hadoop.ha.TestActiveStandbyElector)  Time 
> elapsed: 0.051 sec  <<< FAILURE!
> java.lang.AssertionError: Did not throw zookeeper connection loss exceptions!
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.ha.TestActiveStandbyElector.testWithoutZKServer(TestActiveStandbyElector.java:722)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13119) Web UI authorization error accessing /logs/ when Kerberos

2016-05-10 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15278715#comment-15278715
 ] 

Daniel Templeton commented on HADOOP-13119:
---

I am able to replicate the issue on a secure cluster.

Should this JIRA move to the HDFS project since it appears to be specifically a 
namenode UI issue?

> Web UI authorization error accessing /logs/ when Kerberos
> -
>
> Key: HADOOP-13119
> URL: https://issues.apache.org/jira/browse/HADOOP-13119
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.4
>Reporter: Jeffrey E  Rodriguez
>
> Using Hadoop in secure mode:
> log in as a KDC user and kinit,
> start Firefox with Kerberos enabled,
> access http://localhost:50070/logs/,
> and get 403 authorization errors.
> Only the hdfs user can access the logs; as a user, I would expect to be able 
> to follow the web interface's logs link.
> Same results when using curl:
> curl -v  --negotiate -u tester:  http://localhost:50070/logs/
>  HTTP/1.1 403 User tester is unauthorized to access this page.
> So:
> 1. either don't show the links if only the hdfs user is able to access them,
> 2. or provide a mechanism to add users to the web application realm.
> 3. Note that we pass authentication, so the issue is authorization to 
> /logs/.
> I suspect the /logs/ path is secured in the web descriptor, so by default 
> users don't have access to secure paths.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13022) S3 MD5 check fails on Server Side Encryption-KMS with AWS and default key is used

2016-05-10 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13022:

Summary: S3 MD5 check fails on Server Side Encryption-KMS with AWS and 
default key is used  (was: S3 MD5 check fails on Server Side Encryption with 
AWS and default key is used)

> S3 MD5 check fails on Server Side Encryption-KMS with AWS and default key is 
> used
> -
>
> Key: HADOOP-13022
> URL: https://issues.apache.org/jira/browse/HADOOP-13022
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Leonardo Contreras
>
> When server-side encryption with the "aws:kms" value and no custom key is 
> used in the S3A filesystem, the AWSClient fails when verifying the MD5:
> {noformat}
> Exception in thread "main" com.amazonaws.AmazonClientException: Unable to 
> verify integrity of data upload.  Client calculated content hash (contentMD5: 
> 1B2M2Y8AsgTpgAmY7PhCfg== in base 64) didn't match hash (etag: 
> c29fcc646e17c348bce9cca8f9d205f5 in hex) calculated by Amazon S3.  You may 
> need to delete the data stored in Amazon S3. (metadata.contentMD5: null, 
> md5DigestStream: 
> com.amazonaws.services.s3.internal.MD5DigestCalculatingInputStream@65d9e72a, 
> bucketName: abuse-messages-nonprod, key: 
> venus/raw_events/checkpoint/825eb6aa-543d-46b1-801f-42de9dbc1610/)
>   at 
> com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1492)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.createEmptyObject(S3AFileSystem.java:1295)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.createFakeDirectory(S3AFileSystem.java:1272)
>   at org.apache.hadoop.fs.s3a.S3AFileSystem.mkdirs(S3AFileSystem.java:969)
>   at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1888)
>   at 
> org.apache.spark.SparkContext$$anonfun$setCheckpointDir$2.apply(SparkContext.scala:2077)
>   at 
> org.apache.spark.SparkContext$$anonfun$setCheckpointDir$2.apply(SparkContext.scala:2074)
>   at scala.Option.map(Option.scala:145)
>   at 
> org.apache.spark.SparkContext.setCheckpointDir(SparkContext.scala:2074)
>   at 
> org.apache.spark.streaming.StreamingContext.checkpoint(StreamingContext.scala:237)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13123) Permit the default hadoop delegation token file format to be configurable

2016-05-10 Thread Matthew Paduano (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthew Paduano updated HADOOP-13123:
-
Attachment: HADOOP-13123.01.patch

In case it helps anyone, I attached a patch that permits one to select the 
default behavior in writeTokenStorageToStream via a config prop. Feel free to 
commit, ignore, or use it.
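The shape of the change is roughly this (sketch; the property name and the 
two helper methods are illustrative, not necessarily what the patch uses):
{code}
public static final String DEFAULT_FORMAT_KEY =
    "hadoop.security.credentials.default-format";

public void writeTokenStorageToStream(DataOutputStream os, Configuration conf)
    throws IOException {
  String format = conf.get(DEFAULT_FORMAT_KEY, "protobuf");
  if ("java".equals(format)) {
    writeLegacyOutputStream(os);     // FORMAT_JAVA: the pre-dtutil format
  } else {
    writeProtobufOutputStream(os);   // FORMAT_PB: the new default
  }
}
{code}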

> Permit the default hadoop delegation token file format to be configurable
> -
>
> Key: HADOOP-13123
> URL: https://issues.apache.org/jira/browse/HADOOP-13123
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Matthew Paduano
>Assignee: Matthew Paduano
> Attachments: HADOOP-13123.01.patch
>
>
> If one environment updates to using the new dtutil code and accompanying 
> Credentials code, there is a backward compatibility issue with the default 
> file format being JAVA.  Older clients need to be updated to ask for a file 
> in the legacy format (FORMAT_JAVA).  
> As an aid to users in this trap, we can add a configuration property to set 
> the default file format.  When set to FORMAT_JAVA, the new server code will 
> serve up legacy files without being asked.  The default value for this 
> property will remain FORMAT_PB.  But affected users can add this config 
> option to the services using the newer code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13118) Fix IOUtils#cleanup and IOUtils#closeStream javadoc

2016-05-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15278691#comment-15278691
 ] 

Hudson commented on HADOOP-13118:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9740 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9740/])
HADOOP-13118. Fix IOUtils#cleanup and IOUtils#closeStream javadoc. (aajisaka: 
rev 0f0c6415af409d213e7a132390a850c1251b92ef)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/IOUtils.java


> Fix IOUtils#cleanup and IOUtils#closeStream javadoc
> ---
>
> Key: HADOOP-13118
> URL: https://issues.apache.org/jira/browse/HADOOP-13118
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Trivial
>  Labels: supportability
> Fix For: 2.9.0
>
> Attachments: HADOOP-13118.001.patch
>
>
> HADOOP-7256 ignored all {{Throwable}}s in IOUtils#cleanup but did not update 
> its javadoc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13126) Add Brotli compression codec

2016-05-10 Thread Ryan Blue (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15278673#comment-15278673
 ] 

Ryan Blue commented on HADOOP-13126:


The results above show the comparison with Snappy. The file is less than half 
the size and compression took about the same amount of time. Comparing to LZ4 
would be interesting. It isn't supported by Parquet so it's a bit harder for me 
to drop into my test case.

> Add Brotli compression codec
> 
>
> Key: HADOOP-13126
> URL: https://issues.apache.org/jira/browse/HADOOP-13126
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Reporter: Ryan Blue
>Assignee: Ryan Blue
> Attachments: HADOOP-13126.1.patch
>
>
> I've been testing [Brotli|https://github.com/google/brotli/], a new 
> compression library based on LZ77 from Google. Google's [brotli 
> benchmarks|https://cran.r-project.org/web/packages/brotli/vignettes/brotli-2015-09-22.pdf]
>  look really good and we're also seeing a significant improvement in 
> compression size, compression speed, or both.
> {code:title=Brotli preliminary test results}
> [blue@work Downloads]$ time parquet from test.parquet -o test.snappy.parquet 
> --compression-codec snappy --overwrite  
> real1m17.106s
> user1m30.804s
> sys 0m4.404s
> [blue@work Downloads]$ time parquet from test.parquet -o test.br.parquet 
> --compression-codec brotli --overwrite 
> real1m16.640s
> user1m24.244s
> sys 0m6.412s
> [blue@work Downloads]$ time parquet from test.parquet -o test.gz.parquet 
> --compression-codec gzip --overwrite
> real3m39.496s
> user3m48.736s
> sys 0m3.880s
> [blue@work Downloads]$ ls -l
> -rw-r--r-- 1 blue blue 1068821936 May 10 11:06 test.br.parquet
> -rw-r--r-- 1 blue blue 1421601880 May 10 11:10 test.gz.parquet
> -rw-r--r-- 1 blue blue 2265950833 May 10 10:30 test.snappy.parquet
> {code}
> Brotli, at quality 1, is as fast as snappy and ends up smaller than gzip-9. 
> Another test resulted in a slightly larger Brotli file than gzip produced, 
> but Brotli was 4x faster. I'd like to get this compression codec into Hadoop.
> [Brotli is licensed with the MIT 
> license|https://github.com/google/brotli/blob/master/LICENSE], and the [JNI 
> library jbrotli is 
> ALv2|https://github.com/MeteoGroup/jbrotli/blob/master/LICENSE].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13126) Add Brotli compression codec

2016-05-10 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15278666#comment-15278666
 ] 

Tsuyoshi Ozawa commented on HADOOP-13126:
-

[~b...@cloudera.com] Thank you for the suggestion. Should we compare with the 
snappy or lz4 codecs instead of gzip, since these codecs are the de facto 
standard of the Hadoop stack? 

> Add Brotli compression codec
> 
>
> Key: HADOOP-13126
> URL: https://issues.apache.org/jira/browse/HADOOP-13126
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Reporter: Ryan Blue
>Assignee: Ryan Blue
> Attachments: HADOOP-13126.1.patch
>
>
> I've been testing [Brotli|https://github.com/google/brotli/], a new 
> compression library based on LZ77 from Google. Google's [brotli 
> benchmarks|https://cran.r-project.org/web/packages/brotli/vignettes/brotli-2015-09-22.pdf]
>  look really good and we're also seeing a significant improvement in 
> compression size, compression speed, or both.
> {code:title=Brotli preliminary test results}
> [blue@work Downloads]$ time parquet from test.parquet -o test.snappy.parquet 
> --compression-codec snappy --overwrite  
> real1m17.106s
> user1m30.804s
> sys 0m4.404s
> [blue@work Downloads]$ time parquet from test.parquet -o test.br.parquet 
> --compression-codec brotli --overwrite 
> real1m16.640s
> user1m24.244s
> sys 0m6.412s
> [blue@work Downloads]$ time parquet from test.parquet -o test.gz.parquet 
> --compression-codec gzip --overwrite
> real3m39.496s
> user3m48.736s
> sys 0m3.880s
> [blue@work Downloads]$ ls -l
> -rw-r--r-- 1 blue blue 1068821936 May 10 11:06 test.br.parquet
> -rw-r--r-- 1 blue blue 1421601880 May 10 11:10 test.gz.parquet
> -rw-r--r-- 1 blue blue 2265950833 May 10 10:30 test.snappy.parquet
> {code}
> Brotli, at quality 1, is as fast as snappy and ends up smaller than gzip-9. 
> Another test resulted in a slightly larger Brotli file than gzip produced, 
> but Brotli was 4x faster. I'd like to get this compression codec into Hadoop.
> [Brotli is licensed with the MIT 
> license|https://github.com/google/brotli/blob/master/LICENSE], and the [JNI 
> library jbrotli is 
> ALv2|https://github.com/MeteoGroup/jbrotli/blob/master/LICENSE].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13126) Add Brotli compression codec

2016-05-10 Thread Ryan Blue (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15278659#comment-15278659
 ] 

Ryan Blue commented on HADOOP-13126:


[~andrew.wang], you guys are probably interested in this.

> Add Brotli compression codec
> 
>
> Key: HADOOP-13126
> URL: https://issues.apache.org/jira/browse/HADOOP-13126
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Reporter: Ryan Blue
>Assignee: Ryan Blue
> Attachments: HADOOP-13126.1.patch
>
>
> I've been testing [Brotli|https://github.com/google/brotli/], a new 
> compression library based on LZ77 from Google. Google's [brotli 
> benchmarks|https://cran.r-project.org/web/packages/brotli/vignettes/brotli-2015-09-22.pdf]
>  look really good and we're also seeing a significant improvement in 
> compression size, compression speed, or both.
> {code:title=Brotli preliminary test results}
> [blue@work Downloads]$ time parquet from test.parquet -o test.snappy.parquet 
> --compression-codec snappy --overwrite  
> real1m17.106s
> user1m30.804s
> sys 0m4.404s
> [blue@work Downloads]$ time parquet from test.parquet -o test.br.parquet 
> --compression-codec brotli --overwrite 
> real1m16.640s
> user1m24.244s
> sys 0m6.412s
> [blue@work Downloads]$ time parquet from test.parquet -o test.gz.parquet 
> --compression-codec gzip --overwrite
> real3m39.496s
> user3m48.736s
> sys 0m3.880s
> [blue@work Downloads]$ ls -l
> -rw-r--r-- 1 blue blue 1068821936 May 10 11:06 test.br.parquet
> -rw-r--r-- 1 blue blue 1421601880 May 10 11:10 test.gz.parquet
> -rw-r--r-- 1 blue blue 2265950833 May 10 10:30 test.snappy.parquet
> {code}
> Brotli, at quality 1, is as fast as snappy and ends up smaller than gzip-9. 
> Another test resulted in a slightly larger Brotli file than gzip produced, 
> but Brotli was 4x faster. I'd like to get this compression codec into Hadoop.
> [Brotli is licensed with the MIT 
> license|https://github.com/google/brotli/blob/master/LICENSE], and the [JNI 
> library jbrotli is 
> ALv2|https://github.com/MeteoGroup/jbrotli/blob/master/LICENSE].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13126) Add Brotli compression codec

2016-05-10 Thread Ryan Blue (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15278653#comment-15278653
 ] 

Ryan Blue commented on HADOOP-13126:


[~marki], could you review this patch also?

> Add Brotli compression codec
> 
>
> Key: HADOOP-13126
> URL: https://issues.apache.org/jira/browse/HADOOP-13126
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Reporter: Ryan Blue
>Assignee: Ryan Blue
> Attachments: HADOOP-13126.1.patch
>
>
> I've been testing [Brotli|https://github.com/google/brotli/], a new 
> compression library based on LZ77 from Google. Google's [brotli 
> benchmarks|https://cran.r-project.org/web/packages/brotli/vignettes/brotli-2015-09-22.pdf]
>  look really good and we're also seeing a significant improvement in 
> compression size, compression speed, or both.
> {code:title=Brotli preliminary test results}
> [blue@work Downloads]$ time parquet from test.parquet -o test.snappy.parquet 
> --compression-codec snappy --overwrite  
> real1m17.106s
> user1m30.804s
> sys 0m4.404s
> [blue@work Downloads]$ time parquet from test.parquet -o test.br.parquet 
> --compression-codec brotli --overwrite 
> real1m16.640s
> user1m24.244s
> sys 0m6.412s
> [blue@work Downloads]$ time parquet from test.parquet -o test.gz.parquet 
> --compression-codec gzip --overwrite
> real3m39.496s
> user3m48.736s
> sys 0m3.880s
> [blue@work Downloads]$ ls -l
> -rw-r--r-- 1 blue blue 1068821936 May 10 11:06 test.br.parquet
> -rw-r--r-- 1 blue blue 1421601880 May 10 11:10 test.gz.parquet
> -rw-r--r-- 1 blue blue 2265950833 May 10 10:30 test.snappy.parquet
> {code}
> Brotli, at quality 1, is as fast as snappy and ends up smaller than gzip-9. 
> Another test resulted in a slightly larger Brotli file than gzip produced, 
> but Brotli was 4x faster. I'd like to get this compression codec into Hadoop.
> [Brotli is licensed with the MIT 
> license|https://github.com/google/brotli/blob/master/LICENSE], and the [JNI 
> library jbrotli is 
> ALv2|https://github.com/MeteoGroup/jbrotli/blob/master/LICENSE].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13126) Add Brotli compression codec

2016-05-10 Thread Ryan Blue (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Blue updated HADOOP-13126:
---
Attachment: (was: HADOOP-13126.1.patch)

> Add Brotli compression codec
> 
>
> Key: HADOOP-13126
> URL: https://issues.apache.org/jira/browse/HADOOP-13126
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Reporter: Ryan Blue
>Assignee: Ryan Blue
> Attachments: HADOOP-13126.1.patch
>
>
> I've been testing [Brotli|https://github.com/google/brotli/], a new 
> compression library based on LZ77 from Google. Google's [brotli 
> benchmarks|https://cran.r-project.org/web/packages/brotli/vignettes/brotli-2015-09-22.pdf]
>  look really good and we're also seeing a significant improvement in 
> compression size, compression speed, or both.
> {code:title=Brotli preliminary test results}
> [blue@work Downloads]$ time parquet from test.parquet -o test.snappy.parquet 
> --compression-codec snappy --overwrite  
> real1m17.106s
> user1m30.804s
> sys 0m4.404s
> [blue@work Downloads]$ time parquet from test.parquet -o test.br.parquet 
> --compression-codec brotli --overwrite 
> real1m16.640s
> user1m24.244s
> sys 0m6.412s
> [blue@work Downloads]$ time parquet from test.parquet -o test.gz.parquet 
> --compression-codec gzip --overwrite
> real3m39.496s
> user3m48.736s
> sys 0m3.880s
> [blue@work Downloads]$ ls -l
> -rw-r--r-- 1 blue blue 1068821936 May 10 11:06 test.br.parquet
> -rw-r--r-- 1 blue blue 1421601880 May 10 11:10 test.gz.parquet
> -rw-r--r-- 1 blue blue 2265950833 May 10 10:30 test.snappy.parquet
> {code}
> Brotli, at quality 1, is as fast as snappy and ends up smaller than gzip-9. 
> Another test resulted in a slightly larger Brotli file than gzip produced, 
> but Brotli was 4x faster. I'd like to get this compression codec into Hadoop.
> [Brotli is licensed with the MIT 
> license|https://github.com/google/brotli/blob/master/LICENSE], and the [JNI 
> library jbrotli is 
> ALv2|https://github.com/MeteoGroup/jbrotli/blob/master/LICENSE].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13126) Add Brotli compression codec

2016-05-10 Thread Ryan Blue (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Blue updated HADOOP-13126:
---
Attachment: HADOOP-13126.1.patch

> Add Brotli compression codec
> 
>
> Key: HADOOP-13126
> URL: https://issues.apache.org/jira/browse/HADOOP-13126
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Reporter: Ryan Blue
>Assignee: Ryan Blue
> Attachments: HADOOP-13126.1.patch
>
>
> I've been testing [Brotli|https://github.com/google/brotli/], a new 
> compression library based on LZ77 from Google. Google's [brotli 
> benchmarks|https://cran.r-project.org/web/packages/brotli/vignettes/brotli-2015-09-22.pdf]
>  look really good and we're also seeing a significant improvement in 
> compression size, compression speed, or both.
> {code:title=Brotli preliminary test results}
> [blue@work Downloads]$ time parquet from test.parquet -o test.snappy.parquet 
> --compression-codec snappy --overwrite  
> real1m17.106s
> user1m30.804s
> sys 0m4.404s
> [blue@work Downloads]$ time parquet from test.parquet -o test.br.parquet 
> --compression-codec brotli --overwrite 
> real1m16.640s
> user1m24.244s
> sys 0m6.412s
> [blue@work Downloads]$ time parquet from test.parquet -o test.gz.parquet 
> --compression-codec gzip --overwrite
> real3m39.496s
> user3m48.736s
> sys 0m3.880s
> [blue@work Downloads]$ ls -l
> -rw-r--r-- 1 blue blue 1068821936 May 10 11:06 test.br.parquet
> -rw-r--r-- 1 blue blue 1421601880 May 10 11:10 test.gz.parquet
> -rw-r--r-- 1 blue blue 2265950833 May 10 10:30 test.snappy.parquet
> {code}
> Brotli, at quality 1, is as fast as snappy and ends up smaller than gzip-9. 
> Another test resulted in a slightly larger Brotli file than gzip produced, 
> but Brotli was 4x faster. I'd like to get this compression codec into Hadoop.
> [Brotli is licensed with the MIT 
> license|https://github.com/google/brotli/blob/master/LICENSE], and the [JNI 
> library jbrotli is 
> ALv2|https://github.com/MeteoGroup/jbrotli/blob/master/LICENSE].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12563) Updated utility to create/modify token files

2016-05-10 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15278619#comment-15278619
 ] 

Ravi Prakash commented on HADOOP-12563:
---

FWIW, I'm +1 on removing it from branch-2 and labeling it an incompatible fix. 
Sorry about the breakage, folks.

> Updated utility to create/modify token files
> 
>
> Key: HADOOP-12563
> URL: https://issues.apache.org/jira/browse/HADOOP-12563
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Matthew Paduano
> Fix For: 3.0.0
>
> Attachments: HADOOP-12563.01.patch, HADOOP-12563.02.patch, 
> HADOOP-12563.03.patch, HADOOP-12563.04.patch, HADOOP-12563.05.patch, 
> HADOOP-12563.06.patch, HADOOP-12563.07.patch, HADOOP-12563.07.patch, 
> HADOOP-12563.08.patch, HADOOP-12563.09.patch, HADOOP-12563.10.patch, 
> HADOOP-12563.11.patch, HADOOP-12563.12.patch, HADOOP-12563.13.patch, 
> HADOOP-12563.14.patch, HADOOP-12563.15.patch, HADOOP-12563.16.patch, 
> dtutil-test-out, example_dtutil_commands_and_output.txt, 
> generalized_token_case.pdf
>
>
> hdfs fetchdt is missing some critical features and is geared almost 
> exclusively towards HDFS operations.  Additionally, the token files that are 
> created use Java serializations which are hard/impossible to deal with in 
> other languages. It should be replaced with a better utility in common that 
> can read/write protobuf-based token files, has enough flexibility to be used 
> with other services, and offers key functionality such as append and rename. 
> The old version file format should still be supported for backward 
> compatibility, but will be effectively deprecated.
> A follow-on JIRA will deprecate fetchdt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13126) Add Brotli compression codec

2016-05-10 Thread Ryan Blue (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Blue updated HADOOP-13126:
---
Status: Patch Available  (was: Open)

> Add Brotli compression codec
> 
>
> Key: HADOOP-13126
> URL: https://issues.apache.org/jira/browse/HADOOP-13126
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Reporter: Ryan Blue
>Assignee: Ryan Blue
> Attachments: HADOOP-13126.1.patch
>
>
> I've been testing [Brotli|https://github.com/google/brotli/], a new 
> compression library based on LZ77 from Google. Google's [brotli 
> benchmarks|https://cran.r-project.org/web/packages/brotli/vignettes/brotli-2015-09-22.pdf]
>  look really good and we're also seeing a significant improvement in 
> compression size, compression speed, or both.
> {code:title=Brotli preliminary test results}
> [blue@work Downloads]$ time parquet from test.parquet -o test.snappy.parquet 
> --compression-codec snappy --overwrite  
> real    1m17.106s
> user    1m30.804s
> sys     0m4.404s
> [blue@work Downloads]$ time parquet from test.parquet -o test.br.parquet 
> --compression-codec brotli --overwrite 
> real    1m16.640s
> user    1m24.244s
> sys     0m6.412s
> [blue@work Downloads]$ time parquet from test.parquet -o test.gz.parquet 
> --compression-codec gzip --overwrite
> real    3m39.496s
> user    3m48.736s
> sys     0m3.880s
> [blue@work Downloads]$ ls -l
> -rw-r--r-- 1 blue blue 1068821936 May 10 11:06 test.br.parquet
> -rw-r--r-- 1 blue blue 1421601880 May 10 11:10 test.gz.parquet
> -rw-r--r-- 1 blue blue 2265950833 May 10 10:30 test.snappy.parquet
> {code}
> Brotli, at quality 1, is as fast as snappy and ends up smaller than gzip-9. 
> Another test resulted in a slightly larger Brotli file than gzip produced, 
> but Brotli was 4x faster. I'd like to get this compression codec into Hadoop.
> [Brotli is licensed with the MIT 
> license|https://github.com/google/brotli/blob/master/LICENSE], and the [JNI 
> library jbrotli is 
> ALv2|https://github.com/MeteoGroup/jbrotli/blob/master/LICENSE].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12847) hadoop daemonlog should support https and SPNEGO for Kerberized cluster

2016-05-10 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15278629#comment-15278629
 ] 

Wei-Chiu Chuang commented on HADOOP-12847:
--

Found a bug in the code when {{hadoop daemonlog}} has no parameters. I'll update 
the patch to check for that later.

> hadoop daemonlog should support https and SPNEGO for Kerberized cluster
> ---
>
> Key: HADOOP-12847
> URL: https://issues.apache.org/jira/browse/HADOOP-12847
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12847.001.patch, HADOOP-12847.002.patch, 
> HADOOP-12847.003.patch
>
>
> {{hadoop daemonlog}} is a simple yet useful tool for debugging.
> However, it does not support HTTPS, nor does it support a Kerberized Hadoop 
> cluster.
> Using {{AuthenticatedURL}}, it will be able to support SPNEGO negotiation 
> with a Kerberized NameNode web UI. It will also fall back to simple 
> authentication if the cluster is not Kerberized.
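
A minimal sketch of that request path, assuming the client keeps talking to 
the existing /logLevel servlet (the host, port, and logger below are made up):

{code:title=SPNEGO request sketch using AuthenticatedURL}
import java.net.HttpURLConnection;
import java.net.URL;

import org.apache.hadoop.security.authentication.client.AuthenticatedURL;

public class DaemonLogSpnegoSketch {
  public static void main(String[] args) throws Exception {
    // Hypothetical NameNode HTTPS address; /logLevel is the existing servlet.
    URL url = new URL("https://namenode.example.com:50470/logLevel"
        + "?log=org.apache.hadoop.hdfs&level=DEBUG");
    // AuthenticatedURL negotiates SPNEGO on a secure cluster and falls back
    // to pseudo/simple authentication otherwise.
    AuthenticatedURL.Token token = new AuthenticatedURL.Token();
    HttpURLConnection conn = new AuthenticatedURL().openConnection(url, token);
    System.out.println("HTTP " + conn.getResponseCode());
  }
}
{code}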



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13123) Permit the default hadoop delegation token file format to be configurable

2016-05-10 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15278623#comment-15278623
 ] 

Ravi Prakash commented on HADOOP-13123:
---

I'm +1 on reverting the change in branch-2, since it breaks rolling upgrades 
(and we can't do that in a minor version upgrade). Could someone else please 
also +1 this proposal, and I'll be happy to revert.

> Permit the default hadoop delegation token file format to be configurable
> -
>
> Key: HADOOP-13123
> URL: https://issues.apache.org/jira/browse/HADOOP-13123
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Matthew Paduano
>Assignee: Matthew Paduano
>
> If one environment updates to using the new dtutil code and accompanying 
> Credentials code, there is a backward compatibility issue with the default 
> file format being JAVA.  Older clients need to be updated to ask for a file 
> in the legacy format (FORMAT_JAVA).  
> As an aid to users in this trap, we can add a configuration property to set 
> the default file format.  When set to FORMAT_JAVA, the new server code will 
> serve up legacy files without being asked.  The default value for this 
> property will remain FORMAT_PB.  But affected users can add this config 
> option to the services using the newer code.
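
A sketch of what consuming such a property could look like; the property name 
below is a placeholder for illustration, since this JIRA has not fixed one yet:

{code:title=Hypothetical default-format override (property name assumed)}
import org.apache.hadoop.conf.Configuration;

public class TokenFormatSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Placeholder key: services stuck with older clients would set this to
    // FORMAT_JAVA so the legacy serialization is written by default.
    conf.set("hadoop.security.token.file.format", "FORMAT_JAVA");
    System.out.println(
        conf.get("hadoop.security.token.file.format", "FORMAT_PB"));
  }
}
{code}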



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13126) Add Brotli compression codec

2016-05-10 Thread Ryan Blue (JIRA)
Ryan Blue created HADOOP-13126:
--

 Summary: Add Brotli compression codec
 Key: HADOOP-13126
 URL: https://issues.apache.org/jira/browse/HADOOP-13126
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io
Reporter: Ryan Blue
Assignee: Ryan Blue


I've been testing [Brotli|https://github.com/google/brotli/], a new compression 
library based on LZ77 from Google. Google's [brotli 
benchmarks|https://cran.r-project.org/web/packages/brotli/vignettes/brotli-2015-09-22.pdf]
 look really good and we're also seeing a significant improvement in 
compression size, compression speed, or both.

{code:title=Brotli preliminary test results}
[blue@work Downloads]$ time parquet from test.parquet -o test.snappy.parquet 
--compression-codec snappy --overwrite  

real    1m17.106s
user    1m30.804s
sys     0m4.404s

[blue@work Downloads]$ time parquet from test.parquet -o test.br.parquet 
--compression-codec brotli --overwrite 

real    1m16.640s
user    1m24.244s
sys     0m6.412s

[blue@work Downloads]$ time parquet from test.parquet -o test.gz.parquet 
--compression-codec gzip --overwrite

real    3m39.496s
user    3m48.736s
sys     0m3.880s

[blue@work Downloads]$ ls -l
-rw-r--r-- 1 blue blue 1068821936 May 10 11:06 test.br.parquet
-rw-r--r-- 1 blue blue 1421601880 May 10 11:10 test.gz.parquet
-rw-r--r-- 1 blue blue 2265950833 May 10 10:30 test.snappy.parquet
{code}

Brotli, at quality 1, is as fast as snappy and ends up smaller than gzip-9. 
Another test resulted in a slightly larger Brotli file than gzip produced, but 
Brotli was 4x faster. I'd like to get this compression codec into Hadoop.

[Brotli is licensed with the MIT 
license|https://github.com/google/brotli/blob/master/LICENSE], and the [JNI 
library jbrotli is 
ALv2|https://github.com/MeteoGroup/jbrotli/blob/master/LICENSE].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12982) Document missing S3A and S3 properties

2016-05-10 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15278614#comment-15278614
 ] 

Wei-Chiu Chuang commented on HADOOP-12982:
--

Hi [~eddyxu] and [~steve_l], would you mind reviewing this patch again if you 
get a chance? Thank you!

> Document missing S3A and S3 properties
> --
>
> Key: HADOOP-12982
> URL: https://issues.apache.org/jira/browse/HADOOP-12982
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, fs/s3, tools
>Affects Versions: 2.8.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HADOOP-12982.001.patch, HADOOP-12982.002.patch, 
> HADOOP-12982.003.patch
>
>
> * S3: 
> ** {{fs.s3.buffer.dir}}, {{fs.s3.maxRetries}}, {{fs.s3.sleepTimeSeconds}}, 
> and {{fs.s3.block.size}} are not in the documentation
> ** Note that {{fs.s3.buffer.dir}}, {{fs.s3.maxRetries}}, 
> {{fs.s3.sleepTimeSeconds}} are also used by S3N.
> * S3A:
> ** {{fs.s3a.server-side-encryption-algorithm}} and {{fs.s3a.block.size}} are 
> missing in core-default.xml and the documentation.
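
For reference, a sketch of setting the listed properties programmatically; 
the values are illustrative defaults, not recommendations:

{code:title=Setting the listed properties (illustrative values)}
import org.apache.hadoop.conf.Configuration;

public class S3PropertiesSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.set("fs.s3.buffer.dir", "/tmp/hadoop-s3");       // also used by S3N
    conf.setInt("fs.s3.maxRetries", 4);
    conf.setInt("fs.s3.sleepTimeSeconds", 10);
    conf.setLong("fs.s3a.block.size", 32 * 1024 * 1024);  // 32 MB
    conf.set("fs.s3a.server-side-encryption-algorithm", "AES256");
  }
}
{code}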



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13125) FS Contract tests don't report FS initialization errors well

2016-05-10 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13125:

Attachment: HADOOP-13125-001.patch

A more helpful exception message.

> FS Contract tests don't report FS initialization errors well
> 
>
> Key: HADOOP-13125
> URL: https://issues.apache.org/jira/browse/HADOOP-13125
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13125-001.patch
>
>
> If the {{FileSystem.initialize()}} method of an FS fails with an 
> IllegalArgumentException, the FS contract tests assume this is related to the 
> URI syntax and raise a new exception 'invalid URI' + fsURI, retaining the 
> real cause only as error text in the nested exception.
> The exception text should be included in the message thrown, and the cause 
> changed to indicate that it is initialization related.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13125) FS Contract tests don't report FS initialization errors well

2016-05-10 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15278598#comment-15278598
 ] 

Steve Loughran commented on HADOOP-13125:
-

After
{code}
testReadNullBuffer(org.apache.hadoop.fs.contract.s3a.TestS3AContractSeek)  Time 
elapsed: 0.043 sec  <<< ERROR!
java.io.IOException: Unable to initialize filesystem s3a://tests3neu/: 
java.lang.IllegalArgumentException: unknown signer type: AES256
at 
org.apache.hadoop.fs.contract.AbstractBondedFSContract.init(AbstractBondedFSContract.java:76)
at 
org.apache.hadoop.fs.contract.AbstractFSContractTestBase.setup(AbstractFSContractTestBase.java:165)
at 
org.apache.hadoop.fs.contract.AbstractContractSeekTest.setup(AbstractContractSeekTest.java:56)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
Caused by: java.lang.IllegalArgumentException: unknown signer type: AES256
at com.amazonaws.auth.SignerFactory.createSigner(SignerFactory.java:118)
at 
com.amazonaws.auth.SignerFactory.getSignerByTypeAndService(SignerFactory.java:95)
at 
com.amazonaws.AmazonWebServiceClient.computeSignerByServiceRegion(AmazonWebServiceClient.java:321)
at 
com.amazonaws.AmazonWebServiceClient.computeSignerByURI(AmazonWebServiceClient.java:294)
at 
com.amazonaws.AmazonWebServiceClient.setEndpoint(AmazonWebServiceClient.java:170)
at 
com.amazonaws.services.s3.AmazonS3Client.setEndpoint(AmazonS3Client.java:519)
at 
com.amazonaws.services.s3.AmazonS3Client.init(AmazonS3Client.java:492)
at 
com.amazonaws.services.s3.AmazonS3Client.(AmazonS3Client.java:436)
at 
com.amazonaws.services.s3.AmazonS3Client.(AmazonS3Client.java:416)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.initAmazonS3Client(S3AFileSystem.java:325)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:221)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2785)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:99)
at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2822)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2804)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:381)
at 
org.apache.hadoop.fs.contract.AbstractBondedFSContract.init(AbstractBondedFSContract.java:72)
at 
org.apache.hadoop.fs.contract.AbstractFSContractTestBase.setup(AbstractFSContractTestBase.java:165)
at 
org.apache.hadoop.fs.contract.AbstractContractSeekTest.setup(AbstractContractSeekTest.java:56)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)

{code}

> FS Contract tests don't report FS initialization errors well
> 
>
> Key: HADOOP-13125
> URL: https://issues.apache.org/jira/browse/HADOOP-13125
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>
> If the {{FileSystem.initialize()}} method of an FS fails with an 
> IllegalArgumentException, the FS contract tests assume this is related to the 
> URI syntax and raise a new exception 'invalid URI' + fsURI, retaining the 
> real cause only as error text in the nested exception.
> The exception text should be included in the message thrown, and the cause 
> changed to indicate that it is initialization related.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org

[jira] [Commented] (HADOOP-13125) FS Contract tests don't report FS initialization errors well

2016-05-10 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15278597#comment-15278597
 ] 

Steve Loughran commented on HADOOP-13125:
-

Before

{code}
testDeleteNonEmptyDirRecursive(org.apache.hadoop.fs.contract.s3a.TestS3AContractDelete)
  Time elapsed: 0.03 sec  <<< ERROR!
java.io.IOException: Invalid URI s3a://tests/
at 
org.apache.hadoop.fs.contract.AbstractBondedFSContract.init(AbstractBondedFSContract.java:76)
at 
org.apache.hadoop.fs.contract.AbstractFSContractTestBase.setup(AbstractFSContractTestBase.java:165)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
Caused by: java.lang.IllegalArgumentException: unknown signer type: AES256
at com.amazonaws.auth.SignerFactory.createSigner(SignerFactory.java:118)
at 
com.amazonaws.auth.SignerFactory.getSignerByTypeAndService(SignerFactory.java:95)
at 
com.amazonaws.AmazonWebServiceClient.computeSignerByServiceRegion(AmazonWebServiceClient.java:321)
at 
com.amazonaws.AmazonWebServiceClient.computeSignerByURI(AmazonWebServiceClient.java:294)
at 
com.amazonaws.AmazonWebServiceClient.setEndpoint(AmazonWebServiceClient.java:170)
at 
com.amazonaws.services.s3.AmazonS3Client.setEndpoint(AmazonS3Client.java:519)
at 
com.amazonaws.services.s3.AmazonS3Client.init(AmazonS3Client.java:492)
at 
com.amazonaws.services.s3.AmazonS3Client.(AmazonS3Client.java:436)
at 
com.amazonaws.services.s3.AmazonS3Client.(AmazonS3Client.java:416)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.initAmazonS3Client(S3AFileSystem.java:325)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:221)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2785)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:99)
at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2822)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2804)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:381)
at 
org.apache.hadoop.fs.contract.AbstractBondedFSContract.init(AbstractBondedFSContract.java:72)
at 
org.apache.hadoop.fs.contract.AbstractFSContractTestBase.setup(AbstractFSContractTestBase.java:165)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
{code}

> FS Contract tests don't report FS initialization errors well
> 
>
> Key: HADOOP-13125
> URL: https://issues.apache.org/jira/browse/HADOOP-13125
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>
> If the {{FileSystem.initialize()}} method of an FS fails with an 
> IllegalArgumentException, the FS contract tests assume this is related to the 
> URI syntax and raise a new exception 'invalid URI' + fsURI, retaining the 
> real cause only as error text in the nested exception.
> The exception text should be included in the message thrown, and the cause 
> changed to indicate that it is initialization related.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org

[jira] [Created] (HADOOP-13125) FS Contract tests don't report FS initialization errors well

2016-05-10 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13125:
---

 Summary: FS Contract tests don't report FS initialization errors 
well
 Key: HADOOP-13125
 URL: https://issues.apache.org/jira/browse/HADOOP-13125
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.8.0
Reporter: Steve Loughran
Assignee: Steve Loughran
Priority: Minor


If the {{FileSystem.initialize()}} method of an FS fails with an 
IllegalArgumentException, the FS contract tests assume this is related to the 
URI syntax and raise a new exception 'invalid URI' + fsURI, retaining the real 
cause only as error text in the nested exception.

The exception text should be included in the message thrown, and the cause 
changed to indicate that it is initialization related.
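
A minimal sketch of the kind of wrapping described (the class and method names 
here are for illustration only), assuming the wrapper stays an IOException:

{code:title=Sketch of more informative initialization wrapping}
import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class InitWrappingSketch {
  static FileSystem bind(URI fsURI, Configuration conf) throws IOException {
    try {
      return FileSystem.get(fsURI, conf);
    } catch (IllegalArgumentException e) {
      // Surface the real cause's text instead of assuming a URI syntax problem.
      throw new IOException("Unable to initialize filesystem " + fsURI
          + ": " + e, e);
    }
  }
}
{code}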



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13075) Add support for SSE-KMS and SSE-C in s3a filesystem

2016-05-10 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15278583#comment-15278583
 ] 

Steve Loughran commented on HADOOP-13075:
-

There are currently no tests that SSE-S3 works; something will be needed there 
as part of this patch, for regression testing.

> Add support for SSE-KMS and SSE-C in s3a filesystem
> ---
>
> Key: HADOOP-13075
> URL: https://issues.apache.org/jira/browse/HADOOP-13075
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Reporter: Andrew Olson
>
> S3 provides 3 types of server-side encryption [1],
> * SSE-S3 (Amazon S3-Managed Keys) [2]
> * SSE-KMS (AWS KMS-Managed Keys) [3]
> * SSE-C (Customer-Provided Keys) [4]
> Of which the S3AFileSystem in hadoop-aws only supports opting into SSE-S3 
> (HADOOP-10568) -- the underlying aws-java-sdk makes that very simple [5]. 
> With native support in aws-java-sdk already available it should be fairly 
> straightforward [6],[7] to support the other two types of SSE with some 
> additional fs.s3a configuration properties.
> [1] http://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html
> [2] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html
> [3] http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html
> [4] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/ServerSideEncryptionCustomerKeys.html
> [5] http://docs.aws.amazon.com/AmazonS3/latest/dev/SSEUsingJavaSDK.html
> [6] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/kms-using-sdks.html#kms-using-sdks-java
> [7] http://docs.aws.amazon.com/AmazonS3/latest/dev/sse-c-using-java-sdk.html
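
The corresponding SDK calls are short; a sketch, with placeholder bucket, key, 
and key-material values:

{code:title=AWS SDK calls for SSE-KMS and SSE-C (sketch)}
import java.io.File;

import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.model.SSEAwsKeyManagementParams;
import com.amazonaws.services.s3.model.SSECustomerKey;

public class SseSketch {
  public static void main(String[] args) {
    AmazonS3Client s3 = new AmazonS3Client();
    File data = new File("part-00000");
    // SSE-KMS: reference a KMS key (placeholder key alias).
    s3.putObject(new PutObjectRequest("bucket", "key-kms", data)
        .withSSEAwsKeyManagementParams(
            new SSEAwsKeyManagementParams("alias/placeholder-key")));
    // SSE-C: supply a customer-provided, base64-encoded 256-bit AES key
    // (the string below is a placeholder, not valid key material).
    s3.putObject(new PutObjectRequest("bucket", "key-ssec", data)
        .withSSECustomerKey(new SSECustomerKey("base64EncodedAes256Key")));
  }
}
{code}

The new fs.s3a configuration properties would then just feed these parameters 
through.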



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13122) Customize User-Agent header sent in HTTP requests by S3A.

2016-05-10 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15278543#comment-15278543
 ] 

Steve Loughran commented on HADOOP-13122:
-

I'm happy then; the only thing we are exposing there is the Java version. 
From a normal browser, that and the Flash version enumerate your 
vulnerabilities to all. Here you'd better trust your endpoint, and if you are 
using the HTTPS connection to S3, you get that.

+1

> Customize User-Agent header sent in HTTP requests by S3A.
> -
>
> Key: HADOOP-13122
> URL: https://issues.apache.org/jira/browse/HADOOP-13122
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-13122.001.patch
>
>
> S3A passes a User-Agent header to the S3 back-end.  Right now, it uses the 
> default value set by the AWS SDK, so Hadoop HTTP traffic doesn't appear any 
> different from general AWS SDK traffic.  If we customize the User-Agent 
> header, then it will enable better troubleshooting and analysis by AWS or 
> alternative providers of S3-like services.
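
A sketch of the SDK hook involved; the exact agent string is whatever the 
patch settles on:

{code:title=Overriding the SDK User-Agent (sketch)}
import com.amazonaws.ClientConfiguration;
import com.amazonaws.services.s3.AmazonS3Client;

import org.apache.hadoop.util.VersionInfo;

public class UserAgentSketch {
  public static void main(String[] args) {
    ClientConfiguration awsConf = new ClientConfiguration();
    // Prefix the agent so S3-side logs can distinguish Hadoop traffic.
    awsConf.setUserAgent("Hadoop " + VersionInfo.getVersion());
    AmazonS3Client s3 = new AmazonS3Client(awsConf);
  }
}
{code}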



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13122) Customize User-Agent header sent in HTTP requests by S3A.

2016-05-10 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13122:
---
Status: Patch Available  (was: Open)

> Customize User-Agent header sent in HTTP requests by S3A.
> -
>
> Key: HADOOP-13122
> URL: https://issues.apache.org/jira/browse/HADOOP-13122
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-13122.001.patch
>
>
> S3A passes a User-Agent header to the S3 back-end.  Right now, it uses the 
> default value set by the AWS SDK, so Hadoop HTTP traffic doesn't appear any 
> different from general AWS SDK traffic.  If we customize the User-Agent 
> header, then it will enable better troubleshooting and analysis by AWS or 
> alternative providers of S3-like services.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13112) Change CredentialShell to use CommandShell base class

2016-05-10 Thread Matthew Paduano (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthew Paduano updated HADOOP-13112:
-
Attachment: HADOOP-13112.03.patch

for patch #03
 .../org/apache/hadoop/crypto/key/KeyShell.java | 230 -
 .../hadoop/security/alias/CredentialShell.java | 155 +-
 .../java/org/apache/hadoop/tools/CommandShell.java |   4 +-
 3 files changed, 144 insertions(+), 245 deletions(-)
---
 T E S T S
---
Running org.apache.hadoop.crypto.key.TestKeyShell
  Tests run: 9, Failures: 0, Errors: 0, Skipped: 0
  Time elapsed: 1.308 sec
Running org.apache.hadoop.security.alias.TestCredShell
  Tests run: 9, Failures: 0, Errors: 0, Skipped: 0
  Time elapsed: 0.841 sec
Running org.apache.hadoop.security.token.TestDtUtilShell
  Tests run: 8, Failures: 0, Errors: 0, Skipped: 0
  Time elapsed: 1.012 sec
Running org.apache.hadoop.tools.TestCommandShell
  Tests run: 1, Failures: 0, Errors: 0, Skipped: 0
  Time elapsed: 0.128 sec
Results: Tests run: 27, Failures: 0, Errors: 0, Skipped: 0

> Change CredentialShell to use CommandShell base class
> -
>
> Key: HADOOP-13112
> URL: https://issues.apache.org/jira/browse/HADOOP-13112
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Matthew Paduano
>Assignee: Matthew Paduano
>Priority: Minor
> Attachments: HADOOP-13112.01.patch, HADOOP-13112.02.patch, 
> HADOOP-13112.03.patch
>
>
> org.apache.hadoop.tools.CommandShell is a base class created for use by 
> DtUtilShell.  It was inspired by CredentialShell and much of it was taken 
> verbatim.  It should be a simple change to get CredentialShell to use the 
> base class and simplify its code without changing its functionality.
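
For context, a stripped-down sketch of the dispatch pattern such a base class 
factors out. This uses only the stable Tool/ToolRunner API; CommandShell's 
actual hooks differ:

{code:title=Subcommand-dispatch shell pattern (sketch, not CommandShell's API)}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class MiniShell extends Configured implements Tool {
  @Override
  public int run(String[] args) throws Exception {
    if (args.length == 0) {
      System.err.println("Usage: minishell <create|list> ...");
      return 1;
    }
    // The base class owns usage and dispatch; subcommands parse their own args.
    switch (args[0]) {
      case "create": System.out.println("create"); return 0;
      case "list":   System.out.println("list");   return 0;
      default:
        System.err.println("Unknown command: " + args[0]);
        return 1;
    }
  }

  public static void main(String[] args) throws Exception {
    System.exit(ToolRunner.run(new Configuration(), new MiniShell(), args));
  }
}
{code}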



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13028) add low level counter metrics for S3A; use in read performance tests

2016-05-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15278473#comment-15278473
 ] 

Hadoop QA commented on HADOOP-13028:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 59s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
42s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 55s 
{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 8s 
{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
25s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 13s 
{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
28s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 3s 
{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 19s 
{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
54s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 13s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 5s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 5s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 22s 
{color} | {color:red} root: The patch generated 40 new + 46 unchanged - 55 
fixed = 86 total (was 101) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 50 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
29s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 12s 
{color} | {color:red} hadoop-tools_hadoop-aws-jdk1.8.0_91 with JDK v1.8.0_91 
generated 1 new + 0 unchanged - 8 fixed = 1 total (was 8) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 18s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 29s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 12s 
{color} | {color:green} hadoop-aws in the patch passed with JDK v1.8.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 0s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_101. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 13s 
{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_101. 

[jira] [Updated] (HADOOP-13118) Fix IOUtils#cleanup and IOUtils#closeStream javadoc

2016-05-10 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-13118:
---
   Resolution: Fixed
Fix Version/s: 2.9.0
   Status: Resolved  (was: Patch Available)

Committed to trunk and branch-2. Thanks [~jojochuang] for the contribution.

> Fix IOUtils#cleanup and IOUtils#closeStream javadoc
> ---
>
> Key: HADOOP-13118
> URL: https://issues.apache.org/jira/browse/HADOOP-13118
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Trivial
>  Labels: supportability
> Fix For: 2.9.0
>
> Attachments: HADOOP-13118.001.patch
>
>
> HADOOP-7256 ignored all {{Throwable}}s in IOUtils#cleanup but did not update 
> its javadoc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13118) Fix IOUtils#cleanup and IOUtils#closeStream javadoc

2016-05-10 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15278447#comment-15278447
 ] 

Akira AJISAKA commented on HADOOP-13118:


Thanks [~jojochuang]. I backported HADOOP-7256 to branch-2, so I'll commit this 
to trunk/branch-2 shortly.

> Fix IOUtils#cleanup and IOUtils#closeStream javadoc
> ---
>
> Key: HADOOP-13118
> URL: https://issues.apache.org/jira/browse/HADOOP-13118
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Trivial
>  Labels: supportability
> Attachments: HADOOP-13118.001.patch
>
>
> HADOOP-7256 ignored all {{Throwable}}s in IOUtils#cleanup but did not update 
> its javadoc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-7256) Resource leak during failure scenario of closing of resources.

2016-05-10 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-7256:
--
Fix Version/s: 2.9.0

Backported to branch-2.

> Resource leak during failure scenario of closing of resources. 
> ---
>
> Key: HADOOP-7256
> URL: https://issues.apache.org/jira/browse/HADOOP-7256
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.20.2
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Minor
> Fix For: 3.0.0, 2.9.0
>
> Attachments: HADOOP-7256-patch-1.patch, HADOOP-7256-patch-2.patch, 
> HADOOP-7256.patch
>
>   Original Estimate: 8h
>  Remaining Estimate: 8h
>
> Problem Statement:
> ===
> There is a chance of a resource leak when streams are not closed properly.
> Take the case where, after copying data, we try to close the input and output 
> streams followed by closing the socket.
> Suppose a runtime exception occurs while closing the input stream; then the 
> subsequent closes of the output stream and the socket may not happen, and 
> both resources leak.
> Scenario 
> ===
> During long runs of MapReduce jobs, the copyFromLocalFile() API is called.
> We observed exceptions here, and as a result the lsof count kept rising, 
> indicating a resource leak.
> Solution:
> ===
> While closing any resource, catch RuntimeException as well, rather than 
> IOException alone.
> Additionally, there are places where we try to close a resource in the catch 
> block.
> If that close fails, we just throw and come out of the current flow.
> To avoid this, we can carry out the close operation in the finally block.
> Probable reasons for getting RuntimeExceptions:
> =
> We may get a runtime exception from customised Hadoop streams such as 
> FSDataOutputStream.close(), so it is better to handle RuntimeExceptions also.
>  
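
The closing pattern described above, as a minimal sketch (this mirrors what 
IOUtils#cleanup does):

{code:title=Closing resources defensively (sketch)}
import java.io.Closeable;

import org.apache.commons.logging.Log;

public class CleanupSketch {
  /** Close each resource, swallowing IOException and RuntimeException alike. */
  public static void cleanup(Log log, Closeable... closeables) {
    for (Closeable c : closeables) {
      if (c != null) {
        try {
          c.close();
        } catch (Throwable t) {
          if (log != null) {
            log.debug("Exception in closing " + c, t);
          }
        }
      }
    }
  }
}
{code}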



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13079) Add dfs -ls -q to print ? instead of non-printable characters

2016-05-10 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-13079:

Attachment: HADOOP-13079.003.patch

> Add dfs -ls -q to print ? instead of non-printable characters
> -
>
> Key: HADOOP-13079
> URL: https://issues.apache.org/jira/browse/HADOOP-13079
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HADOOP-13079.001.patch, HADOOP-13079.002.patch, 
> HADOOP-13079.003.patch
>
>
> Add option {{-q}} to "hdfs dfs -ls" to print non-printable characters as "?". 
> Non-printable characters are defined by 
> [isprint(3)|http://linux.die.net/man/3/isprint] according to the current 
> locale.
> Default to {{-q}} behavior on terminal; otherwise, print raw characters. See 
> the difference in these 2 command lines:
> * {{hadoop fs -ls /dir}}
> * {{hadoop fs -ls /dir | od -c}}
> In C, {{isatty(STDOUT_FILENO)}} is used to find out whether the output is a 
> terminal. Since Java doesn't have {{isatty}}, I will use JNI to call C 
> {{isatty()}} because the closest test {{System.console() == null}} does not 
> work in some cases.
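
A rough Java-side sketch of the masking itself, as an ASCII approximation of 
isprint(3); the actual patch decides printability and terminal detection 
differently:

{code:title=Masking non-printable characters (sketch)}
public class MaskSketch {
  /** Replace ASCII control characters with '?'. */
  static String mask(String name) {
    StringBuilder sb = new StringBuilder(name.length());
    for (int i = 0; i < name.length(); i++) {
      char c = name.charAt(i);
      sb.append((c < 0x20 || c == 0x7f) ? '?' : c);
    }
    return sb.toString();
  }

  public static void main(String[] args) {
    System.out.println(mask("bad\u0007name"));  // prints bad?name
  }
}
{code}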



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13079) Add dfs -ls -q to print ? instead of non-printable characters

2016-05-10 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-13079:

Status: Patch Available  (was: In Progress)

> Add dfs -ls -q to print ? instead of non-printable characters
> -
>
> Key: HADOOP-13079
> URL: https://issues.apache.org/jira/browse/HADOOP-13079
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HADOOP-13079.001.patch, HADOOP-13079.002.patch, 
> HADOOP-13079.003.patch
>
>
> Add option {{-q}} to "hdfs dfs -ls" to print non-printable characters as "?". 
> Non-printable characters are defined by 
> [isprint(3)|http://linux.die.net/man/3/isprint] according to the current 
> locale.
> Default to {{-q}} behavior on terminal; otherwise, print raw characters. See 
> the difference in these 2 command lines:
> * {{hadoop fs -ls /dir}}
> * {{hadoop fs -ls /dir | od -c}}
> In C, {{isatty(STDOUT_FILENO)}} is used to find out whether the output is a 
> terminal. Since Java doesn't have {{isatty}}, I will use JNI to call C 
> {{isatty()}} because the closest test {{System.console() == null}} does not 
> work in some cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13079) Add dfs -ls -q to print ? instead of non-printable characters

2016-05-10 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-13079:

Attachment: HDFS-13079.003.patch

Please review patch 003:
* Update hadoop-common/src/test/resources/testConf.xml to fix TestCLI failure 

> Add dfs -ls -q to print ? instead of non-printable characters
> -
>
> Key: HADOOP-13079
> URL: https://issues.apache.org/jira/browse/HADOOP-13079
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HADOOP-13079.001.patch, HADOOP-13079.002.patch
>
>
> Add option {{-q}} to "hdfs dfs -ls" to print non-printable characters as "?". 
> Non-printable characters are defined by 
> [isprint(3)|http://linux.die.net/man/3/isprint] according to the current 
> locale.
> Default to {{-q}} behavior on terminal; otherwise, print raw characters. See 
> the difference in these 2 command lines:
> * {{hadoop fs -ls /dir}}
> * {{hadoop fs -ls /dir | od -c}}
> In C, {{isatty(STDOUT_FILENO)}} is used to find out whether the output is a 
> terminal. Since Java doesn't have {{isatty}}, I will use JNI to call C 
> {{isatty()}} because the closest test {{System.console() == null}} does not 
> work in some cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13079) Add dfs -ls -q to print ? instead of non-printable characters

2016-05-10 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-13079:

Attachment: (was: HDFS-13079.003.patch)

> Add dfs -ls -q to print ? instead of non-printable characters
> -
>
> Key: HADOOP-13079
> URL: https://issues.apache.org/jira/browse/HADOOP-13079
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HADOOP-13079.001.patch, HADOOP-13079.002.patch
>
>
> Add option {{-q}} to "hdfs dfs -ls" to print non-printable characters as "?". 
> Non-printable characters are defined by 
> [isprint(3)|http://linux.die.net/man/3/isprint] according to the current 
> locale.
> Default to {{-q}} behavior on terminal; otherwise, print raw characters. See 
> the difference in these 2 command lines:
> * {{hadoop fs -ls /dir}}
> * {{hadoop fs -ls /dir | od -c}}
> In C, {{isatty(STDOUT_FILENO)}} is used to find out whether the output is a 
> terminal. Since Java doesn't have {{isatty}}, I will use JNI to call C 
> {{isatty()}} because the closest test {{System.console() == null}} does not 
> work in some cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13070) classloading isolation improvements for cleaner and stricter dependencies

2016-05-10 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated HADOOP-13070:
-
Attachment: classloading-improvements-ideas-v.3.pdf

Updated the doc (v.3).

Updated the rule after realizing that it is better to block user classes from 
loading parent classes in all cases. Also added an idea on how we might 
accomplish that.

> classloading isolation improvements for cleaner and stricter dependencies
> -
>
> Key: HADOOP-13070
> URL: https://issues.apache.org/jira/browse/HADOOP-13070
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>Priority: Critical
> Attachments: classloading-improvements-ideas-v.3.pdf, 
> classloading-improvements-ideas.pdf, classloading-improvements-ideas.v.2.pdf
>
>
> Related to HADOOP-11656, we would like to make a number of improvements in 
> terms of classloading isolation so that user code can run safely without 
> worrying about dependency collisions with the Hadoop dependencies.
> By the same token, it should raise the quality of the user code and its 
> specified classpath so that users get clear signals if they specify incorrect 
> classpaths.
> This will contain a proposal that includes several improvements, some of 
> which may not be backward compatible. As such, it should be targeted to the 
> next major revision of Hadoop.
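
As an illustration of the "block user classes from loading parent classes" 
idea, a generic child-first loader with a whitelist; this is not the design in 
the attached doc, just the underlying mechanism (Hadoop's existing 
ApplicationClassLoader has the same general shape):

{code:title=Child-first classloader sketch}
import java.net.URL;
import java.net.URLClassLoader;

public class IsolatingClassLoader extends URLClassLoader {
  private final ClassLoader parentLoader;

  public IsolatingClassLoader(URL[] userUrls, ClassLoader parent) {
    super(userUrls, null);  // disable automatic parent-first delegation
    this.parentLoader = parent;
  }

  /** Only JDK classes (and, here, Hadoop's own) may come from the parent. */
  private boolean isSystemClass(String name) {
    return name.startsWith("java.") || name.startsWith("javax.")
        || name.startsWith("org.apache.hadoop.");
  }

  @Override
  protected Class<?> loadClass(String name, boolean resolve)
      throws ClassNotFoundException {
    synchronized (getClassLoadingLock(name)) {
      Class<?> c = findLoadedClass(name);
      if (c == null) {
        c = isSystemClass(name)
            ? parentLoader.loadClass(name)  // whitelisted: delegate up
            : findClass(name);              // otherwise: user classpath only
      }
      if (resolve) {
        resolveClass(c);
      }
      return c;
    }
  }
}
{code}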



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13079) Add dfs -ls -q to print ? instead of non-printable characters

2016-05-10 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-13079:

Status: In Progress  (was: Patch Available)

Investigate TestCLI failure.

> Add dfs -ls -q to print ? instead of non-printable characters
> -
>
> Key: HADOOP-13079
> URL: https://issues.apache.org/jira/browse/HADOOP-13079
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HADOOP-13079.001.patch, HADOOP-13079.002.patch
>
>
> Add option {{-q}} to "hdfs dfs -ls" to print non-printable characters as "?". 
> Non-printable characters are defined by 
> [isprint(3)|http://linux.die.net/man/3/isprint] according to the current 
> locale.
> Default to {{-q}} behavior on terminal; otherwise, print raw characters. See 
> the difference in these 2 command lines:
> * {{hadoop fs -ls /dir}}
> * {{hadoop fs -ls /dir | od -c}}
> In C, {{isatty(STDOUT_FILENO)}} is used to find out whether the output is a 
> terminal. Since Java doesn't have {{isatty}}, I will use JNI to call C 
> {{isatty()}} because the closest test {{System.console() == null}} does not 
> work in some cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12666) Support Microsoft Azure Data Lake - as a file system in Hadoop

2016-05-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15278308#comment-15278308
 ] 

Hadoop QA commented on HADOOP-12666:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 22s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
52s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 13s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 48s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 10s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
10s {color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-tools/hadoop-tools-dist hadoop-tools {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
31s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 1s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 23s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 19s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 58s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 58s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 44s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 30s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
16s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 3s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-tools hadoop-tools/hadoop-tools-dist {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 12s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 42s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 9s 
{color} | {color:green} hadoop-project in the patch passed with JDK v1.8.0_91. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 47s {color} 
