[jira] [Updated] (HADOOP-12916) Allow different Hadoop IPC Call Queue throttling policies with FCQ/BackOff

2016-03-30 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-12916:

Attachment: HADOOP-12916.08.patch

Attaching a new patch v08 to get a clean Jenkins run. 
Delta of changes: added a new unit test ensuring DecayRpcScheduler works 
without FCQ, e.g. with a LinkedBlockingQueue.

> Allow different Hadoop IPC Call Queue throttling policies with FCQ/BackOff
> --
>
> Key: HADOOP-12916
> URL: https://issues.apache.org/jira/browse/HADOOP-12916
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-12916.00.patch, HADOOP-12916.01.patch, 
> HADOOP-12916.02.patch, HADOOP-12916.03.patch, HADOOP-12916.04.patch, 
> HADOOP-12916.05.patch, HADOOP-12916.06.patch, HADOOP-12916.07.patch, 
> HADOOP-12916.08.patch
>
>
> Currently the back-off policy from HADOOP-10597 is hard-coded to be based on 
> whether the call queue is full. This ticket is opened to allow flexible 
> back-off policies, such as one based on the moving average of response time 
> for RPC calls of different priorities.
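
As a rough illustration of the direction described above, here is a minimal,
hypothetical sketch of a response-time-based back-off decision (the names and
structure are illustrative only, not the actual HADOOP-12916 API):
{code}
// Hypothetical back-off policy: back off a call when the decayed moving
// average of response time for its priority level exceeds a threshold.
public class ResponseTimeBackoffPolicy {
  private final double[] thresholdMillis;  // per-priority back-off thresholds
  private final double[] avgMillis;        // decayed moving averages

  public ResponseTimeBackoffPolicy(double[] thresholdMillis) {
    this.thresholdMillis = thresholdMillis;
    this.avgMillis = new double[thresholdMillis.length];
  }

  // Record a completed call's processing time with exponential decay.
  public synchronized void addResponseTime(int priority, long millis) {
    final double alpha = 0.5; // decay factor, illustrative value
    avgMillis[priority] = alpha * millis + (1 - alpha) * avgMillis[priority];
  }

  // Decide whether an incoming call at this priority should be backed off.
  public synchronized boolean shouldBackOff(int priority) {
    return avgMillis[priority] > thresholdMillis[priority];
  }
}
{code}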



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11334) Mapreduce Job Failed due to failure fetching mapper output on the reduce side

2016-03-30 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219268#comment-15219268
 ] 

Weiwei Yang commented on HADOOP-11334:
--

Agree with Eric, we should get this fixed. At the very least, implement 
option 2 so it won't fail in the ugly way it does today.

> Mapreduce Job Failed due to failure fetching mapper output on the reduce side
> -
>
> Key: HADOOP-11334
> URL: https://issues.apache.org/jira/browse/HADOOP-11334
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 2.4.1
>Reporter: Jinghui Wang
>Assignee: Yuanbo Liu (Yuan Bo Liu)
>
> Running terasort with the following options: hadoop jar 
> hadoop-mapreduce-examples.jar terasort *-Dio.native.lib.available=false 
> -Dmapreduce.map.output.compress=true 
> -Dmapreduce.map.output.compress.codec=org.apache.hadoop.io.compress.GzipCodec*
>   /tmp/tera-in /tmp/tera-out
> The job failed because the reducers failed to fetch the output from the 
> mappers (see the following stacktrace). The problem is that MAPREDUCE-1784 
> added support for handling null compressors by defaulting to non-compressed 
> output. In this case, when *io.native.lib.available* is set to false, the 
> compressor is null, so the map output is written uncompressed. However, the 
> decompressor has a Java implementation, so when the reducer tries to read the 
> mapper output it uses the decompressor, but the output does not have the Gzip 
> header.
> 2014-11-25 10:39:48,108 WARN [fetcher#9] 
> org.apache.hadoop.mapreduce.task.reduce.Fetcher: Failed to shuffle output of 
> attempt_1416875111322_0005_m_02_0 from bdvs130:13562
> java.io.IOException: not a gzip file
>   at 
> org.apache.hadoop.io.compress.zlib.BuiltInGzipDecompressor.processBasicHeader(BuiltInGzipDecompressor.java:495)
>   at 
> org.apache.hadoop.io.compress.zlib.BuiltInGzipDecompressor.executeHeaderState(BuiltInGzipDecompressor.java:256)
>   at 
> org.apache.hadoop.io.compress.zlib.BuiltInGzipDecompressor.decompress(BuiltInGzipDecompressor.java:185)
>   at 
> org.apache.hadoop.io.compress.DecompressorStream.decompress(DecompressorStream.java:91)
>   at 
> org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:85)
>   at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:192)
>   at 
> org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput.shuffle(InMemoryMapOutput.java:97)
>   at 
> org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyMapOutput(Fetcher.java:434)
>   at 
> org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyFromHost(Fetcher.java:341)
>   at org.apache.hadoop.mapreduce.task.reduce.Fetcher.run(Fetcher.java:165)
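
One defensive fix in the spirit of this report (a hedged sketch, not the
actual Hadoop change) is to fail fast when map-output compression is requested
but the codec cannot supply a compressor, instead of silently writing
uncompressed data that the reducer will then try to decompress:
{code}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.Compressor;
import org.apache.hadoop.io.compress.GzipCodec;
import org.apache.hadoop.util.ReflectionUtils;

public class MapOutputCodecCheck {
  // In the affected versions, GzipCodec.createCompressor() returns null when
  // native zlib is unavailable; refusing to proceed here avoids the
  // "not a gzip file" failure on the reduce side.
  static Compressor getCompressorOrFail(Configuration conf) throws IOException {
    CompressionCodec codec = ReflectionUtils.newInstance(GzipCodec.class, conf);
    Compressor compressor = codec.createCompressor();
    if (compressor == null) {
      throw new IOException("Map output compression requested but "
          + codec.getClass().getName() + " has no compressor available"
          + " (e.g. native zlib not loaded)");
    }
    return compressor;
  }
}
{code}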



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12563) Updated utility to create/modify token files

2016-03-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219212#comment-15219212
 ] 

Hadoop QA commented on HADOOP-12563:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 9s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 4s 
{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 57s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 44s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
3s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 49s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 57s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 47s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 57s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 5m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 42s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 6m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
22s {color} | {color:green} root: patch generated 0 new + 6 unchanged - 27 
fixed = 6 total (was 33) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
9s {color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 1s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 51s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 25s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_77. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 56m 24s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_77. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 34s 

[jira] [Commented] (HADOOP-12981) Remove/deprecate s3native properties from S3NativeFileSystemConfigKeys and core-default.xml

2016-03-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219139#comment-15219139
 ] 

Hadoop QA commented on HADOOP-12981:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 42s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 38s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
2s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 35s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
31s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 6s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 20s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 19s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
55s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 56s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 44s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
4s {color} | {color:green} root: patch generated 0 new + 0 unchanged - 5 fixed 
= 0 total (was 5) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
32s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
34s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 13s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 25s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 51s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_74. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 12s 
{color} | {color:green} hadoop-aws in the patch passed with JDK v1.8.0_74. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 52s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 14s 
{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:red}-1{color} 

[jira] [Commented] (HADOOP-12973) make DU pluggable

2016-03-30 Thread Gary Helmling (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219138#comment-15219138
 ] 

Gary Helmling commented on HADOOP-12973:


Probably should add {{InterfaceAudience}} and {{InterfaceStability}} 
annotations to the {{WindowsDU}} class.  Otherwise lgtm.

> make DU pluggable
> -
>
> Key: HADOOP-12973
> URL: https://issues.apache.org/jira/browse/HADOOP-12973
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HADOOP-12973v0.patch, HADOOP-12973v1.patch, 
> HADOOP-12973v2.patch, HADOOP-12973v3.patch, HADOOP-12973v5.patch, 
> HADOOP-12973v6.patch, HADOOP-12973v7.patch
>
>
> If people are concerned about replacing the call to DU, then an easy first 
> step is to make it pluggable. Then it's possible to replace it with something 
> else while leaving the default alone.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12950) ShutdownHookManager should have a timeout for each of the Registered shutdown hook

2016-03-30 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-12950:

Attachment: (was: HADOOP-12950.04.patch)

> ShutdownHookManager should have a timeout for each of the Registered shutdown 
> hook
> --
>
> Key: HADOOP-12950
> URL: https://issues.apache.org/jira/browse/HADOOP-12950
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-12950.00.patch, HADOOP-12950.01.patch, 
> HADOOP-12950.02.patch, HADOOP-12950.03.patch, HADOOP-12950.04.patch
>
>
> HADOOP-8325 added a ShutdownHookManager to be used by different components 
> instead of the JVM shutdown hook. For each of the registered shutdown hooks, 
> we currently don't have an upper bound on its execution time. We have seen 
> the namenode fail to shut down completely (waiting for a shutdown hook to 
> finish after failover) for a long period of time, which breaks the namenode 
> high availability scenarios. This ticket is opened to allow specifying a 
> timeout value for each registered shutdown hook.
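
A minimal sketch of the idea (an assumed structure, not the actual
ShutdownHookManager patch): run each hook on an executor and bound its
execution time with {{Future#get(timeout)}}:
{code}
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class TimedShutdownHook {
  // Run a shutdown hook with an upper bound on its execution time so that a
  // stuck hook cannot block the whole shutdown sequence.
  static void runWithTimeout(Runnable hook, long timeout, TimeUnit unit) {
    ExecutorService executor = Executors.newSingleThreadExecutor();
    Future<?> future = executor.submit(hook);
    try {
      future.get(timeout, unit);
    } catch (TimeoutException e) {
      future.cancel(true);  // interrupt the hook and move on
    } catch (InterruptedException | ExecutionException e) {
      // log and continue with the remaining hooks
    } finally {
      executor.shutdownNow();
    }
  }
}
{code}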



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12950) ShutdownHookManager should have a timeout for each of the Registered shutdown hook

2016-03-30 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-12950:

Attachment: HADOOP-12950.04.patch

> ShutdownHookManager should have a timeout for each of the Registered shutdown 
> hook
> --
>
> Key: HADOOP-12950
> URL: https://issues.apache.org/jira/browse/HADOOP-12950
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-12950.00.patch, HADOOP-12950.01.patch, 
> HADOOP-12950.02.patch, HADOOP-12950.03.patch, HADOOP-12950.04.patch
>
>
> HADOOP-8325 added a ShutdownHookManager to be used by different components 
> instead of the JVM shutdown hook. For each of the registered shutdown hooks, 
> we currently don't have an upper bound on its execution time. We have seen 
> the namenode fail to shut down completely (waiting for a shutdown hook to 
> finish after failover) for a long period of time, which breaks the namenode 
> high availability scenarios. This ticket is opened to allow specifying a 
> timeout value for each registered shutdown hook.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12950) ShutdownHookManager should have a timeout for each of the Registered shutdown hook

2016-03-30 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-12950:

Attachment: HADOOP-12950.04.patch

Thanks [~jingzhao]! Patch v04 attached based on your comments. 

> ShutdownHookManager should have a timeout for each of the Registered shutdown 
> hook
> --
>
> Key: HADOOP-12950
> URL: https://issues.apache.org/jira/browse/HADOOP-12950
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-12950.00.patch, HADOOP-12950.01.patch, 
> HADOOP-12950.02.patch, HADOOP-12950.03.patch, HADOOP-12950.04.patch
>
>
> HADOOP-8325 added a ShutdownHookManager to be used by different components 
> instead of the JVM shutdown hook. For each of the registered shutdown hooks, 
> we currently don't have an upper bound on its execution time. We have seen 
> the namenode fail to shut down completely (waiting for a shutdown hook to 
> finish after failover) for a long period of time, which breaks the namenode 
> high availability scenarios. This ticket is opened to allow specifying a 
> timeout value for each registered shutdown hook.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11858) [JDK8] Set minimum version of Hadoop 3 to JDK 8

2016-03-30 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219083#comment-15219083
 ] 

Akira AJISAKA commented on HADOOP-11858:


bq. Given it's 2016, is it time to raise this topic on the dev@ list again?
I think yes. As Vinod said, we should get consensus first.

> [JDK8] Set minimum version of Hadoop 3 to JDK 8
> ---
>
> Key: HADOOP-11858
> URL: https://issues.apache.org/jira/browse/HADOOP-11858
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
> Attachments: HADOOP-11858.001.patch, HADOOP-11858.002.patch
>
>
> Set minimum version of trunk to JDK 8



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10584) ActiveStandbyElector goes down if ZK quorum become unavailable

2016-03-30 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-10584:
--
Target Version/s: 2.9.0  (was: 2.7.3)

> ActiveStandbyElector goes down if ZK quorum become unavailable
> --
>
> Key: HADOOP-10584
> URL: https://issues.apache.org/jira/browse/HADOOP-10584
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ha
>Affects Versions: 2.4.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>Priority: Critical
> Attachments: hadoop-10584-prelim.patch, rm.log
>
>
> ActiveStandbyElector retries operations a few times. If the ZK quorum itself 
> is down, it goes down too, and the daemons will have to be brought up 
> again. 
> Instead, it should log the fact that it is unable to talk to ZK, call 
> becomeStandby on its client, and continue attempting to connect to ZK.
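
An illustrative sketch of the proposed loop (the names are hypothetical; this
is not the ActiveStandbyElector implementation):
{code}
import java.util.concurrent.TimeUnit;

public class ZkReconnectLoop {
  interface Elector {
    boolean tryConnect();   // attempt to (re)establish the ZK session
    void becomeStandby();   // tell the client to drop its active state
  }

  // Instead of exiting when the quorum is unreachable, drop to standby and
  // keep retrying until the connection comes back.
  static void runUntilConnected(Elector elector, long retryMillis)
      throws InterruptedException {
    while (!elector.tryConnect()) {
      elector.becomeStandby();
      TimeUnit.MILLISECONDS.sleep(retryMillis);
    }
  }
}
{code}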



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12981) Remove/deprecate s3native properties from S3NativeFileSystemConfigKeys and core-default.xml

2016-03-30 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12981:
-
Attachment: HADOOP-12981.001.patch

v01: removed all s3native.* properties from core-default.xml and removed the 
{{S3NativeFileSystemConfigKeys}} class.

> Remove/deprecate s3native properties from S3NativeFileSystemConfigKeys and 
> core-default.xml
> ---
>
> Key: HADOOP-12981
> URL: https://issues.apache.org/jira/browse/HADOOP-12981
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, tools
>Affects Versions: 3.0.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: aws
> Attachments: HADOOP-12981.001.patch
>
>
> It seems none of the properties defined in {{S3NativeFileSystemConfigKeys}} 
> are used. Those properties are prefixed with {{s3native}}, while the current 
> s3native properties are all prefixed with {{fs.s3n}}, so these are likely not 
> used currently. Additionally, core-default.xml carries descriptions of these 
> unused properties:
> {noformat}
> <property>
>   <name>s3native.stream-buffer-size</name>
>   <value>4096</value>
>   <description>The size of buffer to stream files.
>   The size of this buffer should probably be a multiple of hardware
>   page size (4096 on Intel x86), and it determines how much data is
>   buffered during read and write operations.</description>
> </property>
> <property>
>   <name>s3native.bytes-per-checksum</name>
>   <value>512</value>
>   <description>The number of bytes per checksum.  Must not be larger than
>   s3native.stream-buffer-size</description>
> </property>
> <property>
>   <name>s3native.client-write-packet-size</name>
>   <value>65536</value>
>   <description>Packet size for clients to write</description>
> </property>
> <property>
>   <name>s3native.blocksize</name>
>   <value>67108864</value>
>   <description>Block size</description>
> </property>
> <property>
>   <name>s3native.replication</name>
>   <value>3</value>
>   <description>Replication factor</description>
> </property>
> {noformat}
> I think they should be removed (or deprecated) to avoid confusion, since 
> these properties appear to be defunct.
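
If deprecation were chosen over removal, Hadoop's standard key-deprecation
mechanism could express it. A minimal sketch; the {{fs.s3n.*}} target keys
here are hypothetical stand-ins:
{code}
import org.apache.hadoop.conf.Configuration;

public class S3NativeDeprecations {
  // Map the defunct s3native.* keys onto assumed fs.s3n.* replacements so
  // that old configs warn instead of silently doing nothing.
  public static void registerDeprecations() {
    Configuration.addDeprecation("s3native.blocksize", "fs.s3n.block.size");
    Configuration.addDeprecation("s3native.replication", "fs.s3n.replication");
  }
}
{code}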



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12981) Remove/deprecate s3native properties from S3NativeFileSystemConfigKeys and core-default.xml

2016-03-30 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12981:
-
Status: Patch Available  (was: Open)

> Remove/deprecate s3native properties from S3NativeFileSystemConfigKeys and 
> core-default.xml
> ---
>
> Key: HADOOP-12981
> URL: https://issues.apache.org/jira/browse/HADOOP-12981
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, tools
>Affects Versions: 3.0.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: aws
> Attachments: HADOOP-12981.001.patch
>
>
> It seems none of the properties defined in {{S3NativeFileSystemConfigKeys}} 
> are used. Those properties are prefixed with {{s3native}}, while the current 
> s3native properties are all prefixed with {{fs.s3n}}, so these are likely not 
> used currently. Additionally, core-default.xml carries descriptions of these 
> unused properties:
> {noformat}
> <property>
>   <name>s3native.stream-buffer-size</name>
>   <value>4096</value>
>   <description>The size of buffer to stream files.
>   The size of this buffer should probably be a multiple of hardware
>   page size (4096 on Intel x86), and it determines how much data is
>   buffered during read and write operations.</description>
> </property>
> <property>
>   <name>s3native.bytes-per-checksum</name>
>   <value>512</value>
>   <description>The number of bytes per checksum.  Must not be larger than
>   s3native.stream-buffer-size</description>
> </property>
> <property>
>   <name>s3native.client-write-packet-size</name>
>   <value>65536</value>
>   <description>Packet size for clients to write</description>
> </property>
> <property>
>   <name>s3native.blocksize</name>
>   <value>67108864</value>
>   <description>Block size</description>
> </property>
> <property>
>   <name>s3native.replication</name>
>   <value>3</value>
>   <description>Replication factor</description>
> </property>
> {noformat}
> I think they should be removed (or deprecated) to avoid confusion, since 
> these properties appear to be defunct.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-12981) Remove/deprecate s3native properties from S3NativeFileSystemConfigKeys and core-default.xml

2016-03-30 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HADOOP-12981:


Assignee: Wei-Chiu Chuang

> Remove/deprecate s3native properties from S3NativeFileSystemConfigKeys and 
> core-default.xml
> ---
>
> Key: HADOOP-12981
> URL: https://issues.apache.org/jira/browse/HADOOP-12981
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, tools
>Affects Versions: 3.0.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: aws
>
> It seems none of the properties defined in {{S3NativeFileSystemConfigKeys}} 
> are used. Those properties are prefixed with {{s3native}}, while the current 
> s3native properties are all prefixed with {{fs.s3n}}, so these are likely not 
> used currently. Additionally, core-default.xml carries descriptions of these 
> unused properties:
> {noformat}
> <property>
>   <name>s3native.stream-buffer-size</name>
>   <value>4096</value>
>   <description>The size of buffer to stream files.
>   The size of this buffer should probably be a multiple of hardware
>   page size (4096 on Intel x86), and it determines how much data is
>   buffered during read and write operations.</description>
> </property>
> <property>
>   <name>s3native.bytes-per-checksum</name>
>   <value>512</value>
>   <description>The number of bytes per checksum.  Must not be larger than
>   s3native.stream-buffer-size</description>
> </property>
> <property>
>   <name>s3native.client-write-packet-size</name>
>   <value>65536</value>
>   <description>Packet size for clients to write</description>
> </property>
> <property>
>   <name>s3native.blocksize</name>
>   <value>67108864</value>
>   <description>Block size</description>
> </property>
> <property>
>   <name>s3native.replication</name>
>   <value>3</value>
>   <description>Replication factor</description>
> </property>
> {noformat}
> I think they should be removed (or deprecated) to avoid confusion, since 
> these properties appear to be defunct.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12982) Document missing S3A and S3 properties

2016-03-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219038#comment-15219038
 ] 

Hadoop QA commented on HADOOP-12982:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 49s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 48s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 15s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 20s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
56s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 48s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 40s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 40s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 4 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 17s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 36s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_74. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 13s 
{color} | {color:green} hadoop-aws in the patch passed with JDK v1.8.0_74. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 54s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 14s 
{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 24s 
{color} | {color:red} Patch generated 3 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 56m 51s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_74 Timed out junit tests | 
org.apache.hadoop.util.TestNativeLibraryChecker |
| JDK v1.7.0_95 Timed out junit tests | 
org.apache.hadoop.util.TestNativeLibraryChecker |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL 

[jira] [Updated] (HADOOP-12563) Updated utility to create/modify token files

2016-03-30 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12563:
--
Attachment: HADOOP-12563.07.patch

(re-attaching 07 so that precommit can see it since it only looks at the last 
file...)

> Updated utility to create/modify token files
> 
>
> Key: HADOOP-12563
> URL: https://issues.apache.org/jira/browse/HADOOP-12563
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Matthew Paduano
> Attachments: HADOOP-12563.01.patch, HADOOP-12563.02.patch, 
> HADOOP-12563.03.patch, HADOOP-12563.04.patch, HADOOP-12563.05.patch, 
> HADOOP-12563.06.patch, HADOOP-12563.07.patch, HADOOP-12563.07.patch, 
> dtutil-test-out, example_dtutil_commands_and_output.txt, 
> generalized_token_case.pdf
>
>
> hdfs fetchdt is missing some critical features and is geared almost 
> exclusively towards HDFS operations. Additionally, the token files that are 
> created use Java serialization, which is hard or impossible to deal with in 
> other languages. It should be replaced with a better utility in common that 
> can read/write protobuf-based token files, has enough flexibility to be used 
> with other services, and offers key functionality such as append and rename. 
> The old file format should still be supported for backward compatibility, 
> but will be effectively deprecated.
> A follow-on JIRA will deprecate fetchdt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12973) make DU pluggable

2016-03-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219018#comment-15219018
 ] 

Hadoop QA commented on HADOOP-12973:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 57s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 44s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
4s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 49s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 58s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 51s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 49s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 38s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
5s {color} | {color:green} root: patch generated 0 new + 21 unchanged - 1 fixed 
= 21 total (was 22) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 47s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
57s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 58s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 49s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 35s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_77. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 60m 23s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_77. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 16s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 55m 45s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 24s 
{color} | {color:red} Patch generated 3 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 191m 15s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_77 Failed 

[jira] [Created] (HADOOP-12983) Remove/deprecate s3 properties from S3FileSystemConfigKeys and core-default.xml

2016-03-30 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HADOOP-12983:


 Summary: Remove/deprecate s3 properties from 
S3FileSystemConfigKeys and core-default.xml
 Key: HADOOP-12983
 URL: https://issues.apache.org/jira/browse/HADOOP-12983
 Project: Hadoop Common
  Issue Type: Improvement
  Components: tools
Reporter: Wei-Chiu Chuang
Priority: Minor


Similar to HADOOP-12981, there are a few defunct S3 properties, except for 
{{s3.stream-buffer-size}}.

{noformat}
<property>
  <name>s3.stream-buffer-size</name>
  <value>4096</value>
  <description>The size of buffer to stream files.
  The size of this buffer should probably be a multiple of hardware
  page size (4096 on Intel x86), and it determines how much data is
  buffered during read and write operations.</description>
</property>
<property>
  <name>s3.bytes-per-checksum</name>
  <value>512</value>
  <description>The number of bytes per checksum.  Must not be larger than
  s3.stream-buffer-size</description>
</property>
<property>
  <name>s3.client-write-packet-size</name>
  <value>65536</value>
  <description>Packet size for clients to write</description>
</property>
<property>
  <name>s3.blocksize</name>
  <value>67108864</value>
  <description>Block size</description>
</property>
<property>
  <name>s3.replication</name>
  <value>3</value>
  <description>Replication factor</description>
</property>
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12950) ShutdownHookManager should have a timeout for each of the Registered shutdown hook

2016-03-30 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218999#comment-15218999
 ] 

Jing Zhao commented on HADOOP-12950:


Thanks for updating the patch, Xiaoyu! The 03 patch looks good to me. Just some 
nits:
# How about simplifying the following code to 
{{HadoopExecutors.newSingleThreadExecutor(new 
ThreadFactoryBuilder().setDaemon(true).build())}}?
{code}
HadoopExecutors.newSingleThreadExecutor(new ThreadFactory() {
  @Override
  public Thread newThread(Runnable r) {
    Thread t = new Thread(r);
    t.setDaemon(true);
    return t;
  }
});
{code}
# HookEntry's constructor/getter methods do not need to be public
# {{ShutdownHookManager#hooks}} can be declared as final.
# In TestShutdownHookManager, we need to clean up the spaces and newlines in 
the following code:
{code}
  LOG.info("Shutdown hook3 interrupted exception:" ,ExceptionUtils
  .getStackTrace
  (ex));
{code}

+1 after addressing the comments.

> ShutdownHookManager should have a timeout for each of the Registered shutdown 
> hook
> --
>
> Key: HADOOP-12950
> URL: https://issues.apache.org/jira/browse/HADOOP-12950
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-12950.00.patch, HADOOP-12950.01.patch, 
> HADOOP-12950.02.patch, HADOOP-12950.03.patch
>
>
> HADOOP-8325 added a ShutdownHookManager to be used by different components 
> instead of the JVM shutdown hook. For each of the registered shutdown hooks, 
> we currently don't have an upper bound on its execution time. We have seen 
> the namenode fail to shut down completely (waiting for a shutdown hook to 
> finish after failover) for a long period of time, which breaks the namenode 
> high availability scenarios. This ticket is opened to allow specifying a 
> timeout value for each registered shutdown hook.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12563) Updated utility to create/modify token files

2016-03-30 Thread Matthew Paduano (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthew Paduano updated HADOOP-12563:
-
Attachment: dtutil-test-out

dtutil-test-out is a capture (via set -x) of testing dtutil against a dev 
Hadoop instance. It is useful as a manual test and as example syntax and 
commands.

> Updated utility to create/modify token files
> 
>
> Key: HADOOP-12563
> URL: https://issues.apache.org/jira/browse/HADOOP-12563
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Matthew Paduano
> Attachments: HADOOP-12563.01.patch, HADOOP-12563.02.patch, 
> HADOOP-12563.03.patch, HADOOP-12563.04.patch, HADOOP-12563.05.patch, 
> HADOOP-12563.06.patch, HADOOP-12563.07.patch, dtutil-test-out, 
> example_dtutil_commands_and_output.txt, generalized_token_case.pdf
>
>
> hdfs fetchdt is missing some critical features and is geared almost 
> exclusively towards HDFS operations. Additionally, the token files that are 
> created use Java serialization, which is hard or impossible to deal with in 
> other languages. It should be replaced with a better utility in common that 
> can read/write protobuf-based token files, has enough flexibility to be used 
> with other services, and offers key functionality such as append and rename. 
> The old file format should still be supported for backward compatibility, 
> but will be effectively deprecated.
> A follow-on JIRA will deprecate fetchdt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12563) Updated utility to create/modify token files

2016-03-30 Thread Matthew Paduano (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthew Paduano updated HADOOP-12563:
-
Attachment: HADOOP-12563.07.patch

additions:
   - better unit test coverage
   - make fetchdt use legacy output API
   - improve print function

tested via:
   - unittests
   - test-patch
   - manual test of fetchdt token
   - manual test of aliased token file from external host, e.g.
on host A (10.0.2.28):
hadoop dtutil get hdfs://localhost:9000/ -alias 10.0.2.28:9000 manual_test_alias
ssh -L 10.0.2.28:9000:127.0.0.1:9000 10.0.2.28

on host B (e.g. 10.0.2.24):
scp 10.0.2.28:/home/mattp/manual_test_alias .
HADOOP_TOKEN_FILE_LOCATION=/Users/mattp/dev/HADOOP/hadoop/manual_test_alias 
hadoop fs -ls hdfs://10.0.2.28:9000/user

> Updated utility to create/modify token files
> 
>
> Key: HADOOP-12563
> URL: https://issues.apache.org/jira/browse/HADOOP-12563
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Matthew Paduano
> Attachments: HADOOP-12563.01.patch, HADOOP-12563.02.patch, 
> HADOOP-12563.03.patch, HADOOP-12563.04.patch, HADOOP-12563.05.patch, 
> HADOOP-12563.06.patch, HADOOP-12563.07.patch, 
> example_dtutil_commands_and_output.txt, generalized_token_case.pdf
>
>
> hdfs fetchdt is missing some critical features and is geared almost 
> exclusively towards HDFS operations. Additionally, the token files that are 
> created use Java serialization, which is hard or impossible to deal with in 
> other languages. It should be replaced with a better utility in common that 
> can read/write protobuf-based token files, has enough flexibility to be used 
> with other services, and offers key functionality such as append and rename. 
> The old file format should still be supported for backward compatibility, 
> but will be effectively deprecated.
> A follow-on JIRA will deprecate fetchdt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12972) Lz4Compressor#getLibraryName returns the wrong version number

2016-03-30 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218979#comment-15218979
 ] 

John Zhuge commented on HADOOP-12972:
-

Hi [~cmccabe],

Agree with you that the library version should be printed, since we only 
bundle the library code; so I am OK with your patch.

{{programs/lz4cli.c}} is in {{github.com:Cyan4973/lz4.git}}. Correlating the 
Hadoop lz4 library with the OS lz4 command gets a little messy because there 
are actually 3 "versions": the git tag, the cli version, and the library 
version. Here is a table of these 3 columns from 
{{github.com:Cyan4973/lz4.git}}:
{code}
$ lz4vers 
  tag        cliver   libver
  lz4-r130   r128     1.7.0
  r116       v1.1.5   1.1.3
  r117       v1.1.5   1.1.3
  r118       v1.2.0   1.2.0
  r119       v1.2.0   1.2.0
  r120       v1.2.0   1.3.0
  r121       v1.2.0   1.3.0
  r122       r122     1.3.0
  r123       r122     1.3.1
  r124       r122     1.4.0
  r125       r125     1.4.1
  r126       r126     1.5.0
  r127       r126     1.5.0
  r128       r128     1.6.0
  r129       r128     1.7.0
  r130       r128     1.7.0
  r131       r128     1.7.1
  rc129v0    r128     1.7.0
{code}

These seem to be the versioning rules: the cli and lib versions can be bumped 
independently; if either of them is bumped, the tag is bumped; even if neither 
of them is bumped, the tag may still be bumped.

IMO, the correct fix is for {{lz4 -h}} to display the lib version in addition 
to the cli version. Created https://github.com/Cyan4973/lz4/issues/192.

> Lz4Compressor#getLibraryName returns the wrong version number
> -
>
> Key: HADOOP-12972
> URL: https://issues.apache.org/jira/browse/HADOOP-12972
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: Colin Patrick McCabe
>Priority: Trivial
> Fix For: 2.8.0
>
> Attachments: HADOOP-12972.001.patch
>
>
> HADOOP-11184 updated lz4 to "r123", but {{hadoop checknative -a}} still 
> prints "revision:99".
> {code}
> $ hadoop checknative -a
> 16/03/29 11:42:40 INFO bzip2.Bzip2Factory: Successfully loaded & initialized 
> native-bzip2 library system-native
> 16/03/29 11:42:40 INFO zlib.ZlibFactory: Successfully loaded & initialized 
> native-zlib library
> Native library checking:
> hadoop:  true 
> /opt/cloudera/parcels/CDH-5.8.0-1.cdh5.8.0.p0.1209/lib/hadoop/lib/native/libhadoop.so.1.0.0
> zlib:true /lib64/libz.so.1
> snappy:  true 
> /opt/cloudera/parcels/CDH-5.8.0-1.cdh5.8.0.p0.1209/lib/hadoop/lib/native/libsnappy.so.1
> lz4: true revision:99
> bzip2:   true /lib64/libbz2.so.1
> openssl: true /usr/lib64/libcrypto.so
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12916) Allow different Hadoop IPC Call Queue throttling policies with FCQ/BackOff

2016-03-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218971#comment-15218971
 ] 

Hadoop QA commented on HADOOP-12916:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 11m 47s 
{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
18s {color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
20s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 27s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 34s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 31s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
58s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
29s {color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 59s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 9m 59s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
39s {color} | {color:green} hadoop-common-project/hadoop-common: patch 
generated 0 new + 522 unchanged - 45 fixed = 522 total (was 567) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 19s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 27s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 10m 22s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 9m 55s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 31s 
{color} | {color:red} Patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 101m 3s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_74 Timed out junit tests | 
org.apache.hadoop.util.TestNativeLibraryChecker |
| JDK v1.7.0_95 Timed out junit tests | 
org.apache.hadoop.util.TestNativeLibraryChecker |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:fbe3e86 |
| JIRA Issue | HADOOP-12916 |
| GITHUB PR | https://github.com/apache/hadoop/pull/86 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 0d7712a39e4f 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 

[jira] [Commented] (HADOOP-12950) ShutdownHookManager should have a timeout for each of the Registered shutdown hook

2016-03-30 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218967#comment-15218967
 ] 

Xiaoyu Yao commented on HADOOP-12950:
-

Thanks [~jingzhao]. Patch v03 has been posted and passed Jenkins. The 
TestNativeLibraryChecker issue is unrelated.

> ShutdownHookManager should have a timeout for each of the Registered shutdown 
> hook
> --
>
> Key: HADOOP-12950
> URL: https://issues.apache.org/jira/browse/HADOOP-12950
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-12950.00.patch, HADOOP-12950.01.patch, 
> HADOOP-12950.02.patch, HADOOP-12950.03.patch
>
>
> HADOOP-8325 added a ShutdownHookManager to be used by different components 
> instead of the JVM shutdown hook. For each of the registered shutdown hooks, 
> we currently don't have an upper bound on its execution time. We have seen 
> the namenode fail to shut down completely (waiting for a shutdown hook to 
> finish after failover) for a long period of time, which breaks the namenode 
> high availability scenarios. This ticket is opened to allow specifying a 
> timeout value for each registered shutdown hook.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12982) Document missing S3A and S3 properties

2016-03-30 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12982:
-
Status: Patch Available  (was: Open)

> Document missing S3A and S3 properties
> --
>
> Key: HADOOP-12982
> URL: https://issues.apache.org/jira/browse/HADOOP-12982
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, tools
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HADOOP-12982.001.patch
>
>
> * S3: 
> ** {{fs.s3.buffer.dir}}, {{fs.s3.maxRetries}}, {{fs.s3.sleepTimeSeconds}}, and 
> {{fs.s3.block.size}} are not in the documentation.
> ** Note that {{fs.s3.buffer.dir}}, {{fs.s3.maxRetries}}, and 
> {{fs.s3.sleepTimeSeconds}} are also used by S3N.
> * S3A:
> ** {{fs.s3a.server-side-encryption-algorithm}} and {{fs.s3a.block.size}} are 
> missing from core-default.xml and the documentation.
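
For illustration, a core-default.xml entry for one of the missing keys might
look like the following; the default value shown is an assumption, not taken
from the patch:
{noformat}
<property>
  <name>fs.s3a.block.size</name>
  <value>33554432</value>
  <description>Block size to use when reading files using the s3a: file
  system. (Assumed default of 32 MB, for illustration only.)</description>
</property>
{noformat}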



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12950) ShutdownHookManager should have a timeout for each of the Registered shutdown hook

2016-03-30 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218960#comment-15218960
 ] 

Jing Zhao commented on HADOOP-12950:


Yes, sounds good to me.

> ShutdownHookManager should have a timeout for each of the Registered shutdown 
> hook
> --
>
> Key: HADOOP-12950
> URL: https://issues.apache.org/jira/browse/HADOOP-12950
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-12950.00.patch, HADOOP-12950.01.patch, 
> HADOOP-12950.02.patch, HADOOP-12950.03.patch
>
>
> HADOOP-8325 added a ShutdownHookManager to be used by different components 
> instead of the JVM shutdown hook. For each registered shutdown hook, we 
> currently don't have an upper bound on its execution time. We have seen the 
> namenode fail to shut down completely (waiting for a shutdown hook to finish 
> after failover) for a long period of time, which breaks the namenode high 
> availability scenarios. This ticket is opened to allow specifying a timeout 
> value for each registered shutdown hook.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12886) Exclude weak ciphers in SSLFactory through ssl-server.xml

2016-03-30 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15218961#comment-15218961
 ] 

Wei-Chiu Chuang commented on HADOOP-12886:
--

Thanks [~zhz] for the multiple rounds of review and for committing the patch!

> Exclude weak ciphers in SSLFactory through ssl-server.xml
> -
>
> Key: HADOOP-12886
> URL: https://issues.apache.org/jira/browse/HADOOP-12886
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>  Labels: Netty, datanode, security
> Fix For: 2.8.0
>
> Attachments: HADOOP-12886.001.patch, HADOOP-12886.002.patch, 
> HADOOP-12886.003.patch, HADOOP-12886.004.patch
>
>
> HADOOP-12668 added support for excluding weak ciphers in HttpServer2, which 
> covers the namenode. But the datanode web UI is based on Netty, which uses 
> SSLFactory and does not read ssl-server.xml to exclude those ciphers.
> We should add the same support to Netty for consistency.
> I will attach a full patch later.
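
For context, the HttpServer2 side of HADOOP-12668 is driven by an entry in 
ssl-server.xml along the following lines (the {{ssl.server.exclude.cipher.list}} 
key and the cipher names below illustrate the mechanism; they are not taken 
from this patch):
{noformat}
<property>
  <name>ssl.server.exclude.cipher.list</name>
  <value>TLS_ECDHE_RSA_WITH_RC4_128_SHA,SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA</value>
  <description>Comma-separated list of cipher suites that the server
  must not negotiate, regardless of what the JVM enables by default.
  </description>
</property>
{noformat}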



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12982) Document missing S3A and S3 properties

2016-03-30 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12982:
-
Attachment: HADOOP-12982.001.patch

v01: added the missing S3/S3A properties to the documentation and core-default.xml.
Additionally, prettified the doc to make it better structured.

> Document missing S3A and S3 properties
> --
>
> Key: HADOOP-12982
> URL: https://issues.apache.org/jira/browse/HADOOP-12982
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, tools
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HADOOP-12982.001.patch
>
>
> * S3: 
> ** {{fs.s3.buffer.dir}}, {{fs.s3.maxRetries}}, {{fs.s3.sleepTimeSeconds}}, and 
> {{fs.s3.block.size}} are not in the documentation.
> ** Note that {{fs.s3.buffer.dir}}, {{fs.s3.maxRetries}}, and 
> {{fs.s3.sleepTimeSeconds}} are also used by S3N.
> * S3A:
> ** {{fs.s3a.server-side-encryption-algorithm}} and {{fs.s3a.block.size}} are 
> missing from both core-default.xml and the documentation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12886) Exclude weak ciphers in SSLFactory through ssl-server.xml

2016-03-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15218929#comment-15218929
 ] 

Hudson commented on HADOOP-12886:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9529 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9529/])
HADOOP-12886. Exclude weak ciphers in SSLFactory through ssl-server.xml. 
(zezhang: rev e4fc609d5d3739b7809057954c5233cfd1d1117b)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/ssl/TestSSLFactory.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/SSLFactory.java


> Exclude weak ciphers in SSLFactory through ssl-server.xml
> -
>
> Key: HADOOP-12886
> URL: https://issues.apache.org/jira/browse/HADOOP-12886
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>  Labels: Netty, datanode, security
> Fix For: 2.8.0
>
> Attachments: HADOOP-12886.001.patch, HADOOP-12886.002.patch, 
> HADOOP-12886.003.patch, HADOOP-12886.004.patch
>
>
> HADOOP-12668 added support for excluding weak ciphers in HttpServer2, which 
> covers the namenode. But the datanode web UI is based on Netty, which uses 
> SSLFactory and does not read ssl-server.xml to exclude those ciphers.
> We should add the same support to Netty for consistency.
> I will attach a full patch later.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12968) Make TestIPC.TestServer implement AutoCloseable

2016-03-30 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HADOOP-12968:
---
Status: Patch Available  (was: Open)

> Make TestIPC.TestServer implement AutoCloseable
> ---
>
> Key: HADOOP-12968
> URL: https://issues.apache.org/jira/browse/HADOOP-12968
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc, test
>Affects Versions: 2.8.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>
> Per the discussion in 
> [HADOOP-12909|https://issues.apache.org/jira/browse/HADOOP-12909?focusedCommentId=15211745&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15211745],
>  this is to propose making TestIPC.TestServer implement AutoCloseable so that 
> tests can use try-with-resources for cleanup.
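
As a sketch of what try-with-resources buys here (illustrative shape only, not 
the actual TestIPC code):
{code}
// A server handle that knows how to release its own resources.
class TestServerSketch implements AutoCloseable {
  void start() {
    // bind the listener socket and start handler threads
  }

  @Override
  public void close() {
    // stop handler threads and release the port
  }
}

public class TryWithResourcesDemo {
  public static void main(String[] args) {
    // close() is invoked automatically on exit from the block,
    // even when the test body throws -- no finally needed.
    try (TestServerSketch server = new TestServerSketch()) {
      server.start();
      // ... exercise the server ...
    }
  }
}
{code}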



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12969) Mark IPC.Client and IPC.Server as @Public, @Evolving

2016-03-30 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15218893#comment-15218893
 ] 

Xiaobing Zhou commented on HADOOP-12969:


I posted a simple patch for review. They are already marked as @Evolving.
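
For reference, the annotations in question come from hadoop-common's 
classification package, so the change presumably boils down to something like:
{code}
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;

@InterfaceAudience.Public        // usable by downstream projects
@InterfaceStability.Evolving     // API may change between minor releases
public class Client {
  // ... existing ipc.Client body unchanged ...
}
{code}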

> Mark IPC.Client and IPC.Server as @Public, @Evolving
> 
>
> Key: HADOOP-12969
> URL: https://issues.apache.org/jira/browse/HADOOP-12969
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 2.8.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>Priority: Minor
> Attachments: HADOOP-12969.000..patch
>
>
> Per the discussion in 
> [HADOOP-12909|https://issues.apache.org/jira/browse/HADOOP-12909?focusedCommentId=15211745&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15211745],
>  this is to propose marking IPC.Client and IPC.Server as @Public, @Evolving 
> as a result of HADOOP-12909.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12969) Mark IPC.Client and IPC.Server as @Public, @Evolving

2016-03-30 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HADOOP-12969:
---
Attachment: HADOOP-12969.000..patch

> Mark IPC.Client and IPC.Server as @Public, @Evolving
> 
>
> Key: HADOOP-12969
> URL: https://issues.apache.org/jira/browse/HADOOP-12969
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 2.8.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>Priority: Minor
> Attachments: HADOOP-12969.000..patch
>
>
> Per the discussion in 
> [HADOOP-12909|https://issues.apache.org/jira/browse/HADOOP-12909?focusedCommentId=15211745&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15211745],
>  this is to propose marking IPC.Client and IPC.Server as @Public, @Evolving 
> as a result of HADOOP-12909.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12969) Mark IPC.Client and IPC.Server as @Public, @Evolving

2016-03-30 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HADOOP-12969:
---
Attachment: (was: HADOOP-12969.000..patch)

> Mark IPC.Client and IPC.Server as @Public, @Evolving
> 
>
> Key: HADOOP-12969
> URL: https://issues.apache.org/jira/browse/HADOOP-12969
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 2.8.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>Priority: Minor
>
> Per the discussion in 
> [HADOOP-12909|https://issues.apache.org/jira/browse/HADOOP-12909?focusedCommentId=15211745&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15211745],
>  this is to propose marking IPC.Client and IPC.Server as @Public, @Evolving 
> as a result of HADOOP-12909.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12886) Exclude weak ciphers in SSLFactory through ssl-server.xml

2016-03-30 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HADOOP-12886:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
   Fix Version/s: 2.8.0
Target Version/s: 2.8.0
  Status: Resolved  (was: Patch Available)

Thanks Wei-Chiu. +1 on the v4 patch. I just committed the change to trunk, 
branch-2, and branch-2.8.

> Exclude weak ciphers in SSLFactory through ssl-server.xml
> -
>
> Key: HADOOP-12886
> URL: https://issues.apache.org/jira/browse/HADOOP-12886
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>  Labels: Netty, datanode, security
> Fix For: 2.8.0
>
> Attachments: HADOOP-12886.001.patch, HADOOP-12886.002.patch, 
> HADOOP-12886.003.patch, HADOOP-12886.004.patch
>
>
> HADOOP-12668 added support for excluding weak ciphers in HttpServer2, which 
> covers the namenode. But the datanode web UI is based on Netty, which uses 
> SSLFactory and does not read ssl-server.xml to exclude those ciphers.
> We should add the same support to Netty for consistency.
> I will attach a full patch later.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12969) Mark IPC.Client and IPC.Server as @Public, @Evolving

2016-03-30 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HADOOP-12969:
---
Attachment: HADOOP-12969.000..patch

> Mark IPC.Client and IPC.Server as @Public, @Evolving
> 
>
> Key: HADOOP-12969
> URL: https://issues.apache.org/jira/browse/HADOOP-12969
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 2.8.0
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>Priority: Minor
> Attachments: HADOOP-12969.000..patch
>
>
> Per the discussion in 
> [HADOOP-12909|https://issues.apache.org/jira/browse/HADOOP-12909?focusedCommentId=15211745&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15211745],
>  this is to propose marking IPC.Client and IPC.Server as @Public, @Evolving 
> as a result of HADOOP-12909.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12981) Remove/deprecate s3native properties from S3NativeFileSystemConfigKeys and core-default.xml

2016-03-30 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12981:
-
Component/s: documentation

> Remove/deprecate s3native properties from S3NativeFileSystemConfigKeys and 
> core-default.xml
> ---
>
> Key: HADOOP-12981
> URL: https://issues.apache.org/jira/browse/HADOOP-12981
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, tools
>Affects Versions: 3.0.0
>Reporter: Wei-Chiu Chuang
>Priority: Minor
>  Labels: aws
>
> It seems none of the properties defined in {{S3NativeFileSystemConfigKeys}} are 
> used. Those properties are prefixed by {{s3native}}, while the s3native 
> properties in actual use are all prefixed by {{fs.s3n}}, so these keys are 
> likely defunct. Additionally, core-default.xml carries descriptions of these 
> unused properties:
> {noformat}
> <property>
>   <name>s3native.stream-buffer-size</name>
>   <value>4096</value>
>   <description>The size of buffer to stream files.
>   The size of this buffer should probably be a multiple of hardware
>   page size (4096 on Intel x86), and it determines how much data is
>   buffered during read and write operations.</description>
> </property>
>
> <property>
>   <name>s3native.bytes-per-checksum</name>
>   <value>512</value>
>   <description>The number of bytes per checksum.  Must not be larger than
>   s3native.stream-buffer-size</description>
> </property>
>
> <property>
>   <name>s3native.client-write-packet-size</name>
>   <value>65536</value>
>   <description>Packet size for clients to write</description>
> </property>
>
> <property>
>   <name>s3native.blocksize</name>
>   <value>67108864</value>
>   <description>Block size</description>
> </property>
>
> <property>
>   <name>s3native.replication</name>
>   <value>3</value>
>   <description>Replication factor</description>
> </property>
> {noformat}
> I think they should be removed (or deprecated) to avoid confusion, since these 
> properties are defunct.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12982) Document missing S3A and S3 properties

2016-03-30 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HADOOP-12982:


 Summary: Document missing S3A and S3 properties
 Key: HADOOP-12982
 URL: https://issues.apache.org/jira/browse/HADOOP-12982
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation, tools
Reporter: Wei-Chiu Chuang
Assignee: Wei-Chiu Chuang
Priority: Minor


* S3: 
** {{fs.s3.buffer.dir}}, {{fs.s3.maxRetries}}, {{fs.s3.sleepTimeSeconds}}, and 
{{fs.s3.block.size}} are not in the documentation.
** Note that {{fs.s3.buffer.dir}}, {{fs.s3.maxRetries}}, and 
{{fs.s3.sleepTimeSeconds}} are also used by S3N.
* S3A:
** {{fs.s3a.server-side-encryption-algorithm}} and {{fs.s3a.block.size}} are 
missing from both core-default.xml and the documentation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12909) Change ipc.Client to support asynchronous calls

2016-03-30 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15218874#comment-15218874
 ] 

Xiaobing Zhou commented on HADOOP-12909:


Thanks [~szetszwo]. v007 removed @Unstable and changed the name to the lowercase one.

> Change ipc.Client to support asynchronous calls
> ---
>
> Key: HADOOP-12909
> URL: https://issues.apache.org/jira/browse/HADOOP-12909
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-12909-HDFS-9924.000.patch, 
> HADOOP-12909-HDFS-9924.001.patch, HADOOP-12909-HDFS-9924.002.patch, 
> HADOOP-12909-HDFS-9924.003.patch, HADOOP-12909-HDFS-9924.004.patch, 
> HADOOP-12909-HDFS-9924.005.patch, HADOOP-12909-HDFS-9924.006.patch, 
> HADOOP-12909-HDFS-9924.007.patch
>
>
> In ipc.Client, the underlying mechanism already supports asynchronous 
> calls -- the calls share a connection, the call requests are sent using a 
> thread pool, and the responses can arrive out of order.  Indeed, a synchronous 
> call is implemented by invoking wait() in the caller thread in order to wait for 
> the server response.
> In this JIRA, we change ipc.Client to support an asynchronous mode.  In 
> asynchronous mode, a call returns once the request has been sent out, without 
> waiting for the response from the server.
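
A minimal illustration of the calling pattern described above (using 
{{CompletableFuture}} as a stand-in; the patch's actual API may differ):
{code}
import java.util.concurrent.CompletableFuture;

public class AsyncCallSketch {
  // Returns as soon as the request is handed off; the future completes
  // later, when the (possibly out-of-order) response arrives.
  static CompletableFuture<String> callAsync(String request) {
    return CompletableFuture.supplyAsync(() -> "response to " + request);
  }

  public static void main(String[] args) {
    CompletableFuture<String> pending = callAsync("getListing /");
    // The caller can issue more calls or do other work here,
    // instead of blocking in wait() as in the synchronous mode.
    System.out.println(pending.join());
  }
}
{code}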



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12909) Change ipc.Client to support asynchronous calls

2016-03-30 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HADOOP-12909:
---
Attachment: HADOOP-12909-HDFS-9924.007.patch

> Change ipc.Client to support asynchronous calls
> ---
>
> Key: HADOOP-12909
> URL: https://issues.apache.org/jira/browse/HADOOP-12909
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-12909-HDFS-9924.000.patch, 
> HADOOP-12909-HDFS-9924.001.patch, HADOOP-12909-HDFS-9924.002.patch, 
> HADOOP-12909-HDFS-9924.003.patch, HADOOP-12909-HDFS-9924.004.patch, 
> HADOOP-12909-HDFS-9924.005.patch, HADOOP-12909-HDFS-9924.006.patch, 
> HADOOP-12909-HDFS-9924.007.patch
>
>
> In ipc.Client, the underlying mechanism already supports asynchronous 
> calls -- the calls share a connection, the call requests are sent using a 
> thread pool, and the responses can arrive out of order.  Indeed, a synchronous 
> call is implemented by invoking wait() in the caller thread in order to wait for 
> the server response.
> In this JIRA, we change ipc.Client to support an asynchronous mode.  In 
> asynchronous mode, a call returns once the request has been sent out, without 
> waiting for the response from the server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12973) make DU pluggable

2016-03-30 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HADOOP-12973:
---
Attachment: HADOOP-12973v7.patch

* Remove exception re-throwing.
* Add logging when we fail to create the asked-for class.
* Clean up handling of InterruptedException (see the sketch below).
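
On the last point, the usual pattern is to restore the interrupt flag rather 
than swallow the exception (a generic illustration, not the patch itself):
{code}
public class RefreshLoopSketch implements Runnable {
  @Override
  public void run() {
    while (!Thread.currentThread().isInterrupted()) {
      try {
        Thread.sleep(1000);      // e.g. the interval between usage refreshes
        // ... recompute the cached disk-usage value ...
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();  // restore the flag for callers
        return;                              // and exit the loop promptly
      }
    }
  }
}
{code}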

> make DU pluggable
> -
>
> Key: HADOOP-12973
> URL: https://issues.apache.org/jira/browse/HADOOP-12973
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HADOOP-12973v0.patch, HADOOP-12973v1.patch, 
> HADOOP-12973v2.patch, HADOOP-12973v3.patch, HADOOP-12973v5.patch, 
> HADOOP-12973v6.patch, HADOOP-12973v7.patch
>
>
> If people are concerned about replacing the call to DU, then an easy first 
> step is to make it pluggable. It then becomes possible to replace it with 
> something else while leaving the default alone.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12950) ShutdownHookManager should have a timeout for each of the Registered shutdown hook

2016-03-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15218867#comment-15218867
 ] 

Hadoop QA commented on HADOOP-12950:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
4s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 4s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 38s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
36s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 58s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 58s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 36s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} hadoop-common-project/hadoop-common: patch 
generated 0 new + 2 unchanged - 5 fixed = 2 total (was 7) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 0s {color} | 
{color:red} hadoop-common in the patch failed with JDK v1.8.0_77. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 12s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 23s 
{color} | {color:red} Patch generated 3 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m 13s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_77 Timed out junit tests | 
org.apache.hadoop.util.TestNativeLibraryChecker |
| JDK v1.7.0_95 Timed out junit tests | 
org.apache.hadoop.util.TestNativeLibraryChecker |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12796149/HADOOP-12950.03.patch 
|
| JIRA Issue | HADOOP-12950 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux e89d8ad227ee 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 

[jira] [Commented] (HADOOP-11393) Revert HADOOP_PREFIX, go back to HADOOP_HOME

2016-03-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15218858#comment-15218858
 ] 

Hadoop QA commented on HADOOP-11393:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 7s 
{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 12 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 50s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
45s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 4s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
12s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 14m 
38s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 5m 
20s {color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped branch modules with no Java source: 
hadoop-hdfs-project/hadoop-hdfs-native-client hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site hadoop-tools/hadoop-pipes 
hadoop-mapreduce-project {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 34s 
{color} | {color:red} hadoop-tools/hadoop-datajoin in trunk has 2 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 12m 
48s {color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 16m 
18s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 19s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
54s {color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 54s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
15s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 15s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 32s 
{color} | {color:red} root: patch generated 4 new + 171 unchanged - 8 fixed = 
175 total (was 179) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 14m 
57s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 5m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
13s {color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patch modules with no Java source: 
hadoop-hdfs-project/hadoop-hdfs-native-client hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site hadoop-tools/hadoop-pipes 
hadoop-mapreduce-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 14m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 12m 
46s {color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | 

[jira] [Updated] (HADOOP-12981) Remove/deprecate s3native properties from S3NativeFileSystemConfigKeys and core-default.xml

2016-03-30 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12981:
-
Description: 
It seems none of the properties defined in {{S3NativeFileSystemConfigKeys}} are 
used. Those properties are prefixed by {{s3native}}, while the s3native 
properties in actual use are all prefixed by {{fs.s3n}}, so these keys are 
likely defunct. Additionally, core-default.xml carries descriptions of these 
unused properties:
{noformat}
<property>
  <name>s3native.stream-buffer-size</name>
  <value>4096</value>
  <description>The size of buffer to stream files.
  The size of this buffer should probably be a multiple of hardware
  page size (4096 on Intel x86), and it determines how much data is
  buffered during read and write operations.</description>
</property>

<property>
  <name>s3native.bytes-per-checksum</name>
  <value>512</value>
  <description>The number of bytes per checksum.  Must not be larger than
  s3native.stream-buffer-size</description>
</property>

<property>
  <name>s3native.client-write-packet-size</name>
  <value>65536</value>
  <description>Packet size for clients to write</description>
</property>

<property>
  <name>s3native.blocksize</name>
  <value>67108864</value>
  <description>Block size</description>
</property>

<property>
  <name>s3native.replication</name>
  <value>3</value>
  <description>Replication factor</description>
</property>
{noformat}
I think they should be removed (or deprecated) to avoid confusion, since these 
properties are defunct.

  was:
It seems none of the properties defined in {{S3NativeFileSystemConfigKeys}} are 
used. The s3native properties in actual use are all prefixed by {{fs.s3n}}, so 
these keys are likely defunct. Additionally, core-default.xml carries 
descriptions of these unused properties:
{noformat}
<property>
  <name>s3native.stream-buffer-size</name>
  <value>4096</value>
  <description>The size of buffer to stream files.
  The size of this buffer should probably be a multiple of hardware
  page size (4096 on Intel x86), and it determines how much data is
  buffered during read and write operations.</description>
</property>

<property>
  <name>s3native.bytes-per-checksum</name>
  <value>512</value>
  <description>The number of bytes per checksum.  Must not be larger than
  s3native.stream-buffer-size</description>
</property>

<property>
  <name>s3native.client-write-packet-size</name>
  <value>65536</value>
  <description>Packet size for clients to write</description>
</property>

<property>
  <name>s3native.blocksize</name>
  <value>67108864</value>
  <description>Block size</description>
</property>

<property>
  <name>s3native.replication</name>
  <value>3</value>
  <description>Replication factor</description>
</property>
{noformat}
I think they should be removed (or deprecated) to avoid confusion, since these 
properties are defunct.


> Remove/deprecate s3native properties from S3NativeFileSystemConfigKeys and 
> core-default.xml
> ---
>
> Key: HADOOP-12981
> URL: https://issues.apache.org/jira/browse/HADOOP-12981
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 3.0.0
>Reporter: Wei-Chiu Chuang
>Priority: Minor
>  Labels: aws
>
> It seems none of the properties defined in {{S3NativeFileSystemConfigKeys}} are 
> used. Those properties are prefixed by {{s3native}}, while the s3native 
> properties in actual use are all prefixed by {{fs.s3n}}, so these keys are 
> likely defunct. Additionally, core-default.xml carries descriptions of these 
> unused properties:
> {noformat}
> <property>
>   <name>s3native.stream-buffer-size</name>
>   <value>4096</value>
>   <description>The size of buffer to stream files.
>   The size of this buffer should probably be a multiple of hardware
>   page size (4096 on Intel x86), and it determines how much data is
>   buffered during read and write operations.</description>
> </property>
>
> <property>
>   <name>s3native.bytes-per-checksum</name>
>   <value>512</value>
>   <description>The number of bytes per checksum.  Must not be larger than
>   s3native.stream-buffer-size</description>
> </property>
>
> <property>
>   <name>s3native.client-write-packet-size</name>
>   <value>65536</value>
>   <description>Packet size for clients to write</description>
> </property>
>
> <property>
>   <name>s3native.blocksize</name>
>   <value>67108864</value>
>   <description>Block size</description>
> </property>
>
> <property>
>   <name>s3native.replication</name>
>   <value>3</value>
>   <description>Replication factor</description>
> </property>
> {noformat}
> I think they should be removed (or deprecated) to avoid confusion, since these 
> properties are defunct.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12981) Remove/deprecate s3native properties from S3NativeFileSystemConfigKeys and core-default.xml

2016-03-30 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12981:
-
Description: 
It seems none of the properties defined in {{S3NativeFileSystemConfigKeys}} are 
used. The s3native properties in actual use are all prefixed by {{fs.s3n}}, so 
these keys are likely defunct. Additionally, core-default.xml carries 
descriptions of these unused properties:
{noformat}
<property>
  <name>s3native.stream-buffer-size</name>
  <value>4096</value>
  <description>The size of buffer to stream files.
  The size of this buffer should probably be a multiple of hardware
  page size (4096 on Intel x86), and it determines how much data is
  buffered during read and write operations.</description>
</property>

<property>
  <name>s3native.bytes-per-checksum</name>
  <value>512</value>
  <description>The number of bytes per checksum.  Must not be larger than
  s3native.stream-buffer-size</description>
</property>

<property>
  <name>s3native.client-write-packet-size</name>
  <value>65536</value>
  <description>Packet size for clients to write</description>
</property>

<property>
  <name>s3native.blocksize</name>
  <value>67108864</value>
  <description>Block size</description>
</property>

<property>
  <name>s3native.replication</name>
  <value>3</value>
  <description>Replication factor</description>
</property>
{noformat}
I think they should be removed (or deprecated) to avoid confusion, since these 
properties are defunct.

  was:
It seems none of the properties defined in {{S3NativeFileSystemConfigKeys}} are 
used. Additionally, core-default.xml carries descriptions of these unused 
properties:
{noformat}
<property>
  <name>s3native.stream-buffer-size</name>
  <value>4096</value>
  <description>The size of buffer to stream files.
  The size of this buffer should probably be a multiple of hardware
  page size (4096 on Intel x86), and it determines how much data is
  buffered during read and write operations.</description>
</property>

<property>
  <name>s3native.bytes-per-checksum</name>
  <value>512</value>
  <description>The number of bytes per checksum.  Must not be larger than
  s3native.stream-buffer-size</description>
</property>

<property>
  <name>s3native.client-write-packet-size</name>
  <value>65536</value>
  <description>Packet size for clients to write</description>
</property>

<property>
  <name>s3native.blocksize</name>
  <value>67108864</value>
  <description>Block size</description>
</property>

<property>
  <name>s3native.replication</name>
  <value>3</value>
  <description>Replication factor</description>
</property>
{noformat}
I think they should be removed (or deprecated) to avoid confusion, since these 
properties are defunct.


> Remove/deprecate s3native properties from S3NativeFileSystemConfigKeys and 
> core-default.xml
> ---
>
> Key: HADOOP-12981
> URL: https://issues.apache.org/jira/browse/HADOOP-12981
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 3.0.0
>Reporter: Wei-Chiu Chuang
>Priority: Minor
>  Labels: aws
>
> It seems none of the properties defined in {{S3NativeFileSystemConfigKeys}} are 
> used. The s3native properties in actual use are all prefixed by {{fs.s3n}}, so 
> these keys are likely defunct. Additionally, core-default.xml carries 
> descriptions of these unused properties:
> {noformat}
> <property>
>   <name>s3native.stream-buffer-size</name>
>   <value>4096</value>
>   <description>The size of buffer to stream files.
>   The size of this buffer should probably be a multiple of hardware
>   page size (4096 on Intel x86), and it determines how much data is
>   buffered during read and write operations.</description>
> </property>
>
> <property>
>   <name>s3native.bytes-per-checksum</name>
>   <value>512</value>
>   <description>The number of bytes per checksum.  Must not be larger than
>   s3native.stream-buffer-size</description>
> </property>
>
> <property>
>   <name>s3native.client-write-packet-size</name>
>   <value>65536</value>
>   <description>Packet size for clients to write</description>
> </property>
>
> <property>
>   <name>s3native.blocksize</name>
>   <value>67108864</value>
>   <description>Block size</description>
> </property>
>
> <property>
>   <name>s3native.replication</name>
>   <value>3</value>
>   <description>Replication factor</description>
> </property>
> {noformat}
> I think they should be removed (or deprecated) to avoid confusion, since these 
> properties are defunct.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12981) Remove/deprecate s3native properties from S3NativeFileSystemConfigKeys and core-default.xml

2016-03-30 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HADOOP-12981:


 Summary: Remove/deprecate s3native properties from 
S3NativeFileSystemConfigKeys and core-default.xml
 Key: HADOOP-12981
 URL: https://issues.apache.org/jira/browse/HADOOP-12981
 Project: Hadoop Common
  Issue Type: Improvement
  Components: tools
Affects Versions: 3.0.0
Reporter: Wei-Chiu Chuang
Priority: Minor


It seems none of the properties defined in {{S3NativeFileSystemConfigKeys}} are 
used. Additionally, core-default.xml carries descriptions of these unused 
properties:
{noformat}
<property>
  <name>s3native.stream-buffer-size</name>
  <value>4096</value>
  <description>The size of buffer to stream files.
  The size of this buffer should probably be a multiple of hardware
  page size (4096 on Intel x86), and it determines how much data is
  buffered during read and write operations.</description>
</property>

<property>
  <name>s3native.bytes-per-checksum</name>
  <value>512</value>
  <description>The number of bytes per checksum.  Must not be larger than
  s3native.stream-buffer-size</description>
</property>

<property>
  <name>s3native.client-write-packet-size</name>
  <value>65536</value>
  <description>Packet size for clients to write</description>
</property>

<property>
  <name>s3native.blocksize</name>
  <value>67108864</value>
  <description>Block size</description>
</property>

<property>
  <name>s3native.replication</name>
  <value>3</value>
  <description>Replication factor</description>
</property>
{noformat}
I think they should be removed (or deprecated) to avoid confusion, since these 
properties are defunct.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12972) Lz4Compressor#getLibraryName returns the wrong version number

2016-03-30 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15218788#comment-15218788
 ] 

Colin Patrick McCabe commented on HADOOP-12972:
---

Hi [~jzhuge],

The output of the "lz4" command on Ubuntu isn't relevant.  Hadoop uses its own 
bundled copy of the lz4 source code, stored in 
{{./hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/lz4.c}}.
  I don't even have the {{lz4}} command installed on my system, and I don't 
know what the {{lz4cli.c}} source file you are referencing is (it sounds 
like something in a third-party package that Hadoop doesn't use).

You can see how the version is calculated in 
{{./hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/lz4.h}}
{code}
#define LZ4_VERSION_MAJOR    1    /* for major interface/format changes  */
#define LZ4_VERSION_MINOR    3    /* for minor interface/format changes  */
#define LZ4_VERSION_RELEASE  1    /* for tweaks, bug-fixes, or development */
#define LZ4_VERSION_NUMBER (LZ4_VERSION_MAJOR *100*100 + LZ4_VERSION_MINOR *100 + LZ4_VERSION_RELEASE)
{code}

So a version number of 10301 corresponds to major = 1, minor = 3, release = 1.  
It is fairly easy to read.  If you want to improve this so that it prints the 
version in dotted major.minor.release format, that would be a useful patch, but 
it doesn't seem strictly necessary.
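
Decoding 10301 with those macros, for example:
{code}
public class Lz4VersionDecode {
  public static void main(String[] args) {
    int versionNumber = 10301;                   // LZ4_VERSION_NUMBER
    int major = versionNumber / (100 * 100);     // 10301 / 10000  -> 1
    int minor = (versionNumber / 100) % 100;     // 103 % 100      -> 3
    int release = versionNumber % 100;           // 10301 % 100    -> 1
    System.out.println(major + "." + minor + "." + release);  // prints 1.3.1
  }
}
{code}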

> Lz4Compressor#getLibraryName returns the wrong version number
> -
>
> Key: HADOOP-12972
> URL: https://issues.apache.org/jira/browse/HADOOP-12972
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: Colin Patrick McCabe
>Priority: Trivial
> Fix For: 2.8.0
>
> Attachments: HADOOP-12972.001.patch
>
>
> HADOOP-11184 updated lz4 to "r123", but {{hadoop checknative -a}} still 
> prints "revision:99".
> {code}
> $ hadoop checknative -a
> 16/03/29 11:42:40 INFO bzip2.Bzip2Factory: Successfully loaded & initialized 
> native-bzip2 library system-native
> 16/03/29 11:42:40 INFO zlib.ZlibFactory: Successfully loaded & initialized 
> native-zlib library
> Native library checking:
> hadoop:  true 
> /opt/cloudera/parcels/CDH-5.8.0-1.cdh5.8.0.p0.1209/lib/hadoop/lib/native/libhadoop.so.1.0.0
> zlib:true /lib64/libz.so.1
> snappy:  true 
> /opt/cloudera/parcels/CDH-5.8.0-1.cdh5.8.0.p0.1209/lib/hadoop/lib/native/libsnappy.so.1
> lz4: true revision:99
> bzip2:   true /lib64/libbz2.so.1
> openssl: true /usr/lib64/libcrypto.so
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12973) make DU pluggable

2016-03-30 Thread Gary Helmling (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15218785#comment-15218785
 ] 

Gary Helmling commented on HADOOP-12973:


In {{getInstance()}}, when invoking the configured class constructor, I think 
it would be good to log a message in case one of the possible exceptions is 
thrown.  Otherwise there's no feedback to the user that the expected 
configuration failed.  Apart from that, this looks good.

Agreed that the exception handling is wonky.  Logging a warning on run failures 
seems better, but it would still be good to have a way for callers to 
differentiate between an actual usage of 0 and a failure to run.
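
Something along these lines, for instance (hypothetical names; the patch's 
actual {{getInstance()}} may be shaped differently):
{code}
import java.lang.reflect.Constructor;
import java.util.logging.Level;
import java.util.logging.Logger;

public class PluggableDuFactory {
  private static final Logger LOG =
      Logger.getLogger(PluggableDuFactory.class.getName());

  /** Instantiate the configured class, logging and falling back on failure. */
  static Runnable getInstance(Class<? extends Runnable> clazz,
                              Runnable fallback) {
    try {
      Constructor<? extends Runnable> ctor = clazz.getDeclaredConstructor();
      return ctor.newInstance();
    } catch (ReflectiveOperationException e) {
      // Surface the failure so users learn their configuration was ignored.
      LOG.log(Level.WARNING, "Could not instantiate " + clazz.getName()
          + ", falling back to " + fallback.getClass().getName(), e);
      return fallback;
    }
  }
}
{code}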

> make DU pluggable
> -
>
> Key: HADOOP-12973
> URL: https://issues.apache.org/jira/browse/HADOOP-12973
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HADOOP-12973v0.patch, HADOOP-12973v1.patch, 
> HADOOP-12973v2.patch, HADOOP-12973v3.patch, HADOOP-12973v5.patch, 
> HADOOP-12973v6.patch
>
>
> If people are concerned about replacing the call to DU, then an easy first 
> step is to make it pluggable. It then becomes possible to replace it with 
> something else while leaving the default alone.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12916) Allow different Hadoop IPC Call Queue throttling policies with FCQ/BackOff

2016-03-30 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-12916:

Attachment: HADOOP-12916.07.patch

The patch applies to my local trunk branch without a problem. Not sure why 
Jenkins failed. Reattaching.

> Allow different Hadoop IPC Call Queue throttling policies with FCQ/BackOff
> --
>
> Key: HADOOP-12916
> URL: https://issues.apache.org/jira/browse/HADOOP-12916
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-12916.00.patch, HADOOP-12916.01.patch, 
> HADOOP-12916.02.patch, HADOOP-12916.03.patch, HADOOP-12916.04.patch, 
> HADOOP-12916.05.patch, HADOOP-12916.06.patch, HADOOP-12916.07.patch
>
>
> Currently the back off policy from HADOOP-10597 is hard-coded to be based on 
> whether the call queue is full. This ticket is opened to allow flexible back 
> off policies, such as one based on the moving average of response times for 
> RPC calls of different priorities.
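
One way to picture such a policy (a hypothetical sketch; the patch's interfaces 
and names will differ):
{code}
// Decide whether a new call at a given priority level should be rejected
// with a back-off, based on an exponential moving average of response times.
public class ResponseTimeBackOffSketch {
  private final double[] avgMillis;        // EMA of response time per level
  private final double[] thresholdMillis;  // back-off threshold per level
  private static final double ALPHA = 0.2; // EMA smoothing factor

  public ResponseTimeBackOffSketch(double[] thresholdMillis) {
    this.thresholdMillis = thresholdMillis;
    this.avgMillis = new double[thresholdMillis.length];
  }

  /** Fold one completed call's response time into the running average. */
  public synchronized void recordResponseTime(int level, double millis) {
    avgMillis[level] = ALPHA * millis + (1 - ALPHA) * avgMillis[level];
  }

  /** True when the recent average for this level is over its threshold. */
  public synchronized boolean shouldBackOff(int level) {
    return avgMillis[level] > thresholdMillis[level];
  }
}
{code}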



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12973) make DU pluggable

2016-03-30 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15218747#comment-15218747
 ] 

Elliott Clark commented on HADOOP-12973:


bq. Hi Elliott Clark, I wonder what the concern was about replacing the call to 
DU?
My understanding is that replacing du with df means we would lose the 
ability to tell how large the distributed cache is. So for people who rely on 
MR/YARN a lot, it's possible to get into a bad situation. I haven't really 
looked into that too much. My primary use case is HBase, where a du run 
walking all the inodes causes a significant IO spike and latency outliers.

bq. With regard to the issues described in HDFS-9923, do you think that your 
solution also fixes the inconsistent exception handling of DU?
I didn't change the exception handling at all, but it is weird, and changing it 
to just log a warning seems like a good idea to me.

> make DU pluggable
> -
>
> Key: HADOOP-12973
> URL: https://issues.apache.org/jira/browse/HADOOP-12973
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HADOOP-12973v0.patch, HADOOP-12973v1.patch, 
> HADOOP-12973v2.patch, HADOOP-12973v3.patch, HADOOP-12973v5.patch, 
> HADOOP-12973v6.patch
>
>
> If people are concerned about replacing the call to DU, then an easy first 
> step is to make it pluggable. It then becomes possible to replace it with 
> something else while leaving the default alone.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12950) ShutdownHookManager should have a timeout for each of the Registered shutdown hook

2016-03-30 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-12950:

Attachment: HADOOP-12950.03.patch

> ShutdownHookManager should have a timeout for each of the Registered shutdown 
> hook
> --
>
> Key: HADOOP-12950
> URL: https://issues.apache.org/jira/browse/HADOOP-12950
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-12950.00.patch, HADOOP-12950.01.patch, 
> HADOOP-12950.02.patch, HADOOP-12950.03.patch
>
>
> HADOOP-8325 added a ShutdownHookManager to be used by different components 
> instead of the JVM shutdown hook. For each registered shutdown hook, we 
> currently don't have an upper bound on its execution time. We have seen the 
> namenode fail to shut down completely (waiting for a shutdown hook to finish 
> after failover) for a long period of time, which breaks the namenode high 
> availability scenarios. This ticket is opened to allow specifying a timeout 
> value for each registered shutdown hook.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12973) make DU pluggable

2016-03-30 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15218718#comment-15218718
 ] 

Wei-Chiu Chuang commented on HADOOP-12973:
--

Hi [~eclark], I wonder what the concern was about replacing the call to DU? I'm 
sure it was discussed in some JIRAs, but I did not have the chance to join that 
discussion. With regard to the issues described in HDFS-9923, do you think that 
your solution also fixes the inconsistent exception handling of DU?

Thanks!

> make DU pluggable
> -
>
> Key: HADOOP-12973
> URL: https://issues.apache.org/jira/browse/HADOOP-12973
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HADOOP-12973v0.patch, HADOOP-12973v1.patch, 
> HADOOP-12973v2.patch, HADOOP-12973v3.patch, HADOOP-12973v5.patch, 
> HADOOP-12973v6.patch
>
>
> If people are concerned about replacing the call to DU, then an easy first 
> step is to make it pluggable. It then becomes possible to replace it with 
> something else while leaving the default alone.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12950) ShutdownHookManager should have a timeout for each of the Registered shutdown hook

2016-03-30 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15218699#comment-15218699
 ] 

Xiaoyu Yao commented on HADOOP-12950:
-

[~jingzhao], the SortedSet/SortedMap uses the overridden Comparator (based on 
priority order, not the hash of the Runnable) as an optimization (binary vs. 
linear search) when looking up a hook in ShutdownHookManager#hasShutdownHook and 
ShutdownHookManager#removeShutdownHook. I plan to change back to a HashSet 
without overriding the Comparator, and to return a separate sorted list. What do 
you think?
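
In other words, something like the following (a rough sketch of the proposed 
shape, not the actual patch):
{code}
import java.util.ArrayList;
import java.util.Comparator;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class HookRegistrySketch {
  static final class HookEntry {
    final Runnable hook;
    final int priority;
    HookEntry(Runnable hook, int priority) {
      this.hook = hook;
      this.priority = priority;
    }
    // Identity follows the Runnable, so contains/remove match registration.
    @Override public int hashCode() { return hook.hashCode(); }
    @Override public boolean equals(Object o) {
      return o instanceof HookEntry && ((HookEntry) o).hook.equals(hook);
    }
  }

  // Hash-based membership: has/remove are O(1) with no custom Comparator.
  private final Set<HookEntry> hooks = new HashSet<>();

  synchronized void addShutdownHook(Runnable hook, int priority) {
    hooks.add(new HookEntry(hook, priority));
  }

  synchronized boolean hasShutdownHook(Runnable hook) {
    return hooks.contains(new HookEntry(hook, 0));
  }

  synchronized boolean removeShutdownHook(Runnable hook) {
    return hooks.remove(new HookEntry(hook, 0));
  }

  // Priority order is produced on demand as a separate sorted snapshot.
  synchronized List<HookEntry> getShutdownHooksInOrder() {
    List<HookEntry> ordered = new ArrayList<>(hooks);
    ordered.sort(Comparator.comparingInt((HookEntry e) -> e.priority).reversed());
    return ordered;
  }
}
{code}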

> ShutdownHookManager should have a timeout for each of the Registered shutdown 
> hook
> --
>
> Key: HADOOP-12950
> URL: https://issues.apache.org/jira/browse/HADOOP-12950
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-12950.00.patch, HADOOP-12950.01.patch, 
> HADOOP-12950.02.patch
>
>
> HADOOP-8325 added a ShutdownHookManager to be used by different components 
> instead of the JVM shutdown hook. For each registered shutdown hook, we 
> currently don't have an upper bound on its execution time. We have seen the 
> namenode fail to shut down completely (waiting for a shutdown hook to finish 
> after failover) for a long period of time, which breaks the namenode high 
> availability scenarios. This ticket is opened to allow specifying a timeout 
> value for each registered shutdown hook.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12973) make DU pluggable

2016-03-30 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HADOOP-12973:
---
Attachment: HADOOP-12973v6.patch

> make DU pluggable
> -
>
> Key: HADOOP-12973
> URL: https://issues.apache.org/jira/browse/HADOOP-12973
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HADOOP-12973v0.patch, HADOOP-12973v1.patch, 
> HADOOP-12973v2.patch, HADOOP-12973v3.patch, HADOOP-12973v5.patch, 
> HADOOP-12973v6.patch
>
>
> If people are concerned about replacing the call to DU, then an easy first 
> step is to make it pluggable. It then becomes possible to replace it with 
> something else while leaving the default alone.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12973) make DU pluggable

2016-03-30 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15218676#comment-15218676
 ] 

Elliott Clark commented on HADOOP-12973:


Added a test and cleaned up more checkstyle issues.

> make DU pluggable
> -
>
> Key: HADOOP-12973
> URL: https://issues.apache.org/jira/browse/HADOOP-12973
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HADOOP-12973v0.patch, HADOOP-12973v1.patch, 
> HADOOP-12973v2.patch, HADOOP-12973v3.patch, HADOOP-12973v5.patch
>
>
> If people are concerned about replacing the call to DU, then an easy first 
> step is to make it pluggable. It then becomes possible to replace it with 
> something else while leaving the default alone.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12973) make DU pluggable

2016-03-30 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HADOOP-12973:
---
Attachment: HADOOP-12973v5.patch

> make DU pluggable
> -
>
> Key: HADOOP-12973
> URL: https://issues.apache.org/jira/browse/HADOOP-12973
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HADOOP-12973v0.patch, HADOOP-12973v1.patch, 
> HADOOP-12973v2.patch, HADOOP-12973v3.patch, HADOOP-12973v5.patch
>
>
> If people are concerned about replacing the call to DU, then an easy first 
> step is to make it pluggable. It then becomes possible to replace it with 
> something else while leaving the default alone.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12973) make DU pluggable

2016-03-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15218584#comment-15218584
 ] 

Hadoop QA commented on HADOOP-12973:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 34s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 29s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
3s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 45s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 58s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 50s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 45s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 45s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 35s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 35s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 3s 
{color} | {color:red} root: patch generated 5 new + 21 unchanged - 1 fixed = 26 
total (was 22) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
59s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 1s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 50s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 20s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_77. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 56m 27s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_77. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 30s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 52m 28s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 25s 
{color} | {color:red} Patch generated 3 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 182m 28s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_77 Failed junit 

[jira] [Commented] (HADOOP-12950) ShutdownHookManager should have a timeout for each of the Registered shutdown hook

2016-03-30 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15218566#comment-15218566
 ] 

Jing Zhao commented on HADOOP-12950:


The failure on TestFileContextDeleteOnExit looks related to the patch. [~xyao], 
could you please check it?

> ShutdownHookManager should have a timeout for each of the Registered shutdown 
> hook
> --
>
> Key: HADOOP-12950
> URL: https://issues.apache.org/jira/browse/HADOOP-12950
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-12950.00.patch, HADOOP-12950.01.patch, 
> HADOOP-12950.02.patch
>
>
> HADOOP-8325 added a ShutdownHookManager to be used by different components 
> instead of the JVM shutdownhook. For each registered shutdown hook, we 
> currently don't have an upper bound on its execution time. We have seen the 
> namenode fail to shut down completely (waiting for shutdown hooks to finish 
> after failover) for a long period of time, which breaks the namenode high 
> availability scenarios. This ticket is opened to allow specifying a timeout 
> value for each registered shutdown hook.
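
A minimal sketch of how such a per-hook timeout could work, assuming each hook 
is submitted to an executor and awaited with a deadline; names and values are 
illustrative, not the actual ShutdownHookManager changes:
{code}
// Bound each hook's execution time instead of waiting indefinitely.
ExecutorService executor = Executors.newSingleThreadExecutor();
for (Runnable hook : hooks) {
  Future<?> done = executor.submit(hook);
  try {
    done.get(10, TimeUnit.SECONDS);   // upper bound per hook
  } catch (TimeoutException e) {
    done.cancel(true);                // stop waiting on a stuck hook
  } catch (InterruptedException | ExecutionException e) {
    // log and continue with the remaining hooks
  }
}
executor.shutdownNow();
{code}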



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12916) Allow different Hadoop IPC Call Queue throttling policies with FCQ/BackOff

2016-03-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218505#comment-15218505
 ] 

Hadoop QA commented on HADOOP-12916:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 11s {color} 
| {color:red} HADOOP-12916 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-12916 |
| GITHUB PR | https://github.com/apache/hadoop/pull/86 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/8970/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> Allow different Hadoop IPC Call Queue throttling policies with FCQ/BackOff
> --
>
> Key: HADOOP-12916
> URL: https://issues.apache.org/jira/browse/HADOOP-12916
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-12916.00.patch, HADOOP-12916.01.patch, 
> HADOOP-12916.02.patch, HADOOP-12916.03.patch, HADOOP-12916.04.patch, 
> HADOOP-12916.05.patch, HADOOP-12916.06.patch
>
>
> Currently back off policy from HADOOP-10597 is hard coded to base on whether 
> call queue is full. This ticket is open to allow flexible back off policies 
> such as moving average of response time in RPC calls of different priorities. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12916) Allow different Hadoop IPC Call Queue throttling policies with FCQ/BackOff

2016-03-30 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-12916:

Attachment: HADOOP-12916.06.patch

> Allow different Hadoop IPC Call Queue throttling policies with FCQ/BackOff
> --
>
> Key: HADOOP-12916
> URL: https://issues.apache.org/jira/browse/HADOOP-12916
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-12916.00.patch, HADOOP-12916.01.patch, 
> HADOOP-12916.02.patch, HADOOP-12916.03.patch, HADOOP-12916.04.patch, 
> HADOOP-12916.05.patch, HADOOP-12916.06.patch
>
>
> Currently back off policy from HADOOP-10597 is hard coded to base on whether 
> call queue is full. This ticket is open to allow flexible back off policies 
> such as moving average of response time in RPC calls of different priorities. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12916) Allow different Hadoop IPC Call Queue throttling policies with FCQ/BackOff

2016-03-30 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-12916:

Attachment: (was: HADOOP-12916.06.patch)

> Allow different Hadoop IPC Call Queue throttling policies with FCQ/BackOff
> --
>
> Key: HADOOP-12916
> URL: https://issues.apache.org/jira/browse/HADOOP-12916
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-12916.00.patch, HADOOP-12916.01.patch, 
> HADOOP-12916.02.patch, HADOOP-12916.03.patch, HADOOP-12916.04.patch, 
> HADOOP-12916.05.patch
>
>
> Currently back off policy from HADOOP-10597 is hard coded to base on whether 
> call queue is full. This ticket is open to allow flexible back off policies 
> such as moving average of response time in RPC calls of different priorities. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11858) [JDK8] Set minimum version of Hadoop 3 to JDK 8

2016-03-30 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218354#comment-15218354
 ] 

Andrew Wang commented on HADOOP-11858:
--

I'm still +1 for bumping.

> [JDK8] Set minimum version of Hadoop 3 to JDK 8
> ---
>
> Key: HADOOP-11858
> URL: https://issues.apache.org/jira/browse/HADOOP-11858
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
> Attachments: HADOOP-11858.001.patch, HADOOP-11858.002.patch
>
>
> Set minimum version of trunk to JDK 8



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12916) Allow different Hadoop IPC Call Queue throttling policies with FCQ/BackOff

2016-03-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218351#comment-15218351
 ] 

Hadoop QA commented on HADOOP-12916:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 5s {color} 
| {color:red} HADOOP-12916 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-12916 |
| GITHUB PR | https://github.com/apache/hadoop/pull/86 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/8968/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> Allow different Hadoop IPC Call Queue throttling policies with FCQ/BackOff
> --
>
> Key: HADOOP-12916
> URL: https://issues.apache.org/jira/browse/HADOOP-12916
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-12916.00.patch, HADOOP-12916.01.patch, 
> HADOOP-12916.02.patch, HADOOP-12916.03.patch, HADOOP-12916.04.patch, 
> HADOOP-12916.05.patch, HADOOP-12916.06.patch
>
>
> Currently back off policy from HADOOP-10597 is hard coded to base on whether 
> call queue is full. This ticket is open to allow flexible back off policies 
> such as moving average of response time in RPC calls of different priorities. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12916) Allow different Hadoop IPC Call Queue throttling policies with FCQ/BackOff

2016-03-30 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-12916:

Attachment: (was: HADOOP-12916.06.patch)

> Allow different Hadoop IPC Call Queue throttling policies with FCQ/BackOff
> --
>
> Key: HADOOP-12916
> URL: https://issues.apache.org/jira/browse/HADOOP-12916
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-12916.00.patch, HADOOP-12916.01.patch, 
> HADOOP-12916.02.patch, HADOOP-12916.03.patch, HADOOP-12916.04.patch, 
> HADOOP-12916.05.patch, HADOOP-12916.06.patch
>
>
> Currently back off policy from HADOOP-10597 is hard coded to base on whether 
> call queue is full. This ticket is open to allow flexible back off policies 
> such as moving average of response time in RPC calls of different priorities. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12916) Allow different Hadoop IPC Call Queue throttling policies with FCQ/BackOff

2016-03-30 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-12916:

Attachment: HADOOP-12916.06.patch

> Allow different Hadoop IPC Call Queue throttling policies with FCQ/BackOff
> --
>
> Key: HADOOP-12916
> URL: https://issues.apache.org/jira/browse/HADOOP-12916
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-12916.00.patch, HADOOP-12916.01.patch, 
> HADOOP-12916.02.patch, HADOOP-12916.03.patch, HADOOP-12916.04.patch, 
> HADOOP-12916.05.patch, HADOOP-12916.06.patch
>
>
> Currently back off policy from HADOOP-10597 is hard coded to base on whether 
> call queue is full. This ticket is open to allow flexible back off policies 
> such as moving average of response time in RPC calls of different priorities. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12980) Document RPC scheduler/callqueue configuration keys

2016-03-30 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HADOOP-12980:
---

 Summary: Document RPC scheduler/callqueue configuration keys
 Key: HADOOP-12980
 URL: https://issues.apache.org/jira/browse/HADOOP-12980
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: documentation
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


This ticket is opened to document the RPC scheduler, callqueue, and 
handler-related configuration keys.
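
For illustration, these are the kinds of keys involved; the RPC server port 
(8020 here) is embedded in the key name, and the exact names and values should 
be confirmed against the final documentation:
{code}
// Per-port IPC settings; values are examples only.
conf.set("ipc.8020.callqueue.impl", "org.apache.hadoop.ipc.FairCallQueue");
conf.set("ipc.8020.scheduler.impl", "org.apache.hadoop.ipc.DecayRpcScheduler");
conf.setBoolean("ipc.8020.backoff.enable", true);
{code}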



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12916) Allow different Hadoop IPC Call Queue throttling policies with FCQ/BackOff

2016-03-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218317#comment-15218317
 ] 

Hadoop QA commented on HADOOP-12916:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 11s {color} 
| {color:red} HADOOP-12916 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-12916 |
| GITHUB PR | https://github.com/apache/hadoop/pull/86 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/8967/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> Allow different Hadoop IPC Call Queue throttling policies with FCQ/BackOff
> --
>
> Key: HADOOP-12916
> URL: https://issues.apache.org/jira/browse/HADOOP-12916
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-12916.00.patch, HADOOP-12916.01.patch, 
> HADOOP-12916.02.patch, HADOOP-12916.03.patch, HADOOP-12916.04.patch, 
> HADOOP-12916.05.patch, HADOOP-12916.06.patch
>
>
> Currently back off policy from HADOOP-10597 is hard coded to base on whether 
> call queue is full. This ticket is open to allow flexible back off policies 
> such as moving average of response time in RPC calls of different priorities. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12916) Allow different Hadoop IPC Call Queue throttling policies with FCQ/BackOff

2016-03-30 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-12916:

Attachment: HADOOP-12916.06.patch

Thanks [~szetszwo] for the review. I've updated the patch based on your 
suggestion. I also rebased the patch to trunk, with an additional unit test 
added for the scheduler constructor exception handling. 

> Allow different Hadoop IPC Call Queue throttling policies with FCQ/BackOff
> --
>
> Key: HADOOP-12916
> URL: https://issues.apache.org/jira/browse/HADOOP-12916
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-12916.00.patch, HADOOP-12916.01.patch, 
> HADOOP-12916.02.patch, HADOOP-12916.03.patch, HADOOP-12916.04.patch, 
> HADOOP-12916.05.patch, HADOOP-12916.06.patch
>
>
> Currently back off policy from HADOOP-10597 is hard coded to base on whether 
> call queue is full. This ticket is open to allow flexible back off policies 
> such as moving average of response time in RPC calls of different priorities. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12975) Add jitter to DU's thread

2016-03-30 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HADOOP-12975:
---
Attachment: HADOOP-12975v1.patch

Rebased patch.

> Add jitter to DU's thread
> -
>
> Key: HADOOP-12975
> URL: https://issues.apache.org/jira/browse/HADOOP-12975
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HADOOP-12975v0.patch, HADOOP-12975v1.patch
>
>
> Running DU across lots of disks is very expensive and running all of the 
> processes at the same time creates a noticeable IO spike. We should add some 
> jitter.
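
A minimal sketch of the jitter idea, assuming a fixed base refresh interval; 
names and values are illustrative:
{code}
// Randomize each refresh interval so the du processes on different
// volumes don't all start at once.
long base = 600_000L;                                  // base interval in ms
long jitter = ThreadLocalRandom.current().nextLong(60_000L);
Thread.sleep(base + jitter);
{code}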



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12973) make DU pluggable

2016-03-30 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HADOOP-12973:
---
Attachment: HADOOP-12973v3.patch

Addressing more checkstyle issues.

> make DU pluggable
> -
>
> Key: HADOOP-12973
> URL: https://issues.apache.org/jira/browse/HADOOP-12973
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HADOOP-12973v0.patch, HADOOP-12973v1.patch, 
> HADOOP-12973v2.patch, HADOOP-12973v3.patch
>
>
> If people are concerned about replacing the call to DU. Then an easy first 
> step is to make it pluggable. Then it's possible to replace it with something 
> while leaving the default alone.
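
A rough sketch of the pluggability idea. GetSpaceUsed is a hypothetical 
interface that DU is assumed to implement, and the key name is illustrative:
{code}
// Pick the disk-usage implementation from configuration, keeping the
// du-based class as the default.
Class<? extends GetSpaceUsed> clazz = conf.getClass(
    "fs.getspaceused.classname", DU.class, GetSpaceUsed.class);
GetSpaceUsed spaceUsed = ReflectionUtils.newInstance(clazz, conf);
long used = spaceUsed.getUsed();  // same contract DU offers today
{code}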



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9613) [JDK8] Update jersey version to latest 1.x release

2016-03-30 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218130#comment-15218130
 ] 

Sean Busbey commented on HADOOP-9613:
-

I manually started a precommit run. Toggling the Patch Available status (or 
otherwise updating the issue without changing the most recent attachment) 
won't trigger a new run, because the Precommit Admin job that coordinates 
grabbing patches from JIRA keeps track of the patches it has already sent for 
processing and skips them.

To manually re-run, someone with a builds.apache login (which can be any 
committer given PMC sign-off) goes to, e.g. [the HADOOP precommit 
job|https://builds.apache.org/view/PreCommit%20Builds/job/PreCommit-HADOOP-Build/]
 and then enters the numeric part of the JIRA id.

> [JDK8] Update jersey version to latest 1.x release
> --
>
> Key: HADOOP-9613
> URL: https://issues.apache.org/jira/browse/HADOOP-9613
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 2.4.0, 3.0.0
>Reporter: Timothy St. Clair
>Assignee: Tsuyoshi Ozawa
>  Labels: maven
> Attachments: HADOOP-2.2.0-9613.patch, 
> HADOOP-9613.004.incompatible.patch, HADOOP-9613.005.incompatible.patch, 
> HADOOP-9613.006.incompatible.patch, HADOOP-9613.007.incompatible.patch, 
> HADOOP-9613.008.incompatible.patch, HADOOP-9613.009.incompatible.patch, 
> HADOOP-9613.010.incompatible.patch, HADOOP-9613.011.incompatible.patch, 
> HADOOP-9613.012.incompatible.patch, HADOOP-9613.013.incompatible.patch, 
> HADOOP-9613.1.patch, HADOOP-9613.2.patch, HADOOP-9613.3.patch, 
> HADOOP-9613.patch
>
>
> Update pom.xml dependencies exposed when running a mvn-rpmbuild against 
> system dependencies on Fedora 18.  
> The existing version is 1.8 which is quite old. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9613) [JDK8] Update jersey version to latest 1.x release

2016-03-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218121#comment-15218121
 ] 

Hadoop QA commented on HADOOP-9613:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 13s {color} 
| {color:red} HADOOP-9613 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-9613 |
| GITHUB PR | https://github.com/apache/hadoop/pull/76 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/8965/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> [JDK8] Update jersey version to latest 1.x release
> --
>
> Key: HADOOP-9613
> URL: https://issues.apache.org/jira/browse/HADOOP-9613
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 2.4.0, 3.0.0
>Reporter: Timothy St. Clair
>Assignee: Tsuyoshi Ozawa
>  Labels: maven
> Attachments: HADOOP-2.2.0-9613.patch, 
> HADOOP-9613.004.incompatible.patch, HADOOP-9613.005.incompatible.patch, 
> HADOOP-9613.006.incompatible.patch, HADOOP-9613.007.incompatible.patch, 
> HADOOP-9613.008.incompatible.patch, HADOOP-9613.009.incompatible.patch, 
> HADOOP-9613.010.incompatible.patch, HADOOP-9613.011.incompatible.patch, 
> HADOOP-9613.012.incompatible.patch, HADOOP-9613.013.incompatible.patch, 
> HADOOP-9613.1.patch, HADOOP-9613.2.patch, HADOOP-9613.3.patch, 
> HADOOP-9613.patch
>
>
> Update pom.xml dependencies exposed when running a mvn-rpmbuild against 
> system dependencies on Fedora 18.  
> The existing version is 1.8 which is quite old. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12577) Bump up commons-collections version to 3.2.2 to address a security flaw

2016-03-30 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218111#comment-15218111
 ] 

Steve Loughran commented on HADOOP-12577:
-

OK

> Bump up commons-collections version to 3.2.2 to address a security flaw
> ---
>
> Key: HADOOP-12577
> URL: https://issues.apache.org/jira/browse/HADOOP-12577
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, security
>Affects Versions: 2.7.1, 2.6.2
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Blocker
> Fix For: 2.7.2, 2.6.3
>
> Attachments: HADOOP-12577.001.patch
>
>
> Update commons-collections from 3.2.1 to 3.2.2 because of a major security 
> vulnerability. Many other open source projects use commons-collections and 
> are also affected.
> Please see 
> http://foxglovesecurity.com/2015/11/06/what-do-weblogic-websphere-jboss-jenkins-opennms-and-your-application-have-in-common-this-vulnerability/
>  for the discovery of the vulnerability.
> https://issues.apache.org/jira/browse/COLLECTIONS-580 has the discussion 
> thread of the fix.
> https://blogs.apache.org/foundation/entry/apache_commons_statement_to_widespread
>  The ASF response to the security vulnerability.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12979) IOE in S3a: ${hadoop.tmp.dir}/s3a not configured

2016-03-30 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12979:

Description: 
Running some Spark s3a tests triggers an NPE in Hadoop <= 2.7, and an IOE in 2.8 saying 
{code}
${hadoop.tmp.dir}/s3a not configured.
{code}
That's correct: there is no configuration option on the conf called 
{code}
${hadoop.tmp.dir}/s3a
{code}
There may be one called {{hadoop.tmp.dir}}, however.

Essentially s3a is sending the wrong config option down, if it can't find 
{{fs.s3a.buffer.dir}}

  was:
Running some Spark s3a tests triggers an NPE in Hadoop <= 2.7, and an IOE in 2.8 saying 
"${hadoop.tmp.dir}/s3a not configured".

That's correct: there is no configuration option on the conf called 
"${hadoop.tmp.dir}/s3a ". There may be one called {{hadoop.tmp.dir}}, however.

Essentially s3a is sending the wrong config option down, if it can't find 
{{fs.s3a.buffer.dir}}


> IOE in S3a:  ${hadoop.tmp.dir}/s3a not configured
> -
>
> Key: HADOOP-12979
> URL: https://issues.apache.org/jira/browse/HADOOP-12979
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>
> Running some Spark s3a tests triggers an NPE in Hadoop <= 2.7, and an IOE in 
> 2.8 saying 
> {code}
> ${hadoop.tmp.dir}/s3a not configured.
> {code}
> That's correct: there is no configuration option on the conf called 
> {code}
> ${hadoop.tmp.dir}/s3a
> {code}
> There may be one called {{hadoop.tmp.dir}}, however.
> Essentially s3a is sending the wrong config option down, if it can't find 
> {{fs.s3a.buffer.dir}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12979) IOE in S3a: ${hadoop.tmp.dir}/s3a not configured

2016-03-30 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-12979:
---

 Summary: IOE in S3a:  ${hadoop.tmp.dir}/s3a not configured
 Key: HADOOP-12979
 URL: https://issues.apache.org/jira/browse/HADOOP-12979
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.8.0
Reporter: Steve Loughran


Running some Spark s3a tests triggers an NPE in Hadoop <= 2.7, and an IOE in 2.8 saying 
"${hadoop.tmp.dir}/s3a not configured".

That's correct: there is no configuration option on the conf called 
"${hadoop.tmp.dir}/s3a ". There may be one called {{hadoop.tmp.dir}}, however.

Essentially s3a is sending the wrong config option down, if it can't find 
{{fs.s3a.buffer.dir}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12979) IOE in S3a: ${hadoop.tmp.dir}/s3a not configured

2016-03-30 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218072#comment-15218072
 ] 

Steve Loughran commented on HADOOP-12979:
-

The problem is the branch taken if there's no s3 buffer dir defined. The else 
clause is broken; it's looking for a config option which isn't there.
{code}
if (conf.get(BUFFER_DIR, null) != null) {
  lDirAlloc = new LocalDirAllocator(BUFFER_DIR);
} else {
  lDirAlloc = new LocalDirAllocator("${hadoop.tmp.dir}/s3a");   // HERE
}
{code}

The fix should be to set {{BUFFER_DIR}} to the full path desired, then create 
the {{LocalDirAllocator(BUFFER_DIR)}} from the (possibly enhanced) config.
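
A sketch of that fix, not the committed patch:
{code}
// Define the key before handing it to the allocator, so it always resolves.
if (conf.get(BUFFER_DIR) == null) {
  conf.set(BUFFER_DIR, conf.get("hadoop.tmp.dir") + "/s3a");
}
lDirAlloc = new LocalDirAllocator(BUFFER_DIR);
{code}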

> IOE in S3a:  ${hadoop.tmp.dir}/s3a not configured
> -
>
> Key: HADOOP-12979
> URL: https://issues.apache.org/jira/browse/HADOOP-12979
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>
> Running some Spark s3a tests triggers an NPE in Hadoop <= 2.7, and an IOE in 
> 2.8 saying "${hadoop.tmp.dir}/s3a not configured".
> That's correct: there is no configuration option on the conf called 
> "${hadoop.tmp.dir}/s3a ". There may be one called {{hadoop.tmp.dir}}, however.
> Essentially s3a is sending the wrong config option down, if it can't find 
> {{fs.s3a.buffer.dir}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12979) IOE in S3a: ${hadoop.tmp.dir}/s3a not configured

2016-03-30 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218067#comment-15218067
 ] 

Steve Loughran commented on HADOOP-12979:
-

Full stack:
{code}

Driver stacktrace:
  at 
org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1457)
  at 
org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1445)
  at 
org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1444)
  at 
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
  at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
  at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1444)
  at 
org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:809)
  at 
org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:809)
  at scala.Option.foreach(Option.scala:257)
  at 
org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:809)
  ...
  Cause: java.io.IOException: ${hadoop.tmp.dir}/s3a not configured
  at 
org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:269)
  at 
org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:349)
  at 
org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.createTmpFileForWrite(LocalDirAllocator.java:421)
  at 
org.apache.hadoop.fs.LocalDirAllocator.createTmpFileForWrite(LocalDirAllocator.java:198)
  at org.apache.hadoop.fs.s3a.S3AOutputStream.(S3AOutputStream.java:91)
  at org.apache.hadoop.fs.s3a.S3AFileSystem.create(S3AFileSystem.java:488)
  at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:921)
  at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:814)
  at 
org.apache.hadoop.mapred.TextOutputFormat.getRecordWriter(TextOutputFormat.java:123)
  at org.apache.spark.SparkHadoopWriter.open(SparkHadoopWriter.scala:90)
{code}

> IOE in S3a:  ${hadoop.tmp.dir}/s3a not configured
> -
>
> Key: HADOOP-12979
> URL: https://issues.apache.org/jira/browse/HADOOP-12979
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>
> Running some Spark s3a tests triggers an NPE in Hadoop <= 2.7, and an IOE in 
> 2.8 saying "${hadoop.tmp.dir}/s3a not configured".
> That's correct: there is no configuration option on the conf called 
> "${hadoop.tmp.dir}/s3a ". There may be one called {{hadoop.tmp.dir}}, however.
> Essentially s3a is sending the wrong config option down, if it can't find 
> {{fs.s3a.buffer.dir}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12672) RPC timeout should not override IPC ping interval

2016-03-30 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218014#comment-15218014
 ] 

Masatake Iwasaki commented on HADOOP-12672:
---

Thanks again, [~arpitagarwal]. I will wait a day for further comments from 
other reviewers before committing this.

> RPC timeout should not override IPC ping interval
> -
>
> Key: HADOOP-12672
> URL: https://issues.apache.org/jira/browse/HADOOP-12672
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 2.8.0, 2.7.3, 2.6.4
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
> Attachments: HADOOP-12672.001.patch, HADOOP-12672.002.patch, 
> HADOOP-12672.003.patch, HADOOP-12672.004.patch, HADOOP-12672.005.patch, 
> HADOOP-12672.006.patch
>
>
> Currently if the value of ipc.client.rpc-timeout.ms is greater than 0, the 
> timeout overrides the ipc.ping.interval and client will throw exception 
> instead of sending ping when the interval is passed. RPC timeout should work 
> without effectively disabling IPC ping.
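
For illustration, the two settings involved; with the fix, both should be 
honored independently (values are examples only):
{code}
// The RPC timeout bounds the whole call, while pings keep flowing
// at the configured interval.
Configuration conf = new Configuration();
conf.setInt("ipc.client.rpc-timeout.ms", 120000);  // fail the call after 2 min
conf.setInt("ipc.ping.interval", 60000);           // but still ping every 60s
{code}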



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12909) Change ipc.Client to support asynchronous calls

2016-03-30 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15217948#comment-15217948
 ] 

Tsz Wo Nicholas Sze commented on HADOOP-12909:
--

- The parameter type in setAsynchronousMode(Boolean async) should be the 
lower-case primitive boolean.

- getRpcResponse(..) is private.  Let's remove @Unstable.

> Change ipc.Client to support asynchronous calls
> ---
>
> Key: HADOOP-12909
> URL: https://issues.apache.org/jira/browse/HADOOP-12909
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-12909-HDFS-9924.000.patch, 
> HADOOP-12909-HDFS-9924.001.patch, HADOOP-12909-HDFS-9924.002.patch, 
> HADOOP-12909-HDFS-9924.003.patch, HADOOP-12909-HDFS-9924.004.patch, 
> HADOOP-12909-HDFS-9924.005.patch, HADOOP-12909-HDFS-9924.006.patch
>
>
> In ipc.Client, the underlying mechanism already supports asynchronous 
> calls -- the calls share a connection, the call requests are sent using a 
> thread pool, and the responses can be out of order.  Indeed, a synchronous 
> call is implemented by invoking wait() in the caller thread in order to wait 
> for the server response.
> In this JIRA, we change ipc.Client to support an asynchronous mode.  In 
> asynchronous mode, it returns once the request has been sent out and does not 
> wait for the response from the server.
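
A hypothetical usage sketch: setAsynchronousMode is taken from the review 
comments above, and everything else is made up for illustration only:
{code}
// Switch this thread to asynchronous calls, then fire a request.
Client.setAsynchronousMode(true);
proxy.someRpc(request);   // returns once the request is sent out
// ... do other work, then collect the response via whatever accessor
// the final API exposes.
{code}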



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12916) Allow different Hadoop IPC Call Queue throttling policies with FCQ/BackOff

2016-03-30 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15217939#comment-15217939
 ] 

Tsz Wo Nicholas Sze commented on HADOOP-12916:
--

{code}
+double decayedAvgRespTime = (responseTimeAvgInLastWindow.get(i) > 0.0) 
?
+decayFactor * responseTimeAvgInLastWindow.get(i) +
+(1 - decayFactor) * averageResponseTime : averageResponseTime;
{code}
For the decayed case, should the formula be 
{{decayFactor*responseTimeAvgInLastWindow.get( i) + averageResponseTime}}, i.e. 
no {{(1 - decayFactor)}} in the second term?

BTW, the if-statement can be rewritten as below to make it shorter.
{code}
  final double lastAvg = responseTimeAvgInLastWindow.get(i);
  if (enableDecay && lastAvg > 0) {
final double decayed =  decayFactor * lastAvg + averageResponseTime;
responseTimeAvgInLastWindow.set(i, decayed);
  } else {
responseTimeAvgInLastWindow.set(i, averageResponseTime);
  }
{code}
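
To make the difference concrete, a worked example with illustrative numbers:
{code}
// decayFactor = 0.5, lastAvg = 10 ms, averageResponseTime (current) = 4 ms
double patched   = 0.5 * 10 + (1 - 0.5) * 4;  // = 7.0 ms (patch formula)
double suggested = 0.5 * 10 + 4;              // = 9.0 ms (formula in question)
{code}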


> Allow different Hadoop IPC Call Queue throttling policies with FCQ/BackOff
> --
>
> Key: HADOOP-12916
> URL: https://issues.apache.org/jira/browse/HADOOP-12916
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-12916.00.patch, HADOOP-12916.01.patch, 
> HADOOP-12916.02.patch, HADOOP-12916.03.patch, HADOOP-12916.04.patch, 
> HADOOP-12916.05.patch
>
>
> Currently back off policy from HADOOP-10597 is hard coded to base on whether 
> call queue is full. This ticket is open to allow flexible back off policies 
> such as moving average of response time in RPC calls of different priorities. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12978) move s3a to slf4j logging

2016-03-30 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-12978:
---

 Summary: move s3a to slf4j logging
 Key: HADOOP-12978
 URL: https://issues.apache.org/jira/browse/HADOOP-12978
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 2.9.0
Reporter: Steve Loughran
Priority: Minor


move s3a over to the SLF4J APIS.

The other object stores need this too...
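
A minimal sketch of the standard SLF4J pattern; parameterized logging removes 
the isDebugEnabled() guards that commons-logging needs:
{code}
// Class-level logger plus a parameterized debug statement.
private static final Logger LOG = LoggerFactory.getLogger(S3AFileSystem.class);
LOG.debug("Opening '{}' for reading", path);
{code}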



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12977) s3a ignores delete("/", true)

2016-03-30 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-12977:
---

 Summary: s3a ignores delete("/", true)
 Key: HADOOP-12977
 URL: https://issues.apache.org/jira/browse/HADOOP-12977
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 2.9.0
Reporter: Steve Loughran


if you try to delete the root directory on s3a, you get politely but firmly 
told you can't

{code}
2016-03-30 12:01:44,924 INFO  s3a.S3AFileSystem 
(S3AFileSystem.java:delete(638)) - s3a cannot delete the root directory
{code}

The semantics of {{rm -rf "/"}} are defined, they are "delete everything 
underneath, while preserving the root dir itself".

# s3a needs to support this.
# this skipped through the FS contract tests in 
{{AbstractContractRootDirectoryTest}}; the option of whether deleting / works 
or not should be made configurable.
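
A sketch of the expected contract, with an illustrative bucket name:
{code}
// rm -rf / should empty the bucket while preserving the root itself.
FileSystem fs = FileSystem.get(URI.create("s3a://bucket/"), conf);
fs.delete(new Path("/"), true);                   // removes all children
assert fs.exists(new Path("/"));                  // root itself survives
assert fs.listStatus(new Path("/")).length == 0;  // and is now empty
{code}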



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12977) s3a ignores delete("/", true)

2016-03-30 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12977:

Priority: Minor  (was: Major)

> s3a ignores delete("/", true)
> -
>
> Key: HADOOP-12977
> URL: https://issues.apache.org/jira/browse/HADOOP-12977
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Priority: Minor
>
> if you try to delete the root directory on s3a, you get politely but firmly 
> told you can't
> {code}
> 2016-03-30 12:01:44,924 INFO  s3a.S3AFileSystem 
> (S3AFileSystem.java:delete(638)) - s3a cannot delete the root directory
> {code}
> The semantics of {{rm -rf "/"}} are defined, they are "delete everything 
> underneath, while preserving the root dir itself".
> # s3a needs to support this.
> # this skipped through the FS contract tests in 
> {{AbstractContractRootDirectoryTest}}; the option of whether deleting / works 
> or not should be made configurable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12976) s3a toString to be meaningful in logs

2016-03-30 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-12976:
---

 Summary: s3a toString to be meaningful in logs
 Key: HADOOP-12976
 URL: https://issues.apache.org/jira/browse/HADOOP-12976
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 2.8.0
Reporter: Steve Loughran
Priority: Trivial


today's toString value is just the object ref; better to include the URL of the 
FS

Example:
{code}
Cleaning filesystem org.apache.hadoop.fs.s3a.S3AFileSystem@1f069dc1 
{code}
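
A minimal sketch, assuming the filesystem keeps its URI in a field:
{code}
// Include the filesystem URI instead of the bare object reference.
@Override
public String toString() {
  return "S3AFileSystem{uri=" + uri + '}';
}
{code}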



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12911) Upgrade Hadoop MiniKDC with Kerby

2016-03-30 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15217840#comment-15217840
 ] 

Kai Zheng commented on HADOOP-12911:


bq. This could be a good time to make MiniKDC a subclass of AbstractService, 
though it may (will?) break external users. Perhaps we could have a MiniKDC 
service, which the existing MiniKDC code instantiated on its existing lifecycle.
Sounds good to have a MiniKDC service that extends AbstractService, in addition 
to the existing MiniKDC construct. Both are valid for their respective 
environments. Should we worry about breaking external users if we target this 
for Hadoop 3.0 and mark it as an incompatible change? If that's acceptable, it 
will allow us to clean up all the unnecessary configurations (they're exposed 
publicly and may be in use) and interfaces, making it easier to use.
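
A hypothetical sketch of such a wrapper, driving the existing MiniKdc 
lifecycle from the service framework:
{code}
// Service wrapper around the existing MiniKdc; names are illustrative.
public class MiniKdcService extends AbstractService {
  private final MiniKdc kdc;

  public MiniKdcService(MiniKdc kdc) {
    super("MiniKdcService");
    this.kdc = kdc;
  }

  @Override
  protected void serviceStart() throws Exception {
    kdc.start();
  }

  @Override
  protected void serviceStop() throws Exception {
    kdc.stop();
  }
}
{code}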

> Upgrade Hadoop MiniKDC with Kerby
> -
>
> Key: HADOOP-12911
> URL: https://issues.apache.org/jira/browse/HADOOP-12911
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Reporter: Jiajia Li
>Assignee: Jiajia Li
> Attachments: HADOOP-12911-v1.patch, HADOOP-12911-v2.patch, 
> HADOOP-12911-v3.patch, HADOOP-12911-v4.patch
>
>
> As discussed in the mailing list, we’d like to introduce Apache Kerby into 
> Hadoop. Initially it’s good to start with upgrading Hadoop MiniKDC with Kerby 
> offerings. Apache Kerby (https://github.com/apache/directory-kerby), as an 
> Apache Directory sub project, is a Java Kerberos binding. It provides a 
> SimpleKDC server that borrowed ideas from MiniKDC and implemented all the 
> facilities existing in MiniKDC. Currently MiniKDC depends on the old Kerberos 
> implementation in the Directory Server project, but that implementation is no 
> longer maintained. The Directory community plans to replace it using Kerby. 
> MiniKDC can use Kerby SimpleKDC directly to avoid depending on the full 
> Directory project. Kerby also provides nice identity 
> backends such as the lightweight memory based one and the very simple json 
> one for easy development and test environments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12577) Bump up commons-collections version to 3.2.2 to address a security flaw

2016-03-30 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15217809#comment-15217809
 ] 

Junping Du commented on HADOOP-12577:
-

[~ste...@apache.org], this is already included in 2.6 since 2.6.3.

> Bump up commons-collections version to 3.2.2 to address a security flaw
> ---
>
> Key: HADOOP-12577
> URL: https://issues.apache.org/jira/browse/HADOOP-12577
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, security
>Affects Versions: 2.7.1, 2.6.2
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Blocker
> Fix For: 2.7.2, 2.6.3
>
> Attachments: HADOOP-12577.001.patch
>
>
> Update commons-collections from 3.2.1 to 3.2.2 because of a major security 
> vulnerability. Many other open source projects use commons-collections and 
> are also affected.
> Please see 
> http://foxglovesecurity.com/2015/11/06/what-do-weblogic-websphere-jboss-jenkins-opennms-and-your-application-have-in-common-this-vulnerability/
>  for the discovery of the vulnerability.
> https://issues.apache.org/jira/browse/COLLECTIONS-580 has the discussion 
> thread of the fix.
> https://blogs.apache.org/foundation/entry/apache_commons_statement_to_widespread
>  The ASF response to the security vulnerability.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12577) Bump up commons-collections version to 3.2.2 to address a security flaw

2016-03-30 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15217782#comment-15217782
 ] 

Steve Loughran commented on HADOOP-12577:
-

backport this to 2.6?

> Bump up commons-collections version to 3.2.2 to address a security flaw
> ---
>
> Key: HADOOP-12577
> URL: https://issues.apache.org/jira/browse/HADOOP-12577
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, security
>Affects Versions: 2.7.1, 2.6.2
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Blocker
> Fix For: 2.7.2, 2.6.3
>
> Attachments: HADOOP-12577.001.patch
>
>
> Update commons-collections from 3.2.1 to 3.2.2 because of a major security 
> vulnerability. Many other open source projects use commons-collections and 
> are also affected.
> Please see 
> http://foxglovesecurity.com/2015/11/06/what-do-weblogic-websphere-jboss-jenkins-opennms-and-your-application-have-in-common-this-vulnerability/
>  for the discovery of the vulnerability.
> https://issues.apache.org/jira/browse/COLLECTIONS-580 has the discussion 
> thread of the fix.
> https://blogs.apache.org/foundation/entry/apache_commons_statement_to_widespread
>  The ASF response to the security vulnerability.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9613) [JDK8] Update jersey version to latest 1.x release

2016-03-30 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-9613:
---
Status: Open  (was: Patch Available)

> [JDK8] Update jersey version to latest 1.x release
> --
>
> Key: HADOOP-9613
> URL: https://issues.apache.org/jira/browse/HADOOP-9613
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 2.4.0, 3.0.0
>Reporter: Timothy St. Clair
>Assignee: Tsuyoshi Ozawa
>  Labels: maven
> Attachments: HADOOP-2.2.0-9613.patch, 
> HADOOP-9613.004.incompatible.patch, HADOOP-9613.005.incompatible.patch, 
> HADOOP-9613.006.incompatible.patch, HADOOP-9613.007.incompatible.patch, 
> HADOOP-9613.008.incompatible.patch, HADOOP-9613.009.incompatible.patch, 
> HADOOP-9613.010.incompatible.patch, HADOOP-9613.011.incompatible.patch, 
> HADOOP-9613.012.incompatible.patch, HADOOP-9613.013.incompatible.patch, 
> HADOOP-9613.1.patch, HADOOP-9613.2.patch, HADOOP-9613.3.patch, 
> HADOOP-9613.patch
>
>
> Update pom.xml dependencies exposed when running a mvn-rpmbuild against 
> system dependencies on Fedora 18.  
> The existing version is 1.8 which is quite old. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9613) [JDK8] Update jersey version to latest 1.x release

2016-03-30 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15217771#comment-15217771
 ] 

Steve Loughran commented on HADOOP-9613:


Let's rerun this.

> [JDK8] Update jersey version to latest 1.x release
> --
>
> Key: HADOOP-9613
> URL: https://issues.apache.org/jira/browse/HADOOP-9613
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 2.4.0, 3.0.0
>Reporter: Timothy St. Clair
>Assignee: Tsuyoshi Ozawa
>  Labels: maven
> Attachments: HADOOP-2.2.0-9613.patch, 
> HADOOP-9613.004.incompatible.patch, HADOOP-9613.005.incompatible.patch, 
> HADOOP-9613.006.incompatible.patch, HADOOP-9613.007.incompatible.patch, 
> HADOOP-9613.008.incompatible.patch, HADOOP-9613.009.incompatible.patch, 
> HADOOP-9613.010.incompatible.patch, HADOOP-9613.011.incompatible.patch, 
> HADOOP-9613.012.incompatible.patch, HADOOP-9613.013.incompatible.patch, 
> HADOOP-9613.1.patch, HADOOP-9613.2.patch, HADOOP-9613.3.patch, 
> HADOOP-9613.patch
>
>
> Update pom.xml dependencies exposed when running a mvn-rpmbuild against 
> system dependencies on Fedora 18.  
> The existing version is 1.8 which is quite old. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9613) [JDK8] Update jersey version to latest 1.x release

2016-03-30 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-9613:
---
Status: Patch Available  (was: Open)

> [JDK8] Update jersey version to latest 1.x release
> --
>
> Key: HADOOP-9613
> URL: https://issues.apache.org/jira/browse/HADOOP-9613
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 2.4.0, 3.0.0
>Reporter: Timothy St. Clair
>Assignee: Tsuyoshi Ozawa
>  Labels: maven
> Attachments: HADOOP-2.2.0-9613.patch, 
> HADOOP-9613.004.incompatible.patch, HADOOP-9613.005.incompatible.patch, 
> HADOOP-9613.006.incompatible.patch, HADOOP-9613.007.incompatible.patch, 
> HADOOP-9613.008.incompatible.patch, HADOOP-9613.009.incompatible.patch, 
> HADOOP-9613.010.incompatible.patch, HADOOP-9613.011.incompatible.patch, 
> HADOOP-9613.012.incompatible.patch, HADOOP-9613.013.incompatible.patch, 
> HADOOP-9613.1.patch, HADOOP-9613.2.patch, HADOOP-9613.3.patch, 
> HADOOP-9613.patch
>
>
> Update pom.xml dependencies exposed when running a mvn-rpmbuild against 
> system dependencies on Fedora 18.  
> The existing version is 1.8 which is quite old. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11628) SPNEGO auth does not work with CNAMEs in JDK8

2016-03-30 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15217768#comment-15217768
 ] 

Steve Loughran commented on HADOOP-11628:
-

Revisiting this: this isn't in 2.7... do we need to backport it?

> SPNEGO auth does not work with CNAMEs in JDK8
> -
>
> Key: HADOOP-11628
> URL: https://issues.apache.org/jira/browse/HADOOP-11628
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Blocker
>  Labels: jdk8
> Fix For: 2.8.0
>
> Attachments: HADOOP-11628.patch
>
>
> Pre-JDK8, GSSName auto-canonicalized the hostname when constructing the 
> principal for SPNEGO.  JDK8 no longer does this which breaks the use of 
> user-friendly CNAMEs for services.
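
A sketch of the canonicalization idea: resolve the CNAME ourselves before 
building the principal, since JDK8's GSSName no longer does it for us 
(variable names are illustrative):
{code}
// Canonicalize the user-facing CNAME to the service's real hostname.
String canonical = InetAddress.getByName(hostname).getCanonicalHostName();
String serverPrincipal = "HTTP/" + canonical;
{code}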



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11858) [JDK8] Set minimum version of Hadoop 3 to JDK 8

2016-03-30 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15217761#comment-15217761
 ] 

Steve Loughran commented on HADOOP-11858:
-

Given it's 2016, is it time to raise this topic on the dev@ list again?

> [JDK8] Set minimum version of Hadoop 3 to JDK 8
> ---
>
> Key: HADOOP-11858
> URL: https://issues.apache.org/jira/browse/HADOOP-11858
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
> Attachments: HADOOP-11858.001.patch, HADOOP-11858.002.patch
>
>
> Set minimum version of trunk to JDK 8



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12972) Lz4Compressor#getLibraryName returns the wrong version number

2016-03-30 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15217620#comment-15217620
 ] 

John Zhuge commented on HADOOP-12972:
-

If we want to match the Hadoop lz4 library version with the OS lz4 command 
version, shall we make the following change instead of Patch 001?
{code}
Java_org_apache_hadoop_io_compress_lz4_Lz4Compressor_getLibraryName(
  JNIEnv *env, jclass class
  ) {
  return (*env)->NewStringUTF(env, "r123");
}
{code}
If we take this approach, we must add a comment reminding whoever upgrades the 
LZ4 revision to update this hardcoded value.

> Lz4Compressor#getLibraryName returns the wrong version number
> -
>
> Key: HADOOP-12972
> URL: https://issues.apache.org/jira/browse/HADOOP-12972
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: Colin Patrick McCabe
>Priority: Trivial
> Fix For: 2.8.0
>
> Attachments: HADOOP-12972.001.patch
>
>
> HADOOP-11184 updated lz4 to "r123", but {{hadoop checknative -a}} still 
> prints "revision:99".
> {code}
> $ hadoop checknative -a
> 16/03/29 11:42:40 INFO bzip2.Bzip2Factory: Successfully loaded & initialized 
> native-bzip2 library system-native
> 16/03/29 11:42:40 INFO zlib.ZlibFactory: Successfully loaded & initialized 
> native-zlib library
> Native library checking:
> hadoop:  true 
> /opt/cloudera/parcels/CDH-5.8.0-1.cdh5.8.0.p0.1209/lib/hadoop/lib/native/libhadoop.so.1.0.0
> zlib:true /lib64/libz.so.1
> snappy:  true 
> /opt/cloudera/parcels/CDH-5.8.0-1.cdh5.8.0.p0.1209/lib/hadoop/lib/native/libsnappy.so.1
> lz4: true revision:99
> bzip2:   true /lib64/libbz2.so.1
> openssl: true /usr/lib64/libcrypto.so
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12972) Lz4Compressor#getLibraryName returns the wrong version number

2016-03-30 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15217606#comment-15217606
 ] 

John Zhuge commented on HADOOP-12972:
-

Thanks [~cmccabe] for the fix. Here is the test output of the patch:
{code}
$ hadoop checknative
16/03/30 00:13:19 INFO bzip2.Bzip2Factory: Successfully loaded & initialized 
native-bzip2 library system-native
16/03/30 00:13:19 INFO zlib.ZlibFactory: Successfully loaded & initialized 
native-zlib library
Native library checking:
hadoop:  true 
/opt/cloudera/parcels/CDH-5.8.0-1.cdh5.8.0.p0.26/lib/hadoop/lib/native/libhadoop.so.1.0.0
zlib:true /lib/x86_64-linux-gnu/libz.so.1
snappy:  false 
lz4: true revision:10301
bzip2:   true /lib/x86_64-linux-gnu/libbz2.so.1
openssl: true /usr/lib/x86_64-linux-gnu/libcrypto.so
{code}

Ubuntu 14.04 lz4 -h output:
{code}
$ lz4 -h
*** LZ4 Compression CLI 64-bits r114, by Yann Collet (Apr 14 2014) ***
Usage :
  lz4 [arg] [input] [output]
...
{code}

{{hadoop checknative}} shows the library version "revision:10301", while the 
Linux {{lz4 -h}} output shows the CLI version "r114". There is no way to match 
the library version ({{LZ4_VERSION_NUMBER}} in lz4.h) with the CLI version 
({{LZ4_VERSION}} in lz4cli.c) without the source code, even if the mapping is 
1-to-1.

> Lz4Compressor#getLibraryName returns the wrong version number
> -
>
> Key: HADOOP-12972
> URL: https://issues.apache.org/jira/browse/HADOOP-12972
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: Colin Patrick McCabe
>Priority: Trivial
> Fix For: 2.8.0
>
> Attachments: HADOOP-12972.001.patch
>
>
> HADOOP-11184 updated lz4 to "r123", but {{hadoop checknative -a}} still 
> prints "revision:99".
> {code}
> $ hadoop checknative -a
> 16/03/29 11:42:40 INFO bzip2.Bzip2Factory: Successfully loaded & initialized 
> native-bzip2 library system-native
> 16/03/29 11:42:40 INFO zlib.ZlibFactory: Successfully loaded & initialized 
> native-zlib library
> Native library checking:
> hadoop:  true 
> /opt/cloudera/parcels/CDH-5.8.0-1.cdh5.8.0.p0.1209/lib/hadoop/lib/native/libhadoop.so.1.0.0
> zlib:true /lib64/libz.so.1
> snappy:  true 
> /opt/cloudera/parcels/CDH-5.8.0-1.cdh5.8.0.p0.1209/lib/hadoop/lib/native/libsnappy.so.1
> lz4: true revision:99
> bzip2:   true /lib64/libbz2.so.1
> openssl: true /usr/lib64/libcrypto.so
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12412) Concurrency in FileSystem$Cache is very broken

2016-03-30 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-12412:
---
Target Version/s:   (was: 2.7.3)
Priority: Major  (was: Critical)

As Michael mentioned, reduced the priority and removed the target version.

> Concurrency in FileSystem$Cache is very broken
> --
>
> Key: HADOOP-12412
> URL: https://issues.apache.org/jira/browse/HADOOP-12412
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.7.0
>Reporter: Michael Harris
>Assignee: Michael Harris
> Attachments: HADOOP-12412.patch, HADOOP-12412.patch
>
>
> The FileSystem cache uses a mild amount of concurrency to protect the cache 
> itself, but does nothing to prevent multiple of the same filesystem from 
> being constructed and initialized simultaneously.  At best, this leads to 
> potentially expensive wasted work.  At worst, as is the case for Spark, it 
> can lead to deadlocks/livelocks, especially when the same configuration 
> object is passed into both calls.  This should be refactored to use a results 
> cache approach (reference Java Concurrency in Practice chapter 5 section 6 
> for an example of how to do this correctly), which will be both 
> higher-performance and safer.
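
A sketch of that results-cache approach, with hypothetical Key and 
createFileSystem helpers:
{code}
// JCiP-style results cache: cache a Future for each filesystem so at
// most one thread performs the expensive initialization.
private final ConcurrentMap<Key, Future<FileSystem>> cache =
    new ConcurrentHashMap<>();

FileSystem getInternal(Key key) throws Exception {
  Future<FileSystem> f = cache.get(key);
  if (f == null) {
    FutureTask<FileSystem> task =
        new FutureTask<>(() -> createFileSystem(key));
    f = cache.putIfAbsent(key, task);
    if (f == null) {   // this thread won the race and does the init
      f = task;
      task.run();
    }
  }
  return f.get();      // other threads block here instead of re-initializing
}
{code}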



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12970) Intermittent signature match failures in S3AFileSystem due to connection closure

2016-03-30 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15217513#comment-15217513
 ] 

Harsh J commented on HADOOP-12970:
--

Just for posterity: I did get my upstream patch into aws-sdk-java: 
https://github.com/aws/aws-sdk-java/blob/1.10.65/aws-java-sdk-s3/src/main/java/com/amazonaws/services/s3/internal/AbstractS3ResponseHandler.java#L58,
 it's available in AWS Java SDK versions 1.10.65 onwards, so using that version 
in the future (within hadoop-aws) will resolve the issue too.

> Intermittent signature match failures in S3AFileSystem due to connection closure
> -
>
> Key: HADOOP-12970
> URL: https://issues.apache.org/jira/browse/HADOOP-12970
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.7.0
>Reporter: Harsh J
>Assignee: Harsh J
> Attachments: HADOOP-12970.patch, HADOOP-12970.patch
>
>
> S3AFileSystem's use of {{ObjectMetadata#clone()}} method inside the 
> {{copyFile}} implementation may fail in circumstances where the connection 
> used for obtaining the metadata is closed by the server (i.e. response 
> carries a {{Connection: close}} header). Due to this header not being 
> stripped away when the {{ObjectMetadata}} is created, and due to us cloning 
> it for use in the next {{CopyObjectRequest}}, it causes the request to use 
> {{Connection: close}} headers as a part of itself.
> This causes signer related exceptions because the client now includes the 
> {{Connection}} header as part of the {{SignedHeaders}}, but the S3 server 
> does not receive the same value for it ({{Connection}} headers are likely 
> stripped away before the S3 Server tries to match signature hashes), causing 
> a failure like below:
> {code}
> 2016-03-29 19:59:30,120 DEBUG [s3a-transfer-shared--pool1-t35] 
> org.apache.http.wire: >> "Authorization: AWS4-HMAC-SHA256 
> Credential=XXX/20160329/eu-central-1/s3/aws4_request, 
> SignedHeaders=accept-ranges;connection;content-length;content-type;etag;host;last-modified;user-agent;x-amz-acl;x-amz-content-sha256;x-amz-copy-source;x-amz-date;x-amz-metadata-directive;x-amz-server-side-encryption;x-amz-version-id,
>  Signature=MNOPQRSTUVWXYZ[\r][\n]"
> …
> com.amazonaws.services.s3.model.AmazonS3Exception: The request signature we 
> calculated does not match the signature you provided. Check your key and 
> signing method. (Service: Amazon S3; Status Code: 403; Error Code: 
> SignatureDoesNotMatch; Request ID: ABC), S3 Extended Request ID: XYZ
> {code}
> This is intermittent because the S3 Server does not always add a 
> {{Connection: close}} directive in its response, but whenever we receive it 
> AND we clone it, the above exception would happen for the copy request. The 
> copy request is often used in the context of FileOutputCommitter, when a lot 
> of the MR attempt files on {{s3a://}} destination filesystem are to be moved 
> to their parent directories post-commit.
> I've also submitted a fix upstream with AWS Java SDK to strip out the 
> {{Connection}} headers when dealing with {{ObjectMetadata}}, which is pending 
> acceptance and release at: https://github.com/aws/aws-sdk-java/pull/669, but 
> until that release is available and can be used by us, we'll need to 
> workaround the clone approach by manually excluding the {{Connection}} header 
> (not straight-forward due to the {{metadata}} object being private with no 
> mutable access). We can remove such a change in future when there's a release 
> available with the upstream fix.
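
A sketch of such a workaround, where srcom stands for the source object's 
metadata; illustrative only, not the committed patch:
{code}
// Copy the metadata by hand, dropping the Connection header,
// instead of calling clone() on it.
ObjectMetadata stripped = new ObjectMetadata();
for (Map.Entry<String, Object> header : srcom.getRawMetadata().entrySet()) {
  if (!"Connection".equalsIgnoreCase(header.getKey())) {
    stripped.setHeader(header.getKey(), header.getValue());
  }
}
{code}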



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)