[jira] [Updated] (HADOOP-14295) Authentication proxy filter may fail authorization because of getRemoteAddr

2017-05-04 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-14295:
-
Target Version/s: 3.0.0-alpha3  (was: 3.0.0-alpha2)

> Authentication proxy filter may fail authorization because of getRemoteAddr
> ---
>
> Key: HADOOP-14295
> URL: https://issues.apache.org/jira/browse/HADOOP-14295
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.7.4, 3.0.0-alpha2, 2.8.1
>Reporter: Jeffrey E  Rodriguez
>Assignee: Jeffrey E  Rodriguez
>Priority: Critical
> Attachments: hadoop-14295.001.patch, HADOOP-14295.002.patch, 
> HADOOP-14295.003.patch, HADOOP-14295.004.patch
>
>
> When we turn on Hadoop UI Kerberos and try to access the Datanode /logs page, the 
> proxy (Knox) gets an authorization failure, and its host shows as 
> 127.0.0.1 even though Knox wasn't running on the Datanode's local host. Error message:
> {quote}
> "2017-04-08 07:01:23,029 ERROR security.AuthenticationWithProxyUserFilter 
> (AuthenticationWithProxyUserFilter.java:getRemoteUser(94)) - Unable to verify 
> proxy user: Unauthorized connection for super-user: knox from IP 127.0.0.1"
> {quote}
> We were able to figure out that the Datanode has Jetty listening on localhost 
> and that Netty is used to serve requests to the Datanode; this was a measure to 
> improve performance, taking advantage of Netty's async NIO design.
> I propose adding a check for the X-Forwarded-For header, since proxies usually 
> inject that header, before we fall back to getRemoteAddr.
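A minimal sketch of the proposed check, assuming a servlet request context (the helper 
below is illustrative, not the actual patch): prefer the client entry of 
X-Forwarded-For and fall back to getRemoteAddr() only when the header is absent.

{code:java}
import javax.servlet.http.HttpServletRequest;

public final class RemoteAddressUtil {
  private RemoteAddressUtil() {}

  /**
   * Illustrative helper: prefer the client address a proxy injects via
   * X-Forwarded-For, falling back to the socket-level remote address.
   */
  public static String effectiveRemoteAddr(HttpServletRequest request) {
    String forwarded = request.getHeader("X-Forwarded-For");
    if (forwarded != null && !forwarded.isEmpty()) {
      // X-Forwarded-For may carry a comma-separated chain of addresses;
      // the first entry is the originating client.
      return forwarded.split(",")[0].trim();
    }
    return request.getRemoteAddr();
  }
}
{code}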






[jira] [Updated] (HADOOP-14386) Make trunk work with Guava 11.0.2 again

2017-05-04 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-14386:
-
Status: Patch Available  (was: Open)

> Make trunk work with Guava 11.0.2 again
> ---
>
> Key: HADOOP-14386
> URL: https://issues.apache.org/jira/browse/HADOOP-14386
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha3
>Reporter: Andrew Wang
> Attachments: HADOOP-14386.001.patch
>
>
> As an alternative to reverting or shading HADOOP-10101 (the upgrade of Guava 
> from 11.0.2 to 21.0), HADOOP-14380 makes the Guava version configurable. 
> However, it still doesn't compile with Guava 11.0.2, since HADOOP-10101 chose 
> to use the moved Guava classes rather than replacing them with alternatives.
> This JIRA aims to make Hadoop compatible with Guava 11.0.2 as well as 21.0 by 
> replacing usage of these moved Guava classes.
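As a hypothetical illustration of the kind of replacement involved (not drawn from 
the patch): Guava's Charsets constants have a JDK equivalent in StandardCharsets, so 
code rewritten this way compiles identically against Guava 11.0.2 and 21.0.

{code:java}
import java.nio.charset.StandardCharsets;

public class GuavaFreeEncoding {
  // Previously: s.getBytes(com.google.common.base.Charsets.UTF_8).
  // The JDK alternative behaves the same and is independent of the
  // Guava version on the classpath.
  public static byte[] utf8Bytes(String s) {
    return s.getBytes(StandardCharsets.UTF_8);
  }
}
{code}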






[jira] [Updated] (HADOOP-14386) Make trunk work with Guava 11.0.2 again

2017-05-04 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-14386:
-
Attachment: HADOOP-14386.001.patch

Here's a WIP patch. We also need HADOOP-14382.

The QJM tests are broken by this change; they depend on a direct 
ExecutorService to work. I took a quick look at the Guava implementation and 
it's non-trivial, though it is possible to copy-paste it in.

If someone else wants to run with this patch, please be my guest.
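For context, a bare-bones direct (same-thread) ExecutorService looks roughly like the 
sketch below. It deliberately omits the bookkeeping for tasks still running at 
shutdown, which is exactly the part that makes the Guava implementation non-trivial 
to copy.

{code:java}
import java.util.Collections;
import java.util.List;
import java.util.concurrent.AbstractExecutorService;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.TimeUnit;

/** Runs every task on the caller's thread. Illustrative sketch only. */
public class DirectExecutorService extends AbstractExecutorService {
  private volatile boolean shutdown = false;

  @Override
  public void execute(Runnable command) {
    if (shutdown) {
      throw new RejectedExecutionException("executor has been shut down");
    }
    command.run(); // same-thread execution: no queue, no worker threads
  }

  @Override public void shutdown() { shutdown = true; }
  @Override public List<Runnable> shutdownNow() { shutdown = true; return Collections.emptyList(); }
  @Override public boolean isShutdown() { return shutdown; }
  @Override public boolean isTerminated() { return shutdown; }
  @Override public boolean awaitTermination(long timeout, TimeUnit unit) { return shutdown; }
}
{code}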

> Make trunk work with Guava 11.0.2 again
> ---
>
> Key: HADOOP-14386
> URL: https://issues.apache.org/jira/browse/HADOOP-14386
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha3
>Reporter: Andrew Wang
> Attachments: HADOOP-14386.001.patch
>
>
> As an alternative to reverting or shading HADOOP-10101 (the upgrade of Guava 
> from 11.0.2 to 21.0), HADOOP-14380 makes the Guava version configurable. 
> However, it still doesn't compile with Guava 11.0.2, since HADOOP-10101 chose 
> to use the moved Guava classes rather than replacing them with alternatives.
> This JIRA aims to make Hadoop compatible with Guava 11.0.2 as well as 21.0 by 
> replacing usage of these moved Guava classes.






[jira] [Created] (HADOOP-14386) Make trunk work with Guava 11.0.2 again

2017-05-04 Thread Andrew Wang (JIRA)
Andrew Wang created HADOOP-14386:


 Summary: Make trunk work with Guava 11.0.2 again
 Key: HADOOP-14386
 URL: https://issues.apache.org/jira/browse/HADOOP-14386
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.0.0-alpha3
Reporter: Andrew Wang


As an alternative to reverting or shading HADOOP-10101 (the upgrade of Guava 
from 11.0.2 to 21.0), HADOOP-14380 makes the Guava version configurable. 
However, it still doesn't compile with Guava 11.0.2, since HADOOP-10101 chose 
to use the moved Guava classes rather than replacing them with alternatives.

This JIRA aims to make Hadoop compatible with Guava 11.0.2 as well as 21.0 by 
replacing usage of these moved Guava classes.






[jira] [Commented] (HADOOP-14364) refresh changelog/release notes with newer Apache Yetus build

2017-05-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15997799#comment-15997799
 ] 

Hadoop QA commented on HADOOP-14364:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} @author {color} | {color:red}  0m  
0s{color} | {color:red} The patch appears to contain 2 @author tags which the 
community has agreed to not allow in code contributions. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 10m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
28s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
14s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  5m 
51s{color} | {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 44 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
1s{color} | {color:red} The patch has 1 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
30s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}110m  0s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}204m  3s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14364 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12866528/HADOOP-14364.00.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux 741d02a4d704 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 3082552 |
| Default Java | 1.8.0_131 |
| @author | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12244/artifact/patchprocess/author-tags.txt
 |
| mvnsite | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12244/artifact/patchprocess/patch-mvnsite-root.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12244/artifact/patchprocess/whitespace-eol.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12244/artifact/patchprocess/whitespace-tabs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12244/artifact/patchprocess/patch-unit-root.txt
 |
|  Test Results | 
https://builds.apache.

[jira] [Commented] (HADOOP-14384) Reduce the visibility of {{FileSystem#newFSDataOutputStreamBuilder}} before the API becomes stable

2017-05-04 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15997757#comment-15997757
 ] 

Anu Engineer commented on HADOOP-14384:
---

[~eddyxu] Thanks for the patch. Just so that we are all in the loop, can you 
please share your thoughts on why you would like to make this private?

Here is what I am trying to understand:
* Why make this private if we are not planning to ship this?
* Given the comments on the original JIRA, I would presume that this is going 
to be reworked to an extent. Are you concerned that someone might use this 
accidentally? If that is the concern, I would suggest that we revert this 
instead of making it a private function. If it is not ready, it is not ready; 
putting a Band-Aid on it is not going to help much.

> Reduce the visibility of {{FileSystem#newFSDataOutputStreamBuilder}} before 
> the API becomes stable
> --
>
> Key: HADOOP-14384
> URL: https://issues.apache.org/jira/browse/HADOOP-14384
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.9.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
> Attachments: HADOOP-14384.00.patch, HADOOP-14384.01.patch
>
>
> Before {{HADOOP-14365}} finishes, we should limit this API to the Hadoop 
> project to prevent it from being used by end users or other projects.
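A hedged illustration of one way to scope such an API (schematic only; the real 
method signature and the actual patch may differ): Hadoop's audience/stability 
annotations, combined with reduced visibility, keep an evolving API inside the 
project until it stabilizes.

{code:java}
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;

public class FileSystemSketch {
  /**
   * Schematic fragment, not the real signature: project-private audience
   * plus protected visibility signal "do not use outside Hadoop yet".
   */
  @InterfaceAudience.Private
  @InterfaceStability.Unstable
  protected Object newFSDataOutputStreamBuilder() {
    throw new UnsupportedOperationException("sketch only");
  }
}
{code}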






[jira] [Updated] (HADOOP-14379) In federation mode, the "hdfs dfsadmin -report" command cannot be used

2017-05-04 Thread lixinglong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lixinglong updated HADOOP-14379:

Attachment: HADOOP-14379.002.patch

> In federation mode, the "hdfs dfsadmin -report" command cannot be used
> --
>
> Key: HADOOP-14379
> URL: https://issues.apache.org/jira/browse/HADOOP-14379
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: lixinglong
> Attachments: HADOOP-14379.001.patch, HADOOP-14379.002.patch
>
>
> In federation mode, the "hdfs dfsadmin -report" command cannot be used, as follows:
> hdfs@zdh102:~> hdfs dfsadmin -report
> report: FileSystem viewfs://nsX/ is not an HDFS file system
> Usage: hdfs dfsadmin [-report] [-live] [-dead] [-decommissioning]
> After adding the new feature, the "hdfs dfsadmin -report" command can be used, as 
> follows:
> hdfs@zdh102:~> hdfs dfsadmin -report
> hdfs://nameservice
> Configured Capacity: 2682939863040 (2.44 TB)
> Present Capacity: 2170811387904 (1.97 TB)
> DFS Remaining: 2170810589184 (1.97 TB)
> DFS Used: 798720 (780 KB)
> DFS Used%: 0.00%
> Under replicated blocks: 0
> Blocks with corrupt replicas: 0
> Missing blocks: 0
> Missing blocks (with replication factor 1): 0
> -
> Live datanodes (4):
> Name: 10.43.156.126:50010 (zdh126)
> Hostname: zdh126
> Rack: /default
> Decommission Status : Normal
> Configured Capacity: 1578524934144 (1.44 TB)
> DFS Used: 217088 (212 KB)
> Non DFS Used: 163274125312 (152.06 GB)
> DFS Remaining: 1415250591744 (1.29 TB)
> DFS Used%: 0.00%
> DFS Remaining%: 89.66%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 2
> Last contact: Thu May 04 10:44:41 CST 2017
> Name: 10.43.183.102:50010 (zdh102)
> Hostname: zdh102
> Rack: /default
> Decommission Status : Normal
> Configured Capacity: 579556651008 (539.75 GB)
> DFS Used: 147456 (144 KB)
> Non DFS Used: 211151990784 (196.65 GB)
> DFS Remaining: 368404512768 (343.10 GB)
> DFS Used%: 0.00%
> DFS Remaining%: 63.57%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 2
> Last contact: Thu May 04 10:44:41 CST 2017
> Name: 10.43.183.103:50010 (zdh103)
> Hostname: zdh103
> Rack: /default
> Decommission Status : Normal
> Configured Capacity: 262429138944 (244.41 GB)
> DFS Used: 217088 (212 KB)
> Non DFS Used: 68901670912 (64.17 GB)
> DFS Remaining: 193527250944 (180.24 GB)
> DFS Used%: 0.00%
> DFS Remaining%: 73.74%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 2
> Last contact: Thu May 04 10:44:42 CST 2017
> Name: 10.43.183.104:50010 (zdh104)
> Hostname: zdh104
> Rack: /default
> Decommission Status : Normal
> Configured Capacity: 262429138944 (244.41 GB)
> DFS Used: 217088 (212 KB)
> Non DFS Used: 68800688128 (64.08 GB)
> DFS Remaining: 193628233728 (180.33 GB)
> DFS Used%: 0.00%
> DFS Remaining%: 73.78%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 2
> Last contact: Thu May 04 10:44:41 CST 2017
> hdfs://nameservice1
> Configured Capacity: 2682939863040 (2.44 TB)
> Present Capacity: 2170811387904 (1.97 TB)
> DFS Remaining: 2170810589184 (1.97 TB)
> DFS Used: 798720 (780 KB)
> DFS Used%: 0.00%
> Under replicated blocks: 0
> Blocks with corrupt replicas: 0
> Missing blocks: 0
> Missing blocks (with replication factor 1): 0
> -
> Live datanodes (4):
> Name: 10.43.156.126:50010 (zdh126)
> Hostname: zdh126
> Rack: /default
> Decommission Status : Normal
> Configured Capacity: 1578524934144 (1.44 TB)
> DFS Used: 217088 (212 KB)
> Non DFS Used: 163274125312 (152.06 GB)
> DFS Remaining: 1415250591744 (1.29 TB)
> DFS Used%: 0.00%
> DFS Remaining%: 89.66%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 2
> Last contact: Thu May 04 12:36:06 CST 2017
> Name: 10.43.183.102:50010 (zdh102)
> Hostname: zdh102
> Rack: /default
> Decommission Status : Normal
> Configured Capacity: 579556651008 (539.75 GB)
> DFS Used: 147456 (144 KB)
> Non DFS Used: 211151990784 (196.65 GB)
> DFS Remaining: 368404512768 (343.10 GB)
> DFS Used%: 0.00%
> DFS Remaining%: 63.57%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 2
> Last contact: Thu May 04 12:36:05 CST 2017
> Name: 10.43.183.103:50010 (zdh103)
> Hostname: zdh103
> Rack: /default
> Decommission Status : Normal
> Configured Capacity: 262429138944 (244.41 
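For orientation, a rough sketch of how a federation-aware report could work (the 
class and flow below are assumptions, not the actual patch): when the default 
filesystem is a viewfs:// mount table, resolve the child filesystems and report 
each HDFS namespace in turn.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class FederatedReportSketch {
  /** Rough sketch: report each namespace mounted under a viewfs:// root. */
  public static void report(Configuration conf) throws Exception {
    FileSystem fs = FileSystem.get(conf);
    for (FileSystem child : fs.getChildFileSystems()) {
      if (child instanceof DistributedFileSystem) {
        System.out.println(child.getUri());
        // ... gather and print capacity and datanode details per namespace
      }
    }
  }
}
{code}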

[jira] [Updated] (HADOOP-14379) In federation mode, the "hdfs dfsadmin -report" command cannot be used

2017-05-04 Thread lixinglong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lixinglong updated HADOOP-14379:

Attachment: (was: HADOOP-14374.002.patch)

> In federation mode, the "hdfs dfsadmin -report" command cannot be used
> --
>
> Key: HADOOP-14379
> URL: https://issues.apache.org/jira/browse/HADOOP-14379
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: lixinglong
> Attachments: HADOOP-14379.001.patch
>
>
> In federation mode, the "hdfs dfsadmin -report" command cannot be used, as follows:
> hdfs@zdh102:~> hdfs dfsadmin -report
> report: FileSystem viewfs://nsX/ is not an HDFS file system
> Usage: hdfs dfsadmin [-report] [-live] [-dead] [-decommissioning]
> After adding the new feature, the "hdfs dfsadmin -report" command can be used, as 
> follows:
> hdfs@zdh102:~> hdfs dfsadmin -report
> hdfs://nameservice
> Configured Capacity: 2682939863040 (2.44 TB)
> Present Capacity: 2170811387904 (1.97 TB)
> DFS Remaining: 2170810589184 (1.97 TB)
> DFS Used: 798720 (780 KB)
> DFS Used%: 0.00%
> Under replicated blocks: 0
> Blocks with corrupt replicas: 0
> Missing blocks: 0
> Missing blocks (with replication factor 1): 0
> -
> Live datanodes (4):
> Name: 10.43.156.126:50010 (zdh126)
> Hostname: zdh126
> Rack: /default
> Decommission Status : Normal
> Configured Capacity: 1578524934144 (1.44 TB)
> DFS Used: 217088 (212 KB)
> Non DFS Used: 163274125312 (152.06 GB)
> DFS Remaining: 1415250591744 (1.29 TB)
> DFS Used%: 0.00%
> DFS Remaining%: 89.66%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 2
> Last contact: Thu May 04 10:44:41 CST 2017
> Name: 10.43.183.102:50010 (zdh102)
> Hostname: zdh102
> Rack: /default
> Decommission Status : Normal
> Configured Capacity: 579556651008 (539.75 GB)
> DFS Used: 147456 (144 KB)
> Non DFS Used: 211151990784 (196.65 GB)
> DFS Remaining: 368404512768 (343.10 GB)
> DFS Used%: 0.00%
> DFS Remaining%: 63.57%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 2
> Last contact: Thu May 04 10:44:41 CST 2017
> Name: 10.43.183.103:50010 (zdh103)
> Hostname: zdh103
> Rack: /default
> Decommission Status : Normal
> Configured Capacity: 262429138944 (244.41 GB)
> DFS Used: 217088 (212 KB)
> Non DFS Used: 68901670912 (64.17 GB)
> DFS Remaining: 193527250944 (180.24 GB)
> DFS Used%: 0.00%
> DFS Remaining%: 73.74%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 2
> Last contact: Thu May 04 10:44:42 CST 2017
> Name: 10.43.183.104:50010 (zdh104)
> Hostname: zdh104
> Rack: /default
> Decommission Status : Normal
> Configured Capacity: 262429138944 (244.41 GB)
> DFS Used: 217088 (212 KB)
> Non DFS Used: 68800688128 (64.08 GB)
> DFS Remaining: 193628233728 (180.33 GB)
> DFS Used%: 0.00%
> DFS Remaining%: 73.78%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 2
> Last contact: Thu May 04 10:44:41 CST 2017
> hdfs://nameservice1
> Configured Capacity: 2682939863040 (2.44 TB)
> Present Capacity: 2170811387904 (1.97 TB)
> DFS Remaining: 2170810589184 (1.97 TB)
> DFS Used: 798720 (780 KB)
> DFS Used%: 0.00%
> Under replicated blocks: 0
> Blocks with corrupt replicas: 0
> Missing blocks: 0
> Missing blocks (with replication factor 1): 0
> -
> Live datanodes (4):
> Name: 10.43.156.126:50010 (zdh126)
> Hostname: zdh126
> Rack: /default
> Decommission Status : Normal
> Configured Capacity: 1578524934144 (1.44 TB)
> DFS Used: 217088 (212 KB)
> Non DFS Used: 163274125312 (152.06 GB)
> DFS Remaining: 1415250591744 (1.29 TB)
> DFS Used%: 0.00%
> DFS Remaining%: 89.66%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 2
> Last contact: Thu May 04 12:36:06 CST 2017
> Name: 10.43.183.102:50010 (zdh102)
> Hostname: zdh102
> Rack: /default
> Decommission Status : Normal
> Configured Capacity: 579556651008 (539.75 GB)
> DFS Used: 147456 (144 KB)
> Non DFS Used: 211151990784 (196.65 GB)
> DFS Remaining: 368404512768 (343.10 GB)
> DFS Used%: 0.00%
> DFS Remaining%: 63.57%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 2
> Last contact: Thu May 04 12:36:05 CST 2017
> Name: 10.43.183.103:50010 (zdh103)
> Hostname: zdh103
> Rack: /default
> Decommission Status : Normal
> Configured Capacity: 262429138944 (244.41 GB)
> DFS Use

[jira] [Updated] (HADOOP-14379) In federation mode, the "hdfs dfsadmin -report" command cannot be used

2017-05-04 Thread lixinglong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lixinglong updated HADOOP-14379:

Attachment: HADOOP-14374.002.patch

> In federation mode, the "hdfs dfsadmin -report" command cannot be used
> --
>
> Key: HADOOP-14379
> URL: https://issues.apache.org/jira/browse/HADOOP-14379
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: lixinglong
> Attachments: HADOOP-14374.002.patch, HADOOP-14379.001.patch
>
>
> In federation mode, the "hdfs dfsadmin -report" command cannot be used, as follows:
> hdfs@zdh102:~> hdfs dfsadmin -report
> report: FileSystem viewfs://nsX/ is not an HDFS file system
> Usage: hdfs dfsadmin [-report] [-live] [-dead] [-decommissioning]
> After adding the new feature, the "hdfs dfsadmin -report" command can be used, as 
> follows:
> hdfs@zdh102:~> hdfs dfsadmin -report
> hdfs://nameservice
> Configured Capacity: 2682939863040 (2.44 TB)
> Present Capacity: 2170811387904 (1.97 TB)
> DFS Remaining: 2170810589184 (1.97 TB)
> DFS Used: 798720 (780 KB)
> DFS Used%: 0.00%
> Under replicated blocks: 0
> Blocks with corrupt replicas: 0
> Missing blocks: 0
> Missing blocks (with replication factor 1): 0
> -
> Live datanodes (4):
> Name: 10.43.156.126:50010 (zdh126)
> Hostname: zdh126
> Rack: /default
> Decommission Status : Normal
> Configured Capacity: 1578524934144 (1.44 TB)
> DFS Used: 217088 (212 KB)
> Non DFS Used: 163274125312 (152.06 GB)
> DFS Remaining: 1415250591744 (1.29 TB)
> DFS Used%: 0.00%
> DFS Remaining%: 89.66%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 2
> Last contact: Thu May 04 10:44:41 CST 2017
> Name: 10.43.183.102:50010 (zdh102)
> Hostname: zdh102
> Rack: /default
> Decommission Status : Normal
> Configured Capacity: 579556651008 (539.75 GB)
> DFS Used: 147456 (144 KB)
> Non DFS Used: 211151990784 (196.65 GB)
> DFS Remaining: 368404512768 (343.10 GB)
> DFS Used%: 0.00%
> DFS Remaining%: 63.57%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 2
> Last contact: Thu May 04 10:44:41 CST 2017
> Name: 10.43.183.103:50010 (zdh103)
> Hostname: zdh103
> Rack: /default
> Decommission Status : Normal
> Configured Capacity: 262429138944 (244.41 GB)
> DFS Used: 217088 (212 KB)
> Non DFS Used: 68901670912 (64.17 GB)
> DFS Remaining: 193527250944 (180.24 GB)
> DFS Used%: 0.00%
> DFS Remaining%: 73.74%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 2
> Last contact: Thu May 04 10:44:42 CST 2017
> Name: 10.43.183.104:50010 (zdh104)
> Hostname: zdh104
> Rack: /default
> Decommission Status : Normal
> Configured Capacity: 262429138944 (244.41 GB)
> DFS Used: 217088 (212 KB)
> Non DFS Used: 68800688128 (64.08 GB)
> DFS Remaining: 193628233728 (180.33 GB)
> DFS Used%: 0.00%
> DFS Remaining%: 73.78%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 2
> Last contact: Thu May 04 10:44:41 CST 2017
> hdfs://nameservice1
> Configured Capacity: 2682939863040 (2.44 TB)
> Present Capacity: 2170811387904 (1.97 TB)
> DFS Remaining: 2170810589184 (1.97 TB)
> DFS Used: 798720 (780 KB)
> DFS Used%: 0.00%
> Under replicated blocks: 0
> Blocks with corrupt replicas: 0
> Missing blocks: 0
> Missing blocks (with replication factor 1): 0
> -
> Live datanodes (4):
> Name: 10.43.156.126:50010 (zdh126)
> Hostname: zdh126
> Rack: /default
> Decommission Status : Normal
> Configured Capacity: 1578524934144 (1.44 TB)
> DFS Used: 217088 (212 KB)
> Non DFS Used: 163274125312 (152.06 GB)
> DFS Remaining: 1415250591744 (1.29 TB)
> DFS Used%: 0.00%
> DFS Remaining%: 89.66%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 2
> Last contact: Thu May 04 12:36:06 CST 2017
> Name: 10.43.183.102:50010 (zdh102)
> Hostname: zdh102
> Rack: /default
> Decommission Status : Normal
> Configured Capacity: 579556651008 (539.75 GB)
> DFS Used: 147456 (144 KB)
> Non DFS Used: 211151990784 (196.65 GB)
> DFS Remaining: 368404512768 (343.10 GB)
> DFS Used%: 0.00%
> DFS Remaining%: 63.57%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 2
> Last contact: Thu May 04 12:36:05 CST 2017
> Name: 10.43.183.103:50010 (zdh103)
> Hostname: zdh103
> Rack: /default
> Decommission Status : Normal
> Configured Capacity: 262429138944 (244.41 

[jira] [Commented] (HADOOP-14379) In federation mode, the "hdfs dfsadmin -report" command cannot be used

2017-05-04 Thread lixinglong (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15997753#comment-15997753
 ] 

lixinglong commented on HADOOP-14379:
-

[~xkrogen] Thank you very much! The patch has been modified in accordance with 
the ideas you suggested.

> In federation mode, the "hdfs dfsadmin -report" command cannot be used
> --
>
> Key: HADOOP-14379
> URL: https://issues.apache.org/jira/browse/HADOOP-14379
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: lixinglong
> Attachments: HADOOP-14374.002.patch, HADOOP-14379.001.patch
>
>
> In federation mode, the "hdfs dfsadmin -report" command cannot be used, as follows:
> hdfs@zdh102:~> hdfs dfsadmin -report
> report: FileSystem viewfs://nsX/ is not an HDFS file system
> Usage: hdfs dfsadmin [-report] [-live] [-dead] [-decommissioning]
> After adding the new feature, the "hdfs dfsadmin -report" command can be used, as 
> follows:
> hdfs@zdh102:~> hdfs dfsadmin -report
> hdfs://nameservice
> Configured Capacity: 2682939863040 (2.44 TB)
> Present Capacity: 2170811387904 (1.97 TB)
> DFS Remaining: 2170810589184 (1.97 TB)
> DFS Used: 798720 (780 KB)
> DFS Used%: 0.00%
> Under replicated blocks: 0
> Blocks with corrupt replicas: 0
> Missing blocks: 0
> Missing blocks (with replication factor 1): 0
> -
> Live datanodes (4):
> Name: 10.43.156.126:50010 (zdh126)
> Hostname: zdh126
> Rack: /default
> Decommission Status : Normal
> Configured Capacity: 1578524934144 (1.44 TB)
> DFS Used: 217088 (212 KB)
> Non DFS Used: 163274125312 (152.06 GB)
> DFS Remaining: 1415250591744 (1.29 TB)
> DFS Used%: 0.00%
> DFS Remaining%: 89.66%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 2
> Last contact: Thu May 04 10:44:41 CST 2017
> Name: 10.43.183.102:50010 (zdh102)
> Hostname: zdh102
> Rack: /default
> Decommission Status : Normal
> Configured Capacity: 579556651008 (539.75 GB)
> DFS Used: 147456 (144 KB)
> Non DFS Used: 211151990784 (196.65 GB)
> DFS Remaining: 368404512768 (343.10 GB)
> DFS Used%: 0.00%
> DFS Remaining%: 63.57%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 2
> Last contact: Thu May 04 10:44:41 CST 2017
> Name: 10.43.183.103:50010 (zdh103)
> Hostname: zdh103
> Rack: /default
> Decommission Status : Normal
> Configured Capacity: 262429138944 (244.41 GB)
> DFS Used: 217088 (212 KB)
> Non DFS Used: 68901670912 (64.17 GB)
> DFS Remaining: 193527250944 (180.24 GB)
> DFS Used%: 0.00%
> DFS Remaining%: 73.74%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 2
> Last contact: Thu May 04 10:44:42 CST 2017
> Name: 10.43.183.104:50010 (zdh104)
> Hostname: zdh104
> Rack: /default
> Decommission Status : Normal
> Configured Capacity: 262429138944 (244.41 GB)
> DFS Used: 217088 (212 KB)
> Non DFS Used: 68800688128 (64.08 GB)
> DFS Remaining: 193628233728 (180.33 GB)
> DFS Used%: 0.00%
> DFS Remaining%: 73.78%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 2
> Last contact: Thu May 04 10:44:41 CST 2017
> hdfs://nameservice1
> Configured Capacity: 2682939863040 (2.44 TB)
> Present Capacity: 2170811387904 (1.97 TB)
> DFS Remaining: 2170810589184 (1.97 TB)
> DFS Used: 798720 (780 KB)
> DFS Used%: 0.00%
> Under replicated blocks: 0
> Blocks with corrupt replicas: 0
> Missing blocks: 0
> Missing blocks (with replication factor 1): 0
> -
> Live datanodes (4):
> Name: 10.43.156.126:50010 (zdh126)
> Hostname: zdh126
> Rack: /default
> Decommission Status : Normal
> Configured Capacity: 1578524934144 (1.44 TB)
> DFS Used: 217088 (212 KB)
> Non DFS Used: 163274125312 (152.06 GB)
> DFS Remaining: 1415250591744 (1.29 TB)
> DFS Used%: 0.00%
> DFS Remaining%: 89.66%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 2
> Last contact: Thu May 04 12:36:06 CST 2017
> Name: 10.43.183.102:50010 (zdh102)
> Hostname: zdh102
> Rack: /default
> Decommission Status : Normal
> Configured Capacity: 579556651008 (539.75 GB)
> DFS Used: 147456 (144 KB)
> Non DFS Used: 211151990784 (196.65 GB)
> DFS Remaining: 368404512768 (343.10 GB)
> DFS Used%: 0.00%
> DFS Remaining%: 63.57%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 2
> Last contact: Thu May 04 12:36:05 CST 2017
> Name: 10.43.183.103:50010 (zdh103)
> H

[jira] [Commented] (HADOOP-14383) Implement FileSystem that reads from HTTP / HTTPS endpoints

2017-05-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15997729#comment-15997729
 ] 

Hadoop QA commented on HADOOP-14383:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
30s{color} | {color:red} hadoop-common-project/hadoop-common in trunk has 19 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
26s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  3s{color} | {color:orange} root: The patch generated 14 new + 0 unchanged - 
0 fixed = 14 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
21s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
50s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 97m 44s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14383 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12866522/HADOOP-14383.000.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  findbugs  checkstyle  |
| uname | Linux 7d49564d0c50 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 308

[jira] [Commented] (HADOOP-14384) Reduce the visibility of {{FileSystem#newFSDataOutputStreamBuilder}} before the API becomes stable

2017-05-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15997728#comment-15997728
 ] 

Hadoop QA commented on HADOOP-14384:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
33s{color} | {color:red} hadoop-common-project/hadoop-common in trunk has 19 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 11s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 67m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestRaceWhenRelogin |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14384 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12866525/HADOOP-14384.01.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d49e46de84e2 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 3082552 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12243/artifact/patchprocess/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12243/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12243/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12243/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Reduce the visibility of {{

[jira] [Updated] (HADOOP-12173) NetworkTopology#add calls NetworkTopology#toString always

2017-05-04 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HADOOP-12173:
-
  Labels: release-blocker  (was: )
Target Version/s: 2.7.4  (was: 2.7.2)

> NetworkTopology#add calls NetworkTopology#toString always
> -
>
> Key: HADOOP-12173
> URL: https://issues.apache.org/jira/browse/HADOOP-12173
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
>  Labels: release-blocker
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HADOOP-12173-v1.patch
>
>
> It always does a toString of the whole topology, but this is not required when 
> there are no errors. This adds a very big overhead on large clusters, as 
> it walks the whole tree every time we add a node to the cluster.
> HADOOP-10953 did some fixing in that area, but the issue is still there.
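A minimal sketch of the general fix pattern (assuming a standard LOG field; not 
necessarily the actual patch): build the expensive topology string only when it will 
be used, e.g. behind a debug guard or on the error path.

{code:java}
// Fragment: NetworkTopology#toString walks the whole tree, so defer it
// until the string is actually needed.
if (LOG.isDebugEnabled()) {
  LOG.debug("NetworkTopology became:\n" + this);
}
{code}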






[jira] [Commented] (HADOOP-13714) Tighten up our compatibility guidelines for Hadoop 3

2017-05-04 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15997699#comment-15997699
 ] 

Allen Wittenauer commented on HADOOP-13714:
---

FWIW:

I've been watching a thread in -dev with interest.  In it, someone has proposed 
putting a CLEARLY MARKED incompatible change into a patch release.  Not a 
single person has said anything other than "looks good!".  A bit ago, that 
issue was changed to target the micro release. 


> Tighten up our compatibility guidelines for Hadoop 3
> 
>
> Key: HADOOP-13714
> URL: https://issues.apache.org/jira/browse/HADOOP-13714
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.7.3
>Reporter: Karthik Kambatla
>Assignee: Daniel Templeton
>Priority: Blocker
> Attachments: HADOOP-13714.WIP-001.patch
>
>
> Our current compatibility guidelines are incomplete and loose. For many 
> categories, we do not have a policy. It would be nice to actually define 
> those policies so our users know what to expect and developers know which 
> releases to target with their changes.






[jira] [Updated] (HADOOP-14364) refresh changelog/release notes with newer Apache Yetus build

2017-05-04 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-14364:
--
Status: Patch Available  (was: Open)

> refresh changelog/release notes with newer Apache Yetus build
> -
>
> Key: HADOOP-14364
> URL: https://issues.apache.org/jira/browse/HADOOP-14364
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, documentation
>Affects Versions: 3.0.0-alpha3
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-14364.00.patch
>
>
> A lot of fixes went into Apache Yetus 0.4.0 wrt releasedocs and how its 
> output gets rendered with mvn site.  We should re-run releasedocs for all 
> releases and refresh the content to use the new formatting.






[jira] [Assigned] (HADOOP-14364) refresh changelog/release notes with newer Apache Yetus build

2017-05-04 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer reassigned HADOOP-14364:
-

Assignee: Allen Wittenauer

> refresh changelog/release notes with newer Apache Yetus build
> -
>
> Key: HADOOP-14364
> URL: https://issues.apache.org/jira/browse/HADOOP-14364
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, documentation
>Affects Versions: 3.0.0-alpha3
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-14364.00.patch
>
>
> A lot of fixes went into Apache Yetus 0.4.0 wrt releasedocs and how its 
> output gets rendered with mvn site.  We should re-run releasedocs for all 
> releases and refresh the content to use the new formatting.






[jira] [Updated] (HADOOP-14364) refresh changelog/release notes with newer Apache Yetus build

2017-05-04 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-14364:
--
Attachment: HADOOP-14364.00.patch

-00:
* Update with 0.5.0-SNAPSHOT
* Change parameters for releasedocmaker to not include the quotes
* Update maven-site-plugin to 3.6 and doxia to 1.8-SNAPSHOT so that the markdown 
renders correctly

> refresh changelog/release notes with newer Apache Yetus build
> -
>
> Key: HADOOP-14364
> URL: https://issues.apache.org/jira/browse/HADOOP-14364
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, documentation
>Affects Versions: 3.0.0-alpha3
>Reporter: Allen Wittenauer
> Attachments: HADOOP-14364.00.patch
>
>
> A lot of fixes went into Apache Yetus 0.4.0 wrt releasedocs and how its 
> output gets rendered with mvn site.  We should re-run releasedocs for all 
> releases and refresh the content to use the new formatting.






[jira] [Commented] (HADOOP-14384) Reduce the visibility of {{FileSystem#newFSDataOutputStreamBuilder}} before the API becomes stable

2017-05-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15997695#comment-15997695
 ] 

Hadoop QA commented on HADOOP-14384:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
23s{color} | {color:red} hadoop-common-project/hadoop-common in trunk has 19 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 47s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 25s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestRaceWhenRelogin |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14384 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12866517/HADOOP-14384.00.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 508fb96fb918 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 07761af |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12241/artifact/patchprocess/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12241/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12241/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12241/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Reduce the visibility of {{

[jira] [Commented] (HADOOP-14207) "dfsadmin -refreshCallQueue" fails with DecayRpcScheduler

2017-05-04 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15997688#comment-15997688
 ] 

Junping Du commented on HADOOP-14207:
-

Awesome! Thanks Xiaoyu!

> "dfsadmin -refreshCallQueue" fails with DecayRpcScheduler
> -
>
> Key: HADOOP-14207
> URL: https://issues.apache.org/jira/browse/HADOOP-14207
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: rpc-server
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Blocker
> Fix For: 2.9.0, 2.8.1, 3.0.0-alpha3
>
> Attachments: HADOOP-14207.001.patch, HADOOP-14207.002.patch, 
> HADOOP-14207.003.patch, HADOOP-14207.004.patch, HADOOP-14207.005.patch, 
> HADOOP-14207.006.patch
>
>
> {noformat}
> java.lang.RuntimeException: org.apache.hadoop.ipc.DecayRpcScheduler could not 
> be constructed.
> at 
> org.apache.hadoop.ipc.CallQueueManager.createScheduler(CallQueueManager.java:89)
> at 
> org.apache.hadoop.ipc.CallQueueManager.swapQueue(CallQueueManager.java:260)
> at org.apache.hadoop.ipc.Server.refreshCallQueue(Server.java:650)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.refreshCallQueue(NameNodeRpcServer.java:1582)
> at 
> org.apache.hadoop.ipc.protocolPB.RefreshCallQueueProtocolServerSideTranslatorPB.refreshCallQueue(RefreshCallQueueProtocolServerSideTranslatorPB.java:49)
> at 
> org.apache.hadoop.ipc.proto.RefreshCallQueueProtocolProtos$RefreshCallQueueProtocolService$2.callBlockingMethod(RefreshCallQueueProtocolProtos.java:769)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:845)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:788)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2455)
> Caused by: org.apache.hadoop.metrics2.MetricsException: Metrics source 
> DecayRpcSchedulerMetrics2.ipc.65110 already exists!
> at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.newSourceName(DefaultMetricsSystem.java:144)
> at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.sourceName(DefaultMetricsSystem.java:117)
> {noformat}
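The root cause visible in the trace is that a replacement DecayRpcScheduler tries to 
register a metrics source under a name the old instance still holds. A hedged sketch 
of one fix pattern (the actual patch may differ; assumes MetricsSystem#unregisterSource 
is available, as in recent Hadoop versions): unregister the stale source when the old 
scheduler is torn down.

{code:java}
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;

public class DecaySchedulerTeardownSketch {
  /** Illustrative: drop the stale source before a replacement registers. */
  public static void unregisterSchedulerMetrics(int port) {
    // Source name format taken from the exception message above.
    DefaultMetricsSystem.instance().unregisterSource(
        "DecayRpcSchedulerMetrics2.ipc." + port);
  }
}
{code}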






[jira] [Updated] (HADOOP-14207) "dfsadmin -refreshCallQueue" fails with DecayRpcScheduler

2017-05-04 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HADOOP-14207:

Fix Version/s: 2.8.1

> "dfsadmin -refreshCallQueue" fails with DecayRpcScheduler
> -
>
> Key: HADOOP-14207
> URL: https://issues.apache.org/jira/browse/HADOOP-14207
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: rpc-server
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Blocker
> Fix For: 2.9.0, 2.8.1, 3.0.0-alpha3
>
> Attachments: HADOOP-14207.001.patch, HADOOP-14207.002.patch, 
> HADOOP-14207.003.patch, HADOOP-14207.004.patch, HADOOP-14207.005.patch, 
> HADOOP-14207.006.patch
>
>
> {noformat}
> java.lang.RuntimeException: org.apache.hadoop.ipc.DecayRpcScheduler could not 
> be constructed.
> at 
> org.apache.hadoop.ipc.CallQueueManager.createScheduler(CallQueueManager.java:89)
> at 
> org.apache.hadoop.ipc.CallQueueManager.swapQueue(CallQueueManager.java:260)
> at org.apache.hadoop.ipc.Server.refreshCallQueue(Server.java:650)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.refreshCallQueue(NameNodeRpcServer.java:1582)
> at 
> org.apache.hadoop.ipc.protocolPB.RefreshCallQueueProtocolServerSideTranslatorPB.refreshCallQueue(RefreshCallQueueProtocolServerSideTranslatorPB.java:49)
> at 
> org.apache.hadoop.ipc.proto.RefreshCallQueueProtocolProtos$RefreshCallQueueProtocolService$2.callBlockingMethod(RefreshCallQueueProtocolProtos.java:769)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:845)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:788)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2455)
> Caused by: org.apache.hadoop.metrics2.MetricsException: Metrics source 
> DecayRpcSchedulerMetrics2.ipc.65110 already exists!
> at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.newSourceName(DefaultMetricsSystem.java:144)
> at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.sourceName(DefaultMetricsSystem.java:117)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12975) Add jitter to CachingGetSpaceUsed's thread

2017-05-04 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HADOOP-12975:
-
  Labels: release-blocker  (was: )
Target Version/s: 2.8.0, 2.7.4  (was: 2.8.0)

> Add jitter to CachingGetSpaceUsed's thread
> --
>
> Key: HADOOP-12975
> URL: https://issues.apache.org/jira/browse/HADOOP-12975
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 2.8.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>  Labels: release-blocker
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HADOOP-12975v0.patch, HADOOP-12975v1.patch, 
> HADOOP-12975v2.patch, HADOOP-12975v3.patch, HADOOP-12975v4.patch, 
> HADOOP-12975v5.patch, HADOOP-12975v6.patch
>
>
> Running DU across lots of disks is very expensive and running all of the 
> processes at the same time creates a noticeable IO spike. We should add some 
> jitter.
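
A minimal sketch of the kind of jitter being proposed (hypothetical names and 
interval, not the attached patch): instead of sleeping a fixed refresh 
interval, each DU thread sleeps a uniformly random time around it.

{code:java}
import java.util.concurrent.ThreadLocalRandom;

/** Sketch: jittered refresh loop so DU threads do not all hit disk at once. */
public class JitterSketch {
  public static void main(String[] args) throws InterruptedException {
    final long refreshIntervalMs = 1000; // hypothetical base interval
    for (int i = 0; i < 3; i++) {
      // Sleep a uniformly random time in [interval/2, 3*interval/2).
      long jittered = refreshIntervalMs / 2
          + ThreadLocalRandom.current().nextLong(refreshIntervalMs);
      Thread.sleep(jittered);
      System.out.println("refresh " + i + " after " + jittered + " ms");
    }
  }
}
{code}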



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14384) Reduce the visibility of {{FileSystem#newFSDataOutputStreamBuilder}} before the API becomes stable

2017-05-04 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-14384:
---
Attachment: HADOOP-14384.01.patch

Thanks for the suggestion, [~andrew.wang].

It was marked as {{Public}} because the builder was intended to become public 
later, as I understand it. So the 00 patch only enforced the visibility through 
the compiler.

I agree that it is better to increase the visibility later.

Attached a patch to address the annotations.

> Reduce the visibility of {{FileSystem#newFSDataOutputStreamBuilder}} before 
> the API becomes stable
> --
>
> Key: HADOOP-14384
> URL: https://issues.apache.org/jira/browse/HADOOP-14384
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.9.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
> Attachments: HADOOP-14384.00.patch, HADOOP-14384.01.patch
>
>
> Before {{HADOOP-14365}} finishes, we should limit this API to within the 
> Hadoop project to prevent it from being used by end users or other projects.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14356) Update CHANGES.txt to reflect all the changes in branch-2.7

2017-05-04 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HADOOP-14356:
-
Labels: release-blocker  (was: )

> Update CHANGES.txt to reflect all the changes in branch-2.7
> ---
>
> Key: HADOOP-14356
> URL: https://issues.apache.org/jira/browse/HADOOP-14356
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Blocker
>  Labels: release-blocker
> Attachments: HADOOP-14356-002.patch, HADOOP-14356.patch
>
>
> Following jira's are not updated in {{CHANGES.txt}}
> HADOOP-14066,HDFS-11608,HADOOP-14293,HDFS-11628,YARN-6274,YARN-6152,HADOOP-13119,HDFS-10733,HADOOP-13958,HDFS-11280,YARN-6024



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14383) Implement FileSystem that reads from HTTP / HTTPS endpoints

2017-05-04 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HADOOP-14383:

Status: Patch Available  (was: Open)

> Implement FileSystem that reads from HTTP / HTTPS endpoints
> ---
>
> Key: HADOOP-14383
> URL: https://issues.apache.org/jira/browse/HADOOP-14383
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HADOOP-14383.000.patch
>
>
> We have a use case where YARN applications would like to localize resources 
> from Artifactory. Putting the resources on HDFS itself might not be ideal as 
> we would like to leverage Artifactory to manage different versions of the 
> resources.
> It would be nice to have something like {{HttpFileSystem}} that implements 
> the Hadoop filesystem API and reads from an HTTP endpoint.
> Note that Samza has implemented the proposal themselves:
> https://github.com/apache/samza/blob/master/samza-yarn/src/main/scala/org/apache/samza/util/hadoop/HttpFileSystem.scala
> The downside of this approach is that it requires the YARN cluster to put the 
> Samza jar into the classpath for each NM.
> It would be much nicer for Hadoop to have this feature built-in.
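
To make the idea concrete, here is a minimal sketch (not the attached patch; 
the class name and the reopen-on-seek strategy are assumptions) of the seekable 
stream such a filesystem's {{open()}} would wrap in an {{FSDataInputStream}}. 
A real implementation would use HTTP Range requests rather than reopening and 
skipping.

{code:java}
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;

import org.apache.hadoop.fs.PositionedReadable;
import org.apache.hadoop.fs.Seekable;

/** Sketch: a seekable, positioned-readable stream over a plain HTTP URL. */
class HttpSeekableInputStream extends InputStream
    implements Seekable, PositionedReadable {
  private final URL url;    // resource being read
  private InputStream in;   // current underlying stream
  private long pos;         // current offset into the resource

  HttpSeekableInputStream(URL url) throws IOException {
    this.url = url;
    this.in = url.openStream();
  }

  @Override
  public int read() throws IOException {
    int b = in.read();
    if (b >= 0) {
      pos++;
    }
    return b;
  }

  @Override
  public void seek(long target) throws IOException {
    // Naive seek: reopen the connection and skip forward to the target.
    in.close();
    in = url.openStream();
    long remaining = target;
    while (remaining > 0) {
      long skipped = in.skip(remaining);
      if (skipped <= 0) {
        throw new IOException("Failed to seek to " + target);
      }
      remaining -= skipped;
    }
    pos = target;
  }

  @Override
  public long getPos() {
    return pos;
  }

  @Override
  public boolean seekToNewSource(long targetPos) {
    return false; // only a single source
  }

  @Override
  public int read(long position, byte[] buffer, int offset, int length)
      throws IOException {
    long oldPos = pos;
    seek(position);
    int nread = in.read(buffer, offset, length);
    seek(oldPos); // restore the stream position
    return nread;
  }

  @Override
  public void readFully(long position, byte[] buffer, int offset, int length)
      throws IOException {
    int done = 0;
    while (done < length) {
      int n = read(position + done, buffer, offset + done, length - done);
      if (n < 0) {
        throw new IOException("Premature EOF reading " + url);
      }
      done += n;
    }
  }

  @Override
  public void readFully(long position, byte[] buffer) throws IOException {
    readFully(position, buffer, 0, buffer.length);
  }

  @Override
  public void close() throws IOException {
    in.close();
  }
}
{code}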



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14383) Implement FileSystem that reads from HTTP / HTTPS endpoints

2017-05-04 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HADOOP-14383:

Attachment: HADOOP-14383.000.patch

> Implement FileSystem that reads from HTTP / HTTPS endpoints
> ---
>
> Key: HADOOP-14383
> URL: https://issues.apache.org/jira/browse/HADOOP-14383
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HADOOP-14383.000.patch
>
>
> We have a use case where YARN applications would like to localize resources 
> from Artifactory. Putting the resources on HDFS itself might not be ideal as 
> we would like to leverage Artifactory to manage different versions of the 
> resources.
> It would be nice to have something like {{HttpFileSystem}} that implements 
> the Hadoop filesystem API and reads from an HTTP endpoint.
> Note that Samza has implemented the proposal themselves:
> https://github.com/apache/samza/blob/master/samza-yarn/src/main/scala/org/apache/samza/util/hadoop/HttpFileSystem.scala
> The downside of this approach is that it requires the YARN cluster to put the 
> Samza jar into the classpath for each NM.
> It would be much nicer for Hadoop to have this feature built-in.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14383) Implement FileSystem that reads from HTTP / HTTPS endpoints

2017-05-04 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HADOOP-14383:

Summary: Implement FileSystem that reads from HTTP / HTTPS endpoints  (was: 
Implement a FileSystem that reads from HTTP)

> Implement FileSystem that reads from HTTP / HTTPS endpoints
> ---
>
> Key: HADOOP-14383
> URL: https://issues.apache.org/jira/browse/HADOOP-14383
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>
> We have a use case where YARN applications would like to localize resources 
> from Artifactory. Putting the resources on HDFS itself might not be ideal as 
> we would like to leverage Artifactory to manage different versions of the 
> resources.
> It would be nice to have something like {{HttpFileSystem}} that implements 
> the Hadoop filesystem API and reads from an HTTP endpoint.
> Note that Samza has implemented the proposal themselves:
> https://github.com/apache/samza/blob/master/samza-yarn/src/main/scala/org/apache/samza/util/hadoop/HttpFileSystem.scala
> The downside of this approach is that it requires the YARN cluster to put the 
> Samza jar into the classpath for each NM.
> It would be much nicer for Hadoop to have this feature built-in.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14385) HttpExceptionUtils#validateResponse swallows exceptions

2017-05-04 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-14385:
-
Attachment: HADOOP-14385.001.patch

Actually, it looks like the only thing needed is to reference the initial 
exception in the IOException construction.
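
In code, the change described boils down to passing the caught exception as the 
IOException's cause. A self-contained sketch of the pattern (not the attached 
patch):

{code:java}
import java.io.IOException;

/** Sketch: preserve the root cause instead of swallowing it. */
public class CauseSketch {
  public static void main(String[] args) {
    try {
      throw new IllegalStateException("root cause from the error stream");
    } catch (Exception ex) {
      // Passing ex as the second constructor argument keeps the original
      // stack trace attached to the exception the caller sees.
      IOException toThrow = new IOException(String.format(
          "HTTP status [%d], message [%s]", 403, "Forbidden"), ex);
      toThrow.printStackTrace();
    }
  }
}
{code}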

> HttpExceptionUtils#validateResponse swallows exceptions
> ---
>
> Key: HADOOP-14385
> URL: https://issues.apache.org/jira/browse/HADOOP-14385
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Trivial
> Attachments: HADOOP-14385.001.patch
>
>
> In the following code
> {code:title=HttpExceptionUtils#validateResponse}
> try {
> es = conn.getErrorStream();
> ObjectMapper mapper = new ObjectMapper();
> Map json = mapper.readValue(es, Map.class);
> json = (Map) json.get(ERROR_JSON);
> String exClass = (String) json.get(ERROR_CLASSNAME_JSON);
> String exMsg = (String) json.get(ERROR_MESSAGE_JSON);
> if (exClass != null) {
>   try {
> ClassLoader cl = HttpExceptionUtils.class.getClassLoader();
> Class klass = cl.loadClass(exClass);
> Constructor constr = klass.getConstructor(String.class);
> toThrow = (Exception) constr.newInstance(exMsg);
>   } catch (Exception ex) {
> toThrow = new IOException(String.format(
> "HTTP status [%d], exception [%s], message [%s] ",
> conn.getResponseCode(), exClass, exMsg));
>   }
> } else {
>   String msg = (exMsg != null) ? exMsg : conn.getResponseMessage();
>   toThrow = new IOException(String.format(
>   "HTTP status [%d], message [%s]", conn.getResponseCode(), msg));
> }
>   } catch (Exception ex) {
> toThrow = new IOException(String.format( <-- here
> "HTTP status [%d], message [%s]", conn.getResponseCode(),
> conn.getResponseMessage()));
>   }
> {code}
> If an exception is thrown within the try block, the initial exception is 
> swallowed, which doesn't help debugging.
> We had to cross-reference this exception with the KMS server side to guess 
> what happened.
> IMHO the IOException thrown should also carry the initial exception. It 
> should also print exClass and exMsg. It probably failed to instantiate an 
> exception class.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14385) HttpExceptionUtils#validateResponse swallows exceptions

2017-05-04 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-14385:
-
Priority: Trivial  (was: Major)

> HttpExceptionUtils#validateResponse swallows exceptions
> ---
>
> Key: HADOOP-14385
> URL: https://issues.apache.org/jira/browse/HADOOP-14385
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Trivial
>
> In the following code
> {code:title=HttpExceptionUtils#validateResponse}
> try {
> es = conn.getErrorStream();
> ObjectMapper mapper = new ObjectMapper();
> Map json = mapper.readValue(es, Map.class);
> json = (Map) json.get(ERROR_JSON);
> String exClass = (String) json.get(ERROR_CLASSNAME_JSON);
> String exMsg = (String) json.get(ERROR_MESSAGE_JSON);
> if (exClass != null) {
>   try {
> ClassLoader cl = HttpExceptionUtils.class.getClassLoader();
> Class klass = cl.loadClass(exClass);
> Constructor constr = klass.getConstructor(String.class);
> toThrow = (Exception) constr.newInstance(exMsg);
>   } catch (Exception ex) {
> toThrow = new IOException(String.format(
> "HTTP status [%d], exception [%s], message [%s] ",
> conn.getResponseCode(), exClass, exMsg));
>   }
> } else {
>   String msg = (exMsg != null) ? exMsg : conn.getResponseMessage();
>   toThrow = new IOException(String.format(
>   "HTTP status [%d], message [%s]", conn.getResponseCode(), msg));
> }
>   } catch (Exception ex) {
> toThrow = new IOException(String.format( <-- here
> "HTTP status [%d], message [%s]", conn.getResponseCode(),
> conn.getResponseMessage()));
>   }
> {code}
> If an exception is thrown within the try block, the initial exception is 
> swallowed, which doesn't help debugging.
> We had to cross-reference this exception with the KMS server side to guess 
> what happened.
> IMHO the IOException thrown should also carry the initial exception. It 
> should also print exClass and exMsg. It probably failed to instantiate an 
> exception class.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14384) Reduce the visibility of {{FileSystem#newFSDataOutputStreamBuilder}} before the API becomes stable

2017-05-04 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15997643#comment-15997643
 ] 

Andrew Wang commented on HADOOP-14384:
--

Thanks for picking this up, Eddy.

Why annotate FSDataOutputStreamBuilder as Public at all?

If you wanted to make the language stronger, besides making it protected we 
could also mark the FileSystem method as IA.Private.
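
A minimal sketch of what that combination could look like (stand-in type and 
method names, not the patch):

{code:java}
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;

/**
 * Sketch: keep the builder factory out of the public, stable API surface
 * until HADOOP-14365 settles the interface.
 */
public class BuilderVisibilitySketch {

  @InterfaceAudience.Private
  @InterfaceStability.Unstable
  static class OutputStreamBuilderSketch { // hypothetical stand-in type
    // builder methods would go here
  }

  // Protected limits use to subclasses; IA.Private flags it as internal.
  @InterfaceAudience.Private
  @InterfaceStability.Unstable
  protected OutputStreamBuilderSketch newOutputStreamBuilderSketch() {
    return new OutputStreamBuilderSketch();
  }
}
{code}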

> Reduce the visibility of {{FileSystem#newFSDataOutputStreamBuilder}} before 
> the API becomes stable
> --
>
> Key: HADOOP-14384
> URL: https://issues.apache.org/jira/browse/HADOOP-14384
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.9.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
> Attachments: HADOOP-14384.00.patch
>
>
> Before {{HADOOP-14365}} finishes, we should limit this API to within the 
> Hadoop project to prevent it from being used by end users or other projects.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14384) Reduce the visibility of {{FileSystem#newFSDataOutputStreamBuilder}} before the API becomes stable

2017-05-04 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-14384:
---
Status: Patch Available  (was: Open)

> Reduce the visibility of {{FileSystem#newFSDataOutputStreamBuilder}} before 
> the API becomes stable
> --
>
> Key: HADOOP-14384
> URL: https://issues.apache.org/jira/browse/HADOOP-14384
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.9.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
> Attachments: HADOOP-14384.00.patch
>
>
> Before {{HADOOP-14365}} finishes, we should limit this API to within the 
> Hadoop project to prevent it from being used by end users or other projects.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14384) Reduce the visibility of {{FileSystem#newFSDataOutputStreamBuilder}} before the API becomes stable

2017-05-04 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-14384:
---
Attachment: HADOOP-14384.00.patch

Reduce {{FileSystem#newFSDataOutputStreamBuilder}} visibility as a temporary fix.

> Reduce the visibility of {{FileSystem#newFSDataOutputStreamBuilder}} before 
> the API becomes stable
> --
>
> Key: HADOOP-14384
> URL: https://issues.apache.org/jira/browse/HADOOP-14384
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.9.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
> Attachments: HADOOP-14384.00.patch
>
>
> Before {{HADOOP-14365}} finishes, we should limit this API to within the 
> Hadoop project to prevent it from being used by end users or other projects.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14385) HttpExceptionUtils#validateResponse swallows exceptions

2017-05-04 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-14385:
-
Summary: HttpExceptionUtils#validateResponse swallows exceptions  (was: 
HttpExceptionUtils#validateResponse hides exceptions)

> HttpExceptionUtils#validateResponse swallows exceptions
> ---
>
> Key: HADOOP-14385
> URL: https://issues.apache.org/jira/browse/HADOOP-14385
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>
> In the following code
> {code:title=HttpExceptionUtils#validateResponse}
> try {
> es = conn.getErrorStream();
> ObjectMapper mapper = new ObjectMapper();
> Map json = mapper.readValue(es, Map.class);
> json = (Map) json.get(ERROR_JSON);
> String exClass = (String) json.get(ERROR_CLASSNAME_JSON);
> String exMsg = (String) json.get(ERROR_MESSAGE_JSON);
> if (exClass != null) {
>   try {
> ClassLoader cl = HttpExceptionUtils.class.getClassLoader();
> Class klass = cl.loadClass(exClass);
> Constructor constr = klass.getConstructor(String.class);
> toThrow = (Exception) constr.newInstance(exMsg);
>   } catch (Exception ex) {
> toThrow = new IOException(String.format(
> "HTTP status [%d], exception [%s], message [%s] ",
> conn.getResponseCode(), exClass, exMsg));
>   }
> } else {
>   String msg = (exMsg != null) ? exMsg : conn.getResponseMessage();
>   toThrow = new IOException(String.format(
>   "HTTP status [%d], message [%s]", conn.getResponseCode(), msg));
> }
>   } catch (Exception ex) {
> toThrow = new IOException(String.format( <-- here
> "HTTP status [%d], message [%s]", conn.getResponseCode(),
> conn.getResponseMessage()));
>   }
> {code}
> If an exception is thrown within the try block, the initial exception is 
> swallowed, which doesn't help debugging.
> We had to cross-reference this exception with the KMS server side to guess 
> what happened.
> IMHO the IOException thrown should also carry the initial exception. It 
> should also print exClass and exMsg. It probably failed to instantiate an 
> exception class.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14385) HttpExceptionUtils#validateResponse hides exceptions

2017-05-04 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HADOOP-14385:


 Summary: HttpExceptionUtils#validateResponse hides exceptions
 Key: HADOOP-14385
 URL: https://issues.apache.org/jira/browse/HADOOP-14385
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Wei-Chiu Chuang
Assignee: Wei-Chiu Chuang


In the following code
{code:title=HttpExceptionUtils#validateResponse}
try {
es = conn.getErrorStream();
ObjectMapper mapper = new ObjectMapper();
Map json = mapper.readValue(es, Map.class);
json = (Map) json.get(ERROR_JSON);
String exClass = (String) json.get(ERROR_CLASSNAME_JSON);
String exMsg = (String) json.get(ERROR_MESSAGE_JSON);
if (exClass != null) {
  try {
ClassLoader cl = HttpExceptionUtils.class.getClassLoader();
Class klass = cl.loadClass(exClass);
Constructor constr = klass.getConstructor(String.class);
toThrow = (Exception) constr.newInstance(exMsg);
  } catch (Exception ex) {
toThrow = new IOException(String.format(
"HTTP status [%d], exception [%s], message [%s] ",
conn.getResponseCode(), exClass, exMsg));
  }
} else {
  String msg = (exMsg != null) ? exMsg : conn.getResponseMessage();
  toThrow = new IOException(String.format(
  "HTTP status [%d], message [%s]", conn.getResponseCode(), msg));
}
  } catch (Exception ex) {
toThrow = new IOException(String.format( <-- here
"HTTP status [%d], message [%s]", conn.getResponseCode(),
conn.getResponseMessage()));
  }
{code}
If an exception is thrown within the try block, the initial exception is 
swallowed, which doesn't help debugging.
We had to cross-reference this exception with the KMS server side to guess what 
happened.

IMHO the IOException thrown should also carry the initial exception. It should 
also print exClass and exMsg. It probably failed to instantiate an exception 
class.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14384) Reduce the visibility of {{FileSystem#newFSDataOutputStreamBuilder}} before the API becomes stable

2017-05-04 Thread Lei (Eddy) Xu (JIRA)
Lei (Eddy) Xu created HADOOP-14384:
--

 Summary: Reduce the visibility of 
{{FileSystem#newFSDataOutputStreamBuilder}} before the API becomes stable
 Key: HADOOP-14384
 URL: https://issues.apache.org/jira/browse/HADOOP-14384
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs
Affects Versions: 2.9.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor


Before {{HADOOP-14365}} finishes, we should limit this API to within the Hadoop 
project to prevent it from being used by end users or other projects.





--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-14207) "dfsadmin -refreshCallQueue" fails with DecayRpcScheduler

2017-05-04 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao resolved HADOOP-14207.
-
Resolution: Fixed

I've pushed the fix to branch-2.8 and branch-2.8.1. Thanks!

> "dfsadmin -refreshCallQueue" fails with DecayRpcScheduler
> -
>
> Key: HADOOP-14207
> URL: https://issues.apache.org/jira/browse/HADOOP-14207
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: rpc-server
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Blocker
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HADOOP-14207.001.patch, HADOOP-14207.002.patch, 
> HADOOP-14207.003.patch, HADOOP-14207.004.patch, HADOOP-14207.005.patch, 
> HADOOP-14207.006.patch
>
>
> {noformat}
> java.lang.RuntimeException: org.apache.hadoop.ipc.DecayRpcScheduler could not 
> be constructed.
> at 
> org.apache.hadoop.ipc.CallQueueManager.createScheduler(CallQueueManager.java:89)
> at 
> org.apache.hadoop.ipc.CallQueueManager.swapQueue(CallQueueManager.java:260)
> at org.apache.hadoop.ipc.Server.refreshCallQueue(Server.java:650)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.refreshCallQueue(NameNodeRpcServer.java:1582)
> at 
> org.apache.hadoop.ipc.protocolPB.RefreshCallQueueProtocolServerSideTranslatorPB.refreshCallQueue(RefreshCallQueueProtocolServerSideTranslatorPB.java:49)
> at 
> org.apache.hadoop.ipc.proto.RefreshCallQueueProtocolProtos$RefreshCallQueueProtocolService$2.callBlockingMethod(RefreshCallQueueProtocolProtos.java:769)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:845)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:788)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2455)
> Caused by: org.apache.hadoop.metrics2.MetricsException: Metrics source 
> DecayRpcSchedulerMetrics2.ipc.65110 already exists!
> at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.newSourceName(DefaultMetricsSystem.java:144)
> at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.sourceName(DefaultMetricsSystem.java:117)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14207) "dfsadmin -refreshCallQueue" fails with DecayRpcScheduler

2017-05-04 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15997571#comment-15997571
 ] 

Xiaoyu Yao commented on HADOOP-14207:
-

[~djp] and [~surendrasingh], the problem turned out to be an issue with my local 
repo. I will commit the cherry-pick to branch-2.8 and branch-2.8.1 soon.

> "dfsadmin -refreshCallQueue" fails with DecayRpcScheduler
> -
>
> Key: HADOOP-14207
> URL: https://issues.apache.org/jira/browse/HADOOP-14207
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: rpc-server
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Blocker
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HADOOP-14207.001.patch, HADOOP-14207.002.patch, 
> HADOOP-14207.003.patch, HADOOP-14207.004.patch, HADOOP-14207.005.patch, 
> HADOOP-14207.006.patch
>
>
> {noformat}
> java.lang.RuntimeException: org.apache.hadoop.ipc.DecayRpcScheduler could not 
> be constructed.
> at 
> org.apache.hadoop.ipc.CallQueueManager.createScheduler(CallQueueManager.java:89)
> at 
> org.apache.hadoop.ipc.CallQueueManager.swapQueue(CallQueueManager.java:260)
> at org.apache.hadoop.ipc.Server.refreshCallQueue(Server.java:650)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.refreshCallQueue(NameNodeRpcServer.java:1582)
> at 
> org.apache.hadoop.ipc.protocolPB.RefreshCallQueueProtocolServerSideTranslatorPB.refreshCallQueue(RefreshCallQueueProtocolServerSideTranslatorPB.java:49)
> at 
> org.apache.hadoop.ipc.proto.RefreshCallQueueProtocolProtos$RefreshCallQueueProtocolService$2.callBlockingMethod(RefreshCallQueueProtocolProtos.java:769)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:845)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:788)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2455)
> Caused by: org.apache.hadoop.metrics2.MetricsException: Metrics source 
> DecayRpcSchedulerMetrics2.ipc.65110 already exists!
> at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.newSourceName(DefaultMetricsSystem.java:144)
> at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.sourceName(DefaultMetricsSystem.java:117)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-11252) RPC client does not time out by default

2017-05-04 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11252:
--
Release Note: 
This fix includes a public method interface change.
A follow-up JIRA issue for this incompatibility for branch-2.7 is HADOOP-13579.

  was:
This fix includes public method interface change.
A follow-up jira for this incompatibly for branch-2.7 is HADOOP-13579.


> RPC client does not time out by default
> ---
>
> Key: HADOOP-11252
> URL: https://issues.apache.org/jira/browse/HADOOP-11252
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.5.0
>Reporter: Wilfred Spiegelenburg
>Assignee: Masatake Iwasaki
>Priority: Critical
> Fix For: 2.8.0, 2.7.3, 2.6.4, 3.0.0-alpha1
>
> Attachments: HADOOP-11252.002.patch, HADOOP-11252.003.patch, 
> HADOOP-11252.004.patch, HADOOP-11252.patch
>
>
> The RPC client has a default timeout set to 0 when no timeout is passed in. 
> This means that the network connection created will not time out when used to 
> write data. The issue has shown up in YARN-2578 and HDFS-4858. Timeouts for 
> writes then fall back to the TCP-level retry (configured via tcp_retries2) 
> and take between 15 and 30 minutes, which is too long for a default 
> behaviour.
> Using 0 as the default value for the timeout is incorrect. We should use a 
> sane value for the timeout, and the "ipc.ping.interval" configuration value 
> is a logical choice for it. The default behaviour should be changed from 0 to 
> the value read for the ping interval from the Configuration.
> Fixing it in common makes more sense than finding and changing all other 
> points in the code that do not pass in a timeout.
> Offending code lines:
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java#L488
> and 
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java#L350
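
As a concrete illustration of the proposal, a client could enable pings, whose 
interval (after this fix) also becomes the effective default RPC timeout; the 
key names below are the standard ones discussed above, but verify them against 
your branch:

{code:java}
import org.apache.hadoop.conf.Configuration;

/** Sketch: ping settings that double as the default RPC timeout post-fix. */
public class RpcTimeoutSketch {
  public static Configuration withSaneTimeout() {
    Configuration conf = new Configuration();
    conf.setBoolean("ipc.client.ping", true); // keep connections checked
    conf.setInt("ipc.ping.interval", 60000);  // ms; used as default timeout
    return conf;
  }
}
{code}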



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14383) Implement a FileSystem that reads from HTTP

2017-05-04 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15997512#comment-15997512
 ] 

Haohui Mai commented on HADOOP-14383:
-

Correct. Moving it under Hadoop Common.

> Implement a FileSystem that reads from HTTP
> ---
>
> Key: HADOOP-14383
> URL: https://issues.apache.org/jira/browse/HADOOP-14383
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>
> We have a use case where YARN applications would like to localize resources 
> from Artifactory. Putting the resources on HDFS itself might not be ideal as 
> we would like to leverage Artifactory to manage different versions of the 
> resources.
> It would be nice to have something like {{HttpFileSystem}} that implements 
> the Hadoop filesystem API and reads from an HTTP endpoint.
> Note that Samza has implemented the proposal themselves:
> https://github.com/apache/samza/blob/master/samza-yarn/src/main/scala/org/apache/samza/util/hadoop/HttpFileSystem.scala
> The downside of this approach is that it requires the YARN cluster to put the 
> Samza jar into the classpath for each NM.
> It would be much nicer for Hadoop to have this feature built-in.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14382) Remove usages of MoreObjects.toStringHelper

2017-05-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15997491#comment-15997491
 ] 

Hadoop QA commented on HADOOP-14382:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
36s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
45s{color} | {color:red} hadoop-common-project/hadoop-common in trunk has 19 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
26s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
26s{color} | {color:green} hadoop-kafka in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}101m 49s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14382 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12866479/HADOOP-14382.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 4cd2d65e9ed4 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 25f5d9a |
| Default Java | 1.8.0_121 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12240/artifact/patchprocess/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12240/testReport/ |
| modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-kafka U: 
. |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12240/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


[jira] [Updated] (HADOOP-13760) S3Guard: add delete tracking

2017-05-04 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-13760:
---
Attachment: HADOOP-13760-HADOOP-13345.005.patch

Thanks for the analysis, [~fabbri]. The really helpful point is remembering 
that renames could already miss inconsistently listed entries before this 
change. I changed LeafNodesIterator to return only entries it KNOWS are leaf 
nodes, and to rely on S3 to list everything else. So brand-new empty 
directories may still get missed in a rename, but files (even brand-new ones), 
older directories, and deleted entries are handled correctly. And it shouldn't 
perform as badly as some of the solutions I had considered. Strictly an 
improvement. So I'm pretty happy with this version of the patch - except I'm 
still having some test issues...

I've been seeing ITestS3AEncryptionSSE.testEncryptionOverRename and 
ITestS3AContractRootDir.testRecursiveRootListing fail as they did earlier. They 
fail when running all tests (not even in parallel) but not when run 
individually. So it seems like an unrelated test issue, but it doesn't seem to 
happen without my change. I'm also seeing 
ITestS3AEmptyDirectory.testDirectoryBecomesEmpty and a few tests in 
ITestS3AFileContextURI fail, but the circumstances are also inconsistent and 
they don't seem related to my changes. Will investigate more...

> S3Guard: add delete tracking
> 
>
> Key: HADOOP-13760
> URL: https://issues.apache.org/jira/browse/HADOOP-13760
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Aaron Fabbri
>Assignee: Sean Mackrory
> Attachments: HADOOP-13760-HADOOP-13345.001.patch, 
> HADOOP-13760-HADOOP-13345.002.patch, HADOOP-13760-HADOOP-13345.003.patch, 
> HADOOP-13760-HADOOP-13345.004.patch, HADOOP-13760-HADOOP-13345.005.patch
>
>
> Following the S3AFileSystem integration patch in HADOOP-13651, we need to add 
> delete tracking.
> Current behavior on delete is to remove the metadata from the MetadataStore.  
> To make deletes consistent, we need to add a {{isDeleted}} flag to 
> {{PathMetadata}} and check it when returning results from functions like 
> {{getFileStatus()}} and {{listStatus()}}.  In HADOOP-13651, I added TODO 
> comments in most of the places these new conditions are needed.  The work 
> does not look too bad.
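
A minimal sketch of the tombstone idea (hypothetical shape, not the eventual 
patch): the metadata entry stays in the store but is flagged, and read paths 
treat flagged entries as absent.

{code:java}
/** Sketch: delete tracking via a tombstone flag on path metadata. */
public class PathMetadataSketch {
  private final String path;
  private boolean isDeleted; // tombstone: entry kept, file logically gone

  public PathMetadataSketch(String path) {
    this.path = path;
  }

  /** delete() records a tombstone rather than removing the entry. */
  public void markDeleted() {
    isDeleted = true;
  }

  /** getFileStatus()/listStatus()-style lookups skip tombstoned entries. */
  public boolean isVisible() {
    return !isDeleted;
  }

  public String getPath() {
    return path;
  }
}
{code}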



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14207) "dfsadmin -refreshCallQueue" fails with DecayRpcScheduler

2017-05-04 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15997478#comment-15997478
 ] 

Junping Du commented on HADOOP-14207:
-

Thanks [~surendrasingh] for working out a patch and [~xyao] for the review.
Reopening for 2.8.1. [~surendrasingh], can you provide a patch for 2.8.1 as well 
(patch name with branch-2.8.1 as the suffix)?

> "dfsadmin -refreshCallQueue" fails with DecayRpcScheduler
> -
>
> Key: HADOOP-14207
> URL: https://issues.apache.org/jira/browse/HADOOP-14207
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: rpc-server
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Blocker
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HADOOP-14207.001.patch, HADOOP-14207.002.patch, 
> HADOOP-14207.003.patch, HADOOP-14207.004.patch, HADOOP-14207.005.patch, 
> HADOOP-14207.006.patch
>
>
> {noformat}
> java.lang.RuntimeException: org.apache.hadoop.ipc.DecayRpcScheduler could not 
> be constructed.
> at 
> org.apache.hadoop.ipc.CallQueueManager.createScheduler(CallQueueManager.java:89)
> at 
> org.apache.hadoop.ipc.CallQueueManager.swapQueue(CallQueueManager.java:260)
> at org.apache.hadoop.ipc.Server.refreshCallQueue(Server.java:650)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.refreshCallQueue(NameNodeRpcServer.java:1582)
> at 
> org.apache.hadoop.ipc.protocolPB.RefreshCallQueueProtocolServerSideTranslatorPB.refreshCallQueue(RefreshCallQueueProtocolServerSideTranslatorPB.java:49)
> at 
> org.apache.hadoop.ipc.proto.RefreshCallQueueProtocolProtos$RefreshCallQueueProtocolService$2.callBlockingMethod(RefreshCallQueueProtocolProtos.java:769)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:845)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:788)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2455)
> Caused by: org.apache.hadoop.metrics2.MetricsException: Metrics source 
> DecayRpcSchedulerMetrics2.ipc.65110 already exists!
> at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.newSourceName(DefaultMetricsSystem.java:144)
> at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.sourceName(DefaultMetricsSystem.java:117)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Moved] (HADOOP-14383) Implement a FileSystem that reads from HTTP

2017-05-04 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai moved YARN-6556 to HADOOP-14383:
---

Key: HADOOP-14383  (was: YARN-6556)
Project: Hadoop Common  (was: Hadoop YARN)

> Implement a FileSystem that reads from HTTP
> ---
>
> Key: HADOOP-14383
> URL: https://issues.apache.org/jira/browse/HADOOP-14383
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>
> We have a use case where YARN applications would like to localize resources 
> from Artifactory. Putting the resources on HDFS itself might not be ideal as 
> we would like to leverage Artifactory to manage different versions of the 
> resources.
> It would be nice to have something like {{HttpFileSystem}} that implements 
> the Hadoop filesystem API and reads from an HTTP endpoint.
> Note that Samza has implemented the proposal themselves:
> https://github.com/apache/samza/blob/master/samza-yarn/src/main/scala/org/apache/samza/util/hadoop/HttpFileSystem.scala
> The downside of this approach is that it requires the YARN cluster to put the 
> Samza jar into the classpath for each NM.
> It would be much nicer for Hadoop to have this feature built-in.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-14207) "dfsadmin -refreshCallQueue" fails with DecayRpcScheduler

2017-05-04 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du reopened HADOOP-14207:
-

> "dfsadmin -refreshCallQueue" fails with DecayRpcScheduler
> -
>
> Key: HADOOP-14207
> URL: https://issues.apache.org/jira/browse/HADOOP-14207
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: rpc-server
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Blocker
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HADOOP-14207.001.patch, HADOOP-14207.002.patch, 
> HADOOP-14207.003.patch, HADOOP-14207.004.patch, HADOOP-14207.005.patch, 
> HADOOP-14207.006.patch
>
>
> {noformat}
> java.lang.RuntimeException: org.apache.hadoop.ipc.DecayRpcScheduler could not 
> be constructed.
> at 
> org.apache.hadoop.ipc.CallQueueManager.createScheduler(CallQueueManager.java:89)
> at 
> org.apache.hadoop.ipc.CallQueueManager.swapQueue(CallQueueManager.java:260)
> at org.apache.hadoop.ipc.Server.refreshCallQueue(Server.java:650)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.refreshCallQueue(NameNodeRpcServer.java:1582)
> at 
> org.apache.hadoop.ipc.protocolPB.RefreshCallQueueProtocolServerSideTranslatorPB.refreshCallQueue(RefreshCallQueueProtocolServerSideTranslatorPB.java:49)
> at 
> org.apache.hadoop.ipc.proto.RefreshCallQueueProtocolProtos$RefreshCallQueueProtocolService$2.callBlockingMethod(RefreshCallQueueProtocolProtos.java:769)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:845)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:788)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2455)
> Caused by: org.apache.hadoop.metrics2.MetricsException: Metrics source 
> DecayRpcSchedulerMetrics2.ipc.65110 already exists!
> at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.newSourceName(DefaultMetricsSystem.java:144)
> at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.sourceName(DefaultMetricsSystem.java:117)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14380) Make the Guava version Hadoop builds with configurable

2017-05-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15997420#comment-15997420
 ] 

Hudson commented on HADOOP-14380:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11685 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11685/])
HADOOP-14380. Make the Guava version Hadoop which builds with (jlowe: rev 
61858a5c378da75aff9cde84d418af46d718d08b)
* (edit) hadoop-project/pom.xml


> Make the Guava version Hadoop builds with configurable
> 
>
> Key: HADOOP-14380
> URL: https://issues.apache.org/jira/browse/HADOOP-14380
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-14380-001.patch
>
>
> Make the choice of Guava version Hadoop builds with configurable, so that 
> people building Hadoop 3 alphas can build with an older version and cause 
> less unhappiness downstream.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14380) Make the Guava version Hadoop builds with configurable

2017-05-04 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated HADOOP-14380:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha3
   Status: Resolved  (was: Patch Available)

Thanks to Steve for the contribution and to Andrew for additional review!  I 
committed this to trunk.

> Make the Guava version Hadoop builds with configurable
> 
>
> Key: HADOOP-14380
> URL: https://issues.apache.org/jira/browse/HADOOP-14380
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-14380-001.patch
>
>
> Make the choice of Guava version Hadoop builds with configurable, so that 
> people building Hadoop 3 alphas can build with an older version and cause 
> less unhappiness downstream.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-9565) Add a Blobstore interface to blobstore FileSystems

2017-05-04 Thread Thomas Demoor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15997360#comment-15997360
 ] 

Thomas Demoor commented on HADOOP-9565:
---

This ticket has been worked on by multiple people.

Steve laid a foundation to "detect" object stores and expose their semantics.
We added a "directoutputcommitter"; I think this is now made redundant by 
HADOOP-13786, so you should definitely take that out. IIRC we also made some 
distcp hacks which might also need to come out.

> Add a Blobstore interface to blobstore FileSystems
> -
>
> Key: HADOOP-9565
> URL: https://issues.apache.org/jira/browse/HADOOP-9565
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, fs/s3, fs/swift
>Affects Versions: 2.6.0
>Reporter: Steve Loughran
>Assignee: Thomas Demoor
> Attachments: HADOOP-9565-001.patch, HADOOP-9565-002.patch, 
> HADOOP-9565-003.patch, HADOOP-9565-004.patch, HADOOP-9565-005.patch, 
> HADOOP-9565-006.patch, HADOOP-9565-008.patch, HADOOP-9565-branch-2-007.patch
>
>
> We can make explicit the fact that some {{FileSystem}} implementations are 
> really blobstores, with different atomicity and consistency guarantees, by 
> adding a {{Blobstore}} interface to them. 
> This could also be a place to add a {{Copy(Path,Path)}} method, assuming that 
> all blobstores implement a server-side copy operation as a substitute for 
> rename.
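
A minimal sketch of the proposed interface (hypothetical method names; nothing 
here is a committed Hadoop API):

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.Path;

/**
 * Sketch: marker interface that object-store-backed FileSystems could
 * implement to advertise weaker semantics and a server-side copy.
 */
public interface Blobstore {
  /** True if directory renames are non-atomic on this store. */
  boolean hasNonAtomicRename();

  /** Server-side copy, usable as a substitute for rename. */
  void copy(Path source, Path dest) throws IOException;
}
{code}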



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14380) Make the Guava version Hadoop builds with configurable

2017-05-04 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15997304#comment-15997304
 ] 

Andrew Wang commented on HADOOP-14380:
--

+1 from me too. I did the toStringHelper conversions over on HADOOP-14382. I'd 
like to stick with the old Guava until we can get the shading resolved.

> Make the Guava version Hadoop builds with configurable
> 
>
> Key: HADOOP-14380
> URL: https://issues.apache.org/jira/browse/HADOOP-14380
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-14380-001.patch
>
>
> Make the choice of Guava version Hadoop builds with configurable, so that 
> people building Hadoop 3 alphas can build with an older version and cause 
> less unhappiness downstream.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14382) Remove usages of MoreObjects.toStringHelper

2017-05-04 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15997302#comment-15997302
 ] 

Andrew Wang commented on HADOOP-14382:
--

[~ste...@apache.org] / [~ozawa] could you take a look? Hoping this improves 
cross-Guava compatibility.

> Remove usages of MoreObjects.toStringHelper
> ---
>
> Key: HADOOP-14382
> URL: https://issues.apache.org/jira/browse/HADOOP-14382
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Affects Versions: 3.0.0-alpha2
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Minor
> Attachments: HADOOP-14382.001.patch
>
>
> MoreObjects.toStringHelper is a source of incompatibility across Guava 
> versions. Let's move off of this to a native Java 8 API.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14382) Remove usages of MoreObjects.toStringHelper

2017-05-04 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-14382:
-
Status: Patch Available  (was: Open)

> Remove usages of MoreObjects.toStringHelper
> ---
>
> Key: HADOOP-14382
> URL: https://issues.apache.org/jira/browse/HADOOP-14382
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Affects Versions: 3.0.0-alpha2
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Minor
> Attachments: HADOOP-14382.001.patch
>
>
> MoreObjects.toStringHelper is a source of incompatibility across Guava 
> versions. Let's move off of this to a native Java 8 API.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14382) Remove usages of MoreObjects.toStringHelper

2017-05-04 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-14382:
-
Attachment: HADOOP-14382.001.patch

Mechanical patch attached.
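For anyone curious, the mechanical conversion is roughly of this shape (a sketch, not lifted from the patch; the class shape and field names are invented):

{code}
// Before: Guava's helper, whose home moved between releases
// (Objects.toStringHelper before 18.0, MoreObjects.toStringHelper after).
@Override
public String toString() {
  return MoreObjects.toStringHelper(this)
      .add("name", name)
      .add("value", value)
      .toString();
}

// After: plain Java 8 string concatenation; no Guava on this code path.
@Override
public String toString() {
  return getClass().getSimpleName() + "{name=" + name + ", value=" + value + "}";
}
{code}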

> Remove usages of MoreObjects.toStringHelper
> ---
>
> Key: HADOOP-14382
> URL: https://issues.apache.org/jira/browse/HADOOP-14382
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Affects Versions: 3.0.0-alpha2
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Minor
> Attachments: HADOOP-14382.001.patch
>
>
> MoreObjects.toStringHelper is a source of incompatibility across Guava 
> versions. Let's move off of this to a native Java 8 API.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14382) Remove usages of MoreObjects.toStringHelper

2017-05-04 Thread Andrew Wang (JIRA)
Andrew Wang created HADOOP-14382:


 Summary: Remove usages of MoreObjects.toStringHelper
 Key: HADOOP-14382
 URL: https://issues.apache.org/jira/browse/HADOOP-14382
 Project: Hadoop Common
  Issue Type: Improvement
  Components: metrics
Affects Versions: 3.0.0-alpha2
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Minor


MoreObjects.toStringHelper is a source of incompatibility across Guava 
versions. Let's move off of this to a native Java 8 API.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13921) Remove Log4j classes from JobConf

2017-05-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15997278#comment-15997278
 ] 

Hadoop QA commented on HADOOP-13921:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
32s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 7s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-client-modules/hadoop-client-runtime {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-client-modules/hadoop-client-runtime {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
26s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
35s{color} | {color:red} hadoop-mapreduce-client-core in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
57s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
56s{color} | {color:green} hadoop-mapreduce-client-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
23s{color} | {color:green} hadoop-client-runtime in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 96m 33s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-13921 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12866447/HADOOP-13921.0.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  findbugs 

[jira] [Commented] (HADOOP-9565) Add a Blobstore interface to add to blobstore FileSystems

2017-05-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15997218#comment-15997218
 ] 

Hadoop QA commented on HADOOP-9565:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
30s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
59s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
27s{color} | {color:red} hadoop-common-project/hadoop-common in trunk has 19 
extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
36s{color} | {color:red} hadoop-tools/hadoop-azure in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
29s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m  
7s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 57s{color} | {color:orange} root: The patch generated 17 new + 149 unchanged 
- 12 fixed = 166 total (was 161) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  3m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
39s{color} | {color:red} hadoop-common-project/hadoop-common generated 1 new + 
19 unchanged - 0 fixed = 20 total (was 19) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
53s{color} | {color:red} hadoop-common-project_hadoop-common generated 14 new + 
0 unchanged - 0 fixed = 14 total (was 0) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m  
7s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
47s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m 
41s{color} | {color:green} hadoop-distcp in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
27s{color} | {color:green} hadoop-openstack in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
33s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
32s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
26s{color} | {color:green} hadoop-aliyun in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
41s{color} | {color:green} hadoop-azure-datalake in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not

[jira] [Updated] (HADOOP-14365) Stabilise FileSystem builder-based create API

2017-05-04 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-14365:
-
Target Version/s: 2.9.0, 3.0.0-alpha3  (was: 3.0.0-alpha3)

> Stabilise FileSystem builder-based create API 
> --
>
> Key: HADOOP-14365
> URL: https://issues.apache.org/jira/browse/HADOOP-14365
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Lei (Eddy) Xu
>Priority: Blocker
>
> HDFS-11170 added a builder-based create API for file creation which has a few 
> issues to work out before it can be considered ready for use
> 1. There is no specification in the filesystem.md of what it is meant to do, 
> which means there's no public documentation on expected behaviour except on 
> the Javadocs, which consists of the sentences "Create a new 
> FSDataOutputStreamBuilder for the file with path" and "Base of specific file 
> system FSDataOutputStreamBuilder".
> I propose:
> # Give the new method a relevant name rather than just define the return 
> type, e.g. {{createFile()}}. 
> # `Filesystem.md` to be extended with coverage of this method, and, sadly for 
> the authors, coverage of what the semantics of 
> {{FSDataOutputStreamBuilder.build()}} are.
> 2. There are only tests for HDFS and local, neither of them perfect. 
> Proposed: move to {{AbstractContractCreateTest}}, test for all filesystems, 
> fix tests and FS where appropriate. 
> 3. Add more tests to generate the failure conditions implied by the updated 
> filesystem spec. E.g. create over an existing file, create over a directory, 
> create with negative buffer size, negative block size, empty dest path, etc, 
> etc. 
> This will clarify when precondition checks are made, and where. For 
> example: should {{newFSDataOutputStreamBuilder()}} validate the path 
> immediately?
> 4. Add to {{FileContext}}.
> 5. Take the opportunity to look at the flaws in today's {{create()}} calls 
> and address them, rather than replicate them. In particular, I'd like to end 
> the "create all parent dirs" behaviour. (A usage sketch follows below.)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14381) S3AUtils.translateException to map 503 response to => throttling failure

2017-05-04 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-14381:
---

 Summary: S3AUtils.translateException to map 503 response to => 
throttling failure
 Key: HADOOP-14381
 URL: https://issues.apache.org/jira/browse/HADOOP-14381
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 2.8.0
Reporter: Steve Loughran


When AWS S3 returns a 503, it means that the overall set of requests on a part 
of an S3 bucket exceeds the permitted limit; the client(s) need to throttle 
back, or back off while some rebalancing completes.

The AWS SDK retries 3 times on a 503, then rethrows it. Our code doesn't 
do anything with that other than create a generic {{AWSS3IOException}}.

Proposed
* add a new exception, {{AWSOverloadedException}}
* raise it on a 503 from S3 (& for s3guard, on DDB complaints)
* have it include a link to a wiki page on the topic, as well as the path
* and any other diags

Code talking to S3 may then be able to catch this and choose to react. A retry 
with exponential backoff is the obvious option. Failing that, it could trigger 
task reattempts at that part of the query, then a job retry, which will again 
fail *unless the number of tasks run in parallel is reduced*.

As this throttling is across all clients talking to the same part of a bucket, 
fixing it properly is really a higher-level concern. We can at least start by 
reporting things better
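A minimal sketch of the proposed mapping inside {{S3AUtils.translateException()}}; {{AWSOverloadedException}} and its constructor are the proposal above, not existing code, and the wiki link is a placeholder:

{code}
// Sketch: map an AWS 503 onto the proposed AWSOverloadedException.
if (exception instanceof AmazonServiceException) {
  AmazonServiceException ase = (AmazonServiceException) exception;
  if (ase.getStatusCode() == 503) {
    return new AWSOverloadedException(operation, path,
        "S3 is throttling requests; see <wiki page on S3 throttling>", ase);
  }
}
{code}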




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13921) Remove Log4j classes from JobConf

2017-05-04 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15997161#comment-15997161
 ] 

Andrew Wang commented on HADOOP-13921:
--

FWIW I grepped all of CDH for DEFAULT_LOG_LEVEL and it's not referred to 
anywhere besides Hadoop.

> Remove Log4j classes from JobConf
> -
>
> Key: HADOOP-13921
> URL: https://issues.apache.org/jira/browse/HADOOP-13921
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: conf
>Affects Versions: 3.0.0-alpha2
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HADOOP-13921.0.patch
>
>
> Replace the use of log4j classes from JobConf so that the dependency is not 
> needed unless folks are making use of our custom log4j appenders or loading a 
> logging bridge to use that system.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13921) Remove Log4j classes from JobConf

2017-05-04 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-13921:
-
Status: Patch Available  (was: Open)

> Remove Log4j classes from JobConf
> -
>
> Key: HADOOP-13921
> URL: https://issues.apache.org/jira/browse/HADOOP-13921
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: conf
>Affects Versions: 3.0.0-alpha2
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HADOOP-13921.0.patch
>
>
> Replace the use of log4j classes from JobConf so that the dependency is not 
> needed unless folks are making use of our custom log4j appenders or loading a 
> logging bridge to use that system.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13921) Remove Log4j classes from JobConf

2017-05-04 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-13921:
-
Attachment: HADOOP-13921.0.patch

-00
  - replace DEFAULT_LOG_LEVEL with an equivalent value already present in the 
MR internals.

I still have to test this, but it looks like it should be a simple change to use 
an existing string representation. I can't find anything internal to Hadoop 
that actually made use of the log4j Level directly.
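A minimal sketch of the kind of substitution described, assuming the constant lives somewhere like JobConf (the exact names in the patch may differ):

{code}
// Before: drags org.apache.log4j.Level into the public surface of JobConf.
public static final Level DEFAULT_LOG_LEVEL = Level.INFO;

// After: an equivalent string representation; no log4j class is referenced.
public static final String DEFAULT_LOG_LEVEL = "INFO";
{code}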

> Remove Log4j classes from JobConf
> -
>
> Key: HADOOP-13921
> URL: https://issues.apache.org/jira/browse/HADOOP-13921
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: conf
>Affects Versions: 3.0.0-alpha2
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HADOOP-13921.0.patch
>
>
> Replace the use of log4j classes from JobConf so that the dependency is not 
> needed unless folks are making use of our custom log4j appenders or loading a 
> logging bridge to use that system.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-14376) Memory leak when reading a compressed file using the native library

2017-05-04 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe reassigned HADOOP-14376:
---

Assignee: Eli Acherkan

> Memory leak when reading a compressed file using the native library
> ---
>
> Key: HADOOP-14376
> URL: https://issues.apache.org/jira/browse/HADOOP-14376
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, io
>Affects Versions: 2.7.0
>Reporter: Eli Acherkan
>Assignee: Eli Acherkan
> Attachments: Bzip2MemoryTester.java, log4j.properties
>
>
> Opening and closing a large number of bzip2-compressed input streams causes 
> the process to be killed on OutOfMemory when using the native bzip2 library.
> Our initial analysis suggests that this can be caused by 
> {{DecompressorStream}} overriding the {{close()}} method, and therefore 
> skipping the line from its parent: 
> {{CodecPool.returnDecompressor(trackedDecompressor)}}. When the decompressor 
> object is a {{Bzip2Decompressor}}, its native {{end()}} method is never 
> called, and the allocated memory isn't freed.
> If this analysis is correct, the simplest way to fix this bug would be to 
> replace {{in.close()}} with {{super.close()}} in {{DecompressorStream}}.
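If that analysis is right, the fix is small; a sketch of the proposed change:

{code}
// In DecompressorStream (sketch): delegate to the parent close() so that
// CodecPool.returnDecompressor(trackedDecompressor) runs and the native
// decompressor's end() can eventually free its memory.
@Override
public void close() throws IOException {
  super.close();  // was: in.close(), which bypassed the decompressor return
}
{code}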



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-9565) Add a Blobstore interface to add to blobstore FileSystems

2017-05-04 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-9565:
--
Attachment: HADOOP-9565-008.patch

Rebased. Should revisit the features each file system supports.

> Add a Blobstore interface to add to blobstore FileSystems
> -
>
> Key: HADOOP-9565
> URL: https://issues.apache.org/jira/browse/HADOOP-9565
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, fs/s3, fs/swift
>Affects Versions: 2.6.0
>Reporter: Steve Loughran
>Assignee: Thomas Demoor
> Attachments: HADOOP-9565-001.patch, HADOOP-9565-002.patch, 
> HADOOP-9565-003.patch, HADOOP-9565-004.patch, HADOOP-9565-005.patch, 
> HADOOP-9565-006.patch, HADOOP-9565-008.patch, HADOOP-9565-branch-2-007.patch
>
>
> We can make the fact that some {{FileSystem}} implementations are really 
> blobstores, with different atomicity and consistency guarantees, by adding a 
> {{Blobstore}} interface to add to them. 
> This could also be a place to add a {{Copy(Path,Path)}} method, assuming that 
> all blobstores implement a server-side copy operation as a substitute for 
> rename.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14284) Shade Guava everywhere

2017-05-04 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15996980#comment-15996980
 ] 

stack commented on HADOOP-14284:


bq. However, I found one problem in this approach: shaded artifacts (shaded 
Guava and Curator) in hadoop-shaded-thirdparty is NOT in classpath, if I 
understand correctly. 

Tell us more please. Shading bundles the relocated .class files of guava and 
curator; they are included in the thirdparty jar... and the thirdparty jar is 
on the classpath, no?

Perhaps you are referring to the downsides listed in the comment 'stack added a 
comment - 06/Apr/17 00:57' over in HADOOP-13363 where IDEs will not be able to 
find the shaded imports? For this reason hbase-protocol-shaded includes 
relocated src (it does a build because we patch protobuf3).
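To make the IDE concern concrete: after relocation, downstream source has to import the shaded coordinates rather than the originals. The {{org.apache.hadoop.thirdparty}} prefix below is illustrative only; the real relocation pattern is whatever the shade plugin is configured with:

{code}
// Unshaded: what IDEs and existing source resolve against.
import com.google.common.base.Preconditions;

// Shaded: the relocated class that the thirdparty jar actually contains.
import org.apache.hadoop.thirdparty.com.google.common.base.Preconditions;
{code}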

> Shade Guava everywhere
> --
>
> Key: HADOOP-14284
> URL: https://issues.apache.org/jira/browse/HADOOP-14284
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha3
>Reporter: Andrew Wang
>Assignee: Tsuyoshi Ozawa
>Priority: Blocker
> Attachments: HADOOP-14238.pre001.patch, HADOOP-14284.002.patch, 
> HADOOP-14284.004.patch, HADOOP-14284.007.patch, HADOOP-14284.010.patch
>
>
> HADOOP-10101 upgraded the guava version for 3.x to 21.
> Guava is broadly used by Java projects that consume our artifacts. 
> Unfortunately, these projects also consume our private artifacts like 
> {{hadoop-hdfs}}. They also are unlikely on the new shaded client introduced 
> by HADOOP-11804, currently only available in 3.0.0-alpha2.
> We should shade Guava everywhere to proactively avoid breaking downstreams. 
> This isn't a requirement for all dependency upgrades, but it's necessary for 
> known-bad dependencies like Guava.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14379) In federation mode,"hdfs dfsadmin -report" command can not be used

2017-05-04 Thread Erik Krogen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15996888#comment-15996888
 ] 

Erik Krogen commented on HADOOP-14379:
--

Hi [~lixinglong], seems like a useful patch. I don't like seeing 
ViewFileSystem-specific logic being added to FileSystem; that logic should live 
within ViewFileSystem itself. Also, pulling URIs directly from the 
{{dfs.nameservices}} config is definitely not the right approach here; 
different ViewFileSystem instances may have different backing HDFS instances. 
You should instead use {{ViewFileSystem#getChildFileSystems()}} which will 
directly return the backing HDFS instances without any URI manipulation.
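A rough sketch of that shape (the {{report()}} helper is hypothetical; the real dfsadmin wiring is more involved):

{code}
// Sketch: resolve the HDFS instances backing a viewfs mount table.
FileSystem fs = FileSystem.get(conf);
if (fs instanceof ViewFileSystem) {
  for (FileSystem child : fs.getChildFileSystems()) {
    if (child instanceof DistributedFileSystem) {
      report((DistributedFileSystem) child);  // hypothetical per-NS reporting
    }
  }
}
{code}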

> In federation mode,"hdfs dfsadmin -report" command can not be used
> --
>
> Key: HADOOP-14379
> URL: https://issues.apache.org/jira/browse/HADOOP-14379
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: lixinglong
> Attachments: HADOOP-14379.001.patch
>
>
> In federation mode, the "hdfs dfsadmin -report" command cannot be used, as follows:
> hdfs@zdh102:~> hdfs dfsadmin -report
> report: FileSystem viewfs://nsX/ is not an HDFS file system
> Usage: hdfs dfsadmin [-report] [-live] [-dead] [-decommissioning]
> After adding the new feature, "hdfs dfsadmin -report" can be used, as 
> follows:
> hdfs@zdh102:~> hdfs dfsadmin -report
> hdfs://nameservice
> Configured Capacity: 2682939863040 (2.44 TB)
> Present Capacity: 2170811387904 (1.97 TB)
> DFS Remaining: 2170810589184 (1.97 TB)
> DFS Used: 798720 (780 KB)
> DFS Used%: 0.00%
> Under replicated blocks: 0
> Blocks with corrupt replicas: 0
> Missing blocks: 0
> Missing blocks (with replication factor 1): 0
> -
> Live datanodes (4):
> Name: 10.43.156.126:50010 (zdh126)
> Hostname: zdh126
> Rack: /default
> Decommission Status : Normal
> Configured Capacity: 1578524934144 (1.44 TB)
> DFS Used: 217088 (212 KB)
> Non DFS Used: 163274125312 (152.06 GB)
> DFS Remaining: 1415250591744 (1.29 TB)
> DFS Used%: 0.00%
> DFS Remaining%: 89.66%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 2
> Last contact: Thu May 04 10:44:41 CST 2017
> Name: 10.43.183.102:50010 (zdh102)
> Hostname: zdh102
> Rack: /default
> Decommission Status : Normal
> Configured Capacity: 579556651008 (539.75 GB)
> DFS Used: 147456 (144 KB)
> Non DFS Used: 211151990784 (196.65 GB)
> DFS Remaining: 368404512768 (343.10 GB)
> DFS Used%: 0.00%
> DFS Remaining%: 63.57%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 2
> Last contact: Thu May 04 10:44:41 CST 2017
> Name: 10.43.183.103:50010 (zdh103)
> Hostname: zdh103
> Rack: /default
> Decommission Status : Normal
> Configured Capacity: 262429138944 (244.41 GB)
> DFS Used: 217088 (212 KB)
> Non DFS Used: 68901670912 (64.17 GB)
> DFS Remaining: 193527250944 (180.24 GB)
> DFS Used%: 0.00%
> DFS Remaining%: 73.74%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 2
> Last contact: Thu May 04 10:44:42 CST 2017
> Name: 10.43.183.104:50010 (zdh104)
> Hostname: zdh104
> Rack: /default
> Decommission Status : Normal
> Configured Capacity: 262429138944 (244.41 GB)
> DFS Used: 217088 (212 KB)
> Non DFS Used: 68800688128 (64.08 GB)
> DFS Remaining: 193628233728 (180.33 GB)
> DFS Used%: 0.00%
> DFS Remaining%: 73.78%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 2
> Last contact: Thu May 04 10:44:41 CST 2017
> hdfs://nameservice1
> Configured Capacity: 2682939863040 (2.44 TB)
> Present Capacity: 2170811387904 (1.97 TB)
> DFS Remaining: 2170810589184 (1.97 TB)
> DFS Used: 798720 (780 KB)
> DFS Used%: 0.00%
> Under replicated blocks: 0
> Blocks with corrupt replicas: 0
> Missing blocks: 0
> Missing blocks (with replication factor 1): 0
> -
> Live datanodes (4):
> Name: 10.43.156.126:50010 (zdh126)
> Hostname: zdh126
> Rack: /default
> Decommission Status : Normal
> Configured Capacity: 1578524934144 (1.44 TB)
> DFS Used: 217088 (212 KB)
> Non DFS Used: 163274125312 (152.06 GB)
> DFS Remaining: 1415250591744 (1.29 TB)
> DFS Used%: 0.00%
> DFS Remaining%: 89.66%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 2
> Last contact: Thu May 04 12:36:06 CST 2017
> Name: 10.43.183.102:50010 (zdh102)
> Hostname: zdh102
> Rack: /default
> Decommission Status : Normal
> Configured Capacity: 5795566510

[jira] [Commented] (HADOOP-14380) Make the Guava version Hadoop which builds with configurable

2017-05-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15996759#comment-15996759
 ] 

Hadoop QA commented on HADOOP-14380:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m  
6s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m 43s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14380 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12866404/HADOOP-14380-001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux 8ffce6e89f64 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 81092b1 |
| Default Java | 1.8.0_131 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12237/testReport/ |
| modules | C: hadoop-project U: hadoop-project |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12237/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Make the Guava version Hadoop which builds with configurable
> 
>
> Key: HADOOP-14380
> URL: https://issues.apache.org/jira/browse/HADOOP-14380
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-14380-001.patch
>
>
> Make the choice of guava version Hadoop builds with configurable, so people 
> building Hadoop 3 alphas can build with an older version and so cause less 
> unhappiness downstream



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache

[jira] [Commented] (HADOOP-14380) Make the Guava version Hadoop which builds with configurable

2017-05-04 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15996757#comment-15996757
 ] 

Jason Lowe commented on HADOOP-14380:
-

Thanks, Steve!  +1 lgtm pending Jenkins.

> Make the Guava version Hadoop which builds with configurable
> 
>
> Key: HADOOP-14380
> URL: https://issues.apache.org/jira/browse/HADOOP-14380
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-14380-001.patch
>
>
> Make the choice of guava version Hadoop builds with configurable, so people 
> building Hadoop 3 alphas can build with an older version and so cause less 
> unhappiness downstream



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14380) Make the Guava version Hadoop which builds with configurable

2017-05-04 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14380:

Attachment: HADOOP-14380-001.patch

Patch 001; guava.version is now a property.

The min version you can actually build with is 19.0, but at least now you get 
some option to experiment.
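With this in place, building against an older release should just be a property override on the command line, e.g. {{mvn clean install -DskipTests -Dguava.version=19.0}}.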

> Make the Guava version Hadoop which builds with configurable
> 
>
> Key: HADOOP-14380
> URL: https://issues.apache.org/jira/browse/HADOOP-14380
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-14380-001.patch
>
>
> Make the choice of guava version Hadoop builds with configurable, so people 
> building Hadoop 3 alphas can build with an older version and so cause less 
> unhappiness downstream



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14380) Make the Guava version Hadoop which builds with configurable

2017-05-04 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14380:

Status: Patch Available  (was: Open)

> Make the Guava version Hadoop which builds with configurable
> 
>
> Key: HADOOP-14380
> URL: https://issues.apache.org/jira/browse/HADOOP-14380
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-14380-001.patch
>
>
> Make the choice of guava version Hadoop builds with configurable, so people 
> building Hadoop 3 alphas can build with an older version and so cause less 
> unhappiness downstream



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14380) Make the Guava version Hadoop which builds with configurable

2017-05-04 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14380:

Summary: Make the Guava version Hadoop which builds with configurable  
(was: Make Guava version Hadoop builds with configurable)

> Make the Guava version Hadoop which builds with configurable
> 
>
> Key: HADOOP-14380
> URL: https://issues.apache.org/jira/browse/HADOOP-14380
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> Make the choice of guava version Hadoop builds with configurable, so people 
> building Hadoop 3 alphas can build with an older version and so cause less 
> unhappiness downstream



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14380) Make Guava version Hadoop builds with configurable

2017-05-04 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15996725#comment-15996725
 ] 

Steve Loughran edited comment on HADOOP-14380 at 5/4/17 1:30 PM:
-

FWIW, the min version hadoop-common builds against is 19.0, because of the move 
from Objects to MoreObjects forced by the Java 8 migration. Hadoop trunk is 
Java 8+, so 19.0+ it is, unless someone moves the toString() calls in 
MetricsRegistry off Guava. That isn't hard, and it keeps Guava brittleness down.

{code}
[INFO] Finished at: 2017-05-04T14:24:17+01:00
[INFO] Final Memory: 106M/874M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) on 
project hadoop-common: Compilation failure: Compilation failure:
[ERROR] 
/Users/stevel/Projects/hadoop-trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/lib/MetricsRegistry.java:[25,30]
 cannot find symbol
[ERROR] symbol:   class MoreObjects
[ERROR] location: package com.google.common.base
[ERROR] 
/Users/stevel/Projects/hadoop-trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/MetricsTag.java:[21,30]
 cannot find symbol
{code}


was (Author: ste...@apache.org):
FWIW, the min version hadoop-common builds against is 18.0, because of the move 
from Objects to MoreObjects forced by the Java 8 migration. Hadoop trunk is 
Java 8+, so 18.0+ it is, unless someone moves the toString() calls in 
MetricsRegistry off Guava. That isn't hard, and it keeps Guava brittleness down.

{code}
[INFO] Finished at: 2017-05-04T14:24:17+01:00
[INFO] Final Memory: 106M/874M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) on 
project hadoop-common: Compilation failure: Compilation failure:
[ERROR] 
/Users/stevel/Projects/hadoop-trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/lib/MetricsRegistry.java:[25,30]
 cannot find symbol
[ERROR] symbol:   class MoreObjects
[ERROR] location: package com.google.common.base
[ERROR] 
/Users/stevel/Projects/hadoop-trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/MetricsTag.java:[21,30]
 cannot find symbol
{code}

> Make Guava version Hadoop builds with configurable
> --
>
> Key: HADOOP-14380
> URL: https://issues.apache.org/jira/browse/HADOOP-14380
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> Make the choice of guava version Hadoop builds with configurable, so people 
> building Hadoop 3 alphas can build with an older version and so cause less 
> unhappiness downstream



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14380) Make Guava version Hadoop builds with configurable

2017-05-04 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15996725#comment-15996725
 ] 

Steve Loughran commented on HADOOP-14380:
-

FWIW, the min version hadoop-common builds against is 18.0, because of the move 
from Objects to MoreObjects forced by the Java 8 migration. Hadoop trunk is 
Java 8+, so 18.0+ it is, unless someone moves the toString() calls in 
MetricsRegistry off Guava. That isn't hard, and it keeps Guava brittleness down.

{code}
[INFO] Finished at: 2017-05-04T14:24:17+01:00
[INFO] Final Memory: 106M/874M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) on 
project hadoop-common: Compilation failure: Compilation failure:
[ERROR] 
/Users/stevel/Projects/hadoop-trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/lib/MetricsRegistry.java:[25,30]
 cannot find symbol
[ERROR] symbol:   class MoreObjects
[ERROR] location: package com.google.common.base
[ERROR] 
/Users/stevel/Projects/hadoop-trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/MetricsTag.java:[21,30]
 cannot find symbol
{code}

> Make Guava version Hadoop builds with configurable
> --
>
> Key: HADOOP-14380
> URL: https://issues.apache.org/jira/browse/HADOOP-14380
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> Make the choice of guava version Hadoop builds with configurable, so people 
> building Hadoop 3 alphas can build with an older version and so cause less 
> unhappiness downstream



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14380) Make Guava version Hadoop builds with configurable

2017-05-04 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-14380:
---

 Summary: Make Guava version Hadoop builds with configurable
 Key: HADOOP-14380
 URL: https://issues.apache.org/jira/browse/HADOOP-14380
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0-alpha3
Reporter: Steve Loughran
Assignee: Steve Loughran


Make the choice of guava version Hadoop builds with configurable, so people 
building Hadoop 3 alphas can build with an older version and so cause less 
unhappiness downstream



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10101) Update guava dependency to the latest version

2017-05-04 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15996685#comment-15996685
 ] 

Steve Loughran commented on HADOOP-10101:
-

I'm going to create a JIRA so that we can build Hadoop trunk against older 
versions of Guava; that way Andrew can build the release with an older version 
and we can see what breaks. 

I'm concluding that even if we ship with a newer Guava version, we should build 
with something older (18?), so that the signatures of the methods we compile 
against all match. When you build with 20, new overloads of checkArgument() are 
enough to stop existing code linking against older Guava versions, *even if the 
method call hasn't changed*.
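Concretely, the kind of breakage meant here (a sketch; exact overload sets vary by release):

{code}
// Compiled against Guava 20+, this resolves to the fixed-arity overload
// checkArgument(boolean, String, Object) that 20.0 introduced.
Preconditions.checkArgument(offset >= 0, "negative offset: %s", offset);

// Run that same bytecode with Guava 11.0.2 on the classpath and linking
// fails, even though the source line never changed:
//   java.lang.NoSuchMethodError: com.google.common.base.Preconditions
//     .checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V
{code}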

> Update guava dependency to the latest version
> -
>
> Key: HADOOP-10101
> URL: https://issues.apache.org/jira/browse/HADOOP-10101
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha2
>Reporter: Rakesh R
>Assignee: Tsuyoshi Ozawa
>  Labels: BB2015-05-TBR
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-10101-002.patch, HADOOP-10101-004.patch, 
> HADOOP-10101-005.patch, HADOOP-10101-006.patch, HADOOP-10101-007.patch, 
> HADOOP-10101-008.patch, HADOOP-10101-009.patch, HADOOP-10101-009.patch, 
> HADOOP-10101-010.patch, HADOOP-10101-010.patch, HADOOP-10101-011.patch, 
> HADOOP-10101.012.patch, HADOOP-10101.013.patch, HADOOP-10101.014.patch, 
> HADOOP-10101.015.patch, HADOOP-10101.016.patch, HADOOP-10101.017.patch, 
> HADOOP-10101.018.patch, HADOOP-10101.patch, HADOOP-10101.patch
>
>
> The existing guava version is 11.0.2 which is quite old. This issue tries to 
> update the version to as latest version as possible. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14233) Delay construction of PreCondition.check failure message in Configuration#set

2017-05-04 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15996552#comment-15996552
 ] 

Steve Loughran edited comment on HADOOP-14233 at 5/4/17 11:07 AM:
--

BTW, this is causing problems linking against old guava versions; the relevant 
method in Guava is 20.0+; any app with an older guava version on their CP is 
going to see a stack trace here.

I'm wondering if, short term, we can change it to do the check and string 
construction ourselves



was (Author: ste...@apache.org):
BTW, this is causing problems linking against old guava versions; the relevant 
method in Guava is 20.0+; any app with an older guava version on their CP is 
going to see a stack trace here.


> Delay construction of PreCondition.check failure message in Configuration#set
> -
>
> Key: HADOOP-14233
> URL: https://issues.apache.org/jira/browse/HADOOP-14233
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Jonathan Eagles
>Assignee: Jonathan Eagles
> Fix For: 2.9.0, 2.8.1, 3.0.0-alpha3
>
> Attachments: HADOOP-14233.1.patch
>
>
> The String in the precondition check is constructed prior to failure 
> detection. Since the normal case is no error, we can gain performance by 
> delaying the construction of the string until the failure is detected.
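In code terms, the difference is (illustrative values):

{code}
// Eager: the message is concatenated on every call, even when the check passes.
Preconditions.checkArgument(name != null,
    "Property name must not be null in " + source);

// Lazy: only the template and argument are passed; formatting happens only
// if the check fails.
Preconditions.checkArgument(name != null,
    "Property name must not be null in %s", source);
{code}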



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14233) Delay construction of PreCondition.check failure message in Configuration#set

2017-05-04 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15996552#comment-15996552
 ] 

Steve Loughran commented on HADOOP-14233:
-

BTW, this is causing problems linking against old guava versions; the relevant 
method in Guava is 20.0+; any app with an older guava version on their CP is 
going to see a stack trace here.


> Delay construction of PreCondition.check failure message in Configuration#set
> -
>
> Key: HADOOP-14233
> URL: https://issues.apache.org/jira/browse/HADOOP-14233
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Jonathan Eagles
>Assignee: Jonathan Eagles
> Fix For: 2.9.0, 2.8.1, 3.0.0-alpha3
>
> Attachments: HADOOP-14233.1.patch
>
>
> The String in the precondition check is constructed prior to failure 
> detection. Since the normal case is no error, we can gain performance by 
> delaying the construction of the string until the failure is detected.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13887) Support for client-side encryption in S3A file system

2017-05-04 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15996548#comment-15996548
 ] 

Steve Loughran commented on HADOOP-13887:
-

If you look at HDFS-11644 we're discussing how to make the capabilities of a 
stream discoverable on demand.

but actually the real issue is more fundamental. I expect to be able to get the 
length of a file
{code}
FileStatus status = fs.getFileStatus(path);
{code}

create a buffer from it
{code}
byte[] buffer = new byte[(int) status.getLen()];
{code}
and then read that in
{code}
FSDataInputStream s = fs.open(path);
s.readFully(0, buffer);
{code}

(or do the same in a for() loop)

That runs through a lot of the code: the length of the file is used to 
determine the follow-on actions, rather than just calling read() until a -1 is 
returned.

I don't really know what we can do here to address the mismatch, except in the 
special cases in the code where we can look and see if we can handle the 
situation "the file is shorter than we thought". I'd look at distcp here, 
because at a quick scan it may fail on the mismatch, and it's the foundational 
tool you could use to bootstrap: copy encrypted data down locally, work on it, 
push things back later.

I don't know enough about the HDFS crypto stuff to see how that would link in. 
I'd suggest you subscribe to the hdfs-dev mailing list and start the topic of 
conversation there.

> Support for client-side encryption in S3A file system
> -
>
> Key: HADOOP-13887
> URL: https://issues.apache.org/jira/browse/HADOOP-13887
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Jeeyoung Kim
>Assignee: Igor Mazur
>Priority: Minor
> Attachments: HADOOP-13887-002.patch, HADOOP-13887-007.patch, 
> HADOOP-13887-branch-2-003.patch, HADOOP-13897-branch-2-004.patch, 
> HADOOP-13897-branch-2-005.patch, HADOOP-13897-branch-2-006.patch, 
> HADOOP-13897-branch-2-008.patch, HADOOP-13897-branch-2-009.patch, 
> HADOOP-13897-branch-2-010.patch, HADOOP-13897-branch-2-012.patch, 
> HADOOP-13897-branch-2-014.patch, HADOOP-13897-trunk-011.patch, 
> HADOOP-13897-trunk-013.patch, HADOOP-14171-001.patch
>
>
> Expose the client-side encryption option documented in Amazon S3 
> documentation  - 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingClientSideEncryption.html
> Currently this is not exposed in Hadoop but it is exposed as an option in AWS 
> Java SDK, which Hadoop currently includes. It should be trivial to propagate 
> this as a parameter passed to the S3client used in S3AFileSystem.java



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-9565) Add a Blobstore interface to add to blobstore FileSystems

2017-05-04 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15996535#comment-15996535
 ] 

Steve Loughran commented on HADOOP-9565:


It's languished unloved for a long time. 

I'd thought of a marker interface to say "this is not a real FS", but as you 
know, the behaviour of a blobstore can vary between implementations, and indeed 
between actual deployments.

in HDFS-11644 there's work going on to add a way to probe a stream for having 
specific capabilities, this would be an equivalent. 

Let me revisit this on Friday and see if I can bring it into line with my 
current thinking. 

> Add a Blobstore interface to add to blobstore FileSystems
> -
>
> Key: HADOOP-9565
> URL: https://issues.apache.org/jira/browse/HADOOP-9565
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, fs/s3, fs/swift
>Affects Versions: 2.6.0
>Reporter: Steve Loughran
>Assignee: Thomas Demoor
> Attachments: HADOOP-9565-001.patch, HADOOP-9565-002.patch, 
> HADOOP-9565-003.patch, HADOOP-9565-004.patch, HADOOP-9565-005.patch, 
> HADOOP-9565-006.patch, HADOOP-9565-branch-2-007.patch
>
>
> We can make the fact that some {{FileSystem}} implementations are really 
> blobstores, with different atomicity and consistency guarantees, by adding a 
> {{Blobstore}} interface to add to them. 
> This could also be a place to add a {{Copy(Path,Path)}} method, assuming that 
> all blobstores implement a server-side copy operation as a substitute for 
> rename.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13453) S3Guard: Instrument new functionality with Hadoop metrics.

2017-05-04 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15996530#comment-15996530
 ] 

Steve Loughran commented on HADOOP-13453:
-

Why not take that list, create a new JIRA off HADOOP-13345, "add more S3Guard 
metrics", and suggest those as the start?

One interesting thing to try to detect would be mismatches between S3Guard and 
the underlying object store: if we can observe inconsistencies (how?), then 
that should be measured. The S3mper blog post looks at how Netflix detected 
consistency issues in S3 that way

> S3Guard: Instrument new functionality with Hadoop metrics.
> --
>
> Key: HADOOP-13453
> URL: https://issues.apache.org/jira/browse/HADOOP-13453
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Ai Deng
> Fix For: HADOOP-13345
>
> Attachments: HADOOP-13453-HADOOP-13345-001.patch, 
> HADOOP-13453-HADOOP-13345-002.patch, HADOOP-13453-HADOOP-13345-003.patch, 
> HADOOP-13453-HADOOP-13345-004.patch, HADOOP-13453-HADOOP-13345-005.patch
>
>
> Provide Hadoop metrics showing operational details of the S3Guard 
> implementation.
> The metrics will be implemented in this ticket:
> ● S3GuardRechecksNthPercentileLatency (MutableQuantiles): percentile time 
> spent in rechecks attempting to achieve consistency, repeated for multiple 
> percentile values of N. This metric is an indicator of the additional latency 
> cost of running S3A with S3Guard.
> ● S3GuardRechecksNumOps (MutableQuantiles): number of times a consistency 
> recheck was required while attempting to achieve consistency.
> ● S3GuardStoreNthPercentileLatency (MutableQuantiles): percentile time spent 
> in operations against the consistent store, including both write operations 
> during file system mutations and read operations during file system 
> consistency checks, repeated for multiple percentile values of N. This metric 
> is an indicator of latency to the consistent store implementation.
> ● S3GuardConsistencyStoreNumOps (MutableQuantiles): number of operations 
> against the consistent store, including both write operations during file 
> system mutations and read operations during file system consistency checks.
> ● S3GuardConsistencyStoreFailures (MutableCounterLong): number of failures 
> during operations against the consistent store implementation.
> ● S3GuardConsistencyStoreTimeouts (MutableCounterLong): number of timeouts 
> during operations against the consistent store implementation.
> ● S3GuardInconsistencies (MutableCounterLong): count of times S3Guard failed 
> to achieve consistency, even after exhausting all rechecks. A high count may 
> indicate unexpected out-of-band modification of the S3 bucket contents, such 
> as by an external tool that does not make corresponding updates to the 
> consistent store.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13714) Tighten up our compatibility guidelines for Hadoop 3

2017-05-04 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15996522#comment-15996522
 ] 

Steve Loughran commented on HADOOP-13714:
-

Assume things like {{hadoop fs -ls hdfs://nn1:/temp}} will be parsed through 
some shell script, even if it's just grep and awk.

This is where Windows PowerShell is actually slick: you can chain together more 
than just text and expect the piped objects to work. 





> Tighten up our compatibility guidelines for Hadoop 3
> 
>
> Key: HADOOP-13714
> URL: https://issues.apache.org/jira/browse/HADOOP-13714
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.7.3
>Reporter: Karthik Kambatla
>Assignee: Daniel Templeton
>Priority: Blocker
> Attachments: HADOOP-13714.WIP-001.patch
>
>
> Our current compatibility guidelines are incomplete and loose. For many 
> categories, we do not have a policy. It would be nice to actually define 
> those policies so our users know what to expect and the developers know what 
> releases to target their changes. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13887) Support for client-side encryption in S3A file system

2017-05-04 Thread Igor Mazur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15996504#comment-15996504
 ] 

Igor Mazur commented on HADOOP-13887:
-

As I understand it, the main problem is that FSInputStream implements Seekable 
and PositionedReadable by default, so all the other code was written under that 
assumption.
I can't evaluate whether that is a good assumption for all cases :) But it 
looks like making this part more flexible would be an enormous amount of coding 
and testing.

So maybe we need to try another approach: return the file as-is from S3, but 
also return metadata that includes the type of encryption and the encrypted 
CEK, and decrypt the file in higher layers. I see classes with names like 
CryptoInputStream, etc. I haven't looked at how they work yet - just an idea. 
The biggest problem with this approach is the duplication of 
encryption/decryption logic from the AWS SDK. It looks like it will be hard to 
reuse the same classes from the SDK, because encryption/decryption there is 
tightly linked to getting/putting objects from S3.
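
For reference, the SDK side of this is small. A minimal sketch of constructing 
the SDK's client-side encryption client, assuming a 1.11.x aws-java-sdk; the 
class name and key handling here are illustrative, not the patch's actual 
wiring:

{code:java}
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3EncryptionClientBuilder;
import com.amazonaws.services.s3.model.CryptoConfiguration;
import com.amazonaws.services.s3.model.CryptoMode;
import com.amazonaws.services.s3.model.EncryptionMaterials;
import com.amazonaws.services.s3.model.StaticEncryptionMaterialsProvider;

class CseClientSketch {
  static AmazonS3 newEncryptedClient() throws Exception {
    // Illustrative only: a throwaway AES master key; a real deployment
    // would load the CEK-wrapping key from a keystore or KMS.
    SecretKey masterKey = KeyGenerator.getInstance("AES").generateKey();
    return AmazonS3EncryptionClientBuilder.standard()
        .withEncryptionMaterials(
            new StaticEncryptionMaterialsProvider(
                new EncryptionMaterials(masterKey)))
        .withCryptoConfiguration(
            new CryptoConfiguration(CryptoMode.AuthenticatedEncryption))
        .build();
  }
}
{code}

The clash described above follows from this design: decryption happens inside 
the SDK's getObject(), so a ranged GET returns ciphertext that can't simply be 
decrypted mid-stream, which is what conflicts with the Seekable and 
PositionedReadable contracts.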


> Support for client-side encryption in S3A file system
> -
>
> Key: HADOOP-13887
> URL: https://issues.apache.org/jira/browse/HADOOP-13887
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Jeeyoung Kim
>Assignee: Igor Mazur
>Priority: Minor
> Attachments: HADOOP-13887-002.patch, HADOOP-13887-007.patch, 
> HADOOP-13887-branch-2-003.patch, HADOOP-13897-branch-2-004.patch, 
> HADOOP-13897-branch-2-005.patch, HADOOP-13897-branch-2-006.patch, 
> HADOOP-13897-branch-2-008.patch, HADOOP-13897-branch-2-009.patch, 
> HADOOP-13897-branch-2-010.patch, HADOOP-13897-branch-2-012.patch, 
> HADOOP-13897-branch-2-014.patch, HADOOP-13897-trunk-011.patch, 
> HADOOP-13897-trunk-013.patch, HADOOP-14171-001.patch
>
>
> Expose the client-side encryption option documented in Amazon S3 
> documentation  - 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingClientSideEncryption.html
> Currently this is not exposed in Hadoop but it is exposed as an option in AWS 
> Java SDK, which Hadoop currently includes. It should be trivial to propagate 
> this as a parameter passed to the S3client used in S3AFileSystem.java



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14284) Shade Guava everywhere

2017-05-04 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15996429#comment-15996429
 ] 

Tsuyoshi Ozawa commented on HADOOP-14284:
-

[~vinodkv] and [~djp], thanks a lot for your feedback.

> A couple of questions: can we just shade some client jars instead of 
> everywhere? 
> As we keep doing this for other libraries, I'm concerned if our code becomes 
> more brittle (changing imports everywhere) and if the build times explode.

We're now trying to shade Guava and Curator in hadoop-shaded-thirdparty and to 
import that artifact from the hadoop-* projects. The build time of Hadoop 
doesn't increase much with this approach, because the hadoop-* projects just 
refer to the hadoop-shaded-thirdparty project. However, I found one problem 
with this approach: the shaded artifacts (shaded Guava and Curator) in 
hadoop-shaded-thirdparty are NOT on the classpath, if I understand correctly. 
To go with this approach, we need to unzip the source code and compile it, like 
HBase does in hbase-protocol-shaded. This can make the Hadoop build fragile, 
and the build time of Hadoop can increase, as Junping and Vinod mentioned. 

https://github.com/apache/hbase/blob/7700a7fac1262934fe538a96b040793c6ff171ce/hbase-protocol-shaded/pom.xml#L321

Gradle seems to have a feature to do this.

http://stackoverflow.com/questions/26244936/how-to-include-only-project-and-relocated-classes-when-using-gradle-shadow-plugi

> Isn't it better to just shade our final artifacts instead of shading 
> individual libraries' jars? 

Do you mean that we prepare a new project, "hadoop-server-modules", and shade 
Guava and Curator inside it, like hadoop-client-modules? That sounds like a 
better approach to me. By adding a skipShade option there, we can overcome the 
build-time problem. [~andrew.wang] [~busbey] [~ajisakaa] What do you think?
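
As a reminder of why the shading matters at all, here is a tiny illustration 
(hypothetical downstream code, compiled against Guava 11.0.2):

{code:java}
import java.io.Closeable;
import java.io.StringReader;

import com.google.common.io.Closeables;

public class DownstreamApp {
  public static void main(String[] args) {
    Closeable in = new StringReader("data");
    // Closeables.closeQuietly(Closeable) was removed in Guava 16, so this
    // compiles against Guava 11.0.2 but dies with NoSuchMethodError at
    // runtime if Hadoop drags Guava 21 onto the application classpath.
    // Relocating Hadoop's Guava into a shaded package removes the clash.
    Closeables.closeQuietly(in);
  }
}
{code}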

> Shade Guava everywhere
> --
>
> Key: HADOOP-14284
> URL: https://issues.apache.org/jira/browse/HADOOP-14284
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha3
>Reporter: Andrew Wang
>Assignee: Tsuyoshi Ozawa
>Priority: Blocker
> Attachments: HADOOP-14238.pre001.patch, HADOOP-14284.002.patch, 
> HADOOP-14284.004.patch, HADOOP-14284.007.patch, HADOOP-14284.010.patch
>
>
> HADOOP-10101 upgraded the guava version for 3.x to 21.
> Guava is broadly used by Java projects that consume our artifacts. 
> Unfortunately, these projects also consume our private artifacts like 
> {{hadoop-hdfs}}. They also are unlikely on the new shaded client introduced 
> by HADOOP-11804, currently only available in 3.0.0-alpha2.
> We should shade Guava everywhere to proactively avoid breaking downstreams. 
> This isn't a requirement for all dependency upgrades, but it's necessary for 
> known-bad dependencies like Guava.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org