[jira] [Updated] (HADOOP-14475) Metrics of S3A don't print out when enabled in the Hadoop metrics property file

2017-06-21 Thread Yonger (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonger updated HADOOP-14475:

Status: Open  (was: Patch Available)

> Metrics of S3A don't print out when enabled in the Hadoop metrics property file
> --
>
> Key: HADOOP-14475
> URL: https://issues.apache.org/jira/browse/HADOOP-14475
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: uname -a
> Linux client01 4.4.0-74-generic #95-Ubuntu SMP Wed Apr 12 09:50:34 UTC 2017 
> x86_64 x86_64 x86_64 GNU/Linux
>  cat /etc/issue
> Ubuntu 16.04.2 LTS \n \l
>Reporter: Yonger
>Assignee: Yonger
> Attachments: s3a-metrics.patch1, stdout.zip
>
>
> *.sink.file.class=org.apache.hadoop.metrics2.sink.FileSink
> #*.sink.file.class=org.apache.hadoop.metrics2.sink.influxdb.InfluxdbSink
> #*.sink.influxdb.url=http:/xx
> #*.sink.influxdb.influxdb_port=8086
> #*.sink.influxdb.database=hadoop
> #*.sink.influxdb.influxdb_username=hadoop
> #*.sink.influxdb.influxdb_password=hadoop
> #*.sink.ingluxdb.cluster=c1
> *.period=10
> #namenode.sink.influxdb.class=org.apache.hadoop.metrics2.sink.influxdb.InfluxdbSink
> #S3AFileSystem.sink.influxdb.class=org.apache.hadoop.metrics2.sink.influxdb.InfluxdbSink
> S3AFileSystem.sink.file.filename=s3afilesystem-metrics.out
> I can't find the output file even when I run an MR job that should use S3.
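
A minimal, hedged sketch (not part of this JIRA) of how a metrics2 FileSink is
normally exercised end to end. The prefix "s3afilesystem", the source name
"DemoSource" and the counter below are illustrative assumptions, not the S3A
implementation; a standalone run like this only confirms whether the sink
configuration itself is picked up.

{code:java}
// Illustrative sketch only. Assumes hadoop-metrics2.properties is on the classpath
// and contains sink lines using the prefix passed to initialize() below.
import org.apache.hadoop.metrics2.MetricsSystem;
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;

public class MetricsSinkCheck {

  // A trivial metrics source; the name and context are made up for this check.
  @Metrics(name = "DemoSource", context = "s3afilesystem")
  static class DemoSource {
    @Metric("demo operation count")
    MutableCounterLong demoOps;   // injected by the metrics system on register()
  }

  public static void main(String[] args) {
    // The prefix here must match the "<prefix>.sink..." lines in the properties
    // file, otherwise no sink is ever attached to the records.
    MetricsSystem ms = DefaultMetricsSystem.initialize("s3afilesystem");
    DemoSource source = ms.register("DemoSource", "demo metrics source", new DemoSource());
    source.demoOps.incr();
    ms.publishMetricsNow();   // flush immediately instead of waiting *.period seconds
    ms.shutdown();
  }
}
{code}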



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HADOOP-14549) Use GenericTestUtils.setLogLevel when available

2017-06-21 Thread hewenxin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hewenxin updated HADOOP-14549:
--
Comment: was deleted

(was: PS: I left {{org.apache.log4j.Logger.setLevel}} in 
{{org.apache.hadoop.tools.rumen.datatypes.util.MapReduceJobPropertiesParser}} 
unchanged because it's not test code and should not depend on 
{{GenericTestUtils}}.
I searched the whole project and this is the only occurrence outside test code.
In this situation, should I leave it unchanged, or do something that makes the 
migration easier?)

> Use GenericTestUtils.setLogLevel when available
> ---
>
> Key: HADOOP-14549
> URL: https://issues.apache.org/jira/browse/HADOOP-14549
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Assignee: wenxin he
> Attachments: HADOOP-14549.hadoop-tools.example.patch
>
>
> Based on Brahma's 
> [comment|https://issues.apache.org/jira/browse/HADOOP-14296?focusedCommentId=16054390=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16054390]
>  in HADOOP-14296, it's better to use GenericTestUtils.setLogLevel wherever possible 
> to make the migration easier.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14549) Use GenericTestUtils.setLogLevel when available

2017-06-21 Thread wenxin he (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16058752#comment-16058752
 ] 

wenxin he commented on HADOOP-14549:


PS: I left {{org.apache.log4j.Logger.setLevel}} in 
{{org.apache.hadoop.tools.rumen.datatypes.util.MapReduceJobPropertiesParser}} 
unchanged because it's not test code and should not depend on GenericTestUtils.
I searched the whole project and this is the only occurrence outside test code.
In this situation, should I leave it unchanged, or do something that makes the 
migration easier?
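
For context, a minimal before/after sketch of the replacement being discussed.
The test class and logger below are hypothetical, and this assumes the log4j
{{Logger}} overload of {{GenericTestUtils.setLogLevel}}; it is not taken from
the attached patch.

{code:java}
import org.apache.hadoop.test.GenericTestUtils;
import org.apache.log4j.Level;
import org.apache.log4j.Logger;

public class TestLogLevelMigration {
  private static final Logger LOG = Logger.getLogger(TestLogLevelMigration.class);

  void before() {
    LOG.setLevel(Level.DEBUG);                       // direct log4j call being replaced
  }

  void after() {
    GenericTestUtils.setLogLevel(LOG, Level.DEBUG);  // preferred form in test code
  }
}
{code}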

> Use GenericTestUtils.setLogLevel when available
> ---
>
> Key: HADOOP-14549
> URL: https://issues.apache.org/jira/browse/HADOOP-14549
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Assignee: wenxin he
> Attachments: HADOOP-14549.hadoop-tools.example.patch
>
>
> Based on Brahma's 
> [comment|https://issues.apache.org/jira/browse/HADOOP-14296?focusedCommentId=16054390=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16054390]
>  in HADOOP-14296, it's better to use GenericTestUtils.setLogLevel wherever possible 
> to make the migration easier.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14549) Use GenericTestUtils.setLogLevel when available

2017-06-21 Thread hewenxin (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16058748#comment-16058748
 ] 

hewenxin commented on HADOOP-14549:
---

PS: I left {{org.apache.log4j.Logger.setLevel}} in 
{{org.apache.hadoop.tools.rumen.datatypes.util.MapReduceJobPropertiesParser}} 
unchanged because it's not test code and should not depend on 
{{GenericTestUtils}}.
I searched the whole project and this is the only occurrence outside test code.
In this situation, should I leave it unchanged, or do something that makes the 
migration easier?

> Use GenericTestUtils.setLogLevel when available
> ---
>
> Key: HADOOP-14549
> URL: https://issues.apache.org/jira/browse/HADOOP-14549
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Assignee: wenxin he
> Attachments: HADOOP-14549.hadoop-tools.example.patch
>
>
> Based on Brahma's 
> [comment|https://issues.apache.org/jira/browse/HADOOP-14296?focusedCommentId=16054390=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16054390]
>  in HADOOP-14296, it's better to use GenericTestUtils.setLogLevel wherever possible 
> to make the migration easier.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14396) Add builder interface to FileContext

2017-06-21 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-14396:
---
Affects Version/s: 2.9.0
   3.0.0-alpha3
 Target Version/s: 3.0.0-alpha3, 2.9.0
   Status: Patch Available  (was: Open)

> Add builder interface to FileContext
> 
>
> Key: HADOOP-14396
> URL: https://issues.apache.org/jira/browse/HADOOP-14396
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha3, 2.9.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HADOOP-14396.00.patch
>
>
> Add builder interface for {{FileContext#create}} and {{FileContext#append}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14396) Add builder interface to FileContext

2017-06-21 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-14396:
---
Attachment: HADOOP-14396.00.patch

Uploaded the patch that adds {{FileContext#create(Path)}} and related tests.
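
As a hedged illustration only: a builder-style create on {{FileContext}} would
presumably be used along the lines below, mirroring the
{{FSDataOutputStreamBuilder}} pattern on {{FileSystem}}. Every builder method
shown after {{create(Path)}} is an assumption for illustration, not the API
defined in HADOOP-14396.00.patch.

{code:java}
// Hypothetical usage sketch; only FileContext#create(Path) as a builder entry point
// is taken from the JIRA, the chained options are assumptions.
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class FileContextBuilderSketch {
  static void write(FileContext fc, Path path, byte[] data) throws Exception {
    try (FSDataOutputStream out = fc.create(path)          // builder entry point
            .overwrite(true)                               // assumed option
            .permission(FsPermission.getFileDefault())     // assumed option
            .build()) {
      out.write(data);
    }
  }
}
{code}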

> Add builder interface to FileContext
> 
>
> Key: HADOOP-14396
> URL: https://issues.apache.org/jira/browse/HADOOP-14396
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.9.0, 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HADOOP-14396.00.patch
>
>
> Add builder interface for {{FileContext#create}} and {{FileContext#append}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HADOOP-14549) Use GenericTestUtils.setLogLevel when available

2017-06-21 Thread wenxin he (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wenxin he updated HADOOP-14549:
---
Comment: was deleted

(was: [~ajisakaa], I attached a demo patch to replace 
{{org.apache.log4j.Logger.setLevel}} in hadoop-tools, please kindly review. If 
this patch is OK, I'll do the same replacement in the other modules. Thanks!)

> Use GenericTestUtils.setLogLevel when available
> ---
>
> Key: HADOOP-14549
> URL: https://issues.apache.org/jira/browse/HADOOP-14549
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Assignee: wenxin he
> Attachments: HADOOP-14549.hadoop-tools.example.patch
>
>
> Based on Brahma's 
> [comment|https://issues.apache.org/jira/browse/HADOOP-14296?focusedCommentId=16054390=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16054390]
>  in HADOOP-14296, it's better to use GenericTestUtils.setLogLevel wherever possible 
> to make the migration easier.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14549) Use GenericTestUtils.setLogLevel when available

2017-06-21 Thread wenxin he (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16058680#comment-16058680
 ] 

wenxin he commented on HADOOP-14549:


[~ajisakaa], I attached a demo patch to replace 
{{org.apache.log4j.Logger.setLevel}} in hadoop-tools, please kindly review. If 
this patch is OK, I'll do the same replacement in the other modules. Thanks!

> Use GenericTestUtils.setLogLevel when available
> ---
>
> Key: HADOOP-14549
> URL: https://issues.apache.org/jira/browse/HADOOP-14549
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Assignee: wenxin he
> Attachments: HADOOP-14549.hadoop-tools.example.patch
>
>
> Based on Brahma's 
> [comment|https://issues.apache.org/jira/browse/HADOOP-14296?focusedCommentId=16054390=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16054390]
>  in HADOOP-14296, it's better to use GenericTestUtils.setLogLevel wherever possible 
> to make the migration easier.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14549) Use GenericTestUtils.setLogLevel when available

2017-06-21 Thread wenxin he (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wenxin he updated HADOOP-14549:
---
Attachment: HADOOP-14549.hadoop-tools.example.patch

> Use GenericTestUtils.setLogLevel when available
> ---
>
> Key: HADOOP-14549
> URL: https://issues.apache.org/jira/browse/HADOOP-14549
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Assignee: wenxin he
> Attachments: HADOOP-14549.hadoop-tools.example.patch
>
>
> Based on Brahma's 
> [comment|https://issues.apache.org/jira/browse/HADOOP-14296?focusedCommentId=16054390=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16054390]
>  in HADOOP-14296, it's better to use GenericTestUtils.setLogLevel wherever possible 
> to make the migration easier.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14549) Use GenericTestUtils.setLogLevel when available

2017-06-21 Thread wenxin he (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16058677#comment-16058677
 ] 

wenxin he commented on HADOOP-14549:


[~ajisakaa], I attached a demo patch to replace 
{{org.apache.log4j.Logger.setLevel}} in hadoop-tools, please kindly review. If 
this patch is OK, I'll do the same replacement in the other modules. Thanks!

> Use GenericTestUtils.setLogLevel when available
> ---
>
> Key: HADOOP-14549
> URL: https://issues.apache.org/jira/browse/HADOOP-14549
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Assignee: wenxin he
>
> Based on Brahma's 
> [comment|https://issues.apache.org/jira/browse/HADOOP-14296?focusedCommentId=16054390=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16054390]
>  in HADOOP-14296, it's better to use GenericTestUtils.setLogLevel wherever possible 
> to make the migration easier.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14520) Block compaction for WASB

2017-06-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16058645#comment-16058645
 ] 

Hadoop QA commented on HADOOP-14520:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 13s{color} | {color:orange} hadoop-tools/hadoop-azure: The patch generated 
75 new + 111 unchanged - 4 fixed = 186 total (was 115) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
38s{color} | {color:red} hadoop-tools/hadoop-azure generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
12s{color} | {color:red} hadoop-azure in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
28s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m  2s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-tools/hadoop-azure |
|  |  org.apache.hadoop.fs.azure.BlockBlobAppendStream.commitAppendBlocks() 
calls Thread.sleep() with a lock held  At BlockBlobAppendStream.java:lock held  
At BlockBlobAppendStream.java:[line 521] |
|  |  org.apache.hadoop.fs.azure.BlockBlobAppendStream.getBufferSize() is 
unsynchronized, 
org.apache.hadoop.fs.azure.BlockBlobAppendStream.setBufferSize(int) is 
synchronized  At BlockBlobAppendStream.java:synchronized  At 
BlockBlobAppendStream.java:[line 302] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14520 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12873986/HADOOP-14520-03.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux b1786b7fc93a 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c22cf00 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12595/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-azure.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12595/artifact/patchprocess/whitespace-eol.txt
 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12595/artifact/patchprocess/new-findbugs-hadoop-tools_hadoop-azure.html
 |
| javadoc | 

[jira] [Commented] (HADOOP-14527) ITestS3GuardListConsistency is too slow

2017-06-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16058610#comment-16058610
 ] 

Hadoop QA commented on HADOOP-14527:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
18s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
34s{color} | {color:red} hadoop-tools/hadoop-aws in HADOOP-13345 has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 12s{color} | {color:orange} hadoop-tools/hadoop-aws: The patch generated 1 
new + 2 unchanged - 0 fixed = 3 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
39s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m  2s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14527 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12873979/HADOOP-14527-HADOOP-13345.004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 052f24fa4911 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HADOOP-13345 / 0db7176 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12594/artifact/patchprocess/branch-findbugs-hadoop-tools_hadoop-aws-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12594/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12594/testReport/ |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12594/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> ITestS3GuardListConsistency is too slow
> ---
>
> Key: HADOOP-14527
> URL: https://issues.apache.org/jira/browse/HADOOP-14527
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>Priority: Minor

[jira] [Commented] (HADOOP-14284) Shade Guava everywhere

2017-06-21 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16058601#comment-16058601
 ] 

Tsuyoshi Ozawa commented on HADOOP-14284:
-

Sorry for the delay.

I would like to wrap up all the opinions discussed here before moving forward. 
After that, I will continue creating a patch based on that approach.

> Shade Guava everywhere
> --
>
> Key: HADOOP-14284
> URL: https://issues.apache.org/jira/browse/HADOOP-14284
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha4
>Reporter: Andrew Wang
>Assignee: Tsuyoshi Ozawa
>Priority: Blocker
> Attachments: HADOOP-14238.pre001.patch, HADOOP-14284.002.patch, 
> HADOOP-14284.004.patch, HADOOP-14284.007.patch, HADOOP-14284.010.patch, 
> HADOOP-14284.012.patch
>
>
> HADOOP-10101 upgraded the guava version for 3.x to 21.
> Guava is broadly used by Java projects that consume our artifacts. 
> Unfortunately, these projects also consume our private artifacts like 
> {{hadoop-hdfs}}. They also are unlikely on the new shaded client introduced 
> by HADOOP-11804, currently only available in 3.0.0-alpha2.
> We should shade Guava everywhere to proactively avoid breaking downstreams. 
> This isn't a requirement for all dependency upgrades, but it's necessary for 
> known-bad dependencies like Guava.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14284) Shade Guava everywhere

2017-06-21 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-14284:
-
Target Version/s: 3.0.0-beta1  (was: 3.0.0-alpha4)

> Shade Guava everywhere
> --
>
> Key: HADOOP-14284
> URL: https://issues.apache.org/jira/browse/HADOOP-14284
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha4
>Reporter: Andrew Wang
>Assignee: Tsuyoshi Ozawa
>Priority: Blocker
> Attachments: HADOOP-14238.pre001.patch, HADOOP-14284.002.patch, 
> HADOOP-14284.004.patch, HADOOP-14284.007.patch, HADOOP-14284.010.patch, 
> HADOOP-14284.012.patch
>
>
> HADOOP-10101 upgraded the guava version for 3.x to 21.
> Guava is broadly used by Java projects that consume our artifacts. 
> Unfortunately, these projects also consume our private artifacts like 
> {{hadoop-hdfs}}. They also are unlikely on the new shaded client introduced 
> by HADOOP-11804, currently only available in 3.0.0-alpha2.
> We should shade Guava everywhere to proactively avoid breaking downstreams. 
> This isn't a requirement for all dependency upgrades, but it's necessary for 
> known-bad dependencies like Guava.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14238) [Umbrella] Rechecking Guava's object is not exposed to user-facing API

2017-06-21 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-14238:
-
Target Version/s: 3.0.0-beta1  (was: 3.0.0-alpha4)

> [Umbrella] Rechecking Guava's object is not exposed to user-facing API
> --
>
> Key: HADOOP-14238
> URL: https://issues.apache.org/jira/browse/HADOOP-14238
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
>Priority: Blocker
>
> This is reported by [~hitesh] on HADOOP-10101.
> At least, AMRMClient#waitFor takes Guava's Supplier as an argument.
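
For illustration, the kind of user-facing call meant here; a hedged sketch in
which only {{AMRMClient#waitFor}} is the real API and the surrounding class is
made up. Because the parameter type is {{com.google.common.base.Supplier}},
application code is compiled directly against Guava, which is what any shading
or replacement of the type has to take into account.

{code:java}
import com.google.common.base.Supplier;
import java.util.concurrent.atomic.AtomicBoolean;
import org.apache.hadoop.yarn.client.api.AMRMClient;

class GuavaExposureExample {
  static void waitUntilFinished(AMRMClient<?> client, final AtomicBoolean finished)
      throws InterruptedException {
    client.waitFor(new Supplier<Boolean>() {   // caller must implement a Guava interface
      @Override
      public Boolean get() {
        return finished.get();
      }
    });
  }
}
{code}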



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13363) Upgrade protobuf from 2.5.0 to something newer

2017-06-21 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13363:
-
Target Version/s: 3.0.0-beta1  (was: 3.0.0-alpha4)

> Upgrade protobuf from 2.5.0 to something newer
> --
>
> Key: HADOOP-13363
> URL: https://issues.apache.org/jira/browse/HADOOP-13363
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Tsuyoshi Ozawa
>Priority: Blocker
> Attachments: HADOOP-13363.001.patch, HADOOP-13363.002.patch, 
> HADOOP-13363.003.patch, HADOOP-13363.004.patch, HADOOP-13363.005.patch
>
>
> Standard protobuf 2.5.0 does not work properly on many platforms.  (See, for 
> example, https://gist.github.com/BennettSmith/7111094 ). To avoid crazy 
> workarounds in the build environment, and because 2.5.0 is slowly disappearing 
> as a standard installable package even for Linux/x86, we need to either 
> upgrade, bundle it ourselves, or do something else.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14284) Shade Guava everywhere

2017-06-21 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16058579#comment-16058579
 ] 

Andrew Wang commented on HADOOP-14284:
--

I've bumped this out to beta1, which is truly last call. Let's figure out our 
shading story by then.

> Shade Guava everywhere
> --
>
> Key: HADOOP-14284
> URL: https://issues.apache.org/jira/browse/HADOOP-14284
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha4
>Reporter: Andrew Wang
>Assignee: Tsuyoshi Ozawa
>Priority: Blocker
> Attachments: HADOOP-14238.pre001.patch, HADOOP-14284.002.patch, 
> HADOOP-14284.004.patch, HADOOP-14284.007.patch, HADOOP-14284.010.patch, 
> HADOOP-14284.012.patch
>
>
> HADOOP-10101 upgraded the guava version for 3.x to 21.
> Guava is broadly used by Java projects that consume our artifacts. 
> Unfortunately, these projects also consume our private artifacts like 
> {{hadoop-hdfs}}. They also are unlikely on the new shaded client introduced 
> by HADOOP-11804, currently only available in 3.0.0-alpha2.
> We should shade Guava everywhere to proactively avoid breaking downstreams. 
> This isn't a requirement for all dependency upgrades, but it's necessary for 
> known-bad dependencies like Guava.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14520) Block compaction for WASB

2017-06-21 Thread Georgi Chalakov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Georgi Chalakov updated HADOOP-14520:
-
Status: Patch Available  (was: Open)

> Block compaction for WASB
> -
>
> Key: HADOOP-14520
> URL: https://issues.apache.org/jira/browse/HADOOP-14520
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 3.0.0-alpha3
>Reporter: Georgi Chalakov
>Assignee: Georgi Chalakov
> Attachments: HADOOP-14520-01.patch, HADOOP-14520-01-test.txt, 
> HADOOP-14520-03.patch
>
>
> Block Compaction for WASB allows uploading new blocks for every hflush/hsync 
> call. When the number of blocks is above 32000, the next hflush/hsync triggers 
> the block compaction process. Block compaction replaces a sequence of blocks 
> with one block. From all the sequences with total length less than 4M, 
> compaction chooses the longest one. It is a greedy algorithm that preserves 
> all potential candidates for the next round. Block Compaction for WASB 
> increases data durability and allows using block blobs instead of page blobs. 
> By default, block compaction is disabled. Similar to the configuration for 
> page blobs, the client needs to specify HDFS folders where block compaction 
> over block blobs is enabled. 
> Results for HADOOP-14520-01.patch
> tested endpoint: fs.azure.account.key.hdfs4.blob.core.windows.net
> Tests run: 704, Failures: 0, Errors: 0, Skipped: 119
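
A hedged sketch of the selection step described above, for illustration only
(not code from the patch; in particular, the real implementation also preserves
the other candidates for later rounds, which this sketch does not): find the
longest contiguous run of blocks whose combined size stays under the 4 MB limit.

{code:java}
import java.util.List;

final class CompactionChooser {
  static final long MAX_COMPACTED_SIZE = 4L * 1024 * 1024;   // the 4M limit above

  /** Returns {startIndex, endIndexExclusive} of the longest candidate run, or null. */
  static int[] longestCompactableRun(List<Long> blockSizes) {
    int bestStart = -1, bestEnd = -1;
    int start = 0;
    long sum = 0;
    for (int end = 0; end < blockSizes.size(); end++) {
      sum += blockSizes.get(end);
      // Shrink the window until its total length is below the limit again.
      while (sum >= MAX_COMPACTED_SIZE && start <= end) {
        sum -= blockSizes.get(start++);
      }
      if (end + 1 - start > bestEnd - bestStart) {
        bestStart = start;
        bestEnd = end + 1;
      }
    }
    // Compacting a single block (or nothing) is pointless.
    return bestEnd - bestStart > 1 ? new int[] {bestStart, bestEnd} : null;
  }
}
{code}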



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14520) Block compaction for WASB

2017-06-21 Thread Georgi Chalakov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Georgi Chalakov updated HADOOP-14520:
-
Attachment: HADOOP-14520-03.patch

> Block compaction for WASB
> -
>
> Key: HADOOP-14520
> URL: https://issues.apache.org/jira/browse/HADOOP-14520
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 3.0.0-alpha3
>Reporter: Georgi Chalakov
>Assignee: Georgi Chalakov
> Attachments: HADOOP-14520-01.patch, HADOOP-14520-01-test.txt, 
> HADOOP-14520-03.patch
>
>
> Block Compaction for WASB allows uploading new blocks for every hflush/hsync 
> call. When the number of blocks is above 32000, the next hflush/hsync triggers 
> the block compaction process. Block compaction replaces a sequence of blocks 
> with one block. From all the sequences with total length less than 4M, 
> compaction chooses the longest one. It is a greedy algorithm that preserves 
> all potential candidates for the next round. Block Compaction for WASB 
> increases data durability and allows using block blobs instead of page blobs. 
> By default, block compaction is disabled. Similar to the configuration for 
> page blobs, the client needs to specify HDFS folders where block compaction 
> over block blobs is enabled. 
> Results for HADOOP-14520-01.patch
> tested endpoint: fs.azure.account.key.hdfs4.blob.core.windows.net
> Tests run: 704, Failures: 0, Errors: 0, Skipped: 119



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14520) Block compaction for WASB

2017-06-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16058563#comment-16058563
 ] 

Hadoop QA commented on HADOOP-14520:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 14s{color} | {color:orange} hadoop-tools/hadoop-azure: The patch generated 
76 new + 111 unchanged - 4 fixed = 187 total (was 115) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
42s{color} | {color:red} hadoop-tools/hadoop-azure generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
14s{color} | {color:red} hadoop-azure in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
41s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m 52s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-tools/hadoop-azure |
|  |  org.apache.hadoop.fs.azure.BlockBlobAppendStream.commitAppendBlocks() 
calls Thread.sleep() with a lock held  At BlockBlobAppendStream.java:lock held  
At BlockBlobAppendStream.java:[line 522] |
|  |  org.apache.hadoop.fs.azure.BlockBlobAppendStream.getBufferSize() is 
unsynchronized, 
org.apache.hadoop.fs.azure.BlockBlobAppendStream.setBufferSize(int) is 
synchronized  At BlockBlobAppendStream.java:synchronized  At 
BlockBlobAppendStream.java:[line 303] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14520 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12873970/HADOOP-14520-02.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 858c162f95e0 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c22cf00 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12593/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-azure.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12593/artifact/patchprocess/whitespace-eol.txt
 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12593/artifact/patchprocess/new-findbugs-hadoop-tools_hadoop-azure.html
 |
| javadoc | 

[jira] [Commented] (HADOOP-14566) Add seek support for SFTP FileSystem

2017-06-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16058560#comment-16058560
 ] 

Hadoop QA commented on HADOOP-14566:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
27s{color} | {color:red} hadoop-common-project/hadoop-common in trunk has 17 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
35s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 39s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 2 new + 4 unchanged - 0 fixed = 6 total (was 4) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 1 line(s) with tabs. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
51s{color} | {color:red} hadoop-common-project/hadoop-common generated 2 new + 
17 unchanged - 0 fixed = 19 total (was 17) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
25s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m  8s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-common-project/hadoop-common |
|  |  Inconsistent synchronization of 
org.apache.hadoop.fs.sftp.SFTPInputStream.pos; locked 66% of time  
Unsynchronized access at SFTPInputStream.java:66% of time  Unsynchronized 
access at SFTPInputStream.java:[line 67] |
|  |  org.apache.hadoop.fs.sftp.SFTPInputStream.seek(long) ignores result of 
java.io.InputStream.skip(long)  At SFTPInputStream.java: At 
SFTPInputStream.java:[line 66] |
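
The second findbugs entry above flags a real pitfall. As a generic illustration
(an assumption about how such a seek is usually fixed, not the attached patch):
{{java.io.InputStream.skip}} may skip fewer bytes than requested, so a seek
built on it has to loop on the return value.

{code:java}
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

final class SkipFully {
  static void skipFully(InputStream in, long bytesToSkip) throws IOException {
    while (bytesToSkip > 0) {
      long skipped = in.skip(bytesToSkip);
      if (skipped > 0) {
        bytesToSkip -= skipped;
      } else if (in.read() >= 0) {   // skip() made no progress; advance by one byte
        bytesToSkip--;
      } else {
        throw new EOFException("Stream ended with " + bytesToSkip + " bytes left to skip");
      }
    }
  }
}
{code}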
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14566 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12873962/HADOOP-14566.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 72c5ed7feb60 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c22cf00 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12592/artifact/patchprocess/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12592/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| whitespace | 

[jira] [Updated] (HADOOP-14520) Block compaction for WASB

2017-06-21 Thread Georgi Chalakov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Georgi Chalakov updated HADOOP-14520:
-
Attachment: (was: HADOOP-14520-02.patch)

> Block compaction for WASB
> -
>
> Key: HADOOP-14520
> URL: https://issues.apache.org/jira/browse/HADOOP-14520
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 3.0.0-alpha3
>Reporter: Georgi Chalakov
>Assignee: Georgi Chalakov
> Attachments: HADOOP-14520-01.patch, HADOOP-14520-01-test.txt
>
>
> Block Compaction for WASB allows uploading new blocks for every hflush/hsync 
> call. When the number of blocks is above 32000, the next hflush/hsync triggers 
> the block compaction process. Block compaction replaces a sequence of blocks 
> with one block. From all the sequences with total length less than 4M, 
> compaction chooses the longest one. It is a greedy algorithm that preserves 
> all potential candidates for the next round. Block Compaction for WASB 
> increases data durability and allows using block blobs instead of page blobs. 
> By default, block compaction is disabled. Similar to the configuration for 
> page blobs, the client needs to specify HDFS folders where block compaction 
> over block blobs is enabled. 
> Results for HADOOP-14520-01.patch
> tested endpoint: fs.azure.account.key.hdfs4.blob.core.windows.net
> Tests run: 704, Failures: 0, Errors: 0, Skipped: 119



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14520) Block compaction for WASB

2017-06-21 Thread Georgi Chalakov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Georgi Chalakov updated HADOOP-14520:
-
Status: Open  (was: Patch Available)

> Block compaction for WASB
> -
>
> Key: HADOOP-14520
> URL: https://issues.apache.org/jira/browse/HADOOP-14520
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 3.0.0-alpha3
>Reporter: Georgi Chalakov
>Assignee: Georgi Chalakov
> Attachments: HADOOP-14520-01.patch, HADOOP-14520-01-test.txt
>
>
> Block Compaction for WASB allows uploading new blocks for every hflush/hsync 
> call. When the number of blocks is above 32000, the next hflush/hsync triggers 
> the block compaction process. Block compaction replaces a sequence of blocks 
> with one block. From all the sequences with total length less than 4M, 
> compaction chooses the longest one. It is a greedy algorithm that preserves 
> all potential candidates for the next round. Block Compaction for WASB 
> increases data durability and allows using block blobs instead of page blobs. 
> By default, block compaction is disabled. Similar to the configuration for 
> page blobs, the client needs to specify HDFS folders where block compaction 
> over block blobs is enabled. 
> Results for HADOOP-14520-01.patch
> tested endpoint: fs.azure.account.key.hdfs4.blob.core.windows.net
> Tests run: 704, Failures: 0, Errors: 0, Skipped: 119



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14499) Findbugs warning in LocalMetadataStore

2017-06-21 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16058469#comment-16058469
 ] 

Aaron Fabbri edited comment on HADOOP-14499 at 6/22/17 12:16 AM:
-

Yeah, sorry for the delay responding to your earlier comment.  I think we need a 
more rigorous definition of prune(), at least for my understanding.  Would be 
nice to write up the rules for prune, i.e. isAuthoritative (you have to clear 
bit on parent if you prune a child, etc) and have more contract tests around it.

Personally I'd just fix the findbugs warning in this JIRA. :-)

My previous concern was in deleting empty directories regardless of age: They 
do need to be present during the inconsistency window. i.e. 
getFileStatus(empty-dir-path) should see empty directories in MetadataStore 
until the consistency window (prune age) has elapsed.  Maybe I misread your 
original patch?

{quote}
Since we currently don't track the last time at which a directory listing was 
modified 
{quote}

Couldn't a MetadataStore track the time the entry was last written into the 
MetadataStore?

There are two "modification" times we can consider:

A. The time at which an entry was (last) written into a MetadataStore
B. The FileStatus's modification time field.

My thinking is that prune is based on A (mostly because B is unreliable): The 
sliding inconsistency window.  It works nicely for both inconsistency and 
caching (unlike B).  For files, we can use the fact that A>=B.

Anyways, it seems to me that a MetadataStore could safely expire and/or prune 
directory metadata, so I'm not sure the comment change is needed?


was (Author: fabbri):
Yeah, sorry for the delay responding to your earlier comment.  I think we need a 
more rigorous definition of prune(), at least for my understanding.  Would be 
nice to write up the rules for isAuthoritative (you have to clear bit on parent 
if you prune a child, etc) and have more contract tests around it.

Personally I'd just fix the findbugs warning in this JIRA. :-)

My previous concern was in deleting empty directories regardless of age: They 
do need to be present during the inconsistency window. i.e. 
getFileStatus(empty-dir-path) should see empty directories in MetadataStore 
until the consistency window (prune age) has elapsed.  Maybe I misread your 
original patch?

{quote}
Since we currently don't track the last time at which a directory listing was 
modified 
{quote}

Couldn't a MetadataStore track the time the entry was last written into the 
MetadataStore?

There are two "modification" times we can consider:

A. The time at which an entry was (last) written into a MetadataStore
B. The FileStatus's modification time field.

My thinking is that prune is based on A (mostly because B is unreliable): The 
sliding inconsistency window.  It works nicely for both inconsistency and 
caching (unlike B).  For files, we can use the fact that A>=B.

Anyways, it seems to me that a MetadataStore could safely expire and/or prune 
directory metadata, so I'm not sure the comment change is needed?
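
To make option A concrete, a hedged sketch of the idea (an illustration only,
not {{LocalMetadataStore}} code): the store records the wall-clock time at which
each entry was last written, and {{prune(modTime)}} drops entries written before
that cutoff, independent of the FileStatus modification time.

{code:java}
import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

final class PruneByWriteTime<K, V> {

  private static final class Entry<V> {
    final V value;
    final long lastWrittenMillis;   // time "A": when the entry was written into the store
    Entry(V value, long now) { this.value = value; this.lastWrittenMillis = now; }
  }

  private final Map<K, Entry<V>> entries = new ConcurrentHashMap<>();

  void put(K key, V value) {
    entries.put(key, new Entry<>(value, System.currentTimeMillis()));
  }

  /** Drop everything written before modTimeMillis, mirroring the prune(long) call above. */
  void prune(long modTimeMillis) {
    for (Iterator<Map.Entry<K, Entry<V>>> it = entries.entrySet().iterator(); it.hasNext();) {
      if (it.next().getValue().lastWrittenMillis < modTimeMillis) {
        it.remove();
      }
    }
  }
}
{code}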

> Findbugs warning in LocalMetadataStore
> --
>
> Key: HADOOP-14499
> URL: https://issues.apache.org/jira/browse/HADOOP-14499
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-14499-HADOOP-13345.001.patch, 
> HADOOP-14499-HADOOP-13345.002.patch
>
>
> First saw this raised by Yetus on HADOOP-14433:
> {code}
> Bug type UC_USELESS_OBJECT (click for details)
> In class org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore
> In method org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore.prune(long)
> Value ancestors
> Type java.util.LinkedList
> At LocalMetadataStore.java:[line 300]
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14527) ITestS3GuardListConsistency is too slow

2017-06-21 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16058488#comment-16058488
 ] 

Sean Mackrory commented on HADOOP-14527:


+1 pending Yetus' cooperation. I can commit later.

> ITestS3GuardListConsistency is too slow
> ---
>
> Key: HADOOP-14527
> URL: https://issues.apache.org/jira/browse/HADOOP-14527
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>Priority: Minor
> Attachments: HADOOP-14527-HADOOP-13345.001.patch, 
> HADOOP-14527-HADOOP-13345.002.patch, HADOOP-14527-HADOOP-13345.003.patch, 
> HADOOP-14527-HADOOP-13345.004.patch
>
>
> I'm really glad to see folks adopting the inconsistency injection stuff and 
> adding test cases to ITestS3GuardListConsistency.  That test class has become 
> very slow, however, due to {{Thread.sleep()}} calls that wait for the 
> inconsistency injection timers to expire, and nested loops that run numerous 
> permutations of the test cases.
> I will take a stab at speeding up this test class. As is, it takes about 8 
> minutes to run.
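
As a hedged aside on the general technique (the attached patches may do
something entirely different): a common way to cut fixed sleeps in Hadoop tests
is to poll for the condition with {{GenericTestUtils.waitFor}} instead of
sleeping for the worst case. The condition below is made up for illustration.

{code:java}
import com.google.common.base.Supplier;
import java.util.concurrent.atomic.AtomicBoolean;
import org.apache.hadoop.test.GenericTestUtils;

class WaitInsteadOfSleep {
  static void awaitCondition(final AtomicBoolean conditionMet) throws Exception {
    // Instead of Thread.sleep(<worst-case delay>):
    GenericTestUtils.waitFor(new Supplier<Boolean>() {
      @Override
      public Boolean get() {
        return conditionMet.get();   // e.g. "the listing now shows the expected paths"
      }
    }, 100, 30000);                  // check every 100 ms, give up after 30 s
  }
}
{code}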



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13786) Add S3Guard committer for zero-rename commits to S3 endpoints

2017-06-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16058485#comment-16058485
 ] 

Hadoop QA commented on HADOOP-13786:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 41 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
35s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
21s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
53s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 0s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
52s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
48s{color} | {color:red} hadoop-common-project/hadoop-common in HADOOP-13345 
has 19 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
48s{color} | {color:red} hadoop-tools/hadoop-aws in HADOOP-13345 has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
21s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m  
2s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 12m  2s{color} 
| {color:red} root generated 1 new + 787 unchanged - 1 fixed = 788 total (was 
788) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  7s{color} | {color:orange} root: The patch generated 48 new + 121 unchanged 
- 23 fixed = 169 total (was 144) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
7s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 13 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-registry 
generated 0 new + 45 unchanged - 3 fixed = 45 total (was 48) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m  
4s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
53s{color} | {color:green} hadoop-yarn-registry in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
10s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
15s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}108m 44s{color} | 

[jira] [Updated] (HADOOP-14527) ITestS3GuardListConsistency is too slow

2017-06-21 Thread Aaron Fabbri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Fabbri updated HADOOP-14527:
--
Attachment: HADOOP-14527-HADOOP-13345.004.patch

Thanks for the review [~mackrorysd].  Attaching v4 patch that removes stopwatch 
timing of test cases.

> ITestS3GuardListConsistency is too slow
> ---
>
> Key: HADOOP-14527
> URL: https://issues.apache.org/jira/browse/HADOOP-14527
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>Priority: Minor
> Attachments: HADOOP-14527-HADOOP-13345.001.patch, 
> HADOOP-14527-HADOOP-13345.002.patch, HADOOP-14527-HADOOP-13345.003.patch, 
> HADOOP-14527-HADOOP-13345.004.patch
>
>
> I'm really glad to see folks adopting the inconsistency injection stuff and 
> adding test cases to ITestS3GuardListConsistency.  That test class has become 
> very slow, however, due to {{Thread.sleep()}} calls that wait for the 
> inconsistency injection timers to expire, and nested loops that run numerous 
> permutations of the test cases.
> I will take a stab at speeding up this test class.  As is it takes about 8 
> minutes to run.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14499) Findbugs warning in LocalMetadataStore

2017-06-21 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16058469#comment-16058469
 ] 

Aaron Fabbri commented on HADOOP-14499:
---

Yeah, sorry for the delay in responding to your earlier comment.  I think we need a 
more rigorous definition of prune(), at least for my understanding.  It would be 
nice to write up the rules for isAuthoritative (you have to clear the bit on the 
parent if you prune a child, etc.) and have more contract tests around it.

Personally I'd just fix the findbugs warning in this JIRA. :-)

My previous concern was about deleting empty directories regardless of age: they 
do need to be present during the inconsistency window, i.e. 
getFileStatus(empty-dir-path) should see empty directories in the MetadataStore 
until the consistency window (prune age) has elapsed.  Maybe I misread your 
original patch?

{quote}
Since we currently don't track the last time at which a directory listing was 
modified 
{quote}

Couldn't a MetadataStore track the time the entry was last written into the 
MetadataStore?

There are two "modification" times we can consider:

A. The time which an entry was (last) written into a MetadataStore
B. The FileStatus's modification time field.

My thinking is that prune is based on A (mostly because B is unreliable): the 
sliding inconsistency window.  It works nicely for both inconsistency and 
caching (unlike B).  For files, we can use the fact that A >= B.

Anyways, it seems to me that a MetadataStore could safely expire and/or prune 
directory metadata, so I'm not sure the comment change is needed?
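
To make the A-vs-B distinction concrete, here is a minimal sketch (illustrative 
only, not part of any attached patch; the class and field names are made up) of 
a store that prunes on option A, the time an entry was last written into the 
MetadataStore:

{code}
import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.s3a.s3guard.PathMetadata;

// Hypothetical sketch: prune by the time an entry was written into the store
// (option A), not by the FileStatus modification time (option B).
class PruneByWriteTimeSketch {
  private final Map<Path, PathMetadata> entries = new ConcurrentHashMap<>();
  private final Map<Path, Long> lastWritten = new ConcurrentHashMap<>();

  void put(PathMetadata meta) {
    Path p = meta.getFileStatus().getPath();
    entries.put(p, meta);
    lastWritten.put(p, System.currentTimeMillis());  // option A: record write time
  }

  /** Remove entries last written before modTime (the prune-age cut-off). */
  void prune(long modTime) {
    Iterator<Map.Entry<Path, Long>> it = lastWritten.entrySet().iterator();
    while (it.hasNext()) {
      Map.Entry<Path, Long> e = it.next();
      if (e.getValue() < modTime) {        // older than the sliding window
        entries.remove(e.getKey());
        it.remove();
        // a real store would also clear isAuthoritative on the parent listing
      }
    }
  }
}
{code}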

> Findbugs warning in LocalMetadataStore
> --
>
> Key: HADOOP-14499
> URL: https://issues.apache.org/jira/browse/HADOOP-14499
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-14499-HADOOP-13345.001.patch, 
> HADOOP-14499-HADOOP-13345.002.patch
>
>
> First saw this raised by Yetus on HADOOP-14433:
> {code}
> Bug type UC_USELESS_OBJECT (click for details)
> In class org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore
> In method org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore.prune(long)
> Value ancestors
> Type java.util.LinkedList
> At LocalMetadataStore.java:[line 300]
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14502) Confusion/name conflict between NameNodeActivity#BlockReportNumOps and RpcDetailedActivity#BlockReportNumOps

2017-06-21 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HADOOP-14502:
---
   Resolution: Fixed
Fix Version/s: 3.0.0-alpha4
   Status: Resolved  (was: Patch Available)

> Confusion/name conflict between NameNodeActivity#BlockReportNumOps and 
> RpcDetailedActivity#BlockReportNumOps
> 
>
> Key: HADOOP-14502
> URL: https://issues.apache.org/jira/browse/HADOOP-14502
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Minor
>  Labels: Incompatible
> Fix For: 3.0.0-alpha4
>
> Attachments: HADOOP-14502.000.patch, HADOOP-14502.001.patch, 
> HADOOP-14502.002.patch
>
>
> Currently the {{BlockReport(NumOps|AvgTime)}} metrics emitted under the 
> {{RpcDetailedActivity}} context and those emitted under the 
> {{NameNodeActivity}} context are actually reporting different things despite 
> having the same name. {{NameNodeActivity}} reports the count/time of _per 
> storage_ block reports, whereas {{RpcDetailedActivity}} reports the 
> count/time of _per datanode_ block reports. This makes for a confusing 
> experience with two metrics having the same name reporting different values. 
> We already have the {{StorageBlockReportsOps}} metric under 
> {{NameNodeActivity}}. Can we make {{StorageBlockReport}} a {{MutableRate}} 
> metric and remove {{NameNodeActivity#BlockReport}} metric? Open to other 
> suggestions about how to address this as well. The 3.0 release seems a good 
> time to make this incompatible change.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14502) Confusion/name conflict between NameNodeActivity#BlockReportNumOps and RpcDetailedActivity#BlockReportNumOps

2017-06-21 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HADOOP-14502:
---
Hadoop Flags: Incompatible change,Reviewed  (was: Incompatible change)

Thanks Erik! +1 on the v2 patch as well. Tested with {{MiniHadoopClusterManager}} 
and it shows the desired behavior.
{code}
  }, {
"name" : "Hadoop:service=NameNode,name=NameNodeActivity",
"modelerType" : "NameNodeActivity",
"tag.ProcessName" : "NameNode",
"tag.SessionId" : null,
"tag.Context" : "dfs",
"tag.Hostname" : "zezhang-mn1",
"CreateFileOps" : 2,
"FilesCreated" : 12,
"FilesAppended" : 0,
"GetBlockLocations" : 0,
"FilesRenamed" : 0,
"FilesTruncated" : 0,
"GetListingOps" : 1,
"DeleteFileOps" : 0,
"FilesDeleted" : 0,
"FileInfoOps" : 6,
"AddBlockOps" : 2,
"GetAdditionalDatanodeOps" : 0,
"CreateSymlinkOps" : 0,
"GetLinkTargetOps" : 0,
"FilesInGetListingOps" : 0,
"AllowSnapshotOps" : 0,
"DisallowSnapshotOps" : 0,
"CreateSnapshotOps" : 0,
"DeleteSnapshotOps" : 0,
"RenameSnapshotOps" : 0,
"ListSnapshottableDirOps" : 0,
"SnapshotDiffReportOps" : 0,
"BlockReceivedAndDeletedOps" : 2,
"BlockOpsQueued" : 1,
"BlockOpsBatched" : 0,
"TransactionsNumOps" : 24,
"TransactionsAvgTime" : 1.7083,
"SyncsNumOps" : 14,
"SyncsAvgTime" : 0.2857142857142857,
"TransactionsBatchedInSync" : 10,
"StorageBlockReportNumOps" : 2,
"StorageBlockReportAvgTime" : 3.5,
"CacheReportNumOps" : 0,
"CacheReportAvgTime" : 0.0,
"GenerateEDEKTimeNumOps" : 0,
"GenerateEDEKTimeAvgTime" : 0.0,
"WarmUpEDEKTimeNumOps" : 0,
"WarmUpEDEKTimeAvgTime" : 0.0,
"ResourceCheckTimeNumOps" : 8,
"ResourceCheckTimeAvgTime" : 0.0,
"SafeModeTime" : 1,
"FsImageLoadTime" : 76,
"GetEditNumOps" : 0,
"GetEditAvgTime" : 0.0,
"GetImageNumOps" : 0,
"GetImageAvgTime" : 0.0,
"PutImageNumOps" : 0,
"PutImageAvgTime" : 0.0,
"TotalFileOps" : 11
  },
{code}

I'm committing to trunk soon. Let's write a short release note?
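
For readers less familiar with the metrics library, the 
{{StorageBlockReportNumOps}}/{{StorageBlockReportAvgTime}} pair in the JMX output 
above is what a single {{MutableRate}} metric emits. A minimal sketch follows 
(illustrative only, not the actual HADOOP-14502 patch; registration with the 
metrics system is omitted and the class name is made up):

{code}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.MutableRate;

// Sketch only: a MutableRate field surfaces as <Name>NumOps and <Name>AvgTime.
@Metrics(about = "NameNode activity sketch", context = "dfs")
class NameNodeActivitySketch {
  @Metric("Time for processing one per-storage block report")
  MutableRate storageBlockReport;

  void recordStorageBlockReport(long latencyMillis) {
    storageBlockReport.add(latencyMillis);  // increments NumOps, updates AvgTime
  }
}
{code}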

> Confusion/name conflict between NameNodeActivity#BlockReportNumOps and 
> RpcDetailedActivity#BlockReportNumOps
> 
>
> Key: HADOOP-14502
> URL: https://issues.apache.org/jira/browse/HADOOP-14502
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Minor
>  Labels: Incompatible
> Attachments: HADOOP-14502.000.patch, HADOOP-14502.001.patch, 
> HADOOP-14502.002.patch
>
>
> Currently the {{BlockReport(NumOps|AvgTime)}} metrics emitted under the 
> {{RpcDetailedActivity}} context and those emitted under the 
> {{NameNodeActivity}} context are actually reporting different things despite 
> having the same name. {{NameNodeActivity}} reports the count/time of _per 
> storage_ block reports, whereas {{RpcDetailedActivity}} reports the 
> count/time of _per datanode_ block reports. This makes for a confusing 
> experience with two metrics having the same name reporting different values. 
> We already have the {{StorageBlockReportsOps}} metric under 
> {{NameNodeActivity}}. Can we make {{StorageBlockReport}} a {{MutableRate}} 
> metric and remove {{NameNodeActivity#BlockReport}} metric? Open to other 
> suggestions about how to address this as well. The 3.0 release seems a good 
> time to make this incompatible change.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14520) Block compaction for WASB

2017-06-21 Thread Georgi Chalakov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Georgi Chalakov updated HADOOP-14520:
-
Description: 
Block Compaction for WASB allows uploading new blocks for every hflush/hsync 
call. When the number of blocks is above 32000, the next hflush/hsync triggers 
the block compaction process. Block compaction replaces a sequence of blocks 
with one block. From all the sequences with total length less than 4M, 
compaction chooses the longest one. It is a greedy algorithm that preserves all 
potential candidates for the next round. Block Compaction for WASB increases 
data durability and allows using block blobs instead of page blobs. By default, 
block compaction is disabled. Similar to the configuration for page blobs, the 
client needs to specify the HDFS folders where block compaction over block 
blobs is enabled. 

Results for HADOOP-14520-01.patch
tested endpoint: fs.azure.account.key.hdfs4.blob.core.windows.net
Tests run: 704, Failures: 0, Errors: 0, Skipped: 119
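
To illustrate the selection step in the description above (a sketch only, not 
the attached patch; the class and method names are made up), the compaction pass 
can scan the committed block list for the longest run of consecutive blocks 
whose total length stays under the 4M limit:

{code}
import java.util.List;

class BlockCompactionSketch {
  // Greedy selection described above: the longest run of consecutive blocks
  // whose combined length stays under 4 MB is replaced by a single block.
  static int[] longestCompactableRun(List<Long> blockSizes) {
    final long limit = 4L * 1024 * 1024;
    int bestStart = 0, bestLen = 0, start = 0;
    long sum = 0;
    for (int end = 0; end < blockSizes.size(); end++) {
      sum += blockSizes.get(end);
      while (sum >= limit && start <= end) {
        sum -= blockSizes.get(start++);   // shrink window until total < 4 MB
      }
      if (end - start + 1 > bestLen) {    // keep the longest qualifying sequence
        bestLen = end - start + 1;
        bestStart = start;
      }
    }
    return new int[] {bestStart, bestLen};  // blocks [bestStart, bestStart+bestLen)
  }
}
{code}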



  was:
Block Compaction for WASB allows uploading new blocks for every hflush/hsync 
call. When the number of blocks is above 32000, next hflush/hsync triggers the 
block compaction process. Block compaction replaces a sequence of blocks with 
one block. From all the sequences with total length less than 4M, compaction 
chooses the longest one. It is a greedy algorithm that preserve all potential 
candidates for the next round. Block Compaction for WASB increases data 
durability and allows using block blobs instead of page blobs. By default, 
block compaction is disabled. Similar to the configuration for page blobs, the 
client needs to specify HDFS folders where block compaction over block blobs is 
enabled. 

Results for HADOOP-14520-01.patch
Tests run: 704, Failures: 0, Errors: 0, Skipped: 119




> Block compaction for WASB
> -
>
> Key: HADOOP-14520
> URL: https://issues.apache.org/jira/browse/HADOOP-14520
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 3.0.0-alpha3
>Reporter: Georgi Chalakov
>Assignee: Georgi Chalakov
> Attachments: HADOOP-14520-01.patch, HADOOP-14520-01-test.txt, 
> HADOOP-14520-02.patch
>
>
> Block Compaction for WASB allows uploading new blocks for every hflush/hsync 
> call. When the number of blocks is above 32000, next hflush/hsync triggers 
> the block compaction process. Block compaction replaces a sequence of blocks 
> with one block. From all the sequences with total length less than 4M, 
> compaction chooses the longest one. It is a greedy algorithm that preserve 
> all potential candidates for the next round. Block Compaction for WASB 
> increases data durability and allows using block blobs instead of page blobs. 
> By default, block compaction is disabled. Similar to the configuration for 
> page blobs, the client needs to specify HDFS folders where block compaction 
> over block blobs is enabled. 
> Results for HADOOP-14520-01.patch
> tested endpoint: fs.azure.account.key.hdfs4.blob.core.windows.net
> Tests run: 704, Failures: 0, Errors: 0, Skipped: 119



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14520) Block compaction for WASB

2017-06-21 Thread Georgi Chalakov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Georgi Chalakov updated HADOOP-14520:
-
Description: 
Block Compaction for WASB allows uploading new blocks for every hflush/hsync 
call. When the number of blocks is above 32000, the next hflush/hsync triggers 
the block compaction process. Block compaction replaces a sequence of blocks 
with one block. From all the sequences with total length less than 4M, 
compaction chooses the longest one. It is a greedy algorithm that preserves all 
potential candidates for the next round. Block Compaction for WASB increases 
data durability and allows using block blobs instead of page blobs. By default, 
block compaction is disabled. Similar to the configuration for page blobs, the 
client needs to specify the HDFS folders where block compaction over block 
blobs is enabled. 

Results for HADOOP-14520-01.patch
Tests run: 704, Failures: 0, Errors: 0, Skipped: 119



  was:
Block Compaction for WASB allows uploading new blocks for every hflush/hsync 
call. When the number of blocks is above a predefined, configurable value, next 
hflush/hsync triggers the block compaction process. Block compaction replaces a 
sequence of blocks with one block. From all the sequences with total length 
less than 4M, compaction chooses the longest one. It is a greedy algorithm that 
preserve all potential candidates for the next round. Block Compaction for WASB 
increases data durability and allows using block blobs instead of page blobs. 
By default, block compaction is disabled. Similar to the configuration for page 
blobs, the client needs to specify HDFS folders where block compaction over 
block blobs is enabled. 

Results for HADOOP-14520-01.patch
Tests run: 704, Failures: 0, Errors: 0, Skipped: 119




> Block compaction for WASB
> -
>
> Key: HADOOP-14520
> URL: https://issues.apache.org/jira/browse/HADOOP-14520
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 3.0.0-alpha3
>Reporter: Georgi Chalakov
>Assignee: Georgi Chalakov
> Attachments: HADOOP-14520-01.patch, HADOOP-14520-01-test.txt, 
> HADOOP-14520-02.patch
>
>
> Block Compaction for WASB allows uploading new blocks for every hflush/hsync 
> call. When the number of blocks is above 32000, next hflush/hsync triggers 
> the block compaction process. Block compaction replaces a sequence of blocks 
> with one block. From all the sequences with total length less than 4M, 
> compaction chooses the longest one. It is a greedy algorithm that preserve 
> all potential candidates for the next round. Block Compaction for WASB 
> increases data durability and allows using block blobs instead of page blobs. 
> By default, block compaction is disabled. Similar to the configuration for 
> page blobs, the client needs to specify HDFS folders where block compaction 
> over block blobs is enabled. 
> Results for HADOOP-14520-01.patch
> Tests run: 704, Failures: 0, Errors: 0, Skipped: 119



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14499) Findbugs warning in LocalMetadataStore

2017-06-21 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16058397#comment-16058397
 ] 

Sean Mackrory commented on HADOOP-14499:


That's from my comment on Jun 8th. Since we currently don't track the last time 
at which a directory listing was modified, and since a directory listing 
inherently impacts entries other than itself, there's no way for an 
implementation to safely prune directories without raising the concerns you had 
about doing it here, right?

> Findbugs warning in LocalMetadataStore
> --
>
> Key: HADOOP-14499
> URL: https://issues.apache.org/jira/browse/HADOOP-14499
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-14499-HADOOP-13345.001.patch, 
> HADOOP-14499-HADOOP-13345.002.patch
>
>
> First saw this raised by Yetus on HADOOP-14433:
> {code}
> Bug type UC_USELESS_OBJECT (click for details)
> In class org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore
> In method org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore.prune(long)
> Value ancestors
> Type java.util.LinkedList
> At LocalMetadataStore.java:[line 300]
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14520) Block compaction for WASB

2017-06-21 Thread Georgi Chalakov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Georgi Chalakov updated HADOOP-14520:
-
Release Note: Block Compaction for WASB. When the number of blocks in a 
block blob is above 32000, compaction replaces the longest sequence of blocks 
with a total length of less than 4M with a single block. Compaction allows block 
blobs to be used instead of page blobs, including for WAL files.  (was: Block 
Compaction for WASB. When the number of blocks in a block blob is above a 
predefined, configurable number, compaction replaces longest sequence of blocks 
with total length less then 4M, with just one block. Compaction allows blocks 
blobs to be used instead of page blobs, including for WAL files.)
  Status: Patch Available  (was: In Progress)

> Block compaction for WASB
> -
>
> Key: HADOOP-14520
> URL: https://issues.apache.org/jira/browse/HADOOP-14520
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 3.0.0-alpha3
>Reporter: Georgi Chalakov
>Assignee: Georgi Chalakov
> Attachments: HADOOP-14520-01.patch, HADOOP-14520-01-test.txt, 
> HADOOP-14520-02.patch
>
>
> Block Compaction for WASB allows uploading new blocks for every hflush/hsync 
> call. When the number of blocks is above a predefined, configurable value, 
> next hflush/hsync triggers the block compaction process. Block compaction 
> replaces a sequence of blocks with one block. From all the sequences with 
> total length less than 4M, compaction chooses the longest one. It is a greedy 
> algorithm that preserve all potential candidates for the next round. Block 
> Compaction for WASB increases data durability and allows using block blobs 
> instead of page blobs. By default, block compaction is disabled. Similar to 
> the configuration for page blobs, the client needs to specify HDFS folders 
> where block compaction over block blobs is enabled. 
> Results for HADOOP-14520-01.patch
> Tests run: 704, Failures: 0, Errors: 0, Skipped: 119



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14520) Block compaction for WASB

2017-06-21 Thread Georgi Chalakov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Georgi Chalakov updated HADOOP-14520:
-
Attachment: HADOOP-14520-02.patch

> Block compaction for WASB
> -
>
> Key: HADOOP-14520
> URL: https://issues.apache.org/jira/browse/HADOOP-14520
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 3.0.0-alpha3
>Reporter: Georgi Chalakov
>Assignee: Georgi Chalakov
> Attachments: HADOOP-14520-01.patch, HADOOP-14520-01-test.txt, 
> HADOOP-14520-02.patch
>
>
> Block Compaction for WASB allows uploading new blocks for every hflush/hsync 
> call. When the number of blocks is above a predefined, configurable value, 
> next hflush/hsync triggers the block compaction process. Block compaction 
> replaces a sequence of blocks with one block. From all the sequences with 
> total length less than 4M, compaction chooses the longest one. It is a greedy 
> algorithm that preserve all potential candidates for the next round. Block 
> Compaction for WASB increases data durability and allows using block blobs 
> instead of page blobs. By default, block compaction is disabled. Similar to 
> the configuration for page blobs, the client needs to specify HDFS folders 
> where block compaction over block blobs is enabled. 
> Results for HADOOP-14520-01.patch
> Tests run: 704, Failures: 0, Errors: 0, Skipped: 119



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14527) ITestS3GuardListConsistency is too slow

2017-06-21 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16058387#comment-16058387
 ] 

Sean Mackrory commented on HADOOP-14527:


+1 in general - although I'd prefer to remove the Stopwatch instrumentation 
from the final patch.

> ITestS3GuardListConsistency is too slow
> ---
>
> Key: HADOOP-14527
> URL: https://issues.apache.org/jira/browse/HADOOP-14527
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>Priority: Minor
> Attachments: HADOOP-14527-HADOOP-13345.001.patch, 
> HADOOP-14527-HADOOP-13345.002.patch, HADOOP-14527-HADOOP-13345.003.patch
>
>
> I'm really glad to see folks adopting the inconsistency injection stuff and 
> adding test cases to ITestS3GuardListConsistency.  That test class has become 
> very slow, however, due to {{Thread.sleep()}} calls that wait for the 
> inconsistency injection timers to expire, and nested loops that run numerous 
> permutations of the test cases.
> I will take a stab at speeding up this test class.  As is it takes about 8 
> minutes to run.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14499) Findbugs warning in LocalMetadataStore

2017-06-21 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16058374#comment-16058374
 ] 

Aaron Fabbri commented on HADOOP-14499:
---

Hey [~mackrorysd], thank you for the patch.  Can you elaborate on this part:

{noformat}
-   * Implementations MUST clear file metadata, and MAY clear directory metadata
-   * (s3a itself does not track modification time for directories).
+   * Implementations MUST clear file metadata, but MUST NOT clear directory
+   * metadata (s3a itself does not track modification time for directories).
{noformat}

> Findbugs warning in LocalMetadataStore
> --
>
> Key: HADOOP-14499
> URL: https://issues.apache.org/jira/browse/HADOOP-14499
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-14499-HADOOP-13345.001.patch, 
> HADOOP-14499-HADOOP-13345.002.patch
>
>
> First saw this raised by Yetus on HADOOP-14433:
> {code}
> Bug type UC_USELESS_OBJECT (click for details)
> In class org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore
> In method org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore.prune(long)
> Value ancestors
> Type java.util.LinkedList
> At LocalMetadataStore.java:[line 300]
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14548) S3Guard: issues running parallel tests w/ S3N

2017-06-21 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16058365#comment-16058365
 ] 

Sean Mackrory commented on HADOOP-14548:


{code}Sean Mackrory any thoughts on #5?{code}

The SSEC tests aren't supposed to be running in parallel. They're not, right? 
I'd suggest two likely explanations: (1) it's failing because one of the S3N 
tests failed and left behind an object that is sometimes ignored by s3a but 
sometimes not (this has happened to me a lot), or (2) that test expects an 
exception somewhere that always gets short-circuited by S3Guard when it's 
enabled (see HADOOP-14448 - though I'm not sure what's changed since I fixed 
some others).

> S3Guard: issues running parallel tests w/ S3N 
> --
>
> Key: HADOOP-14548
> URL: https://issues.apache.org/jira/browse/HADOOP-14548
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>Priority: Minor
>
> In general, running S3Guard and parallel tests with S3A and S3N contract 
> tests enabled is asking for trouble:  S3Guard code assumes there are no 
> other non-S3Guard clients modifying the bucket.
> Goal of this JIRA is to:
> - Discuss current failures running `mvn verify -Dparallel-tests -Ds3guard 
> -Ddynamo` with S3A and S3N contract tests configured.
> - Identify any failures here that are worth looking into.
> - Document (or enforce) that people should not do this, or should expect 
> failures if they do.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14566) Add seek support for SFTP FileSystem

2017-06-21 Thread Azhagu Selvan SP (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Azhagu Selvan SP updated HADOOP-14566:
--
Status: Patch Available  (was: Open)

> Add seek support for SFTP FileSystem
> 
>
> Key: HADOOP-14566
> URL: https://issues.apache.org/jira/browse/HADOOP-14566
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Azhagu Selvan SP
>Priority: Minor
> Attachments: HADOOP-14566.patch
>
>
> This patch adds seek() method implementation for SFTP FileSystem and a unit 
> test for the same



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14566) Add seek support for SFTP FileSystem

2017-06-21 Thread Azhagu Selvan SP (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Azhagu Selvan SP updated HADOOP-14566:
--
Attachment: HADOOP-14566.patch

> Add seek support for SFTP FileSystem
> 
>
> Key: HADOOP-14566
> URL: https://issues.apache.org/jira/browse/HADOOP-14566
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Azhagu Selvan SP
>Priority: Minor
> Attachments: HADOOP-14566.patch
>
>
> This patch adds seek() method implementation for SFTP FileSystem and a unit 
> test for the same



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14566) Add seek support for SFTP FileSystem

2017-06-21 Thread Azhagu Selvan SP (JIRA)
Azhagu Selvan SP created HADOOP-14566:
-

 Summary: Add seek support for SFTP FileSystem
 Key: HADOOP-14566
 URL: https://issues.apache.org/jira/browse/HADOOP-14566
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Reporter: Azhagu Selvan SP
Priority: Minor


This patch adds a seek() implementation for the SFTP FileSystem and a unit 
test for it.
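
The attached patch is not reproduced here, but for context, one common way to 
implement {{seek()}} on a stream that only supports forward reads looks roughly 
like the sketch below ({{pos}}, {{wrappedStream}} and {{reopenAt()}} are 
hypothetical names for illustration, not taken from the patch):

{code}
// Illustrative sketch only, not the attached HADOOP-14566 patch.
// Assumes an FSInputStream subclass with a current position `pos`, an
// underlying InputStream `wrappedStream`, and a hypothetical helper
// reopenAt(offset) that reopens the remote file at the given offset.
public synchronized void seek(long targetPos) throws IOException {
  if (targetPos < 0) {
    throw new EOFException("Cannot seek to negative offset " + targetPos);
  }
  if (targetPos >= pos) {
    long toSkip = targetPos - pos;
    while (toSkip > 0) {
      long skipped = wrappedStream.skip(toSkip);   // forward seek: just skip
      if (skipped <= 0) {
        throw new EOFException("Attempted to seek past end of file");
      }
      toSkip -= skipped;
    }
  } else {
    wrappedStream.close();
    wrappedStream = reopenAt(targetPos);           // backward seek: reopen stream
  }
  pos = targetPos;
}
{code}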



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14495) Add set options interface to FSDataOutputStreamBuilder

2017-06-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16058305#comment-16058305
 ] 

Hadoop QA commented on HADOOP-14495:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
51s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
29s{color} | {color:red} hadoop-common-project/hadoop-common in trunk has 17 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 33s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
31s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 93m 25s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestRaceWhenRelogin |
|   | hadoop.fs.sftp.TestSFTPFileSystem |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14495 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12873920/HADOOP-14495.01.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 0e3499005e86 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e806c6e |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12589/artifact/patchprocess/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12589/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12589/testReport/ |
| modules | C: hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs-client U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12589/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   

[jira] [Commented] (HADOOP-14398) Modify documents for the FileSystem Builder API

2017-06-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16058274#comment-16058274
 ] 

Hadoop QA commented on HADOOP-14398:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  4s{color} 
| {color:red} HADOOP-14398 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-14398 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12873946/HADOOP-14398.00.patch 
|
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12590/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Modify documents for the FileSystem Builder API
> ---
>
> Key: HADOOP-14398
> URL: https://issues.apache.org/jira/browse/HADOOP-14398
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>  Labels: docuentation
> Attachments: HADOOP-14398.00.patch
>
>
> After the API is finished, we should update the documents to describe the 
> interface, capabilities, and contract which the APIs hold. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13786) Add S3Guard committer for zero-rename commits to S3 endpoints

2017-06-21 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13786:

Status: Patch Available  (was: Open)

> Add S3Guard committer for zero-rename commits to S3 endpoints
> -
>
> Key: HADOOP-13786
> URL: https://issues.apache.org/jira/browse/HADOOP-13786
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13786-HADOOP-13345-001.patch, 
> HADOOP-13786-HADOOP-13345-002.patch, HADOOP-13786-HADOOP-13345-003.patch, 
> HADOOP-13786-HADOOP-13345-004.patch, HADOOP-13786-HADOOP-13345-005.patch, 
> HADOOP-13786-HADOOP-13345-006.patch, HADOOP-13786-HADOOP-13345-006.patch, 
> HADOOP-13786-HADOOP-13345-007.patch, HADOOP-13786-HADOOP-13345-009.patch, 
> HADOOP-13786-HADOOP-13345-010.patch, HADOOP-13786-HADOOP-13345-011.patch, 
> HADOOP-13786-HADOOP-13345-012.patch, HADOOP-13786-HADOOP-13345-013.patch, 
> HADOOP-13786-HADOOP-13345-015.patch, HADOOP-13786-HADOOP-13345-016.patch, 
> HADOOP-13786-HADOOP-13345-017.patch, HADOOP-13786-HADOOP-13345-018.patch, 
> HADOOP-13786-HADOOP-13345-019.patch, HADOOP-13786-HADOOP-13345-020.patch, 
> HADOOP-13786-HADOOP-13345-021.patch, HADOOP-13786-HADOOP-13345-022.patch, 
> HADOOP-13786-HADOOP-13345-023.patch, HADOOP-13786-HADOOP-13345-024.patch, 
> HADOOP-13786-HADOOP-13345-025.patch, HADOOP-13786-HADOOP-13345-026.patch, 
> HADOOP-13786-HADOOP-13345-027.patch, HADOOP-13786-HADOOP-13345-028.patch, 
> HADOOP-13786-HADOOP-13345-028.patch, HADOOP-13786-HADOOP-13345-029.patch, 
> HADOOP-13786-HADOOP-13345-030.patch, HADOOP-13786-HADOOP-13345-031.patch, 
> objectstore.pdf, s3committer-master.zip
>
>
> A goal of this code is "support O(1) commits to S3 repositories in the 
> presence of failures". Implement it, including whatever is needed to 
> demonstrate the correctness of the algorithm. (that is, assuming that s3guard 
> provides a consistent view of the presence/absence of blobs, show that we can 
> commit directly).
> I consider ourselves free to expose the blobstore-ness of the s3 output 
> streams (ie. not visible until the close()), if we need to use that to allow 
> us to abort commit operations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13786) Add S3Guard committer for zero-rename commits to S3 endpoints

2017-06-21 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13786:

Attachment: HADOOP-13786-HADOOP-13345-031.patch

Patch 031

* in sync with the latest HADOOP-13345 branch
* testing commit logic with the inconsistent s3a client wherever possible; this 
helps validate that the algorithms work. Test probes have changed in places to 
handle this, primarily by adding some sleeps when inconsistency is enabled.

> Add S3Guard committer for zero-rename commits to S3 endpoints
> -
>
> Key: HADOOP-13786
> URL: https://issues.apache.org/jira/browse/HADOOP-13786
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13786-HADOOP-13345-001.patch, 
> HADOOP-13786-HADOOP-13345-002.patch, HADOOP-13786-HADOOP-13345-003.patch, 
> HADOOP-13786-HADOOP-13345-004.patch, HADOOP-13786-HADOOP-13345-005.patch, 
> HADOOP-13786-HADOOP-13345-006.patch, HADOOP-13786-HADOOP-13345-006.patch, 
> HADOOP-13786-HADOOP-13345-007.patch, HADOOP-13786-HADOOP-13345-009.patch, 
> HADOOP-13786-HADOOP-13345-010.patch, HADOOP-13786-HADOOP-13345-011.patch, 
> HADOOP-13786-HADOOP-13345-012.patch, HADOOP-13786-HADOOP-13345-013.patch, 
> HADOOP-13786-HADOOP-13345-015.patch, HADOOP-13786-HADOOP-13345-016.patch, 
> HADOOP-13786-HADOOP-13345-017.patch, HADOOP-13786-HADOOP-13345-018.patch, 
> HADOOP-13786-HADOOP-13345-019.patch, HADOOP-13786-HADOOP-13345-020.patch, 
> HADOOP-13786-HADOOP-13345-021.patch, HADOOP-13786-HADOOP-13345-022.patch, 
> HADOOP-13786-HADOOP-13345-023.patch, HADOOP-13786-HADOOP-13345-024.patch, 
> HADOOP-13786-HADOOP-13345-025.patch, HADOOP-13786-HADOOP-13345-026.patch, 
> HADOOP-13786-HADOOP-13345-027.patch, HADOOP-13786-HADOOP-13345-028.patch, 
> HADOOP-13786-HADOOP-13345-028.patch, HADOOP-13786-HADOOP-13345-029.patch, 
> HADOOP-13786-HADOOP-13345-030.patch, HADOOP-13786-HADOOP-13345-031.patch, 
> objectstore.pdf, s3committer-master.zip
>
>
> A goal of this code is "support O(1) commits to S3 repositories in the 
> presence of failures". Implement it, including whatever is needed to 
> demonstrate the correctness of the algorithm. (that is, assuming that s3guard 
> provides a consistent view of the presence/absence of blobs, show that we can 
> commit directly).
> I consider ourselves free to expose the blobstore-ness of the s3 output 
> streams (ie. not visible until the close()), if we need to use that to allow 
> us to abort commit operations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14398) Modify documents for the FileSystem Builder API

2017-06-21 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-14398:
---
   Labels: docuentation  (was: )
Affects Version/s: 3.0.0-alpha3
 Target Version/s: 3.0.0-alpha4
 Tags: doc
   Status: Patch Available  (was: Open)

> Modify documents for the FileSystem Builder API
> ---
>
> Key: HADOOP-14398
> URL: https://issues.apache.org/jira/browse/HADOOP-14398
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>  Labels: docuentation
> Attachments: HADOOP-14398.00.patch
>
>
> After the API is finished, we should update the documents to describe the 
> interface, capabilities, and contract which the APIs hold. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14398) Modify documents for the FileSystem Builder API

2017-06-21 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-14398:
---
Attachment: HADOOP-14398.00.patch

> Modify documents for the FileSystem Builder API
> ---
>
> Key: HADOOP-14398
> URL: https://issues.apache.org/jira/browse/HADOOP-14398
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HADOOP-14398.00.patch
>
>
> After the API is finished, we should update the documents to describe the 
> interface, capabilities, and contract which the APIs hold. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14565) Azure: Add Authorization support to ADLS

2017-06-21 Thread Ryan Waters (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Waters updated HADOOP-14565:
-
Description: 
This task is meant to add an Authorizer interface to be used by the ADLS driver 
in a similar way to the one used by WASB. The primary difference in 
functionality is that the implementation of this Authorizer will be provided by 
an external jar. This class will be specified through configuration using 
"adl.external.authorization.class". 

If this configuration is provided, an instance of the provided class will be 
created and all file system calls will be passed through the authorizer, 
allowing implementations to determine if the file path and access type (create, 
open, delete, etc.) being requested is valid. If the requested implementation 
class is not found or fails to initialize, initialization of the ADL driver 
will fail. If no configuration is provided, calls to the authorizer will be 
skipped and the driver will behave as it did previously.  
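
A minimal sketch of the flow described above (illustrative only, not the actual 
patch: the AdlAuthorizer interface, its method signatures, and the field and 
class names below are assumptions; only the configuration key 
"adl.external.authorization.class" comes from the description):

{code}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.AccessControlException;
import org.apache.hadoop.util.ReflectionUtils;

// Hypothetical authorizer contract loaded from an external jar.
interface AdlAuthorizer {
  void init(Configuration conf) throws IOException;
  boolean isAuthorized(Path path, String accessType) throws IOException;
}

class AdlAuthorizationSketch {
  private AdlAuthorizer authorizer;  // null when no authorizer is configured

  void initAuthorizer(Configuration conf) throws IOException {
    String clazz = conf.getTrimmed("adl.external.authorization.class", "");
    if (clazz.isEmpty()) {
      return;  // no config: calls to the authorizer are skipped entirely
    }
    try {
      authorizer = (AdlAuthorizer) ReflectionUtils.newInstance(
          conf.getClassByName(clazz), conf);
      authorizer.init(conf);
    } catch (Exception e) {
      // a missing class or failed init fails initialization of the ADL driver
      throw new IOException("Unable to initialize authorizer " + clazz, e);
    }
  }

  // Called before each file system operation, e.g. open, create, delete.
  void checkAccess(Path path, String accessType) throws IOException {
    if (authorizer != null && !authorizer.isAuthorized(path, accessType)) {
      throw new AccessControlException(accessType + " denied on " + path);
    }
  }
}
{code}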

  was:
This task is meant to add an Authorizer interface to be used by the ADLS driver 
in a similar way to the one used by WASB. The primary difference in 
functionality being that the implementation of this Authorizer will be provided 
by an external jar. This class will be specified through configuration using 
"adl.external.authorization.class". 

If this configuration is provided, an instance of the provided class will be 
created and all file system calls will be passed through the authorizer, 
allowing implementations to determine if the file path and access type (create, 
open, delete, etc.) being requested is valid. If the requested implementation 
class is not found, it will fail initialization of the ADL driver. If no 
configuration is provided, calls to the authorizer will be skipped and the 
driver will behave as it did previously.  


> Azure: Add Authorization support to ADLS
> 
>
> Key: HADOOP-14565
> URL: https://issues.apache.org/jira/browse/HADOOP-14565
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 2.8.0
>Reporter: Ryan Waters
>Assignee: Sivaguru Sankaridurg
> Fix For: 2.9.0, 3.0.0-alpha4
>
>
> This task is meant to add an Authorizer interface to be used by the ADLS 
> driver in a similar way to the one used by WASB. The primary difference in 
> functionality being that the implementation of this Authorizer will be 
> provided by an external jar. This class will be specified through 
> configuration using "adl.external.authorization.class". 
> If this configuration is provided, an instance of the provided class will be 
> created and all file system calls will be passed through the authorizer, 
> allowing implementations to determine if the file path and access type 
> (create, open, delete, etc.) being requested is valid. If the requested 
> implementation class is not found or it fails to initialize, it will fail 
> initialization of the ADL driver. If no configuration is provided, calls to 
> the authorizer will be skipped and the driver will behave as it did 
> previously.  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14565) Azure: Add Authorization support to ADLS

2017-06-21 Thread Ryan Waters (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Waters updated HADOOP-14565:
-
Description: 
This task is meant to add an Authorizer interface to be used by the ADLS driver 
in a similar way to the one used by WASB. The primary difference in 
functionality being that the implementation of this Authorizer will be provided 
by an external jar. This class will be specified through configuration using 
"adl.external.authorization.class". 

If this configuration is provided, an instance of the provided class will be 
created and all file system calls will be passed through the authorizer, 
allowing implementations to determine if the file path and access type (create, 
open, delete, etc.) being requested is valid. If the requested implementation 
class is not found, it will fail initialization of the ADL driver. If no 
configuration is provided, calls to the authorizer will be skipped and the 
driver will behave as it did previously.  

  was:
As highlighted in HADOOP-13863, current implementation of WASB does not support 
authorization to any File System operations. This jira is created to add 
authorization support for WASB. The current approach is to enforce 
authorization via an external REST service (One approach could be to use 
component like Ranger to enforce authorization).  The support for authorization 
would be hiding behind a configuration flag : "fs.azure.enable.authorization" 
and the remote service is expected to be provided via config : 
"fs.azure.remote.auth.service.url".

The remote service is expected to provide support for the following REST call:  
{URL}/CHECK_AUTHORIZATION```

 An example request:
{URL}/CHECK_AUTHORIZATION?wasb_absolute_path=_type=_token=




> Azure: Add Authorization support to ADLS
> 
>
> Key: HADOOP-14565
> URL: https://issues.apache.org/jira/browse/HADOOP-14565
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 2.8.0
>Reporter: Ryan Waters
>Assignee: Sivaguru Sankaridurg
> Fix For: 2.9.0, 3.0.0-alpha4
>
>
> This task is meant to add an Authorizer interface to be used by the ADLS 
> driver in a similar way to the one used by WASB. The primary difference in 
> functionality being that the implementation of this Authorizer will be 
> provided by an external jar. This class will be specified through 
> configuration using "adl.external.authorization.class". 
> If this configuration is provided, an instance of the provided class will be 
> created and all file system calls will be passed through the authorizer, 
> allowing implementations to determine if the file path and access type 
> (create, open, delete, etc.) being requested is valid. If the requested 
> implementation class is not found, it will fail initialization of the ADL 
> driver. If no configuration is provided, calls to the authorizer will be 
> skipped and the driver will behave as it did previously.  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14565) Azure: Add Authorization support to ADLS

2017-06-21 Thread Ryan Waters (JIRA)
Ryan Waters created HADOOP-14565:


 Summary: Azure: Add Authorization support to ADLS
 Key: HADOOP-14565
 URL: https://issues.apache.org/jira/browse/HADOOP-14565
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/azure
Affects Versions: 2.8.0
Reporter: Ryan Waters
Assignee: Sivaguru Sankaridurg
 Fix For: 2.9.0, 3.0.0-alpha4


As highlighted in HADOOP-13863, the current implementation of WASB does not 
support authorization for any FileSystem operations. This jira is created to add 
authorization support for WASB. The current approach is to enforce 
authorization via an external REST service (one approach could be to use a 
component like Ranger to enforce authorization). The support for authorization 
would be hidden behind a configuration flag: "fs.azure.enable.authorization", 
and the remote service is expected to be provided via the config: 
"fs.azure.remote.auth.service.url".

The remote service is expected to provide support for the following REST call:
{URL}/CHECK_AUTHORIZATION

An example request:
{URL}/CHECK_AUTHORIZATION?wasb_absolute_path=_type=_token=
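
As a rough illustration of how a client might wire this up (a sketch under 
assumptions: only the two property names quoted above and the 
CHECK_AUTHORIZATION path come from the description; the remaining query 
parameters, the class name, and the "HTTP 200 means authorized" convention are 
made up for illustration):

{code}
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

import org.apache.hadoop.conf.Configuration;

class RemoteWasbAuthorizationSketch {
  // Sketch only: read the two configuration keys named in the description and
  // call the remote CHECK_AUTHORIZATION endpoint; parameter encoding, the full
  // query string, and the response format are omitted/assumed.
  static boolean isAuthorized(Configuration conf, String wasbAbsolutePath)
      throws IOException {
    if (!conf.getBoolean("fs.azure.enable.authorization", false)) {
      return true;  // authorization disabled: allow everything
    }
    String base = conf.get("fs.azure.remote.auth.service.url");
    URL url = new URL(base + "/CHECK_AUTHORIZATION?wasb_absolute_path="
        + wasbAbsolutePath);
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    try {
      // treat an HTTP 200 as "authorized" purely for illustration
      return conn.getResponseCode() == HttpURLConnection.HTTP_OK;
    } finally {
      conn.disconnect();
    }
  }
}
{code}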





--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14558) RPC requests on a secure cluster are 10x slower due to expensive encryption and decryption

2017-06-21 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16058151#comment-16058151
 ] 

Andrew Purtell commented on HADOOP-14558:
-

We did custom AES encryption on HBase RPC. It could be adapted to Hadoop RPC. 
Please see HBASE-16414

> RPC requests on a secure cluster are 10x slower due to expensive encryption 
> and decryption 
> ---
>
> Key: HADOOP-14558
> URL: https://issues.apache.org/jira/browse/HADOOP-14558
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.6.0
>Reporter: Mostafa Mokhtar
>Priority: Critical
>  Labels: impala, metadata, rpc
>
> While running performance tests for Impala comparing secure and unsecured 
> clusters, I noticed that metadata loading operations are 10x slower on a 
> cluster with Kerberos+SSL enabled. 
> hadoop.rpc.protection is set to privacy
> Any recommendations on how this can be mitigated? 10x slowdown is a big hit 
> for metadata loading. 
> The majority of the slowdown is coming from the two threads below. 
> {code}
> Stack Trace   Sample CountPercentage(%)
> org.apache.hadoop.ipc.Client$Connection.run() 5,212   46.586
>org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse()   5,203   
> 46.505
>   java.io.DataInputStream.readInt()   5,039   45.039
>  java.io.BufferedInputStream.read()   5,038   45.03
> java.io.BufferedInputStream.fill()5,038   45.03
>
> org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(byte[], int, 
> int) 5,036   45.013
>   java.io.FilterInputStream.read(byte[], int, int)5,036   
> 45.013
>  
> org.apache.hadoop.security.SaslRpcClient$WrappedInputStream.read(byte[], int, 
> int)   5,036   45.013
> 
> org.apache.hadoop.security.SaslRpcClient$WrappedInputStream.readNextRpcPacket()
>5,035   45.004
>
> com.sun.security.sasl.gsskerb.GssKrb5Base.unwrap(byte[], int, int) 4,775   
> 42.68
>   sun.security.jgss.GSSContextImpl.unwrap(byte[], 
> int, int, MessageProp)  4,775   42.68
>  
> sun.security.jgss.krb5.Krb5Context.unwrap(byte[], int, int, MessageProp) 
> 4,768   42.617
> 
> sun.security.jgss.krb5.WrapToken.getData()4,714   42.134
>
> sun.security.jgss.krb5.WrapToken.getData(byte[], int)  4,714   42.134
>   
> sun.security.jgss.krb5.WrapToken.getDataFromBuffer(byte[], int) 4,714   
> 42.134
>  
> sun.security.jgss.krb5.CipherHelper.decryptData(WrapToken, byte[], int, int, 
> byte[], int)3,083   27.556
> 
> sun.security.jgss.krb5.CipherHelper.des3KdDecrypt(WrapToken, byte[], int, 
> int, byte[], int)   3,078   27.512
>
> sun.security.krb5.internal.crypto.Des3.decryptRaw(byte[], int, byte[], 
> byte[], int, int)   3,076   27.494
>   
> sun.security.krb5.internal.crypto.dk.DkCrypto.decryptRaw(byte[], int, byte[], 
> byte[], int, int) 3,076   27.494
> {code}
> And 
> {code}
> Stack Trace   Sample CountPercentage(%)
> java.lang.Thread.run()3,379   30.202
>java.util.concurrent.ThreadPoolExecutor$Worker.run()   3,379   30.202
>   
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor$Worker)  
>   3,379   30.202
>  java.util.concurrent.FutureTask.run()3,367   30.095
> java.util.concurrent.Executors$RunnableAdapter.call() 3,367   
> 30.095
>org.apache.hadoop.ipc.Client$Connection$3.run()3,367   
> 30.095
>   java.io.DataOutputStream.flush()3,367   30.095
>  java.io.BufferedOutputStream.flush() 3,367   30.095
> java.io.BufferedOutputStream.flushBuffer()3,367   
> 30.095
>
> org.apache.hadoop.security.SaslRpcClient$WrappedOutputStream.write(byte[], 
> int, int)   3,367   30.095
>   
> com.sun.security.sasl.gsskerb.GssKrb5Base.wrap(byte[], int, int)3,281 
>   29.326
>  
> sun.security.jgss.GSSContextImpl.wrap(byte[], int, int, MessageProp) 3,281   
> 29.326
> 
> sun.security.jgss.krb5.Krb5Context.wrap(byte[], int, int, MessageProp)
> 3,280   29.317
>
> sun.security.jgss.krb5.WrapToken.(Krb5Context, MessageProp, 

[jira] [Commented] (HADOOP-12739) Deadlock with OrcInputFormat split threads and Jets3t connections, since, NativeS3FileSystem does not release connections with seek()

2017-06-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16058143#comment-16058143
 ] 

Hadoop QA commented on HADOOP-12739:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 11s{color} | {color:orange} hadoop-tools/hadoop-aws: The patch generated 2 
new + 36 unchanged - 0 fixed = 38 total (was 36) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
23s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 20m 23s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-12739 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12784120/HADOOP-12739.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d1c184f826a3 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e806c6e |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12588/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12588/artifact/patchprocess/whitespace-eol.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12588/testReport/ |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12588/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Deadlock with OrcInputFormat split threads and Jets3t connections, since, 
> NativeS3FileSystem does not release connections with seek()
> -
>
> Key: HADOOP-12739
> URL: 

[jira] [Comment Edited] (HADOOP-14543) Should use getAversion() while setting the zkacl

2017-06-21 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16058122#comment-16058122
 ] 

Arpit Agarwal edited comment on HADOOP-14543 at 6/21/17 8:15 PM:
-

[~brahmareddy], I'll have to spend some more time to understand this. To me the 
documentation is ambiguous, but yes your test seems to indicate it should be 
the ACL version.

I'll also see if I can locate a ZooKeeper expert who can answer this.


was (Author: arpitagarwal):
[~brahmareddy], I'll have to spend some more time to understand this. To me the 
documentation is ambiguous, but yes your test seems to indicate it should be 
the ACL version.

> Should use getAversion() while setting the zkacl
> 
>
> Key: HADOOP-14543
> URL: https://issues.apache.org/jira/browse/HADOOP-14543
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HADOOP-14543.patch
>
>
> While setting the zkAcl we used {color:red}{{getVersion()}}{color}, which is the 
> dataVersion. Ideally we should use {{{color:#14892c}getAversion{color}()}}. If 
> there are any ACL changes (i.e. a realm change, etc.), we set the ACL with the 
> dataVersion, which will cause a {color:#d04437}BADVersion {color}error and the 
> {color:#d04437}*process will not start*{color}. See 
> [here|https://issues.apache.org/jira/browse/HDFS-11403?focusedCommentId=16051804=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16051804]
> {{zkClient.setACL(path, zkAcl, stat.getVersion());}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14543) Should use getAversion() while setting the zkacl

2017-06-21 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16058122#comment-16058122
 ] 

Arpit Agarwal commented on HADOOP-14543:


[~brahmareddy], I'll have to spend some more time to understand this. To me the 
documentation is ambiguous, but yes your test seems to indicate it should be 
the ACL version.
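
For reference, a minimal sketch of what using the ACL version would look like 
(illustrative only, not the attached patch; the class and method names are made up):
{code}
import java.util.List;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.ACL;
import org.apache.zookeeper.data.Stat;

class ZkAclUpdateSketch {
  // Read the node's Stat first, then pass the ACL version (aversion) into
  // setACL rather than the data version.
  static void updateAcl(ZooKeeper zkClient, String path, List<ACL> zkAcl)
      throws Exception {
    Stat stat = new Stat();
    zkClient.getACL(path, stat);                      // fills stat with the current versions
    zkClient.setACL(path, zkAcl, stat.getAversion()); // not stat.getVersion()
  }
}
{code}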

> Should use getAversion() while setting the zkacl
> 
>
> Key: HADOOP-14543
> URL: https://issues.apache.org/jira/browse/HADOOP-14543
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HADOOP-14543.patch
>
>
> While setting the zkAcl we used {color:red}{{getVersion()}}{color}, which is the 
> dataVersion. Ideally we should use {{{color:#14892c}getAversion{color}()}}. If 
> there are any ACL changes (i.e. a realm change, etc.), we set the ACL with the 
> dataVersion, which will cause a {color:#d04437}BADVersion {color}error and the 
> {color:#d04437}*process will not start*{color}. See 
> [here|https://issues.apache.org/jira/browse/HDFS-11403?focusedCommentId=16051804=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16051804]
> {{zkClient.setACL(path, zkAcl, stat.getVersion());}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12739) Deadlock with OrcInputFormat split threads and Jets3t connections, since, NativeS3FileSystem does not release connections with seek()

2017-06-21 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16058093#comment-16058093
 ] 

Steve Loughran commented on HADOOP-12739:
-

I'm really tempted to close this as a wontfix, just because we're trying to 
move everyone on to S3.

S3A has a lot of performance updates for reading columnar data, where seek() 
performance is a key feature.

Can you upgrade to Hadoop 2.8?
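
If upgrading is an option, a minimal core-site.xml tweak for seek-heavy columnar 
reads on 2.8+ might look like the following (assuming the 
{{fs.s3a.experimental.input.fadvise}} option described in the 2.8 S3A 
documentation; the value shown is illustrative):
{code}
<property>
  <!-- optimise the S3A input stream for random IO / backward seeks -->
  <name>fs.s3a.experimental.input.fadvise</name>
  <value>random</value>
</property>
{code}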

> Deadlock with OrcInputFormat split threads and Jets3t connections, since, 
> NativeS3FileSystem does not release connections with seek()
> -
>
> Key: HADOOP-12739
> URL: https://issues.apache.org/jira/browse/HADOOP-12739
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.6.0, 2.7.0
>Reporter: Pavan Srinivas
>Assignee: Pavan Srinivas
> Attachments: 11600.txt, HADOOP-12739.patch
>
>
> Recently, we came across a deadlock situation with OrcInputFormat while 
> computing splits. 
> - In Orc, for split computation, it needs file listing and file sizes. 
> - Multiple threads are invoked for listing the files and if the data is 
> located in S3, NativeS3FileSystem is used. 
> - NativeS3FileSystem in turn uses JetS3t Lib to talk to AWS and maintain 
> connection pool. 
> - When # of threads from OrcInputFormat exceeds JetS3t's max # of 
> connections, a deadlock occurs. stack trace: 
> {code}
> "ORC_GET_SPLITS #5" daemon prio=10 tid=0x7f8568108800 nid=0x1e29 in 
> Object.wait() [0x7f8565696000]
>java.lang.Thread.State: WAITING (on object monitor)
>   at java.lang.Object.wait(Native Method)
>   - waiting on <0xdf9ed450> (a 
> org.apache.commons.httpclient.MultiThreadedHttpConnectionManager$ConnectionPool)
>   at 
> org.apache.commons.httpclient.MultiThreadedHttpConnectionManager.doGetConnection(MultiThreadedHttpConnectionManager.java:518)
>   - locked <0xdf9ed450> (a 
> org.apache.commons.httpclient.MultiThreadedHttpConnectionManager$ConnectionPool)
>   at 
> org.apache.commons.httpclient.MultiThreadedHttpConnectionManager.getConnectionWithTimeout(MultiThreadedHttpConnectionManager.java:416)
>   at 
> org.apache.commons.httpclient.HttpMethodDirector.executeMethod(HttpMethodDirector.java:153)
>   at 
> org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:397)
>   at 
> org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:323)
>   at 
> org.jets3t.service.impl.rest.httpclient.RestStorageService.performRequest(RestStorageService.java:370)
>   at 
> org.jets3t.service.impl.rest.httpclient.RestStorageService.performRestGet(RestStorageService.java:929)
>   at 
> org.jets3t.service.impl.rest.httpclient.RestStorageService.getObjectImpl(RestStorageService.java:2007)
>   at 
> org.jets3t.service.impl.rest.httpclient.RestStorageService.getObjectImpl(RestStorageService.java:1944)
>   at org.jets3t.service.S3Service.getObject(S3Service.java:2625)
>   at 
> org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.retrieve(Jets3tNativeFileSystemStore.java:254)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>   at org.apache.hadoop.fs.s3native.$Proxy12.retrieve(Unknown Source)
>   at 
> org.apache.hadoop.fs.s3native.NativeS3FileSystem$NativeS3FsInputStream.reopen(NativeS3FileSystem.java:269)
>   - locked <0xdb01eec0> (a 
> org.apache.hadoop.fs.s3native.NativeS3FileSystem$NativeS3FsInputStream)
>   at 
> org.apache.hadoop.fs.s3native.NativeS3FileSystem$NativeS3FsInputStream.seek(NativeS3FileSystem.java:258)
>   - locked <0xdb01eec0> (a 
> org.apache.hadoop.fs.s3native.NativeS3FileSystem$NativeS3FsInputStream)
>   at 
> org.apache.hadoop.fs.BufferedFSInputStream.seek(BufferedFSInputStream.java:98)
>   at 
> org.apache.hadoop.fs.FSDataInputStream.seek(FSDataInputStream.java:63)
>   - locked <0xdb01ee70> (a org.apache.hadoop.fs.FSDataInputStream)
>   at 
> org.apache.hadoop.hive.ql.io.orc.ReaderImpl.extractMetaInfoFromFooter(ReaderImpl.java:329)
>   at 
> org.apache.hadoop.hive.ql.io.orc.ReaderImpl.(ReaderImpl.java:292)
>   at 
> org.apache.hadoop.hive.ql.io.orc.OrcFile.createReader(OrcFile.java:197)
>   at 
> 

[jira] [Updated] (HADOOP-14495) Add set options interface to FSDataOutputStreamBuilder

2017-06-21 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-14495:
---
Attachment: HADOOP-14495.01.patch

Thanks a lot for the suggestions, [~steve_l]

bq. we can/should replace HadoopIllegalArgumentException with the base 
{{IllegalArgumentException}}, for better Preconditions checks.

Are you suggesting throwing {{IllegalArgumentException}} in 
{{FSDataOutputStreamBuilder#builder()}}? OK, I changed it.

Also addressed your other comments in the latest patch. 
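
For illustration, this is the kind of check being discussed: Guava's 
Preconditions already throws the plain java.lang.IllegalArgumentException, so the 
builder does not need HadoopIllegalArgumentException for argument validation 
(the class and field names below are made up, not the patch itself):
{code}
import com.google.common.base.Preconditions;

class ExampleBuilder {
  private int bufferSize;

  // Throws java.lang.IllegalArgumentException on bad input.
  ExampleBuilder bufferSize(int size) {
    Preconditions.checkArgument(size > 0, "buffer size must be positive: %s", size);
    this.bufferSize = size;
    return this;
  }
}
{code}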

> Add set options interface to FSDataOutputStreamBuilder 
> ---
>
> Key: HADOOP-14495
> URL: https://issues.apache.org/jira/browse/HADOOP-14495
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HADOOP-14495.00.patch, HADOOP-14495.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14499) Findbugs warning in LocalMetadataStore

2017-06-21 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16058021#comment-16058021
 ] 

Steve Loughran commented on HADOOP-14499:
-

My IDE says that {{ancestors}} is the only unused field too.

+1 for the patch, let's see what yetus says *after*

> Findbugs warning in LocalMetadataStore
> --
>
> Key: HADOOP-14499
> URL: https://issues.apache.org/jira/browse/HADOOP-14499
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-14499-HADOOP-13345.001.patch, 
> HADOOP-14499-HADOOP-13345.002.patch
>
>
> First saw this raised by Yetus on HADOOP-14433:
> {code}
> Bug type UC_USELESS_OBJECT (click for details)
> In class org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore
> In method org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore.prune(long)
> Value ancestors
> Type java.util.LinkedList
> At LocalMetadataStore.java:[line 300]
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14542) Add IOUtils.cleanup or something that accepts slf4j logger API

2017-06-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16058005#comment-16058005
 ] 

Hadoop QA commented on HADOOP-14542:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
31s{color} | {color:red} hadoop-common-project/hadoop-common in trunk has 17 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m 23s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 58s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.TestTrash |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14542 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12873906/HADOOP-14542.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 4c8153471b91 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e806c6e |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12587/artifact/patchprocess/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12587/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12587/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12587/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add IOUtils.cleanup or something that accepts slf4j logger API
> --
>
> Key: HADOOP-14542
> URL: https://issues.apache.org/jira/browse/HADOOP-14542
> Project: Hadoop Common
>  Issue Type: Sub-task
>

[jira] [Commented] (HADOOP-5732) Add SFTP FileSystem

2017-06-21 Thread Inigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-5732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16057885#comment-16057885
 ] 

Inigo Goiri commented on HADOOP-5732:
-

[~bahchis], this is not implemented, but I found somebody who had implemented a 
workaround:
https://github.com/ind9/hadoop-fs-sftp/commit/9c5805d45d8bf2eb7823db3680195972239ef39d
Even with a unit test:
https://github.com/ind9/hadoop-fs-sftp/commit/7315fda25db81c2be2a7dfb5ffa1d30d22760bec
Actually, the author of those has a couple of fixes that we should add here.
It would be great if he could create a JIRA with those improvements, but I'm not 
sure he is around here.
I'll try to contact him.

> Add SFTP FileSystem
> ---
>
> Key: HADOOP-5732
> URL: https://issues.apache.org/jira/browse/HADOOP-5732
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
> Environment: Any environment
>Reporter: Íñigo Goiri
>Assignee: ramtin
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HADOOP-5732.008.patch, HADOOP-5732.009.patch, 
> HADOOP-5732.010.patch, HADOOP-5732.patch, HADOOP-5732.patch, 
> HADOOP-5732.patch, HADOOP-5732.patch, HADOOP-5732.patch, 
> ivy-for-hadoop-7532.patch, ivy-for-hadoop-7532.patch
>
>   Original Estimate: 0h
>  Remaining Estimate: 0h
>
> I have implemented a FileSystem that supports SFTP. It uses JSch 
> (http://www.jcraft.com/jsch/) in order to manage SFTP.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14542) Add IOUtils.cleanup or something that accepts slf4j logger API

2017-06-21 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HADOOP-14542:

Attachment: HADOOP-14542.003.patch

> Add IOUtils.cleanup or something that accepts slf4j logger API
> --
>
> Key: HADOOP-14542
> URL: https://issues.apache.org/jira/browse/HADOOP-14542
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Assignee: Chen Liang
> Attachments: HADOOP-14542.001.patch, HADOOP-14542.002.patch, 
> HADOOP-14542.003.patch
>
>
> Split from HADOOP-14539.
> Now IOUtils.cleanup only accepts commons-logging logger API. Now we are 
> migrating the APIs to slf4j, slf4j logger API should be accepted as well. 
> Adding {{IOUtils.cleanup(Logger, Closeable...)}} causes 
> {{IOUtils.cleanup(null, Closeable...)}} to fail (incompatible change), so 
> it's better to change the method name to avoid the conflict.
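
A sketch of the shape such a method could take (illustrative only; the method and 
class names here may not match what is actually committed):
{code}
import java.io.Closeable;
import java.io.IOException;
import org.slf4j.Logger;

class IoCleanupSketch {
  // A differently-named overload avoids the ambiguity that cleanup(null, ...)
  // would have between the commons-logging and slf4j signatures.
  static void cleanupWithLogger(Logger log, Closeable... closeables) {
    for (Closeable c : closeables) {
      if (c == null) {
        continue;
      }
      try {
        c.close();
      } catch (IOException e) {
        if (log != null) {
          log.debug("Exception in closing {}", c, e);
        }
      }
    }
  }
}
{code}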



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14542) Add IOUtils.cleanup or something that accepts slf4j logger API

2017-06-21 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HADOOP-14542:

Attachment: (was: HDFS-14542.003.patch)

> Add IOUtils.cleanup or something that accepts slf4j logger API
> --
>
> Key: HADOOP-14542
> URL: https://issues.apache.org/jira/browse/HADOOP-14542
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Assignee: Chen Liang
> Attachments: HADOOP-14542.001.patch, HADOOP-14542.002.patch
>
>
> Split from HADOOP-14539.
> Now IOUtils.cleanup only accepts commons-logging logger API. Now we are 
> migrating the APIs to slf4j, slf4j logger API should be accepted as well. 
> Adding {{IOUtils.cleanup(Logger, Closeable...)}} causes 
> {{IOUtils.cleanup(null, Closeable...)}} to fail (incompatible change), so 
> it's better to change the method name to avoid the conflict.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14542) Add IOUtils.cleanup or something that accepts slf4j logger API

2017-06-21 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HADOOP-14542:

Attachment: HDFS-14542.003.patch

Thanks for the catch, [~ajisakaa]! Posted the v003 patch.

> Add IOUtils.cleanup or something that accepts slf4j logger API
> --
>
> Key: HADOOP-14542
> URL: https://issues.apache.org/jira/browse/HADOOP-14542
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Assignee: Chen Liang
> Attachments: HADOOP-14542.001.patch, HADOOP-14542.002.patch
>
>
> Split from HADOOP-14539.
> Now IOUtils.cleanup only accepts commons-logging logger API. Now we are 
> migrating the APIs to slf4j, slf4j logger API should be accepted as well. 
> Adding {{IOUtils.cleanup(Logger, Closeable...)}} causes 
> {{IOUtils.cleanup(null, Closeable...)}} to fail (incompatible change), so 
> it's better to change the method name to avoid the conflict.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14146) KerberosAuthenticationHandler should authenticate with SPN in AP-REQ

2017-06-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16057859#comment-16057859
 ] 

Hudson commented on HADOOP-14146:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11902 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11902/])
HADOOP-14146.  KerberosAuthenticationHandler should authenticate with (daryn: 
rev e806c6e0ce6026d53227b51d58ec6d5458164571)
* (edit) 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestKerberosUtil.java
* (edit) 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosUtil.java
* (edit) 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestMultiSchemeAuthenticationHandler.java
* (edit) 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/KerberosAuthenticator.java
* (edit) 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/KerberosAuthenticationHandler.java


> KerberosAuthenticationHandler should authenticate with SPN in AP-REQ
> 
>
> Key: HADOOP-14146
> URL: https://issues.apache.org/jira/browse/HADOOP-14146
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.5.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 2.9.0, 3.0.0-alpha4, 2.8.2
>
> Attachments: HADOOP-14146.1.patch, HADOOP-14146.2.patch, 
> HADOOP-14146.3.patch, HADOOP-14146.patch
>
>
> Many attempts (HADOOP-10158, HADOOP-11628, HADOOP-13565) have tried to add 
> multiple SPN host and/or realm support to spnego authentication.  The basic 
> problem is the server tries to guess and/or brute force what SPN the client 
> used.  The server should just decode the SPN from the AP-REQ.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-14031) Reduce fair call queue performance impact

2017-06-21 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp resolved HADOOP-14031.
--
Resolution: Fixed

All subtasks are complete.

> Reduce fair call queue performance impact
> -
>
> Key: HADOOP-14031
> URL: https://issues.apache.org/jira/browse/HADOOP-14031
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.7.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>
> The fair call queue has performance deficits that create an illusion of 
> reasonable performance under heavy load.  However, there is excessive lock 
> contention, priority inversion, and pushback/reconnect issues that combine to 
> create an artificial rate-limiting on the ingress.
> The result is server metrics look good, call queue looks low, yet clients 
> experience dismal latencies.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14146) KerberosAuthenticationHandler should authenticate with SPN in AP-REQ

2017-06-21 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HADOOP-14146:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.2
   3.0.0-alpha4
   2.9.0
   Status: Resolved  (was: Patch Available)

Committed to trunk, branch 2 & 2.8.

> KerberosAuthenticationHandler should authenticate with SPN in AP-REQ
> 
>
> Key: HADOOP-14146
> URL: https://issues.apache.org/jira/browse/HADOOP-14146
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.5.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 2.9.0, 3.0.0-alpha4, 2.8.2
>
> Attachments: HADOOP-14146.1.patch, HADOOP-14146.2.patch, 
> HADOOP-14146.3.patch, HADOOP-14146.patch
>
>
> Many attempts (HADOOP-10158, HADOOP-11628, HADOOP-13565) have tried to add 
> multiple SPN host and/or realm support to spnego authentication.  The basic 
> problem is the server tries to guess and/or brute force what SPN the client 
> used.  The server should just decode the SPN from the AP-REQ.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14533) Size of args cannot be less than zero in TraceAdmin#run as its linkedlist

2017-06-21 Thread Weisen Han (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16057818#comment-16057818
 ] 

Weisen Han commented on HADOOP-14533:
-

Thanks to Wei-Chiu Chuang and Chen Liang for the review. Thanks to Brahma 
Reddy Battula for the review and commit.


> Size of args cannot be less than zero in TraceAdmin#run as its linkedlist
> -
>
> Key: HADOOP-14533
> URL: https://issues.apache.org/jira/browse/HADOOP-14533
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, tracing
>Affects Versions: 2.6.0
>Reporter: Weisen Han
>Assignee: Weisen Han
>Priority: Trivial
> Fix For: 2.9.0, 3.0.0-alpha4, 2.8.2
>
> Attachments: HADOOP-14533-001.patch, HADOOP-14533-002.patch
>
>
> {code}
>   @Override
>   public int run(String argv[]) throws Exception {
>   LinkedList args = new LinkedList();
>   ……
>  if (args.size() < 0) {
> System.err.println("You must specify an operation.");
>  return 1;
> }
> ……
> }
> {code}
> From the code above, the {{args}} is a LinkedList object, so its size cannot be 
> less than zero, meaning that the code below is wrong:
> {code}
>  if (args.size() < 0) {
>   System.err.println("You must specify an operation.");
>   return 1;
> }
> {code}
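
For completeness, the corrected guard is a one-liner (illustrative; since a 
LinkedList's size is never negative, the check should simply be for an empty 
argument list):
{code}
if (args.isEmpty()) {
  System.err.println("You must specify an operation.");
  return 1;
}
{code}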



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14559) The used FTPFileSystem in TestFTPFileSystem should be closed in each test case.

2017-06-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16057781#comment-16057781
 ] 

Hadoop QA commented on HADOOP-14559:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
28s{color} | {color:red} hadoop-common-project/hadoop-common in trunk has 17 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 56s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 64m 25s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestKDiag |
|   | hadoop.ha.TestZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14559 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12873888/HADOOP-14559-001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d6e2968749d6 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 5db3f98 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12585/artifact/patchprocess/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12585/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12585/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12585/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> The used FTPFileSystem in TestFTPFileSystem should be closed in each test 
> case.
> ---
>
> Key: HADOOP-14559
> URL: https://issues.apache.org/jira/browse/HADOOP-14559
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, test
>Reporter: 

[jira] [Commented] (HADOOP-12940) Fix warnings from Spotbugs in hadoop-common

2017-06-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16057779#comment-16057779
 ] 

Hadoop QA commented on HADOOP-12940:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HADOOP-12940 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-12940 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12864421/HADOOP-12940.02.patch 
|
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12586/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix warnings from Spotbugs in hadoop-common
> ---
>
> Key: HADOOP-12940
> URL: https://issues.apache.org/jira/browse/HADOOP-12940
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Attachments: HADOOP-12940.01.patch, HADOOP-12940.02.patch, 
> hadoop-common.20160715.html, hadoop-common.html
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14558) RPC requests on a secure cluster are 10x slower due to expensive encryption and decryption

2017-06-21 Thread Mostafa Mokhtar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16057758#comment-16057758
 ] 

Mostafa Mokhtar commented on HADOOP-14558:
--

[~ste...@apache.org] [~daryn]

Thank you for the analysis and comments. 
For applications with fast-moving data, reloading block information for new data 
quickly becomes a bottleneck and puts significantly more pressure on the NN. 

> RPC requests on a secure cluster are 10x slower due to expensive encryption 
> and decryption 
> ---
>
> Key: HADOOP-14558
> URL: https://issues.apache.org/jira/browse/HADOOP-14558
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.6.0
>Reporter: Mostafa Mokhtar
>Priority: Critical
>  Labels: impala, metadata, rpc
>
> While running performance tests for Impala comparing secure and un-secure 
> clusters I noticed that metadata loading operations are 10x slower on a 
> cluster with Kerberos+SSL enabled. 
> hadoop.rpc.protection is set to privacy
> Any recommendations on how this can be mitigated? 10x slowdown is a big hit 
> for metadata loading. 
> The majority of the slowdown is coming from the two threads below. 
> {code}
> Stack Trace   Sample CountPercentage(%)
> org.apache.hadoop.ipc.Client$Connection.run() 5,212   46.586
>org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse()   5,203   
> 46.505
>   java.io.DataInputStream.readInt()   5,039   45.039
>  java.io.BufferedInputStream.read()   5,038   45.03
> java.io.BufferedInputStream.fill()5,038   45.03
>
> org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(byte[], int, 
> int) 5,036   45.013
>   java.io.FilterInputStream.read(byte[], int, int)5,036   
> 45.013
>  
> org.apache.hadoop.security.SaslRpcClient$WrappedInputStream.read(byte[], int, 
> int)   5,036   45.013
> 
> org.apache.hadoop.security.SaslRpcClient$WrappedInputStream.readNextRpcPacket()
>5,035   45.004
>
> com.sun.security.sasl.gsskerb.GssKrb5Base.unwrap(byte[], int, int) 4,775   
> 42.68
>   sun.security.jgss.GSSContextImpl.unwrap(byte[], 
> int, int, MessageProp)  4,775   42.68
>  
> sun.security.jgss.krb5.Krb5Context.unwrap(byte[], int, int, MessageProp) 
> 4,768   42.617
> 
> sun.security.jgss.krb5.WrapToken.getData()4,714   42.134
>
> sun.security.jgss.krb5.WrapToken.getData(byte[], int)  4,714   42.134
>   
> sun.security.jgss.krb5.WrapToken.getDataFromBuffer(byte[], int) 4,714   
> 42.134
>  
> sun.security.jgss.krb5.CipherHelper.decryptData(WrapToken, byte[], int, int, 
> byte[], int)3,083   27.556
> 
> sun.security.jgss.krb5.CipherHelper.des3KdDecrypt(WrapToken, byte[], int, 
> int, byte[], int)   3,078   27.512
>
> sun.security.krb5.internal.crypto.Des3.decryptRaw(byte[], int, byte[], 
> byte[], int, int)   3,076   27.494
>   
> sun.security.krb5.internal.crypto.dk.DkCrypto.decryptRaw(byte[], int, byte[], 
> byte[], int, int) 3,076   27.494
> {code}
> And 
> {code}
> Stack Trace   Sample CountPercentage(%)
> java.lang.Thread.run()3,379   30.202
>java.util.concurrent.ThreadPoolExecutor$Worker.run()   3,379   30.202
>   
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor$Worker)  
>   3,379   30.202
>  java.util.concurrent.FutureTask.run()3,367   30.095
> java.util.concurrent.Executors$RunnableAdapter.call() 3,367   
> 30.095
>org.apache.hadoop.ipc.Client$Connection$3.run()3,367   
> 30.095
>   java.io.DataOutputStream.flush()3,367   30.095
>  java.io.BufferedOutputStream.flush() 3,367   30.095
> java.io.BufferedOutputStream.flushBuffer()3,367   
> 30.095
>
> org.apache.hadoop.security.SaslRpcClient$WrappedOutputStream.write(byte[], 
> int, int)   3,367   30.095
>   
> com.sun.security.sasl.gsskerb.GssKrb5Base.wrap(byte[], int, int)3,281 
>   29.326
>  
> sun.security.jgss.GSSContextImpl.wrap(byte[], int, int, MessageProp) 3,281   
> 29.326
> 
> sun.security.jgss.krb5.Krb5Context.wrap(byte[], int, int, MessageProp)

[jira] [Commented] (HADOOP-14563) LoadBalancingKMSClientProvider#warmUpEncryptedKeys swallows IOException

2017-06-21 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16057752#comment-16057752
 ] 

Wei-Chiu Chuang commented on HADOOP-14563:
--

Sounds like LBKMSCP should throw an exception when all providers throw 
exceptions?
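
Something along these lines, perhaps (a sketch of the suggested behaviour based 
on the snippet quoted below, not an actual patch): remember the last failure and 
rethrow only if no provider succeeded.
{code}
// This request is sent to all providers in the load-balancing group.
@Override
public void warmUpEncryptedKeys(String... keyNames) throws IOException {
  boolean success = false;
  IOException lastException = null;
  for (KMSClientProvider provider : providers) {
    try {
      provider.warmUpEncryptedKeys(keyNames);
      success = true;
    } catch (IOException ioe) {
      lastException = ioe;
      LOG.error("Error warming up keys for provider with url"
          + " [" + provider.getKMSUrl() + "]", ioe);
    }
  }
  if (!success && lastException != null) {
    throw lastException;
  }
}
{code}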

> LoadBalancingKMSClientProvider#warmUpEncryptedKeys swallows IOException
> ---
>
> Key: HADOOP-14563
> URL: https://issues.apache.org/jira/browse/HADOOP-14563
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.1
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
>
> TestAclsEndToEnd is failing consistently in HADOOP-14521.
> The reason behind it is LoadBalancingKMSClientProvider#warmUpEncryptedKeys 
> swallows IOException while KMSClientProvider#warmUpEncryptedKeys throws all 
> the way back to createEncryptionZone and creation of EZ fails.
> Following are the relevant piece of code snippets.
>  {code:title=KMSClientProvider.java|borderStyle=solid}
>   @Override
>   public void warmUpEncryptedKeys(String... keyNames)
>   throws IOException {
> try {
>   encKeyVersionQueue.initializeQueuesForKeys(keyNames);
> } catch (ExecutionException e) {
>   throw new IOException(e);
> }
>   }
> {code}
>  {code:title=LoadBalancingKMSClientProvider.java|borderStyle=solid}
>// This request is sent to all providers in the load-balancing group
>   @Override
>   public void warmUpEncryptedKeys(String... keyNames) throws IOException {
> for (KMSClientProvider provider : providers) {
>   try {
> provider.warmUpEncryptedKeys(keyNames);
>   } catch (IOException ioe) {
> LOG.error(
> "Error warming up keys for provider with url"
> + "[" + provider.getKMSUrl() + "]", ioe);
>   }
> }
>   }
> {code}
> In HADOOP-14521, I intend to always instantiate 
> LoadBalancingKMSClientProvider even if there is only one provider so that the 
> retries can applied at only one place.
> We need to decide whether we want to fail in both the case or continue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14564) s3a test can hang in teardown with network problems

2017-06-21 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16057746#comment-16057746
 ] 

Steve Loughran edited comment on HADOOP-14564 at 6/21/17 3:59 PM:
--

which gives you stacks more like
{code}
testSeekBigFile(org.apache.hadoop.fs.contract.s3a.ITestS3AContractSeek)  Time 
elapsed: 66.828 sec  <<< ERROR!
java.io.InterruptedIOException: put on fork-0008/test/bigseekfile.txt: 
com.amazonaws.SdkClientException: Unable to execute HTTP request: Read timed out
at 
org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:142)
at org.apache.hadoop.fs.s3a.AwsLambda.execute(AwsLambda.java:45)
at 
org.apache.hadoop.fs.s3a.WriteOperationHelper.putObject(WriteOperationHelper.java:441)
at 
org.apache.hadoop.fs.s3a.S3ABlockOutputStream$1.call(S3ABlockOutputStream.java:428)
at 
org.apache.hadoop.fs.s3a.S3ABlockOutputStream$1.call(S3ABlockOutputStream.java:421)
at 
org.apache.hadoop.fs.s3a.SemaphoredDelegatingExecutor$CallableWithPermitRelease.call(SemaphoredDelegatingExecutor.java:222)
at 
org.apache.hadoop.fs.s3a.SemaphoredDelegatingExecutor$CallableWithPermitRelease.call(SemaphoredDelegatingExecutor.java:222)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: com.amazonaws.SdkClientException: Unable to execute HTTP request: 
Read timed out
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1043)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:747)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:721)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:704)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:672)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:654)
at 
com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:518)
at 
com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4185)
at 
com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4132)
at 
com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1712)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.putObjectDirect(S3AFileSystem.java:1362)
at 
org.apache.hadoop.fs.s3a.WriteOperationHelper.lambda$putObject$3(WriteOperationHelper.java:443)
at org.apache.hadoop.fs.s3a.AwsLambda.execute(AwsLambda.java:43)
at 
org.apache.hadoop.fs.s3a.WriteOperationHelper.putObject(WriteOperationHelper.java:441)
at 
org.apache.hadoop.fs.s3a.S3ABlockOutputStream$1.call(S3ABlockOutputStream.java:428)
at 
org.apache.hadoop.fs.s3a.S3ABlockOutputStream$1.call(S3ABlockOutputStream.java:421)
at 
org.apache.hadoop.fs.s3a.SemaphoredDelegatingExecutor$CallableWithPermitRelease.call(SemaphoredDelegatingExecutor.java:222)
at 
org.apache.hadoop.fs.s3a.SemaphoredDelegatingExecutor$CallableWithPermitRelease.call(SemaphoredDelegatingExecutor.java:222)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:171)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
at sun.security.ssl.InputRecord.read(InputRecord.java:503)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:973)
at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:930)
at sun.security.ssl.AppInputStream.read(AppInputStream.java:105)
at 
com.amazonaws.thirdparty.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:139)
at 
com.amazonaws.thirdparty.apache.http.impl.io.SessionInputBufferImpl.fillBuffer(SessionInputBufferImpl.java:155)
at 
com.amazonaws.thirdparty.apache.http.impl.io.SessionInputBufferImpl.readLine(SessionInputBufferImpl.java:284)
at 

[jira] [Commented] (HADOOP-14146) KerberosAuthenticationHandler should authenticate with SPN in AP-REQ

2017-06-21 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16057750#comment-16057750
 ] 

Daryn Sharp commented on HADOOP-14146:
--

I don't think there's much worry of a completely incompatible change occurring. 
:)  ASN.1 was defined in 1984.  The Kerberos v5 DER was defined in 1993.

Thanks!

> KerberosAuthenticationHandler should authenticate with SPN in AP-REQ
> 
>
> Key: HADOOP-14146
> URL: https://issues.apache.org/jira/browse/HADOOP-14146
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.5.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HADOOP-14146.1.patch, HADOOP-14146.2.patch, 
> HADOOP-14146.3.patch, HADOOP-14146.patch
>
>
> Many attempts (HADOOP-10158, HADOOP-11628, HADOOP-13565) have tried to add 
> multiple SPN host and/or realm support to spnego authentication.  The basic 
> problem is the server tries to guess and/or brute force what SPN the client 
> used.  The server should just decode the SPN from the AP-REQ.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14564) s3a test can hang in teardown with network problems

2017-06-21 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16057746#comment-16057746
 ] 

Steve Loughran commented on HADOOP-14564:
-

which gives you stacks more like
{code}
testSeekBigFile(org.apache.hadoop.fs.contract.s3a.ITestS3AContractSeek)  Time 
elapsed: 66.828 sec  <<< ERROR!
java.io.InterruptedIOException: put on fork-0008/test/bigseekfile.txt: 
com.amazonaws.SdkClientException: Unable to execute HTTP request: Read timed out
at 
org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:142)
at org.apache.hadoop.fs.s3a.AwsLambda.execute(AwsLambda.java:45)
at 
org.apache.hadoop.fs.s3a.WriteOperationHelper.putObject(WriteOperationHelper.java:441)
at 
org.apache.hadoop.fs.s3a.S3ABlockOutputStream$1.call(S3ABlockOutputStream.java:428)
at 
org.apache.hadoop.fs.s3a.S3ABlockOutputStream$1.call(S3ABlockOutputStream.java:421)
at 
org.apache.hadoop.fs.s3a.SemaphoredDelegatingExecutor$CallableWithPermitRelease.call(SemaphoredDelegatingExecutor.java:222)
at 
org.apache.hadoop.fs.s3a.SemaphoredDelegatingExecutor$CallableWithPermitRelease.call(SemaphoredDelegatingExecutor.java:222)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: com.amazonaws.SdkClientException: Unable to execute HTTP request: 
Read timed out
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1043)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:747)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:721)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:704)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:672)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:654)
at 
com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:518)
at 
com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4185)
at 
com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4132)
at 
com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1712)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.putObjectDirect(S3AFileSystem.java:1362)
at 
org.apache.hadoop.fs.s3a.WriteOperationHelper.lambda$putObject$3(WriteOperationHelper.java:443)
at org.apache.hadoop.fs.s3a.AwsLambda.execute(AwsLambda.java:43)
at 
org.apache.hadoop.fs.s3a.WriteOperationHelper.putObject(WriteOperationHelper.java:441)
at 
org.apache.hadoop.fs.s3a.S3ABlockOutputStream$1.call(S3ABlockOutputStream.java:428)
at 
org.apache.hadoop.fs.s3a.S3ABlockOutputStream$1.call(S3ABlockOutputStream.java:421)
at 
org.apache.hadoop.fs.s3a.SemaphoredDelegatingExecutor$CallableWithPermitRelease.call(SemaphoredDelegatingExecutor.java:222)
at 
org.apache.hadoop.fs.s3a.SemaphoredDelegatingExecutor$CallableWithPermitRelease.call(SemaphoredDelegatingExecutor.java:222)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:171)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
at sun.security.ssl.InputRecord.read(InputRecord.java:503)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:973)
at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:930)
at sun.security.ssl.AppInputStream.read(AppInputStream.java:105)
at 
com.amazonaws.thirdparty.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:139)
at 
com.amazonaws.thirdparty.apache.http.impl.io.SessionInputBufferImpl.fillBuffer(SessionInputBufferImpl.java:155)
at 
com.amazonaws.thirdparty.apache.http.impl.io.SessionInputBufferImpl.readLine(SessionInputBufferImpl.java:284)
at 
com.amazonaws.thirdparty.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
at 

[jira] [Commented] (HADOOP-14564) s3a test can hang in teardown with network problems

2017-06-21 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16057738#comment-16057738
 ] 

Steve Loughran commented on HADOOP-14564:
-

You also need to cut back on retries, as socket timeouts, by default, just 
trigger new attempts. Probably good for production, not for dev.
{code}
  <property>
    <name>fs.s3a.connection.establish.timeout</name>
    <value>5000</value>
  </property>
  <property>
    <name>fs.s3a.connection.socket.timeout</name>
    <value>${fs.s3a.connection.establish.timeout}</value>
  </property>
  <property>
    <name>fs.s3a.attempts.maximum</name>
    <value>2</value>
  </property>
{code}

> s3a test can hang in teardown with network problems
> ---
>
> Key: HADOOP-14564
> URL: https://issues.apache.org/jira/browse/HADOOP-14564
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 2.8.1
>Reporter: Steve Loughran
>Priority: Minor
>
> If you've a transient network test and things fail, then the directory 
> cleanup in teardown can block a test so that it times out entirely. 
> Proposed: shorten socket timeouts for s3 connections, assuming this is 
> possible (stack trace implies it was in a read(), not a connect()).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14551) S3A init hangs if you try to connect while the system is offline

2017-06-21 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16057728#comment-16057728
 ] 

Steve Loughran commented on HADOOP-14551:
-

Part of the problem appears to be that the standard AWS retry policy in 
{{SDKDefaultRetryCondition}} says "retry all IOEs"; we should have a list of 
those which you can't retry (UnknownHost, NoRouteToHost, others?). A rough 
sketch of that idea is below.
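
For illustration only, a minimal sketch of a retry condition that treats DNS and 
routing failures as unrecoverable. The AWS SDK class and method names are 
written from memory and the wiring into the S3A client is not shown, so treat 
this as an assumption rather than the actual fix:

{code}
// Hypothetical sketch, not the committed fix: delegate to the SDK default
// retry condition, but never retry exceptions we consider unrecoverable.
import java.net.NoRouteToHostException;
import java.net.UnknownHostException;

import com.amazonaws.AmazonClientException;
import com.amazonaws.AmazonWebServiceRequest;
import com.amazonaws.retry.PredefinedRetryPolicies;
import com.amazonaws.retry.RetryPolicy;

public class UnrecoverableAwareRetryCondition implements RetryPolicy.RetryCondition {

  private final RetryPolicy.RetryCondition defaults =
      PredefinedRetryPolicies.DEFAULT_RETRY_CONDITION;

  @Override
  public boolean shouldRetry(AmazonWebServiceRequest originalRequest,
      AmazonClientException exception, int retriesAttempted) {
    Throwable cause = exception.getCause();
    if (cause instanceof UnknownHostException
        || cause instanceof NoRouteToHostException) {
      // DNS/routing failures: fail fast rather than spin through retries.
      return false;
    }
    return defaults.shouldRetry(originalRequest, exception, retriesAttempted);
  }
}
{code}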


> S3A init hangs if you try to connect while the system is offline
> 
>
> Key: HADOOP-14551
> URL: https://issues.apache.org/jira/browse/HADOOP-14551
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Priority: Minor
>
> S3A init hangs if you try to connect while the system is offline (that is: 
> the host of the s3 endpoint is unknown)
> Assumption: unknown host exception is considered recoverable & client is 
> spinning for a long time waiting for it.
> I think we can conclude that unknown host is unrecoverable: if DNS is in 
> trouble, you are doomed.
> Proposed: quick lookup of endpoint addr, fail with our wiki diagnostics error 
> on any problem.
> I don't see any cost in doing this, as it will guarantee that the endpoint is 
> cached in the JVM ready for the AWS client connection. If it can't be found, 
> we'll fail within 20+s with something meaningful.
> Noticed during a test run: laptop wifi off; all NICs other than loopback are 
> inactive.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14499) Findbugs warning in LocalMetadataStore

2017-06-21 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16057721#comment-16057721
 ] 

Sean Mackrory commented on HADOOP-14499:


I'm really quite mystified by that findbugs continuing to show up. The patch 
passed Yetus locally and clearly removes that variable at the line mentioned...

> Findbugs warning in LocalMetadataStore
> --
>
> Key: HADOOP-14499
> URL: https://issues.apache.org/jira/browse/HADOOP-14499
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-14499-HADOOP-13345.001.patch, 
> HADOOP-14499-HADOOP-13345.002.patch
>
>
> First saw this raised by Yetus on HADOOP-14433:
> {code}
> Bug type UC_USELESS_OBJECT (click for details)
> In class org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore
> In method org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore.prune(long)
> Value ancestors
> Type java.util.LinkedList
> At LocalMetadataStore.java:[line 300]
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14564) s3a test can hang in teardown with network problems

2017-06-21 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16057704#comment-16057704
 ] 

Steve Loughran commented on HADOOP-14564:
-

Main issue is that the socket timeout, 200s, is > than the test timeout of 
180s; any hang will fail the test at the JUnit level, rather than have the 
failures raise in the test run & have it handle it.

Short-term workaround: set the values to be shorter.

{code}
  <property>
    <name>fs.s3a.connection.establish.timeout</name>
    <value>15000</value>
  </property>
  <property>
    <name>fs.s3a.connection.socket.timeout</name>
    <value>${fs.s3a.connection.establish.timeout}</value>
  </property>
{code}

> s3a test can hang in teardown with network problems
> ---
>
> Key: HADOOP-14564
> URL: https://issues.apache.org/jira/browse/HADOOP-14564
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 2.8.1
>Reporter: Steve Loughran
>Priority: Minor
>
> If you've a transient network test and things fail, then the directory 
> cleanup in teardown can block a test so that it times out entirely. 
> Proposed: shorten socket timeouts for s3 connections, assuming this is 
> possible (stack trace implies it was in a read(), not a connect()).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14564) s3a test can hang in teardown with network problems

2017-06-21 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14564:

Priority: Minor  (was: Major)

> s3a test can hang in teardown with network problems
> ---
>
> Key: HADOOP-14564
> URL: https://issues.apache.org/jira/browse/HADOOP-14564
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 2.8.1
>Reporter: Steve Loughran
>Priority: Minor
>
> If you've a transient network test and things fail, then the directory 
> cleanup in teardown can block a test so that it times out entirely. 
> Proposed: shorten socket timeouts for s3 connections, assuming this is 
> possible (stack trace implies it was in a read(), not a connect()).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14564) s3a test can hang in teardown with network problems

2017-06-21 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16057699#comment-16057699
 ] 

Steve Loughran commented on HADOOP-14564:
-

Against s3a ireland via a wifi link to a base station connected to the main 
household base station via ether-over-power, which can be a bit unreliable at 
times...

{code}
Tests run: 18, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 292.527 sec 
<<< FAILURE! - in 
org.apache.hadoop.fs.contract.s3a.ITestS3AContractGetFileStatus
testComplexDirActions(org.apache.hadoop.fs.contract.s3a.ITestS3AContractGetFileStatus)
  Time elapsed: 180.013 sec  <<< ERROR!
java.lang.Exception: test timed out after 180000 milliseconds
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:171)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
at sun.security.ssl.InputRecord.read(InputRecord.java:503)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:973)
at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:930)
at sun.security.ssl.AppInputStream.read(AppInputStream.java:105)
at 
com.amazonaws.thirdparty.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:139)
at 
com.amazonaws.thirdparty.apache.http.impl.io.SessionInputBufferImpl.fillBuffer(SessionInputBufferImpl.java:155)
at 
com.amazonaws.thirdparty.apache.http.impl.io.SessionInputBufferImpl.readLine(SessionInputBufferImpl.java:284)
at 
com.amazonaws.thirdparty.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
at 
com.amazonaws.thirdparty.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
at 
com.amazonaws.thirdparty.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:261)
at 
com.amazonaws.thirdparty.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(DefaultBHttpClientConnection.java:165)
at 
com.amazonaws.thirdparty.apache.http.impl.conn.CPoolProxy.receiveResponseHeader(CPoolProxy.java:167)
at 
com.amazonaws.thirdparty.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:272)
at 
com.amazonaws.http.protocol.SdkHttpRequestExecutor.doReceiveResponse(SdkHttpRequestExecutor.java:82)
at 
com.amazonaws.thirdparty.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:124)
at 
com.amazonaws.thirdparty.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:271)
at 
com.amazonaws.thirdparty.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184)
at 
com.amazonaws.thirdparty.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
at 
com.amazonaws.thirdparty.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at 
com.amazonaws.thirdparty.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
at 
com.amazonaws.http.apache.client.impl.SdkHttpClient.execute(SdkHttpClient.java:72)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1186)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1035)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:747)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:721)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:704)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:672)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:654)
at 
com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:518)
at 
com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4185)
at 
com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4132)
at 
com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1245)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.getObjectMetadata(S3AFileSystem.java:1134)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:2019)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:1979)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.delete(S3AFileSystem.java:1519)
at 
org.apache.hadoop.fs.contract.ContractTestUtils.rm(ContractTestUtils.java:367)
   

[jira] [Created] (HADOOP-14564) s3a test can hang in teardown with network problems

2017-06-21 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-14564:
---

 Summary: s3a test can hang in teardown with network problems
 Key: HADOOP-14564
 URL: https://issues.apache.org/jira/browse/HADOOP-14564
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3, test
Affects Versions: 2.8.1
Reporter: Steve Loughran


If you've a transient network test and things fail, then the directory cleanup 
in teardown can block a test so that it times out entirely. 

Proposed: shorten socket timeouts for s3 connections, assuming this is possible 
(stack trace implies it was in a read(), not a connect()).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14559) The used FTPFileSystem in TestFTPFileSystem should be closed in each test case.

2017-06-21 Thread Hongyuan Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16057399#comment-16057399
 ] 

Hongyuan Li edited comment on HADOOP-14559 at 6/21/17 3:12 PM:
---

Thanks for your advice. [~ste...@apache.org]. Will do it soon.

*update*

submit the patch as suggested by [~ste...@apache.org]


was (Author: hongyuan li):
Thanks for your advice. [~ste...@apache.org]. Will do it soon.

> The used FTPFileSystem in TestFTPFileSystem should be closed in each test 
> case.
> ---
>
> Key: HADOOP-14559
> URL: https://issues.apache.org/jira/browse/HADOOP-14559
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, test
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
>Priority: Minor
> Attachments: HADOOP-14559-001.patch
>
>
> The used FTPFileSystem in TestFTPFileSystem should be closed in each test 
> case as an improvement.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14559) The used FTPFileSystem in TestFTPFileSystem should be closed in each test case.

2017-06-21 Thread Hongyuan Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hongyuan Li updated HADOOP-14559:
-
Status: Open  (was: Patch Available)

> The used FTPFileSystem in TestFTPFileSystem should be closed in each test 
> case.
> ---
>
> Key: HADOOP-14559
> URL: https://issues.apache.org/jira/browse/HADOOP-14559
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, test
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
>Priority: Minor
>
> The used FTPFileSystem in TestFTPFileSystem should be closed in each test 
> case as an improvement.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14559) The used FTPFileSystem in TestFTPFileSystem should be closed in each test case.

2017-06-21 Thread Hongyuan Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hongyuan Li updated HADOOP-14559:
-
Attachment: (was: HADOOP-14559-001.patch)

> The used FTPFileSystem in TestFTPFileSystem should be closed in each test 
> case.
> ---
>
> Key: HADOOP-14559
> URL: https://issues.apache.org/jira/browse/HADOOP-14559
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, test
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
>Priority: Minor
>
> The used FTPFileSystem in TestFTPFileSystem should be closed in each test 
> case as an improvement.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14559) The used FTPFileSystem in TestFTPFileSystem should be closed in each test case.

2017-06-21 Thread Hongyuan Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hongyuan Li updated HADOOP-14559:
-
Status: Patch Available  (was: Open)

> The used FTPFileSystem in TestFTPFileSystem should be closed in each test 
> case.
> ---
>
> Key: HADOOP-14559
> URL: https://issues.apache.org/jira/browse/HADOOP-14559
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, test
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
>Priority: Minor
> Attachments: HADOOP-14559-001.patch
>
>
> The used FTPFileSystem in TestFTPFileSystem should be closed in each test 
> case as an improvement.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14559) The used FTPFileSystem in TestFTPFileSystem should be closed in each test case.

2017-06-21 Thread Hongyuan Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hongyuan Li updated HADOOP-14559:
-
Status: Patch Available  (was: Open)

> The used FTPFileSystem in TestFTPFileSystem should be closed in each test 
> case.
> ---
>
> Key: HADOOP-14559
> URL: https://issues.apache.org/jira/browse/HADOOP-14559
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, test
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
>Priority: Minor
> Attachments: HADOOP-14559-001.patch
>
>
> The used FTPFileSystem in TestFTPFileSystem should be closed in each test 
> case as an improvement.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14559) The used FTPFileSystem in TestFTPFileSystem should be closed in each test case.

2017-06-21 Thread Hongyuan Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hongyuan Li updated HADOOP-14559:
-
Attachment: HADOOP-14559-001.patch

> The used FTPFileSystem in TestFTPFileSystem should be closed in each test 
> case.
> ---
>
> Key: HADOOP-14559
> URL: https://issues.apache.org/jira/browse/HADOOP-14559
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, test
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
>Priority: Minor
> Attachments: HADOOP-14559-001.patch
>
>
> The used FTPFileSystem in TestFTPFileSystem should be closed in each test 
> case as an improvement.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10768) Optimize Hadoop RPC encryption performance

2017-06-21 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16057643#comment-16057643
 ] 

Daryn Sharp commented on HADOOP-10768:
--

Ok.  The patch does appear to encrypt at the packet level, which is good.  
Preliminary comments:
# The cipher options appear to be present in every packet.  If so, they should 
only be in the negotiate/initiate messages.
# Should use a custom sasl client/server that delegates to the actual sasl 
instance.  The ipc layer changes would be minimal and easier to maintain.
# Why not use the javax cipher libraries?  Any number of ciphers could be used 
now and in the future w/o code change.  The aes ciphers are supposed to use 
aes-ni intrinsics when available (a minimal sketch follows below).
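
Purely as illustration of the javax.crypto suggestion above, a minimal sketch of 
an AES/CTR wrap/unwrap helper. The class name, key/IV handling and the wiring 
into the IPC layer are assumptions, not anything taken from the patch:

{code}
// Hypothetical sketch: wrap/unwrap a byte[] payload with the standard JCE API.
// Key negotiation, IV management and buffer reuse are deliberately omitted.
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class JceRpcCipher {
  private final SecretKeySpec key;
  private final IvParameterSpec iv;

  public JceRpcCipher(byte[] keyBytes, byte[] ivBytes) {
    this.key = new SecretKeySpec(keyBytes, "AES");
    this.iv = new IvParameterSpec(ivBytes);
  }

  public byte[] wrap(byte[] plain) throws Exception {
    // AES-NI intrinsics are used by the JVM when available.
    Cipher c = Cipher.getInstance("AES/CTR/NoPadding");
    c.init(Cipher.ENCRYPT_MODE, key, iv);
    return c.doFinal(plain);
  }

  public byte[] unwrap(byte[] wrapped) throws Exception {
    Cipher c = Cipher.getInstance("AES/CTR/NoPadding");
    c.init(Cipher.DECRYPT_MODE, key, iv);
    return c.doFinal(wrapped);
  }
}
{code}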

> Optimize Hadoop RPC encryption performance
> --
>
> Key: HADOOP-10768
> URL: https://issues.apache.org/jira/browse/HADOOP-10768
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: performance, security
>Affects Versions: 3.0.0-alpha1
>Reporter: Yi Liu
>Assignee: Dian Fu
> Attachments: HADOOP-10768.001.patch, HADOOP-10768.002.patch, Optimize 
> Hadoop RPC encryption performance.pdf
>
>
> Hadoop RPC encryption is enabled by setting {{hadoop.rpc.protection}} to 
> "privacy". It utilized SASL {{GSSAPI}} and {{DIGEST-MD5}} mechanisms for 
> secure authentication and data protection. Even {{GSSAPI}} supports using 
> AES, but without AES-NI support by default, so the encryption is slow and 
> will become bottleneck.
> After discuss with [~atm], [~tucu00] and [~umamaheswararao], we can do the 
> same optimization as in HDFS-6606. Use AES-NI with more than *20x* speedup.
> On the other hand, RPC message is small, but RPC is frequent and there may be 
> lots of RPC calls in one connection, we needs to setup benchmark to see real 
> improvement and then make a trade-off. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14559) The used FTPFileSystem in TestFTPFileSystem should be closed in each test case.

2017-06-21 Thread Hongyuan Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hongyuan Li updated HADOOP-14559:
-
Attachment: HADOOP-14559-001.patch

> The used FTPFileSystem in TestFTPFileSystem should be closed in each test 
> case.
> ---
>
> Key: HADOOP-14559
> URL: https://issues.apache.org/jira/browse/HADOOP-14559
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, test
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
>Priority: Minor
> Attachments: HADOOP-14559-001.patch
>
>
> The used FTPFileSystem in TestFTPFileSystem should be closed in each test 
> case as an improvement.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14563) LoadBalancingKMSClientProvider#warmUpEncryptedKeys swallows IOException

2017-06-21 Thread Rushabh S Shah (JIRA)
Rushabh S Shah created HADOOP-14563:
---

 Summary: LoadBalancingKMSClientProvider#warmUpEncryptedKeys 
swallows IOException
 Key: HADOOP-14563
 URL: https://issues.apache.org/jira/browse/HADOOP-14563
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.8.1
Reporter: Rushabh S Shah
Assignee: Rushabh S Shah


TestAclsEndToEnd is failing consistently in HADOOP-14521.
The reason behind it is that LoadBalancingKMSClientProvider#warmUpEncryptedKeys 
swallows the IOException, while KMSClientProvider#warmUpEncryptedKeys throws it 
all the way back to createEncryptionZone, so creation of the EZ fails.
Following are the relevant piece of code snippets.
 {code:title=KMSClientProvider.java|borderStyle=solid}
  @Override
  public void warmUpEncryptedKeys(String... keyNames)
  throws IOException {
try {
  encKeyVersionQueue.initializeQueuesForKeys(keyNames);
} catch (ExecutionException e) {
  throw new IOException(e);
}
  }
{code}

 {code:title=LoadBalancingKMSClientProvider.java|borderStyle=solid}
   // This request is sent to all providers in the load-balancing group
  @Override
  public void warmUpEncryptedKeys(String... keyNames) throws IOException {
for (KMSClientProvider provider : providers) {
  try {
provider.warmUpEncryptedKeys(keyNames);
  } catch (IOException ioe) {
LOG.error(
"Error warming up keys for provider with url"
+ "[" + provider.getKMSUrl() + "]", ioe);
  }
}
  }
{code}
In HADOOP-14521, I intend to always instantiate LoadBalancingKMSClientProvider 
even if there is only one provider, so that the retries can be applied in only 
one place.
We need to decide whether we want to fail in both cases or continue.
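
One possible shape for the "fail when nothing could be warmed up" option, shown 
only as a sketch of the idea rather than a proposed patch:

{code:title=LoadBalancingKMSClientProvider.java (sketch)|borderStyle=solid}
  // Hypothetical alternative: still warm up every provider in the group, but
  // rethrow the last IOException if none of them succeeded, so the caller
  // (e.g. createEncryptionZone) sees the failure just as with a single
  // KMSClientProvider.
  @Override
  public void warmUpEncryptedKeys(String... keyNames) throws IOException {
    IOException lastError = null;
    boolean anySucceeded = false;
    for (KMSClientProvider provider : providers) {
      try {
        provider.warmUpEncryptedKeys(keyNames);
        anySucceeded = true;
      } catch (IOException ioe) {
        lastError = ioe;
        LOG.error("Error warming up keys for provider with url"
            + "[" + provider.getKMSUrl() + "]", ioe);
      }
    }
    if (!anySucceeded && lastError != null) {
      throw lastError;
    }
  }
{code}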



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10768) Optimize Hadoop RPC encryption performance

2017-06-21 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16057543#comment-16057543
 ] 

Daryn Sharp commented on HADOOP-10768:
--

I specifically ensured the rpcv9 protocol (very early 2.x releases) is designed 
to support an rpc proxy to reduce connections, for instance to the NN.  E.g. 
every rpc packet is framed so a proxy can mux/demux the packets to clients even 
if encryption is used.  I know the sasl wrap/unwrap path is expensive but 
haven't had the cycles to improve it.

Adding encryption to the entire stream will negate the proxy capability which 
is something I think will soon be needed with very large clusters.  *-1* if 
that's what this patch does.  I'll review shortly.

> Optimize Hadoop RPC encryption performance
> --
>
> Key: HADOOP-10768
> URL: https://issues.apache.org/jira/browse/HADOOP-10768
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: performance, security
>Affects Versions: 3.0.0-alpha1
>Reporter: Yi Liu
>Assignee: Dian Fu
> Attachments: HADOOP-10768.001.patch, HADOOP-10768.002.patch, Optimize 
> Hadoop RPC encryption performance.pdf
>
>
> Hadoop RPC encryption is enabled by setting {{hadoop.rpc.protection}} to 
> "privacy". It utilized SASL {{GSSAPI}} and {{DIGEST-MD5}} mechanisms for 
> secure authentication and data protection. Even {{GSSAPI}} supports using 
> AES, but without AES-NI support by default, so the encryption is slow and 
> will become bottleneck.
> After discuss with [~atm], [~tucu00] and [~umamaheswararao], we can do the 
> same optimization as in HDFS-6606. Use AES-NI with more than *20x* speedup.
> On the other hand, RPC message is small, but RPC is frequent and there may be 
> lots of RPC calls in one connection, we needs to setup benchmark to see real 
> improvement and then make a trade-off. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14558) RPC requests on a secure cluster are 10x slower due to expensive encryption and decryption

2017-06-21 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16057529#comment-16057529
 ] 

Daryn Sharp commented on HADOOP-14558:
--

I've optimized the common code path for non-privacy.  It's true that the 
current impl creates too many copies of requests.  I've been meaning to fix 
that but it's been a very low prio.  I'll review the linked jira.

> RPC requests on a secure cluster are 10x slower due to expensive encryption 
> and decryption 
> ---
>
> Key: HADOOP-14558
> URL: https://issues.apache.org/jira/browse/HADOOP-14558
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.6.0
>Reporter: Mostafa Mokhtar
>Priority: Critical
>  Labels: impala, metadata, rpc
>
> While running a performance tests for Impala comparing secure and un-secure 
> clusters I noticed that metadata loading operations are 10x slower on a 
> cluster with Kerberos+SSL enabled. 
> hadoop.rpc.protection is set to privacy
> Any recommendations on how this can be mitigated? 10x slowdown is a big hit 
> for metadata loading. 
> The majority of the slowdown is coming from the two threads below. 
> {code}
> Stack Trace   Sample CountPercentage(%)
> org.apache.hadoop.ipc.Client$Connection.run() 5,212   46.586
>org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse()   5,203   
> 46.505
>   java.io.DataInputStream.readInt()   5,039   45.039
>  java.io.BufferedInputStream.read()   5,038   45.03
> java.io.BufferedInputStream.fill()5,038   45.03
>
> org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(byte[], int, 
> int) 5,036   45.013
>   java.io.FilterInputStream.read(byte[], int, int)5,036   
> 45.013
>  
> org.apache.hadoop.security.SaslRpcClient$WrappedInputStream.read(byte[], int, 
> int)   5,036   45.013
> 
> org.apache.hadoop.security.SaslRpcClient$WrappedInputStream.readNextRpcPacket()
>5,035   45.004
>
> com.sun.security.sasl.gsskerb.GssKrb5Base.unwrap(byte[], int, int) 4,775   
> 42.68
>   sun.security.jgss.GSSContextImpl.unwrap(byte[], 
> int, int, MessageProp)  4,775   42.68
>  
> sun.security.jgss.krb5.Krb5Context.unwrap(byte[], int, int, MessageProp) 
> 4,768   42.617
> 
> sun.security.jgss.krb5.WrapToken.getData()4,714   42.134
>
> sun.security.jgss.krb5.WrapToken.getData(byte[], int)  4,714   42.134
>   
> sun.security.jgss.krb5.WrapToken.getDataFromBuffer(byte[], int) 4,714   
> 42.134
>  
> sun.security.jgss.krb5.CipherHelper.decryptData(WrapToken, byte[], int, int, 
> byte[], int)3,083   27.556
> 
> sun.security.jgss.krb5.CipherHelper.des3KdDecrypt(WrapToken, byte[], int, 
> int, byte[], int)   3,078   27.512
>
> sun.security.krb5.internal.crypto.Des3.decryptRaw(byte[], int, byte[], 
> byte[], int, int)   3,076   27.494
>   
> sun.security.krb5.internal.crypto.dk.DkCrypto.decryptRaw(byte[], int, byte[], 
> byte[], int, int) 3,076   27.494
> {code}
> And 
> {code}
> Stack Trace   Sample CountPercentage(%)
> java.lang.Thread.run()3,379   30.202
>java.util.concurrent.ThreadPoolExecutor$Worker.run()   3,379   30.202
>   
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor$Worker)  
>   3,379   30.202
>  java.util.concurrent.FutureTask.run()3,367   30.095
> java.util.concurrent.Executors$RunnableAdapter.call() 3,367   
> 30.095
>org.apache.hadoop.ipc.Client$Connection$3.run()3,367   
> 30.095
>   java.io.DataOutputStream.flush()3,367   30.095
>  java.io.BufferedOutputStream.flush() 3,367   30.095
> java.io.BufferedOutputStream.flushBuffer()3,367   
> 30.095
>
> org.apache.hadoop.security.SaslRpcClient$WrappedOutputStream.write(byte[], 
> int, int)   3,367   30.095
>   
> com.sun.security.sasl.gsskerb.GssKrb5Base.wrap(byte[], int, int)3,281 
>   29.326
>  
> sun.security.jgss.GSSContextImpl.wrap(byte[], int, int, MessageProp) 3,281   
> 29.326
> 
> sun.security.jgss.krb5.Krb5Context.wrap(byte[], int, int, MessageProp)
> 3,280   

[jira] [Commented] (HADOOP-14559) The used FTPFileSystem in TestFTPFileSystem should be closed in each test case.

2017-06-21 Thread Hongyuan Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16057399#comment-16057399
 ] 

Hongyuan Li commented on HADOOP-14559:
--

Thanks for your advice. [~ste...@apache.org]. Will do it soon.

> The used FTPFileSystem in TestFTPFileSystem should be closed in each test 
> case.
> ---
>
> Key: HADOOP-14559
> URL: https://issues.apache.org/jira/browse/HADOOP-14559
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, test
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
>Priority: Minor
>
> The used FTPFileSystem in TestFTPFileSystem should be closed in each test 
> case as an improvement.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14543) Should use getAversion() while setting the zkacl

2017-06-21 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16057396#comment-16057396
 ] 

Brahma Reddy Battula commented on HADOOP-14543:
---

bq.The Zookeeper#setACL() method sets the version number of the SetACLRequest. 
So ideally, we should be setting the version only and not the aversion.
Data and ACL versions are modified independently. Here {{version}} is just an 
argument which is passed from the client and added to the request.

[Java doc here 
|https://github.com/apache/zookeeper/blob/master/src/java/main/org/apache/zookeeper/ZooKeeper.java#L2368]
{noformat} 
* Set the ACL for the node of the given path if such a node exists and the
 * given version matches the version of the node. Return the stat of the
 * node.
{noformat} 
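
For clarity, a one-line sketch of the change this JIRA proposes (not the 
attached patch itself): read the node's {{Stat}} and pass its ACL version, 
rather than its data version, to {{setACL}}:

{code}
// Sketch: use the ACL version (aversion), not the data version, when
// updating the ACLs on the znode.
Stat stat = new Stat();
List<ACL> currentAcls = zkClient.getACL(path, stat);
zkClient.setACL(path, zkAcl, stat.getAversion());
{code}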

> Should use getAversion() while setting the zkacl
> 
>
> Key: HADOOP-14543
> URL: https://issues.apache.org/jira/browse/HADOOP-14543
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HADOOP-14543.patch
>
>
> While setting the zk ACL we used {color:red}{{getVersion()}}{color}, which is 
> the dataVersion. Ideally we should use {{{color:#14892c}getAversion{color}()}}. If 
> there is any ACL change (i.e. realm change/..), we set the ACL with the 
> dataversion, which will cause {color:#d04437}BADVersion {color}and 
> {color:#d04437}*the process will not start*{color}. See 
> [here|https://issues.apache.org/jira/browse/HDFS-11403?focusedCommentId=16051804=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16051804]
> {{zkClient.setACL(path, zkAcl, stat.getVersion());}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14560) WebHDFS: possibility to specify socket backlog size

2017-06-21 Thread Alexander Krasheninnikov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Krasheninnikov updated HADOOP-14560:
--
Component/s: common

> WebHDFS: possibility to specify socket backlog size
> ---
>
> Key: HADOOP-14560
> URL: https://issues.apache.org/jira/browse/HADOOP-14560
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common
>Reporter: Alexander Krasheninnikov
>  Labels: webhdfs
>
> While operating WebHDFS at Badoo, we've faced an issue: the hardcoded size of 
> the socket backlog (128) is not enough for our purposes.
> When performing ~600 concurrent requests, clients receive "Connection 
> refused" errors.
> We are proposing a patch to make this backlog size configurable.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14495) Add set options interface to FSDataOutputStreamBuilder

2017-06-21 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16057369#comment-16057369
 ] 

Steve Loughran commented on HADOOP-14495:
-

Looks nice!

# good (implicit) question about what to do if a mandatory is overridden with 
an optional. I go with your policy: downgrade.
# we can/should replace {{HadoopIllegalArgumentException}} with the base 
{{IllegalArgumentException}}, for better {{Preconditions}} checks.
# in {{TestLocalFileSystem}}, use {{LambdaTestUtils.intercept}} for your 
assertion (see the sketch after this list)...the test code is a good place to 
start playing with closures.
# {{FileSystem}}  4150: does making that constructor package-private complicate 
life for subclasses? 
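
As a rough sketch only (the builder method and option names here are 
placeholders, not the patch's actual API), the intercept-based assertion could 
look something like:

{code}
// Hypothetical test fragment using org.apache.hadoop.test.LambdaTestUtils:
// expect the build() call to fail with an IllegalArgumentException.
intercept(IllegalArgumentException.class,
    () -> fs.createFile(path)
        .must("fs.example.unsupported.option", true)
        .build());
{code}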

> Add set options interface to FSDataOutputStreamBuilder 
> ---
>
> Key: HADOOP-14495
> URL: https://issues.apache.org/jira/browse/HADOOP-14495
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HADOOP-14495.00.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14457) create() does not notify metadataStore of parent directories or ensure they're not existing files

2017-06-21 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16057360#comment-16057360
 ] 

Steve Loughran commented on HADOOP-14457:
-

This is becoming an interesting little architectural issue

* I concur with the "auto-create parent dirs is bad" opinion; In the evolving 
builder API for new files, HADOOP-14365, I want it to *not* create parent dirs, 
so users of the new API can call mkdir first if they really want it.
* we should consider whether {{createNonRecursive()}} is actually lined up 
right here, or whether it needs changes too. (And tests; created HADOOP-14562 
there).
* we need to have an efficient codepath for the mkdirs/create sequence. One 
thing to consider here: if done correctly, in auth mode, the existence of a 
parent is sufficient to guarantee the existence of all its parents. Therefore 
you will only need to go up the tree for all nonexistent parents (see the 
sketch at the end of this comment).
* ...and line up for that builder. So maybe the core inner create operation 
would take flags indicating parent dir policy, if create: walk up creating 
parents. If not: fail fast if parent != dir.
* As noted, multiple round trips are bad...not just in perf but in billable 
IOPs.
* and w.r.t capability checks, static interface capabilities cause problems in 
subclasses, witness {{Syncable}}. Some dynamic probe is the only solution agile 
enough to cope, IMO.

So: how about we optimise for the create-parent-must-exist and 
create-parent-probably-exists use cases, and treat the other one (no parent 
tree) as an outlier we can make expensive? 
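
To make the "only walk up over nonexistent parents" point concrete, a sketch of 
the sort of loop involved; the helper names here ({{makeDirStatus}} in 
particular) are placeholders rather than the real S3Guard code:

{code}
// Sketch: record ancestor directories in the MetadataStore, stopping at the
// first ancestor that is already known (in auth mode its own ancestors are
// then guaranteed to be known too).
Path parent = path.getParent();
while (parent != null && !parent.isRoot()) {
  PathMetadata meta = metadataStore.get(parent);
  if (meta != null && !meta.isDeleted()) {
    break;
  }
  metadataStore.put(new PathMetadata(makeDirStatus(parent)));
  parent = parent.getParent();
}
{code}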

> create() does not notify metadataStore of parent directories or ensure 
> they're not existing files
> -
>
> Key: HADOOP-14457
> URL: https://issues.apache.org/jira/browse/HADOOP-14457
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-14457-HADOOP-13345.001.patch, 
> HADOOP-14457-HADOOP-13345.002.patch, HADOOP-14457-HADOOP-13345.003.patch, 
> HADOOP-14457-HADOOP-13345.004.patch, HADOOP-14457-HADOOP-13345.005.patch, 
> HADOOP-14457-HADOOP-13345.006.patch, HADOOP-14457-HADOOP-13345.007.patch, 
> HADOOP-14457-HADOOP-13345.008.patch, HADOOP-14457-HADOOP-13345.009.patch
>
>
> Not a great test yet, but it at least reliably demonstrates the issue. 
> LocalMetadataStore will sometimes erroneously report that a directory is 
> empty with isAuthoritative = true when it *definitely* has children the 
> metadatastore should know about. It doesn't appear to happen if the children 
> are just directories. The fact that it's returning an empty listing is 
> concerning, but the fact that it says it's authoritative *might* be a second 
> bug.
> {code}
> diff --git 
> a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
>  
> b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
> index 78b3970..1821d19 100644
> --- 
> a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
> +++ 
> b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
> @@ -965,7 +965,7 @@ public boolean hasMetadataStore() {
>}
>  
>@VisibleForTesting
> -  MetadataStore getMetadataStore() {
> +  public MetadataStore getMetadataStore() {
>  return metadataStore;
>}
>  
> diff --git 
> a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractRename.java
>  
> b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractRename.java
> index 4339649..881bdc9 100644
> --- 
> a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractRename.java
> +++ 
> b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractRename.java
> @@ -23,6 +23,11 @@
>  import org.apache.hadoop.fs.contract.AbstractFSContract;
>  import org.apache.hadoop.fs.FileSystem;
>  import org.apache.hadoop.fs.Path;
> +import org.apache.hadoop.fs.s3a.S3AFileSystem;
> +import org.apache.hadoop.fs.s3a.Tristate;
> +import org.apache.hadoop.fs.s3a.s3guard.DirListingMetadata;
> +import org.apache.hadoop.fs.s3a.s3guard.MetadataStore;
> +import org.junit.Test;
>  
>  import static org.apache.hadoop.fs.contract.ContractTestUtils.dataset;
>  import static org.apache.hadoop.fs.contract.ContractTestUtils.writeDataset;
> @@ -72,4 +77,24 @@ public void testRenameDirIntoExistingDir() throws 
> Throwable {
>  boolean rename = fs.rename(srcDir, destDir);
>  assertFalse("s3a doesn't support rename to non-empty directory", rename);
>}
> +
> +  @Test
> +  public void testMkdirPopulatesFileAncestors() throws Exception {
> +final FileSystem fs = 

[jira] [Created] (HADOOP-14562) Contract tests to include coverage of {{createNonRecursive()}}

2017-06-21 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-14562:
---

 Summary: Contract tests to include coverage of 
{{createNonRecursive()}} 
 Key: HADOOP-14562
 URL: https://issues.apache.org/jira/browse/HADOOP-14562
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Affects Versions: 2.8.1
Reporter: Steve Loughran
Priority: Minor


There's no contract test for {{createNonRecursive()}}  as far as my IDE can 
see, just one in HDFS.

# review FS spec, make sure it is covered there.
# add it to the abstract creation contract test.
# fix anything which fails.

At a guess, the tests we need are
* verify it fails if there's no parent
* verify it works on root dir
* verify it fails if the parent is a file

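For the first of these, a rough sketch of what the contract test might look 
like; the helper methods follow the existing AbstractFSContractTestBase 
conventions, but the exact wiring is assumed:

{code}
// Hypothetical contract test: createNonRecursive() must fail when the
// parent directory does not exist.
@Test
public void testCreateNonRecursiveNoParent() throws Throwable {
  FileSystem fs = getFileSystem();
  Path target = path("missing-parent/dir/file");
  try {
    fs.createNonRecursive(target, true, 4096, (short) 1, 4096, null).close();
    fail("expected createNonRecursive to fail when the parent is absent");
  } catch (IOException expected) {
    // expected outcome; rethrows anything that looks wrong
    handleExpectedException(expected);
  }
}
{code}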



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14558) RPC requests on a secure cluster are 10x slower due to expensive encryption and decryption

2017-06-21 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16057343#comment-16057343
 ] 

Steve Loughran edited comment on HADOOP-14558 at 6/21/17 11:28 AM:
---

Duplicate of HADOOP-10768.

Problem here is that the java encryption APIs allocate new byte buffers, which 
gets expensive fast.

If it were to be fixed, the obvious solution would be to have an option for 
native encryption and a private API which would encrypt either in-place or to a 
preallocated buffer, the latter being able to compensate for the fact that 
encrypted blocks often get their sizes rounded up.

looks the like HADOOP-10768 patch could be viable, so its a matter of 
marshalling all the IPC reviewers needed to be happy with it. It's a big patch 
to a core piece of the code, so will no doubt face inertia —but would be 
invaluable


was (Author: ste...@apache.org):
Duplicate of HADOOP-10768.

Problem here is that the java encryption APIs allocate new byte buffers, get 
expensive fast.

If it were to be fixed, the obvious solution would be to have an option for 
native encryption and a private API which would encrypt either in-place or to a 
preallocated buffer, the latter being able to compensate for the fact that 
encrypted blocks often get their sizes rounded up. Talk to the intel developers 
and see if they have time to play with this?

> RPC requests on a secure cluster are 10x slower due to expensive encryption 
> and decryption 
> ---
>
> Key: HADOOP-14558
> URL: https://issues.apache.org/jira/browse/HADOOP-14558
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.6.0
>Reporter: Mostafa Mokhtar
>Priority: Critical
>  Labels: impala, metadata, rpc
>
> While running a performance tests for Impala comparing secure and un-secure 
> clusters I noticed that metadata loading operations are 10x slower on a 
> cluster with Kerberos+SSL enabled. 
> hadoop.rpc.protection is set to privacy
> Any recommendations on how this can be mitigated? 10x slowdown is a big hit 
> for metadata loading. 
> The majority of the slowdown is coming from the two threads below. 
> {code}
> Stack Trace   Sample CountPercentage(%)
> org.apache.hadoop.ipc.Client$Connection.run() 5,212   46.586
>org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse()   5,203   
> 46.505
>   java.io.DataInputStream.readInt()   5,039   45.039
>  java.io.BufferedInputStream.read()   5,038   45.03
> java.io.BufferedInputStream.fill()5,038   45.03
>
> org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(byte[], int, 
> int) 5,036   45.013
>   java.io.FilterInputStream.read(byte[], int, int)5,036   
> 45.013
>  
> org.apache.hadoop.security.SaslRpcClient$WrappedInputStream.read(byte[], int, 
> int)   5,036   45.013
> 
> org.apache.hadoop.security.SaslRpcClient$WrappedInputStream.readNextRpcPacket()
>5,035   45.004
>
> com.sun.security.sasl.gsskerb.GssKrb5Base.unwrap(byte[], int, int) 4,775   
> 42.68
>   sun.security.jgss.GSSContextImpl.unwrap(byte[], 
> int, int, MessageProp)  4,775   42.68
>  
> sun.security.jgss.krb5.Krb5Context.unwrap(byte[], int, int, MessageProp) 
> 4,768   42.617
> 
> sun.security.jgss.krb5.WrapToken.getData()4,714   42.134
>
> sun.security.jgss.krb5.WrapToken.getData(byte[], int)  4,714   42.134
>   
> sun.security.jgss.krb5.WrapToken.getDataFromBuffer(byte[], int) 4,714   
> 42.134
>  
> sun.security.jgss.krb5.CipherHelper.decryptData(WrapToken, byte[], int, int, 
> byte[], int)3,083   27.556
> 
> sun.security.jgss.krb5.CipherHelper.des3KdDecrypt(WrapToken, byte[], int, 
> int, byte[], int)   3,078   27.512
>
> sun.security.krb5.internal.crypto.Des3.decryptRaw(byte[], int, byte[], 
> byte[], int, int)   3,076   27.494
>   
> sun.security.krb5.internal.crypto.dk.DkCrypto.decryptRaw(byte[], int, byte[], 
> byte[], int, int) 3,076   27.494
> {code}
> And 
> {code}
> Stack Trace   Sample CountPercentage(%)
> java.lang.Thread.run()3,379   30.202
>java.util.concurrent.ThreadPoolExecutor$Worker.run()   3,379   30.202
>   
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor$Worker)  
>   3,379   30.202
>

[jira] [Resolved] (HADOOP-14558) RPC requests on a secure cluster are 10x slower due to expensive encryption and decryption

2017-06-21 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-14558.
-
Resolution: Duplicate

Duplicate of HADOOP-10768.

Problem here is that the java encryption APIs allocate new byte buffers, which 
gets expensive fast.

If it were to be fixed, the obvious solution would be to have an option for 
native encryption and a private API which would encrypt either in-place or to a 
preallocated buffer, the latter being able to compensate for the fact that 
encrypted blocks often get their sizes rounded up. Talk to the intel developers 
and see if they have time to play with this?
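
As a purely illustrative shape for that idea (none of these names exist in 
Hadoop today), the private API might look like:

{code}
import java.io.IOException;
import java.nio.ByteBuffer;

/** Hypothetical sketch of an encrypt-into-preallocated-buffer API. */
public interface InPlaceRpcCipher {
  /** Encrypt src into dst, returning the number of bytes written. */
  int encrypt(ByteBuffer src, ByteBuffer dst) throws IOException;

  /** Decrypt src into dst, returning the number of bytes written. */
  int decrypt(ByteBuffer src, ByteBuffer dst) throws IOException;

  /** Worst-case output size for an input of the given length. */
  int maxOutputSize(int inputLength);
}
{code}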

> RPC requests on a secure cluster are 10x slower due to expensive encryption 
> and decryption 
> ---
>
> Key: HADOOP-14558
> URL: https://issues.apache.org/jira/browse/HADOOP-14558
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.6.0
>Reporter: Mostafa Mokhtar
>Priority: Critical
>  Labels: impala, metadata, rpc
>
> While running a performance tests for Impala comparing secure and un-secure 
> clusters I noticed that metadata loading operations are 10x slower on a 
> cluster with Kerberos+SSL enabled. 
> hadoop.rpc.protection is set to privacy
> Any recommendations on how this can be mitigated? 10x slowdown is a big hit 
> for metadata loading. 
> The majority of the slowdown is coming from the two threads below. 
> {code}
> Stack Trace   Sample CountPercentage(%)
> org.apache.hadoop.ipc.Client$Connection.run() 5,212   46.586
>org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse()   5,203   
> 46.505
>   java.io.DataInputStream.readInt()   5,039   45.039
>  java.io.BufferedInputStream.read()   5,038   45.03
> java.io.BufferedInputStream.fill()5,038   45.03
>
> org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(byte[], int, 
> int) 5,036   45.013
>   java.io.FilterInputStream.read(byte[], int, int)5,036   
> 45.013
>  
> org.apache.hadoop.security.SaslRpcClient$WrappedInputStream.read(byte[], int, 
> int)   5,036   45.013
> 
> org.apache.hadoop.security.SaslRpcClient$WrappedInputStream.readNextRpcPacket()
>5,035   45.004
>
> com.sun.security.sasl.gsskerb.GssKrb5Base.unwrap(byte[], int, int) 4,775   
> 42.68
>   sun.security.jgss.GSSContextImpl.unwrap(byte[], 
> int, int, MessageProp)  4,775   42.68
>  
> sun.security.jgss.krb5.Krb5Context.unwrap(byte[], int, int, MessageProp) 
> 4,768   42.617
> 
> sun.security.jgss.krb5.WrapToken.getData()4,714   42.134
>
> sun.security.jgss.krb5.WrapToken.getData(byte[], int)  4,714   42.134
>   
> sun.security.jgss.krb5.WrapToken.getDataFromBuffer(byte[], int) 4,714   
> 42.134
>  
> sun.security.jgss.krb5.CipherHelper.decryptData(WrapToken, byte[], int, int, 
> byte[], int)3,083   27.556
> 
> sun.security.jgss.krb5.CipherHelper.des3KdDecrypt(WrapToken, byte[], int, 
> int, byte[], int)   3,078   27.512
>
> sun.security.krb5.internal.crypto.Des3.decryptRaw(byte[], int, byte[], 
> byte[], int, int)   3,076   27.494
>   
> sun.security.krb5.internal.crypto.dk.DkCrypto.decryptRaw(byte[], int, byte[], 
> byte[], int, int) 3,076   27.494
> {code}
> And 
> {code}
> Stack Trace   Sample CountPercentage(%)
> java.lang.Thread.run()3,379   30.202
>java.util.concurrent.ThreadPoolExecutor$Worker.run()   3,379   30.202
>   
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor$Worker)  
>   3,379   30.202
>  java.util.concurrent.FutureTask.run()3,367   30.095
> java.util.concurrent.Executors$RunnableAdapter.call() 3,367   
> 30.095
>org.apache.hadoop.ipc.Client$Connection$3.run()3,367   
> 30.095
>   java.io.DataOutputStream.flush()3,367   30.095
>  java.io.BufferedOutputStream.flush() 3,367   30.095
> java.io.BufferedOutputStream.flushBuffer()3,367   
> 30.095
>
> org.apache.hadoop.security.SaslRpcClient$WrappedOutputStream.write(byte[], 
> int, int)   3,367   30.095
>   
> com.sun.security.sasl.gsskerb.GssKrb5Base.wrap(byte[], int, int)3,281 
>   

[jira] [Resolved] (HADOOP-14561) Making HAR mutable

2017-06-21 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-14561.
-
Resolution: Won't Fix

Problem here is that the Hadoop APIs don't let us seek() into a file and 
overwrite bits of it; all you get is append(), and that isn't supported in many 
filesystems.

Even with append(), that'd force a move to some log-structured format where all 
changes went in at the end...that'd get expensive fast if you were updating 
large files inside the HAR.

Have to close as a wontfix, unless anyone can come up with a better solution.

> Making HAR mutable
> --
>
> Key: HADOOP-14561
> URL: https://issues.apache.org/jira/browse/HADOOP-14561
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Navab
>
> HAR files are immutable, thus even minimal changes to an existing HAR file 
> result in a new HAR file.  So, making HAR mutable would be beneficial.   



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


