[jira] [Commented] (HADOOP-14581) Restrict setOwner to list of user when security is enabled in wasb

2017-07-05 Thread Varada Hemeswari (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075937#comment-16075937
 ] 

Varada Hemeswari commented on HADOOP-14581:
---

[~steve_l], [~liuml07], can you please take a look at the recent patch?

> Restrict setOwner to list of user when security is enabled in wasb
> --
>
> Key: HADOOP-14581
> URL: https://issues.apache.org/jira/browse/HADOOP-14581
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.0.0-alpha3
>Reporter: Varada Hemeswari
>Assignee: Varada Hemeswari
>  Labels: azure, fs, secure, wasb
> Attachments: HADOOP-14581.1.patch, HADOOP-14581.2.patch
>
>
> Currently in Azure FS, the setOwner API is exposed to all users accessing the 
> file system.
> When authorization is enabled, access to some files/folders is given to 
> particular users based on whether the user is the owner of the file.
> So setOwner has to be restricted to a limited set of users to prevent users 
> from exploiting owner-based authorization of files and folders.
> This introduces a new config called fs.azure.chown.allowed.userlist, which is 
> a comma-separated list of users who are allowed to perform the chown operation 
> when authorization is enabled.
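
A minimal sketch of how such a gate could look, assuming a small helper class; 
only the fs.azure.chown.allowed.userlist key comes from this issue, all other 
names are illustrative:

{code}
// Hypothetical helper enforcing the allowed-user list described above.
// Only the config key is from this JIRA; everything else is illustrative.
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.AccessControlException;

class ChownAllowedUsers {
  static final String CHOWN_ALLOWED_KEY = "fs.azure.chown.allowed.userlist";

  private final Set<String> allowed;

  ChownAllowedUsers(Configuration conf) {
    // Comma-separated list, e.g. "hdfs,yarn,hive"; empty means nobody may chown.
    allowed = new HashSet<>(
        Arrays.asList(conf.getTrimmedStrings(CHOWN_ALLOWED_KEY)));
  }

  void checkSetOwnerAllowed(String user) throws AccessControlException {
    if (!allowed.contains(user)) {
      throw new AccessControlException(
          "user " + user + " is not allowed to perform setOwner");
    }
  }
}
{code}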






[jira] [Commented] (HADOOP-13435) Add thread local mechanism for aggregating file system storage stats

2017-07-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075909#comment-16075909
 ] 

Hadoop QA commented on HADOOP-13435:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m 
29s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
2s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
15s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
55s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
13s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  2s{color} | {color:orange} root: The patch generated 29 new + 118 unchanged 
- 0 fixed = 147 total (was 118) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m  5s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 64m 37s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}177m 13s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestKDiag |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-13435 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12875842/HADOOP-13435.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d4adef6c3413 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 946dd25 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12721/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12721/artifact/patchprocess/diff-checkstyle-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12721/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
| unit | 

[jira] [Commented] (HADOOP-14624) Add GenericTestUtils.DelayAnswer that accept slf4j logger API

2017-07-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075892#comment-16075892
 ] 

Hadoop QA commented on HADOOP-14624:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 20m 
59s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
18s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 11m 18s{color} 
| {color:red} root generated 17 new + 1346 unchanged - 0 fixed = 1363 total 
(was 1346) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 17 unchanged - 1 fixed = 17 total (was 18) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
57s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 82m 27s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14624 |
| GITHUB PR | https://github.com/apache/hadoop/pull/245 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f04ed3c5cb93 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 946dd25 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| javac | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12723/artifact/patchprocess/diff-compile-javac-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12723/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12723/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add GenericTestUtils.DelayAnswer that accept slf4j logger API
> -
>
> Key: HADOOP-14624
> URL: https://issues.apache.org/jira/browse/HADOOP-14624
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Wenxin He
>Assignee: Wenxin He
> Attachments: HADOOP-14624.001.patch, HADOOP-14624.002.patch
>
>
> Split from HADOOP-14539.
> Currently GenericTestUtils.DelayAnswer only accepts the commons-logging logger 
> API. Now that we are migrating the APIs to slf4j, the slf4j logger API should 
> be accepted as well.
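
As a rough sketch of the shape of the change (illustrative only, not the actual 
patch), an slf4j-accepting constructor could sit alongside the commons-logging 
one:

{code}
// Illustrative sketch: accept either logging API during the migration.
import org.apache.commons.logging.Log;
import org.slf4j.Logger;

class DelayAnswerSketch {
  private final Log clLog;     // legacy commons-logging path
  private final Logger slfLog; // new slf4j path

  DelayAnswerSketch(Log log) {
    this.clLog = log;
    this.slfLog = null;
  }

  DelayAnswerSketch(Logger log) {
    this.clLog = null;
    this.slfLog = log;
  }

  void info(String msg) {
    if (slfLog != null) {
      slfLog.info(msg);
    } else {
      clLog.info(msg);
    }
  }
}
{code}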




[jira] [Commented] (HADOOP-14587) Use GenericTestUtils.setLogLevel when available in hadoop-common

2017-07-05 Thread Wenxin He (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075848#comment-16075848
 ] 

Wenxin He commented on HADOOP-14587:


The test failures are unrelated; they pass in my local runs.

> Use GenericTestUtils.setLogLevel when available in hadoop-common
> 
>
> Key: HADOOP-14587
> URL: https://issues.apache.org/jira/browse/HADOOP-14587
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Wenxin He
>Assignee: Wenxin He
> Attachments: HADOOP-14587.001.patch, HADOOP-14587.002.patch, 
> HADOOP-14587.003.patch, HADOOP-14587.004.patch, HADOOP-14587.005.patch, 
> HADOOP-14587.006.patch, HADOOP-14587.007.patch, HADOOP-14587.008.patch
>
>
> Based on Brahma's comment in HADOOP-14296, it's better to use 
> GenericTestUtils.setLogLevel where possible to make the migration easier.
> Based on Akira Ajisaka's comment in HADOOP-14549, create a separate JIRA for 
> the hadoop-common change.






[jira] [Commented] (HADOOP-14587) Use GenericTestUtils.setLogLevel when available in hadoop-common

2017-07-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075843#comment-16075843
 ] 

Hadoop QA commented on HADOOP-14587:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 23 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
29s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
26s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 11m 26s{color} 
| {color:red} root generated 17 new + 1342 unchanged - 4 fixed = 1359 total 
(was 1346) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} hadoop-common-project: The patch generated 0 new + 
480 unchanged - 2 fixed = 480 total (was 482) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 13s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
30s{color} | {color:green} hadoop-nfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 71m 22s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestKDiag |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14587 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12875844/HADOOP-14587.008.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 4b43e0c400ac 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 946dd25 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| javac | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12722/artifact/patchprocess/diff-compile-javac-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12722/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12722/testReport/ |
| modules | C: hadoop-common-project/hadoop-common 
hadoop-common-project/hadoop-nfs U: hadoop-common-project |
| Console output 

[jira] [Updated] (HADOOP-14624) Add GenericTestUtils.DelayAnswer that accept slf4j logger API

2017-07-05 Thread Wenxin He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wenxin He updated HADOOP-14624:
---
Status: Patch Available  (was: Open)

> Add GenericTestUtils.DelayAnswer that accept slf4j logger API
> -
>
> Key: HADOOP-14624
> URL: https://issues.apache.org/jira/browse/HADOOP-14624
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Wenxin He
>Assignee: Wenxin He
> Attachments: HADOOP-14624.001.patch, HADOOP-14624.002.patch
>
>
> Split from HADOOP-14539.
> Currently GenericTestUtils.DelayAnswer only accepts the commons-logging logger 
> API. Now that we are migrating the APIs to slf4j, the slf4j logger API should 
> be accepted as well.






[jira] [Updated] (HADOOP-14624) Add GenericTestUtils.DelayAnswer that accept slf4j logger API

2017-07-05 Thread Wenxin He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wenxin He updated HADOOP-14624:
---
Attachment: HADOOP-14624.002.patch

002: Fixed checkstyle issue.

> Add GenericTestUtils.DelayAnswer that accept slf4j logger API
> -
>
> Key: HADOOP-14624
> URL: https://issues.apache.org/jira/browse/HADOOP-14624
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Wenxin He
>Assignee: Wenxin He
> Attachments: HADOOP-14624.001.patch, HADOOP-14624.002.patch
>
>
> Split from HADOOP-14539.
> Currently GenericTestUtils.DelayAnswer only accepts the commons-logging logger 
> API. Now that we are migrating the APIs to slf4j, the slf4j logger API should 
> be accepted as well.






[jira] [Updated] (HADOOP-14624) Add GenericTestUtils.DelayAnswer that accept slf4j logger API

2017-07-05 Thread Wenxin He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wenxin He updated HADOOP-14624:
---
Status: Open  (was: Patch Available)

> Add GenericTestUtils.DelayAnswer that accept slf4j logger API
> -
>
> Key: HADOOP-14624
> URL: https://issues.apache.org/jira/browse/HADOOP-14624
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Wenxin He
>Assignee: Wenxin He
> Attachments: HADOOP-14624.001.patch, HADOOP-14624.002.patch
>
>
> Split from HADOOP-14539.
> Currently GenericTestUtils.DelayAnswer only accepts the commons-logging logger 
> API. Now that we are migrating the APIs to slf4j, the slf4j logger API should 
> be accepted as well.






[jira] [Updated] (HADOOP-14587) Use GenericTestUtils.setLogLevel when available in hadoop-common

2017-07-05 Thread Wenxin He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wenxin He updated HADOOP-14587:
---
Status: Open  (was: Patch Available)

> Use GenericTestUtils.setLogLevel when available in hadoop-common
> 
>
> Key: HADOOP-14587
> URL: https://issues.apache.org/jira/browse/HADOOP-14587
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Wenxin He
>Assignee: Wenxin He
> Attachments: HADOOP-14587.001.patch, HADOOP-14587.002.patch, 
> HADOOP-14587.003.patch, HADOOP-14587.004.patch, HADOOP-14587.005.patch, 
> HADOOP-14587.006.patch, HADOOP-14587.007.patch, HADOOP-14587.008.patch
>
>
> Based on Brahma's comment in HADOOP-14296, it's better to use 
> GenericTestUtils.setLogLevel where possible to make the migration easier.
> Based on Akira Ajisaka's comment in HADOOP-14549, create a separate JIRA for 
> the hadoop-common change.






[jira] [Updated] (HADOOP-14587) Use GenericTestUtils.setLogLevel when available in hadoop-common

2017-07-05 Thread Wenxin He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wenxin He updated HADOOP-14587:
---
Attachment: HADOOP-14587.008.patch

008.patch:
change {{assertTrue(boolean)}} to {{assertEquals(expected, actual)}}.

Thanks, [~ajisakaa].
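
For context, the gain is purely in the failure message; a hypothetical snippet 
(log4j {{Level}} used for illustration, not code from the patch):

{code}
// Hypothetical example: why assertEquals is preferred over assertTrue here.
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import org.apache.log4j.Level;
import org.apache.log4j.Logger;
import org.junit.Test;

public class LogLevelAssertionExample {
  @Test
  public void testEffectiveLevel() {
    Logger logger = Logger.getLogger("example");
    logger.setLevel(Level.DEBUG);

    // On failure, this reports only "java.lang.AssertionError", with no values:
    assertTrue(Level.DEBUG.equals(logger.getEffectiveLevel()));

    // On failure, this reports "expected:<DEBUG> but was:<...>":
    assertEquals(Level.DEBUG, logger.getEffectiveLevel());
  }
}
{code}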

> Use GenericTestUtils.setLogLevel when available in hadoop-common
> 
>
> Key: HADOOP-14587
> URL: https://issues.apache.org/jira/browse/HADOOP-14587
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Wenxin He
>Assignee: Wenxin He
> Attachments: HADOOP-14587.001.patch, HADOOP-14587.002.patch, 
> HADOOP-14587.003.patch, HADOOP-14587.004.patch, HADOOP-14587.005.patch, 
> HADOOP-14587.006.patch, HADOOP-14587.007.patch, HADOOP-14587.008.patch
>
>
> Based on Brahma's comment in HADOOP-14296, it's better to use 
> GenericTestUtils.setLogLevel where possible to make the migration easier.
> Based on Akira Ajisaka's comment in HADOOP-14549, create a separate JIRA for 
> the hadoop-common change.






[jira] [Updated] (HADOOP-14587) Use GenericTestUtils.setLogLevel when available in hadoop-common

2017-07-05 Thread Wenxin He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wenxin He updated HADOOP-14587:
---
Status: Patch Available  (was: Open)

> Use GenericTestUtils.setLogLevel when available in hadoop-common
> 
>
> Key: HADOOP-14587
> URL: https://issues.apache.org/jira/browse/HADOOP-14587
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Wenxin He
>Assignee: Wenxin He
> Attachments: HADOOP-14587.001.patch, HADOOP-14587.002.patch, 
> HADOOP-14587.003.patch, HADOOP-14587.004.patch, HADOOP-14587.005.patch, 
> HADOOP-14587.006.patch, HADOOP-14587.007.patch, HADOOP-14587.008.patch
>
>
> Based on Brahma's comment in HADOOP-14296, it's better to use 
> GenericTestUtils.setLogLevel where possible to make the migration easier.
> Based on Akira Ajisaka's comment in HADOOP-14549, create a separate JIRA for 
> the hadoop-common change.






[jira] [Commented] (HADOOP-14587) Use GenericTestUtils.setLogLevel when available in hadoop-common

2017-07-05 Thread Wenxin He (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075757#comment-16075757
 ] 

Wenxin He commented on HADOOP-14587:


Yes, the verification errors from assertEquals(expected, actual) are more 
readable in this situation. A new patch is coming soon.

> Use GenericTestUtils.setLogLevel when available in hadoop-common
> 
>
> Key: HADOOP-14587
> URL: https://issues.apache.org/jira/browse/HADOOP-14587
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Wenxin He
>Assignee: Wenxin He
> Attachments: HADOOP-14587.001.patch, HADOOP-14587.002.patch, 
> HADOOP-14587.003.patch, HADOOP-14587.004.patch, HADOOP-14587.005.patch, 
> HADOOP-14587.006.patch, HADOOP-14587.007.patch
>
>
> Based on Brahma's comment in HADOOP-14296, it's better to use 
> GenericTestUtils.setLogLevel where possible to make the migration easier.
> Based on Akira Ajisaka's comment in HADOOP-14549, create a separate JIRA for 
> the hadoop-common change.






[jira] [Updated] (HADOOP-13435) Add thread local mechanism for aggregating file system storage stats

2017-07-05 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13435:
---
Attachment: HADOOP-13435.003.patch

V3 fixes the findbugs and checkstyle warnings.

> Add thread local mechanism for aggregating file system storage stats
> 
>
> Key: HADOOP-13435
> URL: https://issues.apache.org/jira/browse/HADOOP-13435
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HADOOP-13435.000.patch, HADOOP-13435.001.patch, 
> HADOOP-13435.002.patch, HADOOP-13435.003.patch
>
>
> As discussed in [HADOOP-13032], this is to add a thread-local mechanism for 
> aggregating file system storage stats. This class will also be used in 
> [HADOOP-13031], which is to separate the distance-oriented rack-aware read 
> bytes logic from {{FileSystemStorageStatistics}} into a new 
> DFSRackAwareStorageStatistics, as it's DFS-specific. After this patch, 
> {{FileSystemStorageStatistics}} can live without the to-be-removed 
> {{FileSystem$Statistics}} implementation.
> A unit test should also be added.
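
As a sketch of the general thread-local aggregation pattern being proposed 
(structure and names are illustrative, not from the patch):

{code}
// Illustrative thread-local aggregation: threads update an uncontended
// per-thread buffer and periodically publish into a shared total.
import java.util.concurrent.atomic.AtomicLong;

class ThreadLocalBytesRead {
  private final AtomicLong total = new AtomicLong();
  private final ThreadLocal<long[]> local =
      ThreadLocal.withInitial(() -> new long[1]);

  void addBytesRead(long n) {
    local.get()[0] += n;      // uncontended, thread-local update
  }

  void flushCurrentThread() {
    long[] buf = local.get();
    total.addAndGet(buf[0]);  // publish to the shared aggregate
    buf[0] = 0;
  }

  long getTotal() {
    return total.get();
  }
}
{code}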






[jira] [Commented] (HADOOP-14553) Add (parallelized) integration tests to hadoop-azure

2017-07-05 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075717#comment-16075717
 ] 

Aaron Fabbri commented on HADOOP-14553:
---

Integration tests passed in US West via `mvn -T 1C clean verify`:

{noformat}
Tests run: 323, Failures: 0, Errors: 0, Skipped: 70

[INFO]
[INFO] --- maven-enforcer-plugin:1.4.1:enforce (depcheck) @ hadoop-azure ---
[INFO]
[INFO] --- maven-failsafe-plugin:2.17:verify (default) @ hadoop-azure ---
[INFO] Failsafe report directory: 
/Users/fabbri/Code/hadoop/hadoop-tools/hadoop-azure/target/failsafe-reports
[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 51:46 min (Wall Clock)
{noformat}

I am +1 (nonbinding) provided we link some followup JIRAs for the common code 
refactoring and any other work you think needs to be done next.

> Add (parallelized) integration tests to hadoop-azure
> 
>
> Key: HADOOP-14553
> URL: https://issues.apache.org/jira/browse/HADOOP-14553
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-14553-001.patch, HADOOP-14553-002.patch, 
> HADOOP-14553-003.patch, HADOOP-14553-004.patch
>
>
> The Azure tests are slow to run because they are serialized, and since they 
> are all named Test* there is no clear differentiation between unit tests, 
> which Jenkins can run, and integration tests, which it can't.
> Move the Azure tests {{Test*}} to integration tests {{ITest*}} and parallelize 
> them (which includes having separate paths for every test suite). The code in 
> hadoop-aws's POM shows what to do.






[jira] [Commented] (HADOOP-14553) Add (parallelized) integration tests to hadoop-azure

2017-07-05 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075714#comment-16075714
 ] 

Aaron Fabbri commented on HADOOP-14553:
---

Thank you for doing this work [~ste...@apache.org].  I'm running the tests in 
US West now.  

Overall this patch contains a lot of goodness.  Besides the ITest refactoring I 
see a lot of improved test cleanup and some reduced code duplication.  I wonder 
if some of the common code pasted from S3A could be factored out (huge files 
and test utils classes).  That said, this is an improvement overall so I'd be 
fine with some lower-priority JIRAs to follow up on factoring out the common 
hugefiles and testutils stuff.

{noformat}
+<forkCount>1</forkCount>
+<reuseForks>false</reuseForks>
+<argLine>${maven-surefire-plugin.argLine} -DminiClusterDedicatedDirs=true</argLine>
{noformat}

Is this a temporary workaround for a unit-test parallelization issue?

{noformat}
+public class ITestNativeAzureFileSystemContractLive extends

-  @Before
+@Before
   public void setUp() throws Exception {
{noformat}

Formatting nit at Before annotation.

{noformat}
-public class TestNativeAzureFileSystemContractMocked extends
+/**
+ * Mocked testing of FileSystemContractBaseTest.
+ * This isn't an IT, but making it so makes it a lot faster for now.
+ */
+public class ITestNativeAzureFileSystemContractMocked extends
{noformat}

Just curious: why is it faster as an ITest?

> Add (parallelized) integration tests to hadoop-azure
> 
>
> Key: HADOOP-14553
> URL: https://issues.apache.org/jira/browse/HADOOP-14553
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-14553-001.patch, HADOOP-14553-002.patch, 
> HADOOP-14553-003.patch, HADOOP-14553-004.patch
>
>
> The Azure tests are slow to run because they are serialized, and since they 
> are all named Test* there is no clear differentiation between unit tests, 
> which Jenkins can run, and integration tests, which it can't.
> Move the Azure tests {{Test*}} to integration tests {{ITest*}} and parallelize 
> them (which includes having separate paths for every test suite). The code in 
> hadoop-aws's POM shows what to do.






[jira] [Commented] (HADOOP-14627) Enable new features of ADLS SDK (MSI, Device Code auth)

2017-07-05 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075706#comment-16075706
 ] 

Mingliang Liu commented on HADOOP-14627:


The current change is good; I was just proposing the general idea. Thanks!

> Enable new features of ADLS SDK (MSI, Device Code auth)
> ---
>
> Key: HADOOP-14627
> URL: https://issues.apache.org/jira/browse/HADOOP-14627
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/adl
> Environment: MSI Change applies only to Hadoop running in an Azure VM
>Reporter: Atul Sikaria
>Assignee: Atul Sikaria
> Attachments: HADOOP-14627-001.patch
>
>
> This change is to upgrade the Hadoop ADLS connector to enable new auth 
> features exposed by the ADLS Java SDK.
> Specifically:
> MSI Tokens: MSI (Managed Service Identity) is a way to provide an identity to 
> an Azure Service. In the case of VMs, they can be used to give an identity to 
> a VM deployment. This simplifies managing Service Principals, since the creds 
> don’t have to be managed in core-site files anymore. The way this works is 
> that during VM deployment, the ARM (Azure Resource Manager) template needs to 
> be modified to enable MSI. Once deployed, the MSI extension runs a service on 
> the VM that exposes a token endpoint to http://localhost at a port specified 
> in the template. The SDK has a new TokenProvider to fetch the token from this 
> local endpoint. This change would expose that TokenProvider as an auth option.
> DeviceCode auth: This enables a token to be obtained from an interactive 
> login. The user is given a URL and a token to use on the login screen. The user 
> can use the token to log in from any device. Once the login is done, the token 
> that is obtained is in the name of the user who logged in. Note that because 
> of the interactive login involved, this is not very suitable for job 
> scenarios, but can work for ad-hoc scenarios like running “hdfs dfs” commands.






[jira] [Commented] (HADOOP-14627) Enable new features of ADLS SDK (MSI, Device Code auth)

2017-07-05 Thread Atul Sikaria (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075698#comment-16075698
 ] 

Atul Sikaria commented on HADOOP-14627:
---

Thanks [~liuml07], I thought about that too. However, I went with one JIRA this 
time because: 1) the combined change is small (~30-40 lines of code), so it was 
small enough already; 2) the changes for the two auth methods were very 
similar, so I thought it would be easier to review them together; 3) the 
biggest change is to the doc file (index.md), which is easier to see as a final 
doc containing both, rather than as individual isolated changes for each.

Having said that, this is my perspective (from the patch creator's side), so I 
am only guessing at how easy the change is for you to read. Let me know if the 
current change is not as easy to review as I thought; if so, I can break it up.



> Enable new features of ADLS SDK (MSI, Device Code auth)
> ---
>
> Key: HADOOP-14627
> URL: https://issues.apache.org/jira/browse/HADOOP-14627
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/adl
> Environment: MSI Change applies only to Hadoop running in an Azure VM
>Reporter: Atul Sikaria
>Assignee: Atul Sikaria
> Attachments: HADOOP-14627-001.patch
>
>
> This change is to upgrade the Hadoop ADLS connector to enable new auth 
> features exposed by the ADLS Java SDK.
> Specifically:
> MSI Tokens: MSI (Managed Service Identity) is a way to provide an identity to 
> an Azure Service. In the case of VMs, they can be used to give an identity to 
> a VM deployment. This simplifies managing Service Principals, since the creds 
> don’t have to be managed in core-site files anymore. The way this works is 
> that during VM deployment, the ARM (Azure Resource Manager) template needs to 
> be modified to enable MSI. Once deployed, the MSI extension runs a service on 
> the VM that exposes a token endpoint on http://localhost at a port specified 
> in the template. The SDK has a new TokenProvider to fetch the token from this 
> local endpoint. This change would expose that TokenProvider as an auth option.
> DeviceCode auth: This enables a token to be obtained from an interactive 
> login. The user is given a URL and a token to use on the login screen. The user 
> can use the token to log in from any device. Once the login is done, the token 
> that is obtained is in the name of the user who logged in. Note that because 
> of the interactive login involved, this is not very suitable for job 
> scenarios, but can work for ad-hoc scenarios like running “hdfs dfs” commands.






[jira] [Commented] (HADOOP-14627) Enable new features of ADLS SDK (MSI, Device Code auth)

2017-07-05 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075640#comment-16075640
 ] 

Mingliang Liu commented on HADOOP-14627:


If there are multiple separable changes, we can make this one an uber-JIRA and 
create subtasks for it. Small, individual changes are better for review, 
testing, and release. Thanks,

> Enable new features of ADLS SDK (MSI, Device Code auth)
> ---
>
> Key: HADOOP-14627
> URL: https://issues.apache.org/jira/browse/HADOOP-14627
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/adl
> Environment: MSI Change applies only to Hadoop running in an Azure VM
>Reporter: Atul Sikaria
>Assignee: Atul Sikaria
> Attachments: HADOOP-14627-001.patch
>
>
> This change is to upgrade the Hadoop ADLS connector to enable new auth 
> features exposed by the ADLS Java SDK.
> Specifically:
> MSI Tokens: MSI (Managed Service Identity) is a way to provide an identity to 
> an Azure Service. In the case of VMs, they can be used to give an identity to 
> a VM deployment. This simplifies managing Service Principals, since the creds 
> don’t have to be managed in core-site files anymore. The way this works is 
> that during VM deployment, the ARM (Azure Resource Manager) template needs to 
> be modified to enable MSI. Once deployed, the MSI extension runs a service on 
> the VM that exposes a token endpoint on http://localhost at a port specified 
> in the template. The SDK has a new TokenProvider to fetch the token from this 
> local endpoint. This change would expose that TokenProvider as an auth option.
> DeviceCode auth: This enables a token to be obtained from an interactive 
> login. The user is given a URL and a token to use on the login screen. The user 
> can use the token to log in from any device. Once the login is done, the token 
> that is obtained is in the name of the user who logged in. Note that because 
> of the interactive login involved, this is not very suitable for job 
> scenarios, but can work for ad-hoc scenarios like running “hdfs dfs” commands.






[jira] [Commented] (HADOOP-14443) Azure: Support retry and client side failover for authorization, SASKey and delegation token generation

2017-07-05 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075613#comment-16075613
 ] 

Mingliang Liu commented on HADOOP-14443:


I ran the tests for the branch-2 patch, and found the following errors.

{code}
Failed tests:
  TestWasbRemoteCallHelper.testMalFormedJSONResponse
Expected: (an instance of org.apache.hadoop.fs.azure.WasbAuthorizationException 
and exception with message a string containing 
"com.fasterxml.jackson.core.JsonParseException: Unexpected end-of-input in 
FIELD_NAME")
 but: an instance of org.apache.hadoop.fs.azure.WasbAuthorizationException 
 is a 
org.codehaus.jackson.JsonParseException
Stacktrace was: org.codehaus.jackson.JsonParseException: Unexpected 
end-of-input within/between OBJECT entries
 at [Source: java.io.StringReader@6b3e12b5; line: 1, column: 131]
at org.codehaus.jackson.JsonParser._constructError(JsonParser.java:1433)
at 
org.codehaus.jackson.impl.ReaderBasedParser._skipWS(ReaderBasedParser.java:1470)
at 
org.codehaus.jackson.impl.ReaderBasedParser.nextToken(ReaderBasedParser.java:425)
at 
org.codehaus.jackson.map.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:690)
at 
org.codehaus.jackson.map.deser.BeanDeserializer.deserialize(BeanDeserializer.java:580)
at 
org.codehaus.jackson.map.ObjectReader._bindAndClose(ObjectReader.java:768)
at 
org.codehaus.jackson.map.ObjectReader.readValue(ObjectReader.java:460)
at 
org.apache.hadoop.fs.azure.RemoteWasbAuthorizerImpl.authorize(RemoteWasbAuthorizerImpl.java:153)
at 
org.apache.hadoop.fs.azure.NativeAzureFileSystem.performAuthCheck(NativeAzureFileSystem.java:1468)
at 
org.apache.hadoop.fs.azure.NativeAzureFileSystem.create(NativeAzureFileSystem.java:1704)
at 
org.apache.hadoop.fs.azure.NativeAzureFileSystem.create(NativeAzureFileSystem.java:1554)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1067)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1048)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:937)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:925)
at 
org.apache.hadoop.fs.azure.TestWasbRemoteCallHelper.performop(TestWasbRemoteCallHelper.java:445)
at 
org.apache.hadoop.fs.azure.TestWasbRemoteCallHelper.testMalFormedJSONResponse(TestWasbRemoteCallHelper.java:247)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
{code}

Can you fix that on branch-2? Thanks,

> Azure: Support retry and client side failover for authorization, SASKey and 
> delegation token generation
> 

[jira] [Updated] (HADOOP-14627) Enable new features of ADLS SDK (MSI, Device Code auth)

2017-07-05 Thread Atul Sikaria (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Atul Sikaria updated HADOOP-14627:
--
Summary: Enable new features of ADLS SDK (MSI, Device Code auth)  (was: 
Enable new features fro ADLS SDK)

> Enable new features of ADLS SDK (MSI, Device Code auth)
> ---
>
> Key: HADOOP-14627
> URL: https://issues.apache.org/jira/browse/HADOOP-14627
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/adl
> Environment: MSI Change applies only to Hadoop running in an Azure VM
>Reporter: Atul Sikaria
>Assignee: Atul Sikaria
> Attachments: HADOOP-14627-001.patch
>
>
> This change is to upgrade the Hadoop ADLS connector to enable new auth 
> features exposed by the ADLS Java SDK.
> Specifically:
> MSI Tokens: MSI (Managed Service Identity) is a way to provide an identity to 
> an Azure Service. In the case of VMs, they can be used to give an identity to 
> a VM deployment. This simplifies managing Service Principals, since the creds 
> don’t have to be managed in core-site files anymore. The way this works is 
> that during VM deployment, the ARM (Azure Resource Manager) template needs to 
> be modified to enable MSI. Once deployed, the MSI extension runs a service on 
> the VM that exposes a token endpoint on http://localhost at a port specified 
> in the template. The SDK has a new TokenProvider to fetch the token from this 
> local endpoint. This change would expose that TokenProvider as an auth option.
> DeviceCode auth: This enables a token to be obtained from an interactive 
> login. The user is given a URL and a token to use on the login screen. The user 
> can use the token to log in from any device. Once the login is done, the token 
> that is obtained is in the name of the user who logged in. Note that because 
> of the interactive login involved, this is not very suitable for job 
> scenarios, but can work for ad-hoc scenarios like running “hdfs dfs” commands.






[jira] [Commented] (HADOOP-14444) New implementation of ftp and sftp filesystems

2017-07-05 Thread Lukas Waldmann (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-1?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075392#comment-16075392
 ] 

Lukas Waldmann commented on HADOOP-1:
-

1) That I can do :)
2) Yes, but that's not a reason not to support it, is it?
SftpChannel#close - you are right, I mistakenly spoke about the disconnect 
method of Channel, not the close method - close does indeed close the 
connection.
You are probably right (the lack of documentation in jsch is really annoying) - 
I should call session.disconnect as well - or it may be easier not to call 
channel disconnect, since session.disconnect closes all open channels. As I am 
opening just one channel, it shouldn't do any damage. Ideally I should check 
whether any open channels are left and, if not, close the session, but I didn't 
find any method that exposes this information. Thanks for pointing this out.
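
For reference, a minimal sketch of the teardown order under discussion, using 
the real jsch API (host, user, and usage pattern are placeholders):

{code}
// Session#disconnect closes every channel opened on the session, so with a
// single channel an explicit channel.disconnect() is redundant.
import com.jcraft.jsch.ChannelSftp;
import com.jcraft.jsch.JSch;
import com.jcraft.jsch.JSchException;
import com.jcraft.jsch.Session;

class SftpTeardownSketch {
  static void run() throws JSchException {
    Session session = new JSch().getSession("user", "sftp.example.com", 22);
    session.setPassword("secret");
    session.connect();
    ChannelSftp channel = (ChannelSftp) session.openChannel("sftp");
    channel.connect();
    try {
      // ... perform SFTP operations on the channel ...
    } finally {
      session.disconnect(); // also closes the channel
    }
  }
}
{code}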

> New implementation of ftp and sftp filesystems
> --
>
> Key: HADOOP-1
> URL: https://issues.apache.org/jira/browse/HADOOP-1
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Lukas Waldmann
>Assignee: Lukas Waldmann
> Attachments: HADOOP-1.2.patch, HADOOP-1.3.patch, 
> HADOOP-1.4.patch, HADOOP-1.5.patch, HADOOP-1.patch
>
>
> The current implementations of the FTP and SFTP filesystems have severe 
> limitations and performance issues when dealing with a high number of files. 
> My patch solves those issues and integrates both filesystems in such a way 
> that most of the core functionality is common to both, thereby simplifying 
> maintainability.
> The core features:
> * Support for HTTP/SOCKS proxies
> * Support for passive FTP
> * Support for connection pooling - a new connection is not created for every 
> single command but is reused from the pool.
> For a huge number of files this shows an order-of-magnitude performance 
> improvement over non-pooled connections.
> * Caching of directory trees. For FTP you always need to list the whole 
> directory whenever you ask for information about a particular file.
> Again, for a huge number of files this shows an order-of-magnitude performance 
> improvement over non-cached listings.
> * Support for keep-alive (NOOP) messages to avoid connection drops
> * Support for Unix-style or regexp wildcard globs - useful for listing 
> particular files across a whole directory tree
> * Support for re-establishing broken FTP data transfers - which can happen 
> surprisingly often






[jira] [Updated] (HADOOP-14627) Enable new features fro ADLS SDK

2017-07-05 Thread Atul Sikaria (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Atul Sikaria updated HADOOP-14627:
--
Attachment: HADOOP-14627-001.patch

Attached a patch with the current code. This is to provide an early opportunity 
to review; it should not be checked in until the SDK version it is based on 
drops the preview label.

> Enable new features fro ADLS SDK
> 
>
> Key: HADOOP-14627
> URL: https://issues.apache.org/jira/browse/HADOOP-14627
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/adl
> Environment: MSI Change applies only to Hadoop running in an Azure VM
>Reporter: Atul Sikaria
>Assignee: Atul Sikaria
> Attachments: HADOOP-14627-001.patch
>
>
> This change is to upgrade the Hadoop ADLS connector to enable new auth 
> features exposed by the ADLS Java SDK.
> Specifically:
> MSI Tokens: MSI (Managed Service Identity) is a way to provide an identity to 
> an Azure Service. In the case of VMs, they can be used to give an identity to 
> a VM deployment. This simplifies managing Service Principals, since the creds 
> don’t have to be managed in core-site files anymore. The way this works is 
> that during VM deployment, the ARM (Azure Resource Manager) template needs to 
> be modified to enable MSI. Once deployed, the MSI extension runs a service on 
> the VM that exposes a token endpoint on http://localhost at a port specified 
> in the template. The SDK has a new TokenProvider to fetch the token from this 
> local endpoint. This change would expose that TokenProvider as an auth option.
> DeviceCode auth: This enables a token to be obtained from an interactive 
> login. The user is given a URL and a token to use on the login screen. The user 
> can use the token to log in from any device. Once the login is done, the token 
> that is obtained is in the name of the user who logged in. Note that because 
> of the interactive login involved, this is not very suitable for job 
> scenarios, but can work for ad-hoc scenarios like running “hdfs dfs” commands.






[jira] [Created] (HADOOP-14627) Enable new features fro ADLS SDK

2017-07-05 Thread Atul Sikaria (JIRA)
Atul Sikaria created HADOOP-14627:
-

 Summary: Enable new features fro ADLS SDK
 Key: HADOOP-14627
 URL: https://issues.apache.org/jira/browse/HADOOP-14627
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/adl
 Environment: MSI Change applies only to Hadoop running in an Azure VM
Reporter: Atul Sikaria
Assignee: Atul Sikaria


This change is to upgrade the Hadoop ADLS connector to enable new auth features 
exposed by the ADLS Java SDK.

Specifically:
MSI Tokens: MSI (Managed Service Identity) is a way to provide an identity to 
an Azure Service. In the case of VMs, they can be used to give an identity to a 
VM deployment. This simplifies managing Service Principals, since the creds 
don’t have to be managed in core-site files anymore. The way this works is that 
during VM deployment, the ARM (Azure Resource Manager) template needs to be 
modified to enable MSI. Once deployed, the MSI extension runs a service on the 
VM that exposes a token endpoint on http://localhost at a port specified in the 
template. The SDK has a new TokenProvider to fetch the token from this local 
endpoint. This change would expose that TokenProvider as an auth option.

DeviceCode auth: This enables a token to be obtained from an interactive login. 
The user is given a URL and a token to use on the login screen. The user can 
use the token to log in from any device. Once the login is done, the token that is 
obtained is in the name of the user who logged in. Note that because of the 
interactive login involved, this is not very suitable for job scenarios, but 
can work for ad-hoc scenarios like running “hdfs dfs” commands.
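
As a sketch of what selecting these auth options from a client could look like; 
the configuration key names below follow the existing fs.adl.oauth2.* pattern 
but are assumptions, not confirmed by this patch:

{code}
// Hypothetical client-side configuration; key names and values are assumed.
import org.apache.hadoop.conf.Configuration;

class AdlAuthConfigSketch {
  static Configuration msi() {
    Configuration conf = new Configuration();
    conf.set("fs.adl.oauth2.access.token.provider.type", "MSI"); // assumed value
    // Port of the local MSI token endpoint configured in the ARM template.
    conf.set("fs.adl.oauth2.msi.port", "50342");                 // assumed key
    return conf;
  }

  static Configuration deviceCode() {
    Configuration conf = new Configuration();
    conf.set("fs.adl.oauth2.access.token.provider.type", "DeviceCode"); // assumed
    return conf;
  }
}
{code}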







[jira] [Commented] (HADOOP-14608) KMS JMX servlet path not backwards compatible

2017-07-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075249#comment-16075249
 ] 

Hudson commented on HADOOP-14608:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11970 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11970/])
HADOOP-14608. KMS JMX servlet path not backwards compatible. Contributed 
(jzhuge: rev 946dd256755109ca57d9cfa0912eef8402450181)
* (edit) 
hadoop-common-project/hadoop-kms/src/main/resources/webapps/kms/WEB-INF/web.xml
* (edit) 
hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java


> KMS JMX servlet path not backwards compatible
> -
>
> Key: HADOOP-14608
> URL: https://issues.apache.org/jira/browse/HADOOP-14608
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14608.001.patch, HADOOP-14608.002.patch
>
>
> HADOOP-13597 switched KMS from Tomcat to Jetty. The implementation changed the 
> JMX path from /kms/jmx to /jmx, which is in line with other HttpServer2 based 
> servlets.






[jira] [Commented] (HADOOP-13435) Add thread local mechanism for aggregating file system storage stats

2017-07-05 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075214#comment-16075214
 ] 

Mingliang Liu commented on HADOOP-13435:


Hi [~Hongyuan Li], thanks for the comment. But I think {{FileSystem#Cache}} 
is not related to this patch. We can file a separate JIRA for moving the 
nested class out of the long FileSystem class.

> Add thread local mechanism for aggregating file system storage stats
> 
>
> Key: HADOOP-13435
> URL: https://issues.apache.org/jira/browse/HADOOP-13435
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HADOOP-13435.000.patch, HADOOP-13435.001.patch, 
> HADOOP-13435.002.patch
>
>
> As discussed in [HADOOP-13032], this is to add a thread-local mechanism for 
> aggregating file system storage stats. This class will also be used in 
> [HADOOP-13031], which is to separate the distance-oriented rack-aware read 
> bytes logic from {{FileSystemStorageStatistics}} into a new 
> DFSRackAwareStorageStatistics, as it's DFS-specific. After this patch, 
> {{FileSystemStorageStatistics}} can live without the to-be-removed 
> {{FileSystem$Statistics}} implementation.
> A unit test should also be added.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14619) S3A authenticators to log origin of .secret.key options

2017-07-05 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075196#comment-16075196
 ] 

Mingliang Liu commented on HADOOP-14619:


+1 for the proposal.

{{getPropertySources()}} can be helpful here.
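
For illustration, a minimal sketch of how the proposal could look with 
{{Configuration#getPropertySources}}, assuming slf4j logging; the class and 
method names here are hypothetical, not the actual patch:
{code}
import org.apache.hadoop.conf.Configuration;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class SecretOriginLogger {
  private static final Logger LOG =
      LoggerFactory.getLogger(SecretOriginLogger.class);

  // Logs where an option was set, never its value.
  public static void logOrigin(Configuration conf, String key) {
    String[] sources = conf.getPropertySources(key); // null if unset
    if (sources == null) {
      LOG.debug("Option {} is unset", key);
    } else {
      LOG.debug("Option {} obtained from {}", key,
          String.join(", ", sources));
    }
  }
}
{code}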

> S3A authenticators to log origin of .secret.key options
> ---
>
> Key: HADOOP-14619
> URL: https://issues.apache.org/jira/browse/HADOOP-14619
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: s3
>Affects Versions: 2.8.1
>Reporter: Steve Loughran
>Priority: Minor
>
> Even though we can't log the values of the id, secret and session options, we 
> could aid debugging what's going on with auth failures by logging the origin 
> of the values.
> e.g.
> {code}
> DEBUG authenticating with secrets obtained from hive-site.xml
> DEBUG authenticating with secrets obtained from hive-site.xml and bucket 
> options landsat
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14608) KMS JMX servlet path not backwards compatible

2017-07-05 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14608:

Description: HADOOP-13597 switched KMS from Tomcat to Jetty. The 
implementation changed the JMX path from /kms/jmx to /jmx, which is in line 
with other HttpServer2 based servlets.  (was: HADOOP-13597 switched KMS from 
Tomcat to Jetty. The implementation changed JMX path from /kms/jmx to /jmx, 
which is inline with other HttpServer2 based servlets.

If there is a desire for the same JMX path, please vote here.)

> KMS JMX servlet path not backwards compatible
> -
>
> Key: HADOOP-14608
> URL: https://issues.apache.org/jira/browse/HADOOP-14608
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14608.001.patch, HADOOP-14608.002.patch
>
>
> HADOOP-13597 switched KMS from Tomcat to Jetty. The implementation changed 
> JMX path from /kms/jmx to /jmx, which is inline with other HttpServer2 based 
> servlets.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14608) KMS JMX servlet path not backwards compatible

2017-07-05 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14608:

   Resolution: Fixed
Fix Version/s: 3.0.0-beta1
   Status: Resolved  (was: Patch Available)

Committed to trunk with checkstyle fix.

Thanks [~andrew.wang] for the review!

> KMS JMX servlet path not backwards compatible
> -
>
> Key: HADOOP-14608
> URL: https://issues.apache.org/jira/browse/HADOOP-14608
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14608.001.patch, HADOOP-14608.002.patch
>
>
> HADOOP-13597 switched KMS from Tomcat to Jetty. The implementation changed 
> JMX path from /kms/jmx to /jmx, which is inline with other HttpServer2 based 
> servlets.
> If there is a desire for the same JMX path, please vote here.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14608) KMS JMX servlet path not backwards compatible

2017-07-05 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075169#comment-16075169
 ] 

Andrew Wang commented on HADOOP-14608:
--

+1 thanks for working on this John!

> KMS JMX servlet path not backwards compatible
> -
>
> Key: HADOOP-14608
> URL: https://issues.apache.org/jira/browse/HADOOP-14608
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-14608.001.patch, HADOOP-14608.002.patch
>
>
> HADOOP-13597 switched KMS from Tomcat to Jetty. The implementation changed 
> JMX path from /kms/jmx to /jmx, which is inline with other HttpServer2 based 
> servlets.
> If there is a desire for the same JMX path, please vote here.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13998) initial s3guard preview

2017-07-05 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075159#comment-16075159
 ] 

Mingliang Liu commented on HADOOP-13998:


Thanks Steve for the list, I'll review those related JIRAs.

> initial s3guard preview
> ---
>
> Key: HADOOP-13998
> URL: https://issues.apache.org/jira/browse/HADOOP-13998
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Steve Loughran
>
> JIRA to link in all the things we think are needed for a preview/merge into 
> trunk



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14596) AWS SDK 1.11+ aborts() on close() if > 0 bytes in stream; logs error

2017-07-05 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075153#comment-16075153
 ] 

Mingliang Liu commented on HADOOP-14596:


Steve, a doc update would be helpful. Should we file a JIRA for that?

> AWS SDK 1.11+ aborts() on close() if > 0 bytes in stream; logs error
> 
>
> Key: HADOOP-14596
> URL: https://issues.apache.org/jira/browse/HADOOP-14596
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: HADOOP-14596-001.patch, HADOOP-14596-002.patch, 
> testlog.txt
>
>
> The latest SDK now tells us off when we do a seek() by aborting the TCP stream
> {code}
> - Not all bytes were read from the S3ObjectInputStream, aborting HTTP 
> connection. This is likely an error and may result in sub-optimal behavior. 
> Request only the bytes you need via a ranged GET or drain the input stream 
> after use.
> 2017-06-27 15:47:35,789 [ScalaTest-main-running-S3ACSVReadSuite] WARN  
> internal.S3AbortableInputStream (S3AbortableInputStream.java:close(163)) - 
> Not all bytes were read from the S3ObjectInputStream, aborting HTTP 
> connection. This is likely an error and may result in sub-optimal behavior. 
> Request only the bytes you need via a ranged GET or drain the input stream 
> after use.
> 2017-06-27 15:47:37,409 [ScalaTest-main-running-S3ACSVReadSuite] WARN  
> internal.S3AbortableInputStream (S3AbortableInputStream.java:close(163)) - 
> Not all bytes were read from the S3ObjectInputStream, aborting HTTP 
> connection. This is likely an error and may result in sub-optimal behavior. 
> Request only the bytes you need via a ranged GET or drain the input stream 
> after use.
> 2017-06-27 15:47:39,003 [ScalaTest-main-running-S3ACSVReadSuite] WARN  
> internal.S3AbortableInputStream (S3AbortableInputStream.java:close(163)) - 
> Not all bytes were read from the S3ObjectInputStream, aborting HTTP 
> connection. This is likely an error and may result in sub-optimal behavior. 
> Request only the bytes you need via a ranged GET or drain the input stream 
> after use.
> 2017-06-27 15:47:40,627 [ScalaTest-main-running-S3ACSVReadSuite] WARN  
> internal.S3AbortableInputStream (S3AbortableInputStream.java:close(163)) - 
> Not all bytes were read from the S3ObjectInputStream, aborting HTTP 
> connection. This is likely an error and may result in sub-optimal behavior. 
> Request only the bytes you need via a ranged GET or drain the input stream 
> after use.
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14576) DynamoDB tables may leave ACTIVE state after initial connection

2017-07-05 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16075123#comment-16075123
 ] 

Aaron Fabbri commented on HADOOP-14576:
---

Thanks for filing the JIRA, [~mackrorysd].

{quote}
My concern with failing over to S3 for non-auth read is what happens when 
you're listing stuff that isn't consistent on S3 yet. IMO non-auth mode is 
really just to enable lazily loading data that already existed or that is added 
outside of S3Guard. I don't think it should weaken guarantees in the presence 
of partitioning events in DynamoDB.
{quote}

Yeah I think both arguments have merit.

We could argue that failing back to basic S3 without consistency is better than 
failing a job. We could also have a configuration flag that lets users choose 
either behavior; see the sketch below. Since the chance of inconsistency is 
pretty low, there is a good probability that running in degraded mode (no 
S3Guard) until the table comes back would be successful.

I'm not sure authoritative mode matters. The client being in S3Guard 
authoritative mode just means that the FS client *may* skip round trips to S3 
*if* the MetadataStore reports it has a full listing. Since the MetadataStore 
throws an error, the "if the MetadataStore reports a full listing" condition is 
not met.
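
For illustration, a minimal sketch of the configuration-flag idea; the 
property name below is hypothetical:
{code}
import org.apache.hadoop.conf.Configuration;

public class S3GuardFailurePolicy {
  // true  -> surface the DynamoDB error and fail the operation
  // false -> degrade to raw S3 (no S3Guard) until the table recovers
  static boolean shouldFail(Configuration conf) {
    return conf.getBoolean("fs.s3a.metadatastore.fail.on.error", true);
  }
}
{code}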

> DynamoDB tables may leave ACTIVE state after initial connection
> ---
>
> Key: HADOOP-14576
> URL: https://issues.apache.org/jira/browse/HADOOP-14576
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Sean Mackrory
>
> We currently only anticipate tables not being in the ACTIVE state when first 
> connecting. It is possible for a table to be in the ACTIVE state and move to 
> an UPDATING state during partitioning events. Attempts to read or write 
> during that time will result in an AmazonServerException getting thrown. We 
> should try to handle that better...



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14563) LoadBalancingKMSClientProvider#warmUpEncryptedKeys swallows IOException

2017-07-05 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074967#comment-16074967
 ] 

Rushabh S Shah commented on HADOOP-14563:
-

{noformat}
---
 T E S T S
---

---
 T E S T S
---
Running org.apache.hadoop.ha.TestZKFailoverController
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 30.87 sec - in 
org.apache.hadoop.ha.TestZKFailoverController
Results :
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0
{noformat}
{{TestZKFailoverController}} passes locally on my node.
[~jojochuang]: Mind giving a quick review?
I addressed your comment in the last patch.
Hopefully this should be the last pass.
Thanks for the review.

> LoadBalancingKMSClientProvider#warmUpEncryptedKeys swallows IOException
> ---
>
> Key: HADOOP-14563
> URL: https://issues.apache.org/jira/browse/HADOOP-14563
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.1
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
> Attachments: HADOOP-14563-1.patch, HADOOP-14563-2.patch, 
> HADOOP-14563.patch
>
>
> TestAclsEndToEnd is failing consistently in HADOOP-14521.
> The reason behind it is LoadBalancingKMSClientProvider#warmUpEncryptedKeys 
> swallows IOException while KMSClientProvider#warmUpEncryptedKeys throws all 
> the way back to createEncryptionZone and creation of EZ fails.
> Following are the relevant piece of code snippets.
>  {code:title=KMSClientProvider.java|borderStyle=solid}
>   @Override
>   public void warmUpEncryptedKeys(String... keyNames)
>   throws IOException {
> try {
>   encKeyVersionQueue.initializeQueuesForKeys(keyNames);
> } catch (ExecutionException e) {
>   throw new IOException(e);
> }
>   }
> {code}
>  {code:title=LoadBalancingKMSClientProvider.java|borderStyle=solid}
>// This request is sent to all providers in the load-balancing group
>   @Override
>   public void warmUpEncryptedKeys(String... keyNames) throws IOException {
> for (KMSClientProvider provider : providers) {
>   try {
> provider.warmUpEncryptedKeys(keyNames);
>   } catch (IOException ioe) {
> LOG.error(
> "Error warming up keys for provider with url"
> + "[" + provider.getKMSUrl() + "]", ioe);
>   }
> }
>   }
> {code}
> In HADOOP-14521, I intend to always instantiate 
> LoadBalancingKMSClientProvider even if there is only one provider so that the 
> retries can applied at only one place.
> We need to decide whether we want to fail in both the case or continue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14623) KafkaSink#init should set acks to 1,not 0

2017-07-05 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074965#comment-16074965
 ] 

Allen Wittenauer commented on HADOOP-14623:
---

I believe setting acks to 0 is intentional so that it doesn't block.  
Additionally, metrics at scale are usually lossy (e.g., Ganglia using UDP) so 
that the receiving end can actually handle the load. 

> KafkaSink#init should set acks to 1,not 0
> -
>
> Key: HADOOP-14623
> URL: https://issues.apache.org/jira/browse/HADOOP-14623
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, tools
>Affects Versions: 3.0.0-alpha3
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
> Attachments: HADOOP-14623-001.patch
>
>
> {{KafkaSink}}#{{init}}  should set ack to *1* to make sure the message has 
> been written to the broker at least.
> current code list below:
> {code}
>   
> props.put("request.required.acks", "0");
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13743) error message in AzureNativeFileSystemStore.connectUsingAnonymousCredentials has too many spaces

2017-07-05 Thread Hongyuan Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074908#comment-16074908
 ] 

Hongyuan Li commented on HADOOP-13743:
--

Why not use {{String.format}} or {{StringBuilder}}?
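
For illustration, a minimal sketch of the {{String.format}} variant, with 
hypothetical variable and class names; it also avoids the stray double space:
{code}
public class AnonAccessMessage {
  static String message(String container, String account) {
    return String.format(
        "Unable to access container %s in account %s using anonymous"
            + " credentials, and no credentials found for them in the"
            + " configuration.",
        container, account);
  }

  public static void main(String[] args) {
    System.out.println(message("demo", "example.blob.core.windows.net"));
  }
}
{code}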

> error message in AzureNativeFileSystemStore.connectUsingAnonymousCredentials 
> has too many spaces
> 
>
> Key: HADOOP-13743
> URL: https://issues.apache.org/jira/browse/HADOOP-13743
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Trivial
> Attachments: HADOOP-13743-branch-2-001.patch, 
> HADOOP-14373-branch-2-002.patch
>
>
> The error message on a failed hadoop fs -ls command against an unauthed azure 
> container has an extra space in {{" them  in"}}
> {code}
> ls: org.apache.hadoop.fs.azure.AzureException: Unable to access container 
> demo in account example.blob.core.windows.net using anonymous credentials, 
> and no credentials found for them  in the configuration.
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14444) New implementation of ftp and sftp filesystems

2017-07-05 Thread Hongyuan Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-1?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074831#comment-16074831
 ] 

Hongyuan Li edited comment on HADOOP-1 at 7/5/17 2:46 PM:
--

1. If you renamed {{AbstractFTPFileSystem}} to anything else, I would be 
happier. This class name may make users think SFTP is a kind of FTP; however, 
it isn't. So could you please rename it to something without "ftp"?
2. {{FTPClient}}#{{retrieStream}} will open a new data connection, which is the 
reason why I dislike the seek ops.

*Update*
You said {{SFTPChannel}}#{{close}} exists just to reuse the session, but then 
why do you disconnect channelSftp?


was (Author: hongyuan li):
1、if you rename {{AbstractFTPFileSystem}} to anything else, i would be more 
happier.This class name may make users thinking sftp is one of ftp, however, it 
isn't. So, please rename it to anything without ftp?
2、{{FTPClient}}#{{retrieStream}} will open a new Data connection, which is the 
reason why i dislike the seek ops.


> New implementation of ftp and sftp filesystems
> --
>
> Key: HADOOP-1
> URL: https://issues.apache.org/jira/browse/HADOOP-1
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Lukas Waldmann
>Assignee: Lukas Waldmann
> Attachments: HADOOP-1.2.patch, HADOOP-1.3.patch, 
> HADOOP-1.4.patch, HADOOP-1.5.patch, HADOOP-1.patch
>
>
> Current implementation of FTP and SFTP filesystems have severe limitations 
> and performance issues when dealing with high number of files. Mine patch 
> solve those issues and integrate both filesystems such a way that most of the 
> core functionality is common for both and therefore simplifying the 
> maintainability.
> The core features:
> * Support for HTTP/SOCKS proxies
> * Support for passive FTP
> * Support of connection pooling - new connection is not created for every 
> single command but reused from the pool.
> For huge number of files it shows order of magnitude performance improvement 
> over not pooled connections.
> * Caching of directory trees. For ftp you always need to list whole directory 
> whenever you ask information about particular file.
> Again for huge number of files it shows order of magnitude performance 
> improvement over not cached connections.
> * Support of keep alive (NOOP) messages to avoid connection drops
> * Support for Unix style or regexp wildcard glob - useful for listing a 
> particular files across whole directory tree
> * Support for reestablishing broken ftp data transfers - can happen 
> surprisingly often



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14444) New implementation of ftp and sftp filesystems

2017-07-05 Thread Hongyuan Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-1?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074831#comment-16074831
 ] 

Hongyuan Li edited comment on HADOOP-1 at 7/5/17 2:36 PM:
--

1. If you renamed {{AbstractFTPFileSystem}} to anything else, I would be 
happier. This class name may make users think SFTP is a kind of FTP; however, 
it isn't. So could you please rename it to something without "ftp"?
2. {{FTPClient}}#{{retrieStream}} will open a new data connection, which is the 
reason why I dislike the seek ops.



was (Author: hongyuan li):
1、if you rename "AbstractFTPFileSystem" to anything else, i would be more 
happier.This class name may make users thinking sftp is one of ftp, however, it 
isn't.
2、FTPClient#retrieStream will open a new Data connection, which is the reason 
why i dislike the seek ops.


> New implementation of ftp and sftp filesystems
> --
>
> Key: HADOOP-1
> URL: https://issues.apache.org/jira/browse/HADOOP-1
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Lukas Waldmann
>Assignee: Lukas Waldmann
> Attachments: HADOOP-1.2.patch, HADOOP-1.3.patch, 
> HADOOP-1.4.patch, HADOOP-1.5.patch, HADOOP-1.patch
>
>
> Current implementation of FTP and SFTP filesystems have severe limitations 
> and performance issues when dealing with high number of files. Mine patch 
> solve those issues and integrate both filesystems such a way that most of the 
> core functionality is common for both and therefore simplifying the 
> maintainability.
> The core features:
> * Support for HTTP/SOCKS proxies
> * Support for passive FTP
> * Support of connection pooling - new connection is not created for every 
> single command but reused from the pool.
> For huge number of files it shows order of magnitude performance improvement 
> over not pooled connections.
> * Caching of directory trees. For ftp you always need to list whole directory 
> whenever you ask information about particular file.
> Again for huge number of files it shows order of magnitude performance 
> improvement over not cached connections.
> * Support of keep alive (NOOP) messages to avoid connection drops
> * Support for Unix style or regexp wildcard glob - useful for listing a 
> particular files across whole directory tree
> * Support for reestablishing broken ftp data transfers - can happen 
> surprisingly often



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14623) KafkaSink#init should set acks to 1,not 0

2017-07-05 Thread Hongyuan Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074856#comment-16074856
 ] 

Hongyuan Li edited comment on HADOOP-14623 at 7/5/17 2:35 PM:
--

[~jojochuang] It is hard to write a standalone JUnit test for this.

The information about acks below is from the Kafka documentation:
{code}
*request.required.acks* // using old Producer api or the version of kafka is 
less  than 0.9.x
or
*acks* // using new Producer api and kafka version more than 0.9.x

This value controls when a produce request is considered completed. 
Specifically, how many other brokers must have committed the data to their log 
and acknowledged this to the leader? Typical values are
0, which means that the producer never waits for an acknowledgement from the 
broker (the same behavior as 0.7). This option provides the lowest latency but 
the weakest durability guarantees (some data will be lost when a server fails).
1, which means that the producer gets an acknowledgement after the leader 
replica has received the data. This option provides better durability as the 
client waits until the server acknowledges the request as successful (only 
messages that were written to the now-dead leader but not yet replicated will 
be lost).
-1, which means that the producer gets an acknowledgement after all in-sync 
replicas have received the data. This option provides the best durability, we 
guarantee that no messages will be lost as long as at least one in sync replica 
remains.
{code}

[Documentation Kafka 0.8.2|http://kafka.apache.org/082/documentation.html]
[Documentation Kafka 0.9.0|http://kafka.apache.org/090/documentation.html]
From the links above: if you use Kafka below 0.9.x, you should set 
{{request.required.acks = 1}} at least. When using the new Producer API on 
0.9.x or above, you should set {{acks = 1}} at least.
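
For illustration, a minimal sketch of configuring the new producer API (Kafka 
0.9.x+) with leader acknowledgement as discussed above; this is not the 
KafkaSink patch itself, and the broker address is a placeholder:
{code}
import java.util.Properties;

public class ProducerAcksConfig {
  public static Properties withLeaderAck(String brokers) {
    Properties props = new Properties();
    props.put("bootstrap.servers", brokers);
    props.put("acks", "1"); // leader ack; "0" would be fire-and-forget
    props.put("key.serializer",
        "org.apache.kafka.common.serialization.StringSerializer");
    props.put("value.serializer",
        "org.apache.kafka.common.serialization.StringSerializer");
    return props;
  }
}
{code}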



was (Author: hongyuan li):
[~jojochuang] hard to write an only junit test to test it.

the infos about acks is from kafka document :
{code}
{{request.required.acks}} // using old Producer api or the version of kafka is 
less  than 0.9.x
or
{{acks}} // using new Producer api and kafka version more than 0.9.x

This value controls when a produce request is considered completed. 
Specifically, how many other brokers must have committed the data to their log 
and acknowledged this to the leader? Typical values are
0, which means that the producer never waits for an acknowledgement from the 
broker (the same behavior as 0.7). This option provides the lowest latency but 
the weakest durability guarantees (some data will be lost when a server fails).
1, which means that the producer gets an acknowledgement after the leader 
replica has received the data. This option provides better durability as the 
client waits until the server acknowledges the request as successful (only 
messages that were written to the now-dead leader but not yet replicated will 
be lost).
-1, which means that the producer gets an acknowledgement after all in-sync 
replicas have received the data. This option provides the best durability, we 
guarantee that no messages will be lost as long as at least one in sync replica 
remains.
{code}

[DocumentationKafka 0.8.2|http://kafka.apache.org/082/documentation.html]
[Documentation Kafka 0.9.0|http://kafka.apache.org/090/documentation.html]
FROM the link below, if you use kafka below 0.9.x, should set  
{{request.required.acks = 1}} at least.When use new Producer above 0.9.x, 
should set {{acks = 1}} at least.


> KafkaSink#init should set acks to 1,not 0
> -
>
> Key: HADOOP-14623
> URL: https://issues.apache.org/jira/browse/HADOOP-14623
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, tools
>Affects Versions: 3.0.0-alpha3
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
> Attachments: HADOOP-14623-001.patch
>
>
> {{KafkaSink}}#{{init}}  should set ack to *1* to make sure the message has 
> been written to the broker at least.
> current code list below:
> {code}
>   
> props.put("request.required.acks", "0");
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14623) KafkaSink#init should set acks to 1,not 0

2017-07-05 Thread Hongyuan Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074856#comment-16074856
 ] 

Hongyuan Li edited comment on HADOOP-14623 at 7/5/17 2:34 PM:
--

[~jojochuang] It is hard to write a standalone JUnit test for this.

The information about acks below is from the Kafka documentation:
{code}
{{request.required.acks}} // using old Producer api or the version of kafka is 
less  than 0.9.x
or
{{acks}} // using new Producer api and kafka version more than 0.9.x

This value controls when a produce request is considered completed. 
Specifically, how many other brokers must have committed the data to their log 
and acknowledged this to the leader? Typical values are
0, which means that the producer never waits for an acknowledgement from the 
broker (the same behavior as 0.7). This option provides the lowest latency but 
the weakest durability guarantees (some data will be lost when a server fails).
1, which means that the producer gets an acknowledgement after the leader 
replica has received the data. This option provides better durability as the 
client waits until the server acknowledges the request as successful (only 
messages that were written to the now-dead leader but not yet replicated will 
be lost).
-1, which means that the producer gets an acknowledgement after all in-sync 
replicas have received the data. This option provides the best durability, we 
guarantee that no messages will be lost as long as at least one in sync replica 
remains.
{code}

[Documentation Kafka 0.8.2|http://kafka.apache.org/082/documentation.html]
[Documentation Kafka 0.9.0|http://kafka.apache.org/090/documentation.html]
From the links above: if you use Kafka below 0.9.x, you should set 
{{request.required.acks = 1}} at least. When using the new Producer API on 
0.9.x or above, you should set {{acks = 1}} at least.



was (Author: hongyuan li):
[~jojochuang] hard to write an only junit test to test it.

the infos about acks is from kafka document :
{code}
request.required.acks   
This value controls when a produce request is considered completed. 
Specifically, how many other brokers must have committed the data to their log 
and acknowledged this to the leader? Typical values are
0, which means that the producer never waits for an acknowledgement from the 
broker (the same behavior as 0.7). This option provides the lowest latency but 
the weakest durability guarantees (some data will be lost when a server fails).
1, which means that the producer gets an acknowledgement after the leader 
replica has received the data. This option provides better durability as the 
client waits until the server acknowledges the request as successful (only 
messages that were written to the now-dead leader but not yet replicated will 
be lost).
-1, which means that the producer gets an acknowledgement after all in-sync 
replicas have received the data. This option provides the best durability, we 
guarantee that no messages will be lost as long as at least one in sync replica 
remains.
{code}

[DocumentationKafka 0.8.2|http://kafka.apache.org/082/documentation.html]

FROM the link below, if you use kafka below 0.9.x, should set  
request.required.acks = 1 at least.When use new Producer above 0.9.x, should 
set acks = 1 at least.


> KafkaSink#init should set acks to 1,not 0
> -
>
> Key: HADOOP-14623
> URL: https://issues.apache.org/jira/browse/HADOOP-14623
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, tools
>Affects Versions: 3.0.0-alpha3
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
> Attachments: HADOOP-14623-001.patch
>
>
> {{KafkaSink}}#{{init}}  should set ack to *1* to make sure the message has 
> been written to the broker at least.
> current code list below:
> {code}
>   
> props.put("request.required.acks", "0");
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14623) KafkaSink#init should set acks to 1,not 0

2017-07-05 Thread Hongyuan Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074856#comment-16074856
 ] 

Hongyuan Li edited comment on HADOOP-14623 at 7/5/17 2:30 PM:
--

[~jojochuang] It is hard to write a standalone JUnit test for this.

The information about acks below is from the Kafka documentation:
{code}
request.required.acks   
This value controls when a produce request is considered completed. 
Specifically, how many other brokers must have committed the data to their log 
and acknowledged this to the leader? Typical values are
0, which means that the producer never waits for an acknowledgement from the 
broker (the same behavior as 0.7). This option provides the lowest latency but 
the weakest durability guarantees (some data will be lost when a server fails).
1, which means that the producer gets an acknowledgement after the leader 
replica has received the data. This option provides better durability as the 
client waits until the server acknowledges the request as successful (only 
messages that were written to the now-dead leader but not yet replicated will 
be lost).
-1, which means that the producer gets an acknowledgement after all in-sync 
replicas have received the data. This option provides the best durability, we 
guarantee that no messages will be lost as long as at least one in sync replica 
remains.
{code}

[Documentation Kafka 0.8.2|http://kafka.apache.org/082/documentation.html]

From the link above: if you use Kafka below 0.9.x, you should set 
request.required.acks = 1 at least. When using the new Producer API on 0.9.x or 
above, you should set acks = 1 at least.



was (Author: hongyuan li):
[~jojochuang] i will try to test it, but iam not sure it can be relised.

the infos about acks is from kafka document :
{code}
request.required.acks   
This value controls when a produce request is considered completed. 
Specifically, how many other brokers must have committed the data to their log 
and acknowledged this to the leader? Typical values are
0, which means that the producer never waits for an acknowledgement from the 
broker (the same behavior as 0.7). This option provides the lowest latency but 
the weakest durability guarantees (some data will be lost when a server fails).
1, which means that the producer gets an acknowledgement after the leader 
replica has received the data. This option provides better durability as the 
client waits until the server acknowledges the request as successful (only 
messages that were written to the now-dead leader but not yet replicated will 
be lost).
-1, which means that the producer gets an acknowledgement after all in-sync 
replicas have received the data. This option provides the best durability, we 
guarantee that no messages will be lost as long as at least one in sync replica 
remains.
{code}

[DocumentationKafka 0.8.2|http://kafka.apache.org/082/documentation.html]

FROM the link below, if you use kafka below 0.9.x, should set  
request.required.acks = 1 at least.When use new Producer above 0.9.x, should 
set acks = 1 at least.


> KafkaSink#init should set acks to 1,not 0
> -
>
> Key: HADOOP-14623
> URL: https://issues.apache.org/jira/browse/HADOOP-14623
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, tools
>Affects Versions: 3.0.0-alpha3
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
> Attachments: HADOOP-14623-001.patch
>
>
> {{KafkaSink}}#{{init}}  should set ack to *1* to make sure the message has 
> been written to the broker at least.
> current code list below:
> {code}
>   
> props.put("request.required.acks", "0");
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14623) KafkaSink#init should set acks to 1,not 0

2017-07-05 Thread Hongyuan Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074856#comment-16074856
 ] 

Hongyuan Li edited comment on HADOOP-14623 at 7/5/17 2:30 PM:
--

[~jojochuang] I will try to test it, but I am not sure it can be realised.

The information about acks below is from the Kafka documentation:
{code}
request.required.acks   
This value controls when a produce request is considered completed. 
Specifically, how many other brokers must have committed the data to their log 
and acknowledged this to the leader? Typical values are
0, which means that the producer never waits for an acknowledgement from the 
broker (the same behavior as 0.7). This option provides the lowest latency but 
the weakest durability guarantees (some data will be lost when a server fails).
1, which means that the producer gets an acknowledgement after the leader 
replica has received the data. This option provides better durability as the 
client waits until the server acknowledges the request as successful (only 
messages that were written to the now-dead leader but not yet replicated will 
be lost).
-1, which means that the producer gets an acknowledgement after all in-sync 
replicas have received the data. This option provides the best durability, we 
guarantee that no messages will be lost as long as at least one in sync replica 
remains.
{code}

[Documentation Kafka 0.8.2|http://kafka.apache.org/082/documentation.html]

From the link above: if you use Kafka below 0.9.x, you should set 
request.required.acks = 1 at least. When using the new Producer API on 0.9.x or 
above, you should set acks = 1 at least.



was (Author: hongyuan li):
[~jojochuang] i will try to test it, but iam not sure it can be relised.

the infos about acks is from kafka document :
{code}
request.required.acks   
This value controls when a produce request is considered completed. 
Specifically, how many other brokers must have committed the data to their log 
and acknowledged this to the leader? Typical values are
0, which means that the producer never waits for an acknowledgement from the 
broker (the same behavior as 0.7). This option provides the lowest latency but 
the weakest durability guarantees (some data will be lost when a server fails).
1, which means that the producer gets an acknowledgement after the leader 
replica has received the data. This option provides better durability as the 
client waits until the server acknowledges the request as successful (only 
messages that were written to the now-dead leader but not yet replicated will 
be lost).
-1, which means that the producer gets an acknowledgement after all in-sync 
replicas have received the data. This option provides the best durability, we 
guarantee that no messages will be lost as long as at least one in sync replica 
remains.
{code}

[DocumentationKafka 0.8.1|http://kafka.apache.org/082/documentation.html]

FROM the link below, if you use kafka below 0.9.x, should set  
request.required.acks = 1 at least.When use new Producer above 0.9.x, should 
set acks = 1 at least.


> KafkaSink#init should set acks to 1,not 0
> -
>
> Key: HADOOP-14623
> URL: https://issues.apache.org/jira/browse/HADOOP-14623
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, tools
>Affects Versions: 3.0.0-alpha3
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
> Attachments: HADOOP-14623-001.patch
>
>
> {{KafkaSink}}#{{init}}  should set ack to *1* to make sure the message has 
> been written to the broker at least.
> current code list below:
> {code}
>   
> props.put("request.required.acks", "0");
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14623) KafkaSink#init should set acks to 1,not 0

2017-07-05 Thread Hongyuan Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074856#comment-16074856
 ] 

Hongyuan Li commented on HADOOP-14623:
--

[~jojochuang] I will try to test it, but I am not sure it can be realised.

The information about acks below is from the Kafka documentation:
{code}
request.required.acks   
This value controls when a produce request is considered completed. 
Specifically, how many other brokers must have committed the data to their log 
and acknowledged this to the leader? Typical values are
0, which means that the producer never waits for an acknowledgement from the 
broker (the same behavior as 0.7). This option provides the lowest latency but 
the weakest durability guarantees (some data will be lost when a server fails).
1, which means that the producer gets an acknowledgement after the leader 
replica has received the data. This option provides better durability as the 
client waits until the server acknowledges the request as successful (only 
messages that were written to the now-dead leader but not yet replicated will 
be lost).
-1, which means that the producer gets an acknowledgement after all in-sync 
replicas have received the data. This option provides the best durability, we 
guarantee that no messages will be lost as long as at least one in sync replica 
remains.
{code}

[Documentation Kafka 0.8.2|http://kafka.apache.org/082/documentation.html]

From the link above: if you use Kafka below 0.9.x, you should set 
request.required.acks = 1 at least. When using the new Producer API on 0.9.x or 
above, you should set acks = 1 at least.


> KafkaSink#init should set acks to 1,not 0
> -
>
> Key: HADOOP-14623
> URL: https://issues.apache.org/jira/browse/HADOOP-14623
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, tools
>Affects Versions: 3.0.0-alpha3
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
> Attachments: HADOOP-14623-001.patch
>
>
> {{KafkaSink}}#{{init}}  should set ack to *1* to make sure the message has 
> been written to the broker at least.
> current code list below:
> {code}
>   
> props.put("request.required.acks", "0");
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14444) New implementation of ftp and sftp filesystems

2017-07-05 Thread Hongyuan Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-1?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074831#comment-16074831
 ] 

Hongyuan Li commented on HADOOP-1:
--

1. If you renamed "AbstractFTPFileSystem" to anything else, I would be happier. 
This class name may make users think SFTP is a kind of FTP; however, it isn't.
2. FTPClient#retrieStream will open a new data connection, which is the reason 
why I dislike the seek ops.


> New implementation of ftp and sftp filesystems
> --
>
> Key: HADOOP-1
> URL: https://issues.apache.org/jira/browse/HADOOP-1
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Lukas Waldmann
>Assignee: Lukas Waldmann
> Attachments: HADOOP-1.2.patch, HADOOP-1.3.patch, 
> HADOOP-1.4.patch, HADOOP-1.5.patch, HADOOP-1.patch
>
>
> Current implementation of FTP and SFTP filesystems have severe limitations 
> and performance issues when dealing with high number of files. Mine patch 
> solve those issues and integrate both filesystems such a way that most of the 
> core functionality is common for both and therefore simplifying the 
> maintainability.
> The core features:
> * Support for HTTP/SOCKS proxies
> * Support for passive FTP
> * Support of connection pooling - new connection is not created for every 
> single command but reused from the pool.
> For huge number of files it shows order of magnitude performance improvement 
> over not pooled connections.
> * Caching of directory trees. For ftp you always need to list whole directory 
> whenever you ask information about particular file.
> Again for huge number of files it shows order of magnitude performance 
> improvement over not cached connections.
> * Support of keep alive (NOOP) messages to avoid connection drops
> * Support for Unix style or regexp wildcard glob - useful for listing a 
> particular files across whole directory tree
> * Support for reestablishing broken ftp data transfers - can happen 
> surprisingly often



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14623) KafkaSink#init should set acks to 1,not 0

2017-07-05 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074698#comment-16074698
 ] 

Wei-Chiu Chuang commented on HADOOP-14623:
--

Sorry, I don't know Kafka well enough to make an effective review. Is it 
possible to add a test to ensure it works as expected? Is there any 
tutorial/guide on the expected value of this property?

Thanks.

> KafkaSink#init should set acks to 1,not 0
> -
>
> Key: HADOOP-14623
> URL: https://issues.apache.org/jira/browse/HADOOP-14623
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, tools
>Affects Versions: 3.0.0-alpha3
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
> Attachments: HADOOP-14623-001.patch
>
>
> {{KafkaSink}}#{{init}}  should set ack to *1* to make sure the message has 
> been written to the broker at least.
> current code list below:
> {code}
>   
> props.put("request.required.acks", "0");
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14626) NoSuchMethodError in org.apache.hadoop.io.retry.RetryUtils.getDefaultRetryPolicy

2017-07-05 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074687#comment-16074687
 ] 

Wei-Chiu Chuang commented on HADOOP-14626:
--

Hello saurab, thanks for filing the JIRA.
Would you like to post a stack trace or elaborate on your bug report?

> NoSuchMethodError in 
> org.apache.hadoop.io.retry.RetryUtils.getDefaultRetryPolicy
> 
>
> Key: HADOOP-14626
> URL: https://issues.apache.org/jira/browse/HADOOP-14626
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: saurab
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14626) NoSuchMethodError in org.apache.hadoop.io.retry.RetryUtils.getDefaultRetryPolicy

2017-07-05 Thread saurab (JIRA)
saurab created HADOOP-14626:
---

 Summary: NoSuchMethodError in 
org.apache.hadoop.io.retry.RetryUtils.getDefaultRetryPolicy
 Key: HADOOP-14626
 URL: https://issues.apache.org/jira/browse/HADOOP-14626
 Project: Hadoop Common
  Issue Type: Bug
Reporter: saurab






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14624) Add GenericTestUtils.DelayAnswer that accept slf4j logger API

2017-07-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074605#comment-16074605
 ] 

Hadoop QA commented on HADOOP-14624:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
21s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 13m 21s{color} 
| {color:red} root generated 17 new + 1346 unchanged - 0 fixed = 1363 total 
(was 1346) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 36s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 1 new + 17 unchanged - 1 fixed = 18 total (was 18) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m 10s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 69m 10s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestGroupsCaching |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14624 |
| GITHUB PR | https://github.com/apache/hadoop/pull/245 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 686f3a7db1fe 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b17e655 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| javac | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12719/artifact/patchprocess/diff-compile-javac-root.txt
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12719/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12719/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12719/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12719/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add GenericTestUtils.DelayAnswer that accept slf4j logger API
> -
>
> Key: HADOOP-14624
> URL: https://issues.apache.org/jira/browse/HADOOP-14624
> Project: Hadoop Common
>  

[jira] [Commented] (HADOOP-14587) Use GenericTestUtils.setLogLevel when available in hadoop-common

2017-07-05 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074602#comment-16074602
 ] 

Akira Ajisaka commented on HADOOP-14587:


Thanks! One additional minor nit:
{code}
+assertTrue(toLevel("INFO") == Level.INFO);
+assertTrue(toLevel("NonExistLevel") == Level.DEBUG);
+assertTrue(toLevel("INFO", Level.TRACE) == Level.INFO);
+assertTrue(toLevel("NonExistLevel", Level.TRACE) == Level.TRACE);
{code}
Would you use assertEquals(expected, actual)?
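
For illustration, the same checks rewritten with 
{{assertEquals(expected, actual)}}, which also yields a readable failure 
message instead of a bare AssertionError:
{code}
assertEquals(Level.INFO, toLevel("INFO"));
assertEquals(Level.DEBUG, toLevel("NonExistLevel"));
assertEquals(Level.INFO, toLevel("INFO", Level.TRACE));
assertEquals(Level.TRACE, toLevel("NonExistLevel", Level.TRACE));
{code}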

> Use GenericTestUtils.setLogLevel when available in hadoop-common
> 
>
> Key: HADOOP-14587
> URL: https://issues.apache.org/jira/browse/HADOOP-14587
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Wenxin He
>Assignee: Wenxin He
> Attachments: HADOOP-14587.001.patch, HADOOP-14587.002.patch, 
> HADOOP-14587.003.patch, HADOOP-14587.004.patch, HADOOP-14587.005.patch, 
> HADOOP-14587.006.patch, HADOOP-14587.007.patch
>
>
> Based on Brahma's comment in HADOOP-14296, it's better to use 
> GenericTestUtils.setLogLevel as possible to make the migration easier.
> Based on Akira Ajisaka's comment in HADOOP-14549, create a separate jira for 
> hadoop-common change.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14587) Use GenericTestUtils.setLogLevel when available in hadoop-common

2017-07-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074599#comment-16074599
 ] 

Hadoop QA commented on HADOOP-14587:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 23 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
47s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 10m 47s{color} 
| {color:red} root generated 17 new + 1342 unchanged - 4 fixed = 1359 total 
(was 1346) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} hadoop-common-project: The patch generated 0 new + 
480 unchanged - 2 fixed = 480 total (was 482) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
45s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
28s{color} | {color:green} hadoop-nfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m 45s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14587 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12875734/HADOOP-14587.007.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 73ddc1cba8ef 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b17e655 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| javac | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12720/artifact/patchprocess/diff-compile-javac-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12720/testReport/ |
| modules | C: hadoop-common-project/hadoop-common 
hadoop-common-project/hadoop-nfs U: hadoop-common-project |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12720/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Use GenericTestUtils.setLogLevel when available in hadoop-common

[jira] [Commented] (HADOOP-13414) Hide Jetty Server version header in HTTP responses

2017-07-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074588#comment-16074588
 ] 

Hudson commented on HADOOP-13414:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11968 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11968/])
HADOOP-13414. Hide Jetty Server version header in HTTP responses. 
(vinayakumarb: rev a180ba408128b2d916822e78deb979bbcd1894da)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java
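
For readers unfamiliar with the mechanism involved, here is a minimal sketch of 
suppressing the version header on an embedded Jetty 9 server (this assumes 
Jetty 9's {{HttpConfiguration}} API; the actual {{HttpServer2}} change may 
differ in detail):

{code}
import org.eclipse.jetty.server.HttpConfiguration;
import org.eclipse.jetty.server.HttpConnectionFactory;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;

public class NoServerHeaderExample {
  public static void main(String[] args) throws Exception {
    Server server = new Server();
    HttpConfiguration httpConfig = new HttpConfiguration();
    // Suppress the "Server: Jetty(x.y.z)" header on every HTTP response.
    httpConfig.setSendServerVersion(false);
    ServerConnector connector =
        new ServerConnector(server, new HttpConnectionFactory(httpConfig));
    connector.setPort(8080);
    server.addConnector(connector);
    server.start();
    server.join();
  }
}
{code}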


> Hide Jetty Server version header in HTTP responses
> --
>
> Key: HADOOP-13414
> URL: https://issues.apache.org/jira/browse/HADOOP-13414
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Vinayakumar B
>Assignee: Surendra Singh Lilhore
> Fix For: 2.9.0, 3.0.0-alpha4, 2.8.2
>
> Attachments: Aftrerfix.png, BeforeFix.png, HADOOP-13414-001.patch, 
> HADOOP-13414-002.patch, HADOOP-13414-branch-2.patch
>
>
> Hide the Jetty Server version in the HTTP response header. Some security 
> analyzers would flag this as an issue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13414) Hide Jetty Server version header in HTTP responses

2017-07-05 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HADOOP-13414:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.2
   3.0.0-alpha4
   2.9.0
   Status: Resolved  (was: Patch Available)

+1 for the branch-2 patch.
Committed to trunk, branch-2 and branch-2.8.
Thanks [~surendrasingh] for the contribution.

> Hide Jetty Server version header in HTTP responses
> --
>
> Key: HADOOP-13414
> URL: https://issues.apache.org/jira/browse/HADOOP-13414
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Vinayakumar B
>Assignee: Surendra Singh Lilhore
> Fix For: 2.9.0, 3.0.0-alpha4, 2.8.2
>
> Attachments: Aftrerfix.png, BeforeFix.png, HADOOP-13414-001.patch, 
> HADOOP-13414-002.patch, HADOOP-13414-branch-2.patch
>
>
> Hide the Jetty Server version in the HTTP response header. Some security 
> analyzers would flag this as an issue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14444) New implementation of ftp and sftp filesystems

2017-07-05 Thread Lukas Waldmann (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-1?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074555#comment-16074555
 ] 

Lukas Waldmann commented on HADOOP-1:
-

Steve
Everything with Test* can be started without any endpoint configured, right? - 
correct.
Even the contract ITests will run without endpoint configuration if 
fs.contract.use.internal.server is set to true (params.xml).
cache based on host - I will add it
logging - ok, will check
contract code sharing - I will try, but so far I wasn't successful - JUnit 
won't pick up the parameters correctly
other comments - wait for the next patch :)

Hongyuan,
of course the protocols are different, but the nature of working with them is 
the same. The point is to share as much code as possible, so that when a 
change comes in you don't have to fix things all over the place.


> New implementation of ftp and sftp filesystems
> --
>
> Key: HADOOP-1
> URL: https://issues.apache.org/jira/browse/HADOOP-1
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Lukas Waldmann
>Assignee: Lukas Waldmann
> Attachments: HADOOP-1.2.patch, HADOOP-1.3.patch, 
> HADOOP-1.4.patch, HADOOP-1.5.patch, HADOOP-1.patch
>
>
> Current implementation of FTP and SFTP filesystems has severe limitations 
> and performance issues when dealing with a high number of files. My patch 
> solves those issues and integrates both filesystems in such a way that most 
> of the core functionality is common to both, thereby simplifying 
> maintainability.
> The core features:
> * Support for HTTP/SOCKS proxies
> * Support for passive FTP
> * Support for connection pooling - a new connection is not created for every 
> single command but is reused from the pool.
> For a huge number of files this shows an order-of-magnitude performance 
> improvement over unpooled connections.
> * Caching of directory trees. For FTP you always need to list the whole 
> directory whenever you ask for information about a particular file.
> Again, for a huge number of files this shows an order-of-magnitude 
> performance improvement over uncached connections.
> * Support for keep-alive (NOOP) messages to avoid connection drops
> * Support for Unix-style or regexp wildcard globs - useful for listing 
> particular files across the whole directory tree
> * Support for reestablishing broken ftp data transfers - which can happen 
> surprisingly often



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-12802) local FileContext does not rename .crc file

2017-07-05 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor reassigned HADOOP-12802:
-

Assignee: Andras Bokor

> local FileContext does not rename .crc file
> ---
>
> Key: HADOOP-12802
> URL: https://issues.apache.org/jira/browse/HADOOP-12802
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.2, 3.0.0-alpha1
>Reporter: Youngjoon Kim
>Assignee: Andras Bokor
>
> After running the following code, the "old" file is renamed to "new", but 
> ".old.crc" is not renamed to ".new.crc":
> {code}
> Path oldPath = new Path("/tmp/old");
> Path newPath = new Path("/tmp/new");
> Configuration conf = new Configuration();
> FileContext fc = FileContext.getLocalFSFileContext(conf);
> FSDataOutputStream out = fc.create(oldPath, EnumSet.of(CreateFlag.CREATE));
> out.close();
> fc.rename(oldPath, newPath);
> {code}
> On the other hand, the local FileSystem successfully renames the .crc file.
> {code}
> Path oldPath = new Path("/tmp/old");
> Path newPath = new Path("/tmp/new");
> Configuration conf = new Configuration();
> FileSystem fs = FileSystem.getLocal(conf);
> FSDataOutputStream out = fs.create(oldPath);
> out.close();
> fs.rename(oldPath, newPath);
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14620) S3A authentication failure for regions other than us-east-1

2017-07-05 Thread Ilya Fourmanov (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074551#comment-16074551
 ] 

Ilya Fourmanov commented on HADOOP-14620:
-

Upon further investigation it turned out that setting 
fs.s3a.bucket.dshbasebackup.endpoint=s3.eu-west-1.amazonaws.com seems to have 
no effect, as hadoop was going through the default endpoint s3.amazonaws.com. 
I'm on 2.7.3.
However, it turns out that using the default endpoint actually works for 
buckets hosted in eu-west-1, and authentication succeeds for them.

Going through the region-specific endpoint s3.eu-west-1.amazonaws.com fails 
with 403.



> S3A authentication failure for regions other than us-east-1
> ---
>
> Key: HADOOP-14620
> URL: https://issues.apache.org/jira/browse/HADOOP-14620
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Ilya Fourmanov
> Attachments: s3-403.txt
>
>
> hadoop fs s3a:// operations fail authentication for s3 buckets hosted in 
> regions other than the default us-east-1.
> Steps to reproduce:
> # create an s3 bucket in eu-west-1
> # using an IAM instance profile or fs.s3a.access.key/fs.s3a.secret.key, run 
> the following command:
> {code}
> hadoop --loglevel DEBUG  -D fs.s3a.endpoint=s3.eu-west-1.amazonaws.com  -ls  
> s3a://your-eu-west-1-hosted-bucket/ 
> {code}
> Expected behaviour:
> You will see a listing of the bucket.
> Actual behaviour:
> You will get a 403 Authentication Denied response from AWS S3.
> The reason is a mismatch between the string-to-sign provided by hadoop and 
> the one expected by AWS, as defined in 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/RESTAuthentication.html. 
> If you use https://aws.amazon.com/code/199 to analyse the StringToSignBytes 
> returned by AWS, you will see that AWS expects CanonicalizedResource to be in 
> the form 
> /your-eu-west-1-hosted-bucket{color:red}.s3.eu-west-1.amazonaws.com{color}/.
> Hadoop provides it as /your-eu-west-1-hosted-bucket/.
> Note that the AWS documentation doesn't explicitly state that the endpoint or 
> full dns address should be appended to CanonicalizedResource; however, 
> practice shows it is actually required.
> I've also submitted this to AWS for them to correct the behaviour or the 
> documentation.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-14622) Test failure in TestFilterFileSystem and TestHarFileSystem

2017-07-05 Thread Jichao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jichao Zhang resolved HADOOP-14622.
---
Resolution: Fixed

> Test failure in TestFilterFileSystem and TestHarFileSystem
> --
>
> Key: HADOOP-14622
> URL: https://issues.apache.org/jira/browse/HADOOP-14622
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.0.0-alpha3
>Reporter: Jichao Zhang
>Priority: Trivial
>
> Root Cause:
> Maybe a regression introduced by HADOOP-14395: a new method, appendFile, was 
> added to FileSystem, but the related unit tests in TestHarFileSystem and 
> TestFilterFileSystem were not updated.
> Errors:
> 1. org.apache.hadoop.fs.TestHarFileSystem-output.txt
>  checkInvalidPath: har://127.0.0.1/foo.har
>   2017-07-03 13:37:08,191 ERROR fs.TestHarFileSystem 
> (TestHarFileSystem.java:testInheritedMethodsImplemented(365)) - HarFileSystem 
> MUST implement protected org.apache.hadoop.fs.FSDataOutputStreamBuilder 
> org.apache.hadoop.fs.FileSystem.appendFile(org.apache.hadoop.fs.Path)
> 2. org.apache.hadoop.fs.TestFilterFileSystem-output.txt
> 2017-07-03 13:36:18,217 ERROR fs.FileSystem 
> (TestFilterFileSystem.java:testFilterFileSystem(161)) - FilterFileSystem MUST 
> implement protected org.apache.hadoop.fs.FSDataOutputStreamBuilder 
> org.apache.hadoop.fs.FileSystem.appendFile(org.apache.hadoop.fs.Path)
> ~



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14622) Test failure in TestFilterFileSystem and TestHarFileSystem

2017-07-05 Thread Jichao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074544#comment-16074544
 ] 

Jichao Zhang commented on HADOOP-14622:
---

Hi [~Hongyuan Li], thank you very much for your info. I will mark this as a 
duplicate of HADOOP-14538. 

> Test failure in TestFilterFileSystem and TestHarFileSystem
> --
>
> Key: HADOOP-14622
> URL: https://issues.apache.org/jira/browse/HADOOP-14622
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.0.0-alpha3
>Reporter: Jichao Zhang
>Priority: Trivial
>
> Root Cause:
> Maybe a regression introduced by HADOOP-14395: a new method, appendFile, was 
> added to FileSystem, but the related unit tests in TestHarFileSystem and 
> TestFilterFileSystem were not updated.
> Errors:
> 1. org.apache.hadoop.fs.TestHarFileSystem-output.txt
>  checkInvalidPath: har://127.0.0.1/foo.har
>   2017-07-03 13:37:08,191 ERROR fs.TestHarFileSystem 
> (TestHarFileSystem.java:testInheritedMethodsImplemented(365)) - HarFileSystem 
> MUST implement protected org.apache.hadoop.fs.FSDataOutputStreamBuilder 
> org.apache.hadoop.fs.FileSystem.appendFile(org.apache.hadoop.fs.Path)
> 2. org.apache.hadoop.fs.TestFilterFileSystem-output.txt
> 2017-07-03 13:36:18,217 ERROR fs.FileSystem 
> (TestFilterFileSystem.java:testFilterFileSystem(161)) - FilterFileSystem MUST 
> implement protected org.apache.hadoop.fs.FSDataOutputStreamBuilder 
> org.apache.hadoop.fs.FileSystem.appendFile(org.apache.hadoop.fs.Path)
> ~



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14444) New implementation of ftp and sftp filesystems

2017-07-05 Thread Hongyuan Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-1?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074526#comment-16074526
 ] 

Hongyuan Li commented on HADOOP-1:
--

I have implemented it with some necessary features.







> New implementation of ftp and sftp filesystems
> --
>
> Key: HADOOP-1
> URL: https://issues.apache.org/jira/browse/HADOOP-1
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Lukas Waldmann
>Assignee: Lukas Waldmann
> Attachments: HADOOP-1.2.patch, HADOOP-1.3.patch, 
> HADOOP-1.4.patch, HADOOP-1.5.patch, HADOOP-1.patch
>
>
> Current implementation of FTP and SFTP filesystems has severe limitations 
> and performance issues when dealing with a high number of files. My patch 
> solves those issues and integrates both filesystems in such a way that most 
> of the core functionality is common to both, thereby simplifying 
> maintainability.
> The core features:
> * Support for HTTP/SOCKS proxies
> * Support for passive FTP
> * Support for connection pooling - a new connection is not created for every 
> single command but is reused from the pool.
> For a huge number of files this shows an order-of-magnitude performance 
> improvement over unpooled connections.
> * Caching of directory trees. For FTP you always need to list the whole 
> directory whenever you ask for information about a particular file.
> Again, for a huge number of files this shows an order-of-magnitude 
> performance improvement over uncached connections.

[jira] [Commented] (HADOOP-14444) New implementation of ftp and sftp filesystems

2017-07-05 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-1?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074523#comment-16074523
 ] 

Steve Loughran commented on HADOOP-1:
-

I am watching this, but not putting any effort into looking at the code right 
now. I'm happy that the two of you are working together to come up with 
something which addresses your needs.

# You don't need to have every feature in immediately: get it to the level 
where it works slightly better than the current one, enough for it to sit 
alongside the older version for one release, then cut over once stable (s3a, 
wasb, and ADL all have a one-release-to-stabilise experience).
# Regarding caching, I'd go for a name like {{fs.ftp.cache.host}}, with the 
host value coming last. Otherwise you get into trouble with other options in 
the future if a hostname matches it.

Now, a quick scan through the latest patch



h2. Build

* all settings for things like java versions, artifact versions should be 
picked up from the base hadoop-project/pom.xml ... we need to manage everything 
in one place

h2. Tests

I like the tests; these are a key part of any new feature

* Use {{GenericTestUtils}} to work with logs; there are ongoing changes there 
for better SLF4J integration & log capture. Please avoid using log4j API calls 
directly
* Add a test timeout rule to {{TestAbstractFTPFileSystem}}, and name it 
{{AbstractFTPFileSystemTest}}. 
* Every test suite starting Test* should be able to be executed by 
yetus/jenkins, without any ftp server
* Everything with Test* can be started without any endpoint configured, right?
* Use {{ContractTestUtils}} to work with filesystems and assert about them 
(more diags on failure), especially for the {{assertPathExists()}} kind of 
assertion, which you can move to for things like {{testFileExists()}} (see the 
sketch after this list)
* and use SLF4J logging, not {{System.err}}
* All assertTrue/assertFalse asserts should have a meaningful string, ideally 
even assertEquals. One trick: have the toString() value of the fs provide some 
details on the connection, so you can include it in the asserts. Another: pull 
out things like {{assertChannelConnected()}} and have the text in one place
* {{TestConnectionPool.testGetChannelFromClosedFS}}: if the unexpected IOE is 
caught, make it the inner cause of the AssertionError raised. 
* Lots of duplication in the contract test createContract() calls... could 
that be shared somehow?
* Have some isolated tests for the cache
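
As a concrete illustration of the timeout-rule and {{ContractTestUtils}} points 
above, a minimal sketch under JUnit 4 (class and path names are hypothetical; 
{{assertPathExists}} is the real helper in hadoop-common's test artifacts):

{code}
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.Timeout;

import static org.apache.hadoop.fs.contract.ContractTestUtils.assertPathExists;

public class ExampleFTPSuite {
  // A hung FTP connection fails the test instead of hanging the whole build.
  @Rule
  public Timeout testTimeout = new Timeout(60000);

  private FileSystem fs; // obtained from the contract/test setup

  @Test
  public void testFileExists() throws IOException {
    // Contract-style assertion: on failure it reports far more detail
    // than a bare assertTrue(fs.exists(path)).
    assertPathExists(fs, "expected the uploaded file to be visible",
        new Path("/test/file"));
  }
}
{code}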


> New implementation of ftp and sftp filesystems
> --
>
> Key: HADOOP-1
> URL: https://issues.apache.org/jira/browse/HADOOP-1
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Lukas Waldmann
>Assignee: Lukas Waldmann
> Attachments: HADOOP-1.2.patch, HADOOP-1.3.patch, 
> HADOOP-1.4.patch, HADOOP-1.5.patch, HADOOP-1.patch
>
>
> Current implementation of FTP and SFTP filesystems has severe limitations 
> and performance issues when dealing with a high number of files. My patch 
> solves those issues and integrates both filesystems in such a way that most 
> of the core functionality is common to both, thereby simplifying 
> maintainability.
> The core features:
> * Support for HTTP/SOCKS proxies
> * Support for passive FTP
> * Support for connection pooling - a new connection is not created for every 
> single command but is reused from the pool.
> For a huge number of files this shows an order-of-magnitude performance 
> improvement over unpooled connections.
> * Caching of directory trees. For FTP you always need to list the whole 
> directory whenever you ask for information about a particular file.
> Again, for a huge number of files this shows an order-of-magnitude 
> performance improvement over uncached connections.
> * Support for keep-alive (NOOP) messages to avoid connection drops
> * Support for Unix-style or regexp wildcard globs - useful for listing 
> particular files across the whole directory tree
> * Support for reestablishing broken ftp data transfers - which can happen 
> surprisingly often



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14624) Add GenericTestUtils.DelayAnswer that accept slf4j logger API

2017-07-05 Thread Wenxin He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wenxin He updated HADOOP-14624:
---
Status: Patch Available  (was: Open)

> Add GenericTestUtils.DelayAnswer that accept slf4j logger API
> -
>
> Key: HADOOP-14624
> URL: https://issues.apache.org/jira/browse/HADOOP-14624
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Wenxin He
>Assignee: Wenxin He
> Attachments: HADOOP-14624.001.patch
>
>
> Split from HADOOP-14539.
> GenericTestUtils.DelayAnswer currently only accepts the commons-logging 
> logger API. As we are migrating the APIs to slf4j, the slf4j logger API 
> should be accepted as well.
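
A hypothetical sketch of the direction being proposed (not the actual patch): 
an {{Answer}} that takes an {{org.slf4j.Logger}} instead of commons-logging's 
{{Log}}:

{code}
import org.mockito.invocation.InvocationOnMock;
import org.mockito.stubbing.Answer;
import org.slf4j.Logger;

// Delay-style Mockito answer logging through slf4j rather than commons-logging.
class Slf4jDelayAnswer implements Answer<Object> {
  private final Logger log;

  Slf4jDelayAnswer(Logger log) {
    this.log = log;
  }

  @Override
  public Object answer(InvocationOnMock invocation) throws Throwable {
    log.info("DelayAnswer: proceeding with {}", invocation);
    return invocation.callRealMethod();
  }
}
{code}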



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14624) Add GenericTestUtils.DelayAnswer that accept slf4j logger API

2017-07-05 Thread Wenxin He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wenxin He updated HADOOP-14624:
---
Attachment: HADOOP-14624.001.patch

> Add GenericTestUtils.DelayAnswer that accept slf4j logger API
> -
>
> Key: HADOOP-14624
> URL: https://issues.apache.org/jira/browse/HADOOP-14624
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Wenxin He
>Assignee: Wenxin He
> Attachments: HADOOP-14624.001.patch
>
>
> Split from HADOOP-14539.
> GenericTestUtils.DelayAnswer currently only accepts the commons-logging 
> logger API. As we are migrating the APIs to slf4j, the slf4j logger API 
> should be accepted as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14587) Use GenericTestUtils.setLogLevel when available in hadoop-common

2017-07-05 Thread Wenxin He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wenxin He updated HADOOP-14587:
---
Attachment: HADOOP-14587.007.patch

> Use GenericTestUtils.setLogLevel when available in hadoop-common
> 
>
> Key: HADOOP-14587
> URL: https://issues.apache.org/jira/browse/HADOOP-14587
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Wenxin He
>Assignee: Wenxin He
> Attachments: HADOOP-14587.001.patch, HADOOP-14587.002.patch, 
> HADOOP-14587.003.patch, HADOOP-14587.004.patch, HADOOP-14587.005.patch, 
> HADOOP-14587.006.patch, HADOOP-14587.007.patch
>
>
> Based on Brahma's comment in HADOOP-14296, it's better to use 
> GenericTestUtils.setLogLevel as much as possible to make the migration easier.
> Based on Akira Ajisaka's comment in HADOOP-14549, this separate jira was 
> created for the hadoop-common change.
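
For context, a minimal sketch of the helper being standardized on (the 
commons-logging overload of {{GenericTestUtils.setLogLevel}}; the exact 
overload set has evolved across this series of JIRAs):

{code}
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.test.GenericTestUtils;
import org.apache.log4j.Level;

public class SetLogLevelExample {
  public static void main(String[] args) {
    // Adjust a component's verbosity through the shared helper instead of
    // touching the logging backend directly in each test.
    GenericTestUtils.setLogLevel(
        LogFactory.getLog("org.apache.hadoop.fs.FileSystem"), Level.DEBUG);
  }
}
{code}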



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14587) Use GenericTestUtils.setLogLevel when available in hadoop-common

2017-07-05 Thread Wenxin He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wenxin He updated HADOOP-14587:
---
Status: Open  (was: Patch Available)

> Use GenericTestUtils.setLogLevel when available in hadoop-common
> 
>
> Key: HADOOP-14587
> URL: https://issues.apache.org/jira/browse/HADOOP-14587
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Wenxin He
>Assignee: Wenxin He
> Attachments: HADOOP-14587.001.patch, HADOOP-14587.002.patch, 
> HADOOP-14587.003.patch, HADOOP-14587.004.patch, HADOOP-14587.005.patch, 
> HADOOP-14587.006.patch, HADOOP-14587.007.patch
>
>
> Based on Brahma's comment in HADOOP-14296, it's better to use 
> GenericTestUtils.setLogLevel as much as possible to make the migration easier.
> Based on Akira Ajisaka's comment in HADOOP-14549, this separate jira was 
> created for the hadoop-common change.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14587) Use GenericTestUtils.setLogLevel when available in hadoop-common

2017-07-05 Thread Wenxin He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wenxin He updated HADOOP-14587:
---
Attachment: (was: HADOOP-14587.007.patch)

> Use GenericTestUtils.setLogLevel when available in hadoop-common
> 
>
> Key: HADOOP-14587
> URL: https://issues.apache.org/jira/browse/HADOOP-14587
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Wenxin He
>Assignee: Wenxin He
> Attachments: HADOOP-14587.001.patch, HADOOP-14587.002.patch, 
> HADOOP-14587.003.patch, HADOOP-14587.004.patch, HADOOP-14587.005.patch, 
> HADOOP-14587.006.patch, HADOOP-14587.007.patch
>
>
> Based on Brahma's comment in HADOOP-14296, it's better to use 
> GenericTestUtils.setLogLevel as much as possible to make the migration easier.
> Based on Akira Ajisaka's comment in HADOOP-14549, this separate jira was 
> created for the hadoop-common change.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14587) Use GenericTestUtils.setLogLevel when available in hadoop-common

2017-07-05 Thread Wenxin He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wenxin He updated HADOOP-14587:
---
Status: Patch Available  (was: Open)

> Use GenericTestUtils.setLogLevel when available in hadoop-common
> 
>
> Key: HADOOP-14587
> URL: https://issues.apache.org/jira/browse/HADOOP-14587
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Wenxin He
>Assignee: Wenxin He
> Attachments: HADOOP-14587.001.patch, HADOOP-14587.002.patch, 
> HADOOP-14587.003.patch, HADOOP-14587.004.patch, HADOOP-14587.005.patch, 
> HADOOP-14587.006.patch, HADOOP-14587.007.patch
>
>
> Based on Brahma's comment in HADOOP-14296, it's better to use 
> GenericTestUtils.setLogLevel as much as possible to make the migration easier.
> Based on Akira Ajisaka's comment in HADOOP-14549, this separate jira was 
> created for the hadoop-common change.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14624) Add GenericTestUtils.DelayAnswer that accept slf4j logger API

2017-07-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074508#comment-16074508
 ] 

ASF GitHub Bot commented on HADOOP-14624:
-

GitHub user wenxinhe opened a pull request:

https://github.com/apache/hadoop/pull/245

HADOOP-14624. Add GenericTestUtils.DelayAnswer that accept slf4j logger API



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/wenxinhe/hadoop 
HADOOP-14624.GenericTestUtils.DelayAnswer

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/245.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #245


commit a128c8c8afc8989972f05e131c002d6d31f0cd83
Author: wenxinhe 
Date:   2017-07-05T09:31:43Z

HADOOP-14624. Add GenericTestUtils.DelayAnswer that accept slf4j logger API




> Add GenericTestUtils.DelayAnswer that accept slf4j logger API
> -
>
> Key: HADOOP-14624
> URL: https://issues.apache.org/jira/browse/HADOOP-14624
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Wenxin He
>Assignee: Wenxin He
>
> Split from HADOOP-14539.
> GenericTestUtils.DelayAnswer currently only accepts the commons-logging 
> logger API. As we are migrating the APIs to slf4j, the slf4j logger API 
> should be accepted as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14620) S3A authentication failure for regions other than us-east-1

2017-07-05 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074495#comment-16074495
 ] 

Steve Loughran commented on HADOOP-14620:
-

The sole difference is that in the second case the per-bucket option is being 
copied in on top of the default fs.s3a.endpoint option... we've added that 
precisely so you can define things like different endpoints for different 
buckets. The default endpoint value in {{fs.s3a.endpoint}} is the one which 
gets used when there isn't a per-bucket override going on.

If you've got the time, stepping through what's going on in S3A would be 
useful. I suspect maybe there's a default value somewhere in your site configs, 
or indeed the core-default one, which is not letting the one you've set on the 
classpath through. Of course, you now have an immediate fix to your problem...
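
To make the two forms concrete, a sketch of how the options relate (bucket name 
as used in this thread; per-bucket keys require Hadoop 2.8+):

{code}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class PerBucketEndpointExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("fs.s3a.endpoint", "s3.amazonaws.com"); // global default
    // On filesystem creation this value is copied over fs.s3a.endpoint,
    // but only for s3a://dshbasebackup/ :
    conf.set("fs.s3a.bucket.dshbasebackup.endpoint",
        "s3.eu-west-1.amazonaws.com");
    FileSystem fs = FileSystem.get(URI.create("s3a://dshbasebackup/"), conf);
    System.out.println(fs.getUri());
  }
}
{code}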




> S3A authentication failure for regions other than us-east-1
> ---
>
> Key: HADOOP-14620
> URL: https://issues.apache.org/jira/browse/HADOOP-14620
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Ilya Fourmanov
> Attachments: s3-403.txt
>
>
> hadoop fs s3a:// operations fail authentication for s3 buckets hosted in 
> regions other than the default us-east-1.
> Steps to reproduce:
> # create an s3 bucket in eu-west-1
> # using an IAM instance profile or fs.s3a.access.key/fs.s3a.secret.key, run 
> the following command:
> {code}
> hadoop --loglevel DEBUG  -D fs.s3a.endpoint=s3.eu-west-1.amazonaws.com  -ls  
> s3a://your-eu-west-1-hosted-bucket/ 
> {code}
> Expected behaviour:
> You will see a listing of the bucket.
> Actual behaviour:
> You will get a 403 Authentication Denied response from AWS S3.
> The reason is a mismatch between the string-to-sign provided by hadoop and 
> the one expected by AWS, as defined in 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/RESTAuthentication.html. 
> If you use https://aws.amazon.com/code/199 to analyse the StringToSignBytes 
> returned by AWS, you will see that AWS expects CanonicalizedResource to be in 
> the form 
> /your-eu-west-1-hosted-bucket{color:red}.s3.eu-west-1.amazonaws.com{color}/.
> Hadoop provides it as /your-eu-west-1-hosted-bucket/.
> Note that the AWS documentation doesn't explicitly state that the endpoint or 
> full dns address should be appended to CanonicalizedResource; however, 
> practice shows it is actually required.
> I've also submitted this to AWS for them to correct the behaviour or the 
> documentation.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14625) error message in S3AUtils.getServerSideEncryptionKey() needs to expand property constant

2017-07-05 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-14625:
---

 Summary: error message in S3AUtils.getServerSideEncryptionKey() 
needs to expand property constant
 Key: HADOOP-14625
 URL: https://issues.apache.org/jira/browse/HADOOP-14625
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 2.8.0
Reporter: Steve Loughran
Priority: Trivial


The error message in {{getServerSideEncryptionKey}} says that the property 
isn't valid, but it doesn't actually expand the constant defining its name:

{code}
LOG.error("Cannot retrieve SERVER_SIDE_ENCRYPTION_KEY", e);
{code}
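
A minimal sketch of the implied fix (assuming the constant holds the 
configuration property name; the final patch may word the message differently):

{code}
LOG.error("Cannot retrieve " + SERVER_SIDE_ENCRYPTION_KEY, e);
{code}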



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14444) New implementation of ftp and sftp filesystems

2017-07-05 Thread Hongyuan Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-1?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074478#comment-16074478
 ] 

Hongyuan Li commented on HADOOP-1:
--

1) FTP is very different from SFTP. SFTP relies on the SSH protocol, which 
means it can get more accurate info than FTP. FTP allows more connections than 
SFTP.
2) I read your code because I had planned to implement this myself when I have 
time.

> New implementation of ftp and sftp filesystems
> --
>
> Key: HADOOP-1
> URL: https://issues.apache.org/jira/browse/HADOOP-1
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Lukas Waldmann
>Assignee: Lukas Waldmann
> Attachments: HADOOP-1.2.patch, HADOOP-1.3.patch, 
> HADOOP-1.4.patch, HADOOP-1.5.patch, HADOOP-1.patch
>
>
> Current implementation of FTP and SFTP filesystems has severe limitations 
> and performance issues when dealing with a high number of files. My patch 
> solves those issues and integrates both filesystems in such a way that most 
> of the core functionality is common to both, thereby simplifying 
> maintainability.
> The core features:
> * Support for HTTP/SOCKS proxies
> * Support for passive FTP
> * Support for connection pooling - a new connection is not created for every 
> single command but is reused from the pool.
> For a huge number of files this shows an order-of-magnitude performance 
> improvement over unpooled connections.
> * Caching of directory trees. For FTP you always need to list the whole 
> directory whenever you ask for information about a particular file.
> Again, for a huge number of files this shows an order-of-magnitude 
> performance improvement over uncached connections.
> * Support for keep-alive (NOOP) messages to avoid connection drops
> * Support for Unix-style or regexp wildcard globs - useful for listing 
> particular files across the whole directory tree
> * Support for reestablishing broken ftp data transfers - which can happen 
> surprisingly often



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13743) error message in AzureNativeFileSystemStore.connectUsingAnonymousCredentials has too many spaces

2017-07-05 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13743:

Issue Type: Sub-task  (was: Bug)
Parent: HADOOP-14552

> error message in AzureNativeFileSystemStore.connectUsingAnonymousCredentials 
> has too many spaces
> 
>
> Key: HADOOP-13743
> URL: https://issues.apache.org/jira/browse/HADOOP-13743
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Trivial
> Attachments: HADOOP-13743-branch-2-001.patch, 
> HADOOP-14373-branch-2-002.patch
>
>
> The error message on a failed hadoop fs -ls command against an unauthed azure 
> container has an extra space in {{" them  in"}}
> {code}
> ls: org.apache.hadoop.fs.azure.AzureException: Unable to access container 
> demo in account example.blob.core.windows.net using anonymous credentials, 
> and no credentials found for them  in the configuration.
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14444) New implementation of ftp and sftp filesystems

2017-07-05 Thread Lukas Waldmann (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-1?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074462#comment-16074462
 ] 

Lukas Waldmann commented on HADOOP-1:
-

1) Seek is necessary for interrupted transfers, and it doesn't disconnect in 
the sense of interrupting the TCP connection and logging in again. It closes 
the existing data stream, while keeping the control connection, and opens it 
again at the correct position (see the sketch after this comment). Of course 
it's not as efficient as a local fs, but it does the job.
2) I beg to differ. The semantics of both protocols are the same. How data are 
actually transferred is only an implementation detail. You can see the common 
package actually contains the majority of the code.
3) I will try to put it in.
4) No - the connection is returned to the pool and reused. For sftp it is 
actually the greatest time saver, as initiating an ssh connection is really 
expensive.
5) Thank you. It's always nice to see somebody is interested in work you have 
done. And so far you are the only one having a look at the code. :)
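
As an illustration of the restart mechanism described in point 1, a sketch 
using the Apache Commons Net client API (illustrative only, not the patch 
itself):

{code}
import java.io.InputStream;
import org.apache.commons.net.ftp.FTPClient;

public class FtpSeekSketch {
  // Reopen only the data stream at an offset; the control connection
  // (and hence the login session) stays up the whole time.
  static InputStream seekTo(FTPClient client, String path, long targetPos)
      throws Exception {
    client.setRestartOffset(targetPos);       // sends REST <offset> before RETR
    return client.retrieveFileStream(path);   // new data connection at offset
  }
}
{code}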

> New implementation of ftp and sftp filesystems
> --
>
> Key: HADOOP-1
> URL: https://issues.apache.org/jira/browse/HADOOP-1
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Lukas Waldmann
>Assignee: Lukas Waldmann
> Attachments: HADOOP-1.2.patch, HADOOP-1.3.patch, 
> HADOOP-1.4.patch, HADOOP-1.5.patch, HADOOP-1.patch
>
>
> Current implementation of FTP and SFTP filesystems has severe limitations 
> and performance issues when dealing with a high number of files. My patch 
> solves those issues and integrates both filesystems in such a way that most 
> of the core functionality is common to both, thereby simplifying 
> maintainability.
> The core features:
> * Support for HTTP/SOCKS proxies
> * Support for passive FTP
> * Support for connection pooling - a new connection is not created for every 
> single command but is reused from the pool.
> For a huge number of files this shows an order-of-magnitude performance 
> improvement over unpooled connections.
> * Caching of directory trees. For FTP you always need to list the whole 
> directory whenever you ask for information about a particular file.
> Again, for a huge number of files this shows an order-of-magnitude 
> performance improvement over uncached connections.
> * Support for keep-alive (NOOP) messages to avoid connection drops
> * Support for Unix-style or regexp wildcard globs - useful for listing 
> particular files across the whole directory tree
> * Support for reestablishing broken ftp data transfers - which can happen 
> surprisingly often



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14620) S3A authentication failure for regions other than us-east-1

2017-07-05 Thread Ilya Fourmanov (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074461#comment-16074461
 ] 

Ilya Fourmanov commented on HADOOP-14620:
-

That's extremely interesting.
So 
{code}
hadoop  fs -D fs.s3a.endpoint=s3.eu-west-1.amazonaws.com -ls 
s3a://dshbasebackup/
{code}
fails for me with 403 as described above.

However, if I use the format proposed by [~ste...@apache.org]:
{code}
hadoop  fs -D fs.s3a.bucket.dshbasebackup.endpoint=s3.eu-west-1.amazonaws.com 
-ls s3a://dshbasebackup/
{code}
it works as expected. Now, what's the difference between those two formats? 


> S3A authentication failure for regions other than us-east-1
> ---
>
> Key: HADOOP-14620
> URL: https://issues.apache.org/jira/browse/HADOOP-14620
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Ilya Fourmanov
> Attachments: s3-403.txt
>
>
> hadoop fs s3a:// operations fail authentication for s3 buckets hosted in 
> regions other than the default us-east-1.
> Steps to reproduce:
> # create an s3 bucket in eu-west-1
> # using an IAM instance profile or fs.s3a.access.key/fs.s3a.secret.key, run 
> the following command:
> {code}
> hadoop --loglevel DEBUG  -D fs.s3a.endpoint=s3.eu-west-1.amazonaws.com  -ls  
> s3a://your-eu-west-1-hosted-bucket/ 
> {code}
> Expected behaviour:
> You will see a listing of the bucket.
> Actual behaviour:
> You will get a 403 Authentication Denied response from AWS S3.
> The reason is a mismatch between the string-to-sign provided by hadoop and 
> the one expected by AWS, as defined in 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/RESTAuthentication.html. 
> If you use https://aws.amazon.com/code/199 to analyse the StringToSignBytes 
> returned by AWS, you will see that AWS expects CanonicalizedResource to be in 
> the form 
> /your-eu-west-1-hosted-bucket{color:red}.s3.eu-west-1.amazonaws.com{color}/.
> Hadoop provides it as /your-eu-west-1-hosted-bucket/.
> Note that the AWS documentation doesn't explicitly state that the endpoint or 
> full dns address should be appended to CanonicalizedResource; however, 
> practice shows it is actually required.
> I've also submitted this to AWS for them to correct the behaviour or the 
> documentation.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14623) KafkaSink#init should set acks to 1,not 0

2017-07-05 Thread Hongyuan Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hongyuan Li updated HADOOP-14623:
-
Summary: KafkaSink#init should set acks to 1, not 0  (was: KafkaSink#init 
should set ack to 1)

> KafkaSink#init should set acks to 1, not 0
> -
>
> Key: HADOOP-14623
> URL: https://issues.apache.org/jira/browse/HADOOP-14623
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, tools
>Affects Versions: 3.0.0-alpha3
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
> Attachments: HADOOP-14623-001.patch
>
>
> {{KafkaSink}}#{{init}}  should set ack to *1* to make sure the message has 
> at least been written to the broker.
> The current code is listed below:
> {code}
>   
> props.put("request.required.acks", "0");
> {code}
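
For context, a minimal sketch of the proposed setting and its semantics 
(property name as in the legacy Kafka producer configuration already used by 
{{KafkaSink}}):

{code}
import java.util.Properties;

public class KafkaAcksExample {
  public static void main(String[] args) {
    Properties props = new Properties();
    // acks=0: fire-and-forget; the producer never learns whether the broker
    //         stored the record, so metrics can be dropped silently.
    // acks=1: the send completes only after the leader has appended the
    //         record, so at least one broker holds the message.
    props.put("request.required.acks", "1");
  }
}
{code}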



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14623) KafkaSink#init should set ack to 1

2017-07-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074439#comment-16074439
 ] 

Hadoop QA commented on HADOOP-14623:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
14s{color} | {color:green} hadoop-kafka in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 20m 41s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14623 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12875716/HADOOP-14623-001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux adb90f1ba96d 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b17e655 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12718/testReport/ |
| modules | C: hadoop-tools/hadoop-kafka U: hadoop-tools/hadoop-kafka |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12718/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> KafkaSink#init should set ack to 1
> --
>
> Key: HADOOP-14623
> URL: https://issues.apache.org/jira/browse/HADOOP-14623
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, tools
>Affects Versions: 3.0.0-alpha3
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
> Attachments: HADOOP-14623-001.patch
>
>
> {{KafkaSink}}#{{init}}  should set ack to *1* to make sure the message has 
> at least been written to the broker.
> The current code is listed below:
> {code}
>   
> props.put("request.required.acks", "0");
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HADOOP-14587) Use GenericTestUtils.setLogLevel when available in hadoop-common

2017-07-05 Thread Wenxin He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wenxin He updated HADOOP-14587:
---
Status: Patch Available  (was: Open)

https://builds.apache.org/job/PreCommit-HADOOP-Build/ doesn't work; trying to 
kick it again.

> Use GenericTestUtils.setLogLevel when available in hadoop-common
> 
>
> Key: HADOOP-14587
> URL: https://issues.apache.org/jira/browse/HADOOP-14587
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Wenxin He
>Assignee: Wenxin He
> Attachments: HADOOP-14587.001.patch, HADOOP-14587.002.patch, 
> HADOOP-14587.003.patch, HADOOP-14587.004.patch, HADOOP-14587.005.patch, 
> HADOOP-14587.006.patch, HADOOP-14587.007.patch
>
>
> Based on Brahma's comment in HADOOP-14296, it's better to use 
> GenericTestUtils.setLogLevel as much as possible to make the migration easier.
> Based on Akira Ajisaka's comment in HADOOP-14549, this separate jira was 
> created for the hadoop-common change.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14587) Use GenericTestUtils.setLogLevel when available in hadoop-common

2017-07-05 Thread Wenxin He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wenxin He updated HADOOP-14587:
---
Status: Open  (was: Patch Available)

> Use GenericTestUtils.setLogLevel when available in hadoop-common
> 
>
> Key: HADOOP-14587
> URL: https://issues.apache.org/jira/browse/HADOOP-14587
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Wenxin He
>Assignee: Wenxin He
> Attachments: HADOOP-14587.001.patch, HADOOP-14587.002.patch, 
> HADOOP-14587.003.patch, HADOOP-14587.004.patch, HADOOP-14587.005.patch, 
> HADOOP-14587.006.patch, HADOOP-14587.007.patch
>
>
> Based on Brahma's comment in HADOOP-14296, it's better to use 
> GenericTestUtils.setLogLevel as much as possible to make the migration easier.
> Based on Akira Ajisaka's comment in HADOOP-14549, this separate jira was 
> created for the hadoop-common change.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14623) KafkaSink#init should set ack to 1

2017-07-05 Thread Hongyuan Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hongyuan Li updated HADOOP-14623:
-
Status: Patch Available  (was: Open)

> KafkaSink#init should set ack to 1
> --
>
> Key: HADOOP-14623
> URL: https://issues.apache.org/jira/browse/HADOOP-14623
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, tools
>Affects Versions: 3.0.0-alpha3
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
> Attachments: HADOOP-14623-001.patch
>
>
> {{KafkaSink}}#{{init}}  should set ack to *1* to make sure the message has 
> at least been written to the broker.
> The current code is listed below:
> {code}
>   
> props.put("request.required.acks", "0");
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14624) Add GenericTestUtils.DelayAnswer that accept slf4j logger API

2017-07-05 Thread Wenxin He (JIRA)
Wenxin He created HADOOP-14624:
--

 Summary: Add GenericTestUtils.DelayAnswer that accept slf4j logger 
API
 Key: HADOOP-14624
 URL: https://issues.apache.org/jira/browse/HADOOP-14624
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Wenxin He
Assignee: Wenxin He


Split from HADOOP-14539.
GenericTestUtils.DelayAnswer currently only accepts the commons-logging logger 
API. As we are migrating the APIs to slf4j, the slf4j logger API should be 
accepted as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14623) KafkaSink#init should set ack to 1

2017-07-05 Thread Hongyuan Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074394#comment-16074394
 ] 

Hongyuan Li edited comment on HADOOP-14623 at 7/5/17 8:07 AM:
--

ping [~ajisakaa] 、 [~jojochuang] for code review.


was (Author: hongyuan li):
ping [~ajisakaa] [~jojochuang] for code review.

> KafkaSink#init should set ack to 1
> --
>
> Key: HADOOP-14623
> URL: https://issues.apache.org/jira/browse/HADOOP-14623
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, tools
>Affects Versions: 3.0.0-alpha3
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
> Attachments: HADOOP-14623-001.patch
>
>
> {{KafkaSink}}#{{init}}  should set ack to *1* to make sure the message has 
> at least been written to the broker.
> The current code is listed below:
> {code}
>   
> props.put("request.required.acks", "0");
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14623) KafkaSink#init should set ack to 1

2017-07-05 Thread Hongyuan Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074394#comment-16074394
 ] 

Hongyuan Li commented on HADOOP-14623:
--

ping [~ajisakaa] [~jojochuang] for code review.

> KafkaSink#init should set ack to 1
> --
>
> Key: HADOOP-14623
> URL: https://issues.apache.org/jira/browse/HADOOP-14623
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, tools
>Affects Versions: 3.0.0-alpha3
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
> Attachments: HADOOP-14623-001.patch
>
>
> {{KafkaSink}}#{{init}} should set ack to *1* to make sure the message has 
> been written to the broker at least once.
> The current code is listed below:
> {code}
> props.put("request.required.acks", "0");
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14623) KafkaSink#init should set ack to 1

2017-07-05 Thread Hongyuan Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hongyuan Li updated HADOOP-14623:
-
Description: 
{{KafkaSink}}#{{init}} should set ack to *1* to make sure the message has been 
written to the broker at least once.

The current code is listed below:

{code}
props.put("request.required.acks", "0");
{code}

  was:
{{KafkaSink}}#{{init}} should set ack to 1 to make sure the message has been 
written to the broker at least once.
The current code is listed below:

{code}
props.put("request.required.acks", "0");
{code}


> KafkaSink#init should set ack to 1
> --
>
> Key: HADOOP-14623
> URL: https://issues.apache.org/jira/browse/HADOOP-14623
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, tools
>Affects Versions: 3.0.0-alpha3
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
> Attachments: HADOOP-14623-001.patch
>
>
> {{KafkaSink}}#{{init}} should set ack to *1* to make sure the message has 
> been written to the broker at least once.
> The current code is listed below:
> {code}
> props.put("request.required.acks", "0");
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14623) KafkaSink#init should set ack to 1

2017-07-05 Thread Hongyuan Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hongyuan Li updated HADOOP-14623:
-
Component/s: tools
 common

> KafkaSink#init should set ack to 1
> --
>
> Key: HADOOP-14623
> URL: https://issues.apache.org/jira/browse/HADOOP-14623
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, tools
>Affects Versions: 3.0.0-alpha3
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
> Attachments: HADOOP-14623-001.patch
>
>
> {{KafkaSink}}#{{init}} should set ack to 1 to make sure the message has been 
> written to the broker at least once.
> The current code is listed below:
> {code}
> props.put("request.required.acks", "0");
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14623) KafkaSink#init should set ack to 1

2017-07-05 Thread Hongyuan Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hongyuan Li updated HADOOP-14623:
-
Affects Version/s: 3.0.0-alpha3

> KafkaSink#init should set ack to 1
> --
>
> Key: HADOOP-14623
> URL: https://issues.apache.org/jira/browse/HADOOP-14623
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha3
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
> Attachments: HADOOP-14623-001.patch
>
>
> {{KafkaSink}}#{{init}} should set ack to 1 to make sure the message has been 
> written to the broker at least once.
> The current code is listed below:
> {code}
> props.put("request.required.acks", "0");
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13786) Add S3Guard committer for zero-rename commits to S3 endpoints

2017-07-05 Thread Ewan Higgs (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074379#comment-16074379
 ] 

Ewan Higgs commented on HADOOP-13786:
-

[~ste...@apache.org], I took a look at your GitHub branch, but it's interleaved 
with patches from different branches, which makes it hard to follow which parts 
are changing for HADOOP-13786. If you could, please upload an up-to-date patch 
and I'll resume reviewing. Thanks!

> Add S3Guard committer for zero-rename commits to S3 endpoints
> -
>
> Key: HADOOP-13786
> URL: https://issues.apache.org/jira/browse/HADOOP-13786
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13786-HADOOP-13345-001.patch, 
> HADOOP-13786-HADOOP-13345-002.patch, HADOOP-13786-HADOOP-13345-003.patch, 
> HADOOP-13786-HADOOP-13345-004.patch, HADOOP-13786-HADOOP-13345-005.patch, 
> HADOOP-13786-HADOOP-13345-006.patch, HADOOP-13786-HADOOP-13345-006.patch, 
> HADOOP-13786-HADOOP-13345-007.patch, HADOOP-13786-HADOOP-13345-009.patch, 
> HADOOP-13786-HADOOP-13345-010.patch, HADOOP-13786-HADOOP-13345-011.patch, 
> HADOOP-13786-HADOOP-13345-012.patch, HADOOP-13786-HADOOP-13345-013.patch, 
> HADOOP-13786-HADOOP-13345-015.patch, HADOOP-13786-HADOOP-13345-016.patch, 
> HADOOP-13786-HADOOP-13345-017.patch, HADOOP-13786-HADOOP-13345-018.patch, 
> HADOOP-13786-HADOOP-13345-019.patch, HADOOP-13786-HADOOP-13345-020.patch, 
> HADOOP-13786-HADOOP-13345-021.patch, HADOOP-13786-HADOOP-13345-022.patch, 
> HADOOP-13786-HADOOP-13345-023.patch, HADOOP-13786-HADOOP-13345-024.patch, 
> HADOOP-13786-HADOOP-13345-025.patch, HADOOP-13786-HADOOP-13345-026.patch, 
> HADOOP-13786-HADOOP-13345-027.patch, HADOOP-13786-HADOOP-13345-028.patch, 
> HADOOP-13786-HADOOP-13345-028.patch, HADOOP-13786-HADOOP-13345-029.patch, 
> HADOOP-13786-HADOOP-13345-030.patch, HADOOP-13786-HADOOP-13345-031.patch, 
> HADOOP-13786-HADOOP-13345-032.patch, HADOOP-13786-HADOOP-13345-033.patch, 
> objectstore.pdf, s3committer-master.zip
>
>
> A goal of this code is "support O(1) commits to S3 repositories in the 
> presence of failures". Implement it, including whatever is needed to 
> demonstrate the correctness of the algorithm (that is, assuming that S3Guard 
> provides a consistent view of the presence/absence of blobs, show that we can 
> commit directly).
> We consider ourselves free to expose the blobstore-ness of the S3 output 
> streams (i.e. data is not visible until close()), if we need to use that to 
> allow us to abort commit operations.
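
To make the zero-rename goal concrete: the commit primitive such a committer 
can build on is S3's multipart upload, where data is written to the final 
destination up front and the "commit" is a single completion call. A hedged 
sketch with the AWS SDK for Java (bucket, key, and payload are illustrative; 
this is not the committer's actual code):

{code}
// Illustrative only: multipart upload as an O(1), abortable commit primitive.
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.*;
import java.io.ByteArrayInputStream;
import java.util.Collections;

public class ZeroRenameCommitSketch {
  public static void main(String[] args) {
    AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
    String bucket = "example-bucket";            // assumed bucket
    String key = "output/part-00000";            // assumed final destination

    // Task side: start an upload straight at the final destination.
    String uploadId = s3.initiateMultipartUpload(
        new InitiateMultipartUploadRequest(bucket, key)).getUploadId();

    byte[] data = "task output".getBytes();
    UploadPartResult part = s3.uploadPart(new UploadPartRequest()
        .withBucketName(bucket).withKey(key).withUploadId(uploadId)
        .withPartNumber(1)
        .withInputStream(new ByteArrayInputStream(data))
        .withPartSize(data.length));

    // Job commit: one metadata operation makes the object visible; no rename.
    s3.completeMultipartUpload(new CompleteMultipartUploadRequest(
        bucket, key, uploadId,
        Collections.singletonList(part.getPartETag())));

    // On failure the job could instead abort, and nothing ever appears:
    // s3.abortMultipartUpload(
    //     new AbortMultipartUploadRequest(bucket, key, uploadId));
  }
}
{code}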



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14623) KafkaSink#init should set ack to 1

2017-07-05 Thread Hongyuan Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hongyuan Li updated HADOOP-14623:
-
Attachment: HADOOP-14623-001.patch

> KafkaSink#init should set ack to 1
> --
>
> Key: HADOOP-14623
> URL: https://issues.apache.org/jira/browse/HADOOP-14623
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
> Attachments: HADOOP-14623-001.patch
>
>
> {{KafkaSink}}#{{init}} should set ack to 1 to make sure the message has been 
> written to the broker at least once.
> The current code is listed below:
> {code}
> props.put("request.required.acks", "0");
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-14623) KafkaSink#init should set ack to 1

2017-07-05 Thread Hongyuan Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hongyuan Li reassigned HADOOP-14623:


Assignee: Hongyuan Li

> KafkaSink#init should set ack to 1
> --
>
> Key: HADOOP-14623
> URL: https://issues.apache.org/jira/browse/HADOOP-14623
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
>
> {{KafkaSink}}#{{init}} should set ack to 1 to make sure the message has been 
> written to the broker at least once.
> The current code is listed below:
> {code}
> props.put("request.required.acks", "0");
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14623) KafkaSink#init should set ack to 1

2017-07-05 Thread Hongyuan Li (JIRA)
Hongyuan Li created HADOOP-14623:


 Summary: KafkaSink#init should set ack to 1
 Key: HADOOP-14623
 URL: https://issues.apache.org/jira/browse/HADOOP-14623
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Hongyuan Li


{{KafkaSink}}#{{init}} should set ack to 1 to make sure the message has been 
written to the broker at least once.
The current code is listed below:

{code}
props.put("request.required.acks", "0");
{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14622) Test failure in TestFilterFileSystem and TestHarFileSystem

2017-07-05 Thread Hongyuan Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074351#comment-16074351
 ] 

Hongyuan Li commented on HADOOP-14622:
--

{{HarFileSystem}}#{{appendFile}} has been implemented in the latest code on 
trunk.

> Test failure in TestFilterFileSystem and TestHarFileSystem
> --
>
> Key: HADOOP-14622
> URL: https://issues.apache.org/jira/browse/HADOOP-14622
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.0.0-alpha3
>Reporter: Jichao Zhang
>Priority: Trivial
>
> Root Cause:
> Likely a regression introduced by HADOOP-14395: it added the new appendFile 
> method to FileSystem but did not update the related unit tests in 
> TestHarFileSystem and TestFilterFileSystem.
> Errors:
> 1. org.apache.hadoop.fs.TestHarFileSystem-output.txt
>  checkInvalidPath: har://127.0.0.1/foo.har
>   2017-07-03 13:37:08,191 ERROR fs.TestHarFileSystem 
> (TestHarFileSystem.java:testInheritedMethodsImplemented(365)) - HarFileSystem 
> MUST implement protected org.apache.hadoop.fs.FSDataOutputStreamBuilder 
> org.apache.hadoop.fs.FileSystem.appendFile(org.apache.hadoop.fs.Path)
> 2. org.apache.hadoop.fs.TestFilterFileSystem-output.txt
> 2017-07-03 13:36:18,217 ERROR fs.FileSystem 
> (TestFilterFileSystem.java:testFilterFileSystem(161)) - FilterFileSystem MUST 
> implement protected org.apache.hadoop.fs.FSDataOutputStreamBuilder 
> org.apache.hadoop.fs.FileSystem.appendFile(org.apache.hadoop.fs.Path)
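
Both tests enforce this contract by reflection: every inheritable FileSystem 
method must be re-declared by the wrapper class unless explicitly excluded, so 
any new method added to FileSystem trips them immediately. A simplified sketch 
of that check (names are illustrative; the real tests also consult an 
exclusion interface):

{code}
// Simplified sketch of the reflection check these tests perform: every
// public/protected instance method declared on the base class must be
// re-declared by the wrapper class.
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;

public class OverrideCheckSketch {
  static void checkOverrides(Class<?> base, Class<?> wrapper) {
    for (Method m : base.getDeclaredMethods()) {
      int mods = m.getModifiers();
      if (Modifier.isStatic(mods) || Modifier.isPrivate(mods)) {
        continue;  // only the inheritable API surface matters
      }
      try {
        wrapper.getDeclaredMethod(m.getName(), m.getParameterTypes());
      } catch (NoSuchMethodException e) {
        // The failure mode in the logs above: a method such as
        // appendFile(Path) exists on FileSystem but not on the wrapper.
        System.err.println(wrapper.getSimpleName() + " MUST implement " + m);
      }
    }
  }
}
{code}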



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14444) New implementation of ftp and sftp filesystems

2017-07-05 Thread Hongyuan Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-1?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074285#comment-16074285
 ] 

Hongyuan Li commented on HADOOP-1:
--

1. seek() causes the client to disconnect and reconnect, so I don't think 
implementing it that way is a good idea.
2. {{AbstractFTPFileSystem}} is described as an abstract base for FTP-like 
FileSystems. Sorry to interrupt, but the FTP protocol is not similar to the 
SFTP protocol at all; all the two have in common is that a username and 
password are used to connect to the ftp/sftp server before performing 
operations. I suggest using another name.
3. About parsing the user and group, the code looks like the following, where 
{{sftpFile}} is an LsEntry instance:
{code}
{
  String longName = sftpFile.getLongname();
  String[] splitLongName = longName.split(" ");
  String user = getUserOrGroup("user", splitLongName);
  String group = getUserOrGroup("group", splitLongName);
}

/** Return the non-empty token at the position corresponding to the flag. */
private String getUserOrGroup(String flag, String[] splitLongName) {
  int count = 0;
  int desPos = getPos(flag);
  for (String element : splitLongName) {
    if (count == desPos && !"".equals(element)) {
      return element;
    }
    if (!"".equals(element)) {
      count++;
    }
  }
  return null;
}

/**
 * Map the flag to the index of its field in the long name:
 * the user is the third non-empty token, the group the fourth.
 */
private int getPos(String flag) {
  if ("user".equals(flag)) {
    return 2;
  } else {
    return 3;
  }
}
{code}

4. Should {{SFTPChannel}}#{{close}} close the session as well? For example:
{code}
client.getSession().disconnect();
{code}
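
For illustration, a close() along those lines might look like the sketch 
below, assuming {{SFTPChannel}} wraps a JSch {{ChannelSftp}} (the class layout 
and names here are guessed from this comment, not taken from the patch):

{code}
// Hypothetical sketch: closing the JSch channel and its session together.
import com.jcraft.jsch.ChannelSftp;
import com.jcraft.jsch.JSchException;
import com.jcraft.jsch.Session;
import java.io.IOException;

public class SftpChannelSketch implements java.io.Closeable {
  private final ChannelSftp client;

  public SftpChannelSketch(ChannelSftp client) {
    this.client = client;
  }

  @Override
  public void close() throws IOException {
    try {
      Session session = client.getSession(); // fetch before disconnecting
      client.disconnect();                   // close the SFTP channel
      session.disconnect();                  // then tear down the SSH session
    } catch (JSchException e) {
      throw new IOException("Failed to close SFTP session", e);
    }
  }
}
{code}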

5. I don't know whether I can be considered a reviewer; I'm just interested in 
your implementation.
Good job. :D

> New implementation of ftp and sftp filesystems
> --
>
> Key: HADOOP-1
> URL: https://issues.apache.org/jira/browse/HADOOP-1
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Lukas Waldmann
>Assignee: Lukas Waldmann
> Attachments: HADOOP-1.2.patch, HADOOP-1.3.patch, 
> HADOOP-1.4.patch, HADOOP-1.5.patch, HADOOP-1.patch
>
>
> The current implementations of the FTP and SFTP filesystems have severe 
> limitations and performance issues when dealing with a high number of files. 
> My patch solves those issues and integrates both filesystems in such a way 
> that most of the core functionality is common to both, which simplifies 
> maintainability.
> The core features:
> * Support for HTTP/SOCKS proxies
> * Support for passive FTP
> * Support for connection pooling - a new connection is not created for every 
> single command but is reused from the pool (see the sketch after this list).
> For a huge number of files this shows an order-of-magnitude performance 
> improvement over non-pooled connections.
> * Caching of directory trees. For FTP you always need to list the whole 
> directory whenever you ask for information about a particular file.
> Again, for a huge number of files this shows an order-of-magnitude 
> performance improvement over non-cached listings.
> * Support for keep-alive (NOOP) messages to avoid connection drops
> * Support for Unix-style or regexp wildcard globs - useful for listing 
> particular files across a whole directory tree
> * Support for re-establishing broken FTP data transfers - which can happen 
> surprisingly often
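
As a rough illustration of the pooling feature described above, one way to 
pool FTP connections uses commons-pool2 with commons-net (host, credentials, 
and class names are illustrative; this is not the patch's actual pool):

{code}
// Hedged sketch: reuse connected FTP clients instead of reconnecting per call.
import org.apache.commons.net.ftp.FTPClient;
import org.apache.commons.pool2.BasePooledObjectFactory;
import org.apache.commons.pool2.PooledObject;
import org.apache.commons.pool2.impl.DefaultPooledObject;
import org.apache.commons.pool2.impl.GenericObjectPool;

public class FtpConnectionPoolSketch {
  static class FtpClientFactory extends BasePooledObjectFactory<FTPClient> {
    @Override
    public FTPClient create() throws Exception {
      FTPClient client = new FTPClient();
      client.connect("ftp.example.com");     // assumed host
      client.login("user", "password");      // assumed credentials
      client.enterLocalPassiveMode();        // passive FTP, as the patch supports
      return client;
    }

    @Override
    public PooledObject<FTPClient> wrap(FTPClient client) {
      return new DefaultPooledObject<>(client);
    }
  }

  public static void main(String[] args) throws Exception {
    GenericObjectPool<FTPClient> pool =
        new GenericObjectPool<>(new FtpClientFactory());
    FTPClient client = pool.borrowObject();  // reuse instead of reconnecting
    try {
      client.listFiles("/");
    } finally {
      pool.returnObject(client);             // back to the pool, not closed
    }
  }
}
{code}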



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org