[jira] [Assigned] (HADOOP-15457) Add Security-Related HTTP Response Header in Yarn WEBUIs.

2018-05-10 Thread Kanwaljeet Sachdev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kanwaljeet Sachdev reassigned HADOOP-15457:
---

Assignee: (was: Kanwaljeet Sachdev)
Target Version/s:   (was: 3.2.0)
 Component/s: (was: yarn)
 Key: HADOOP-15457  (was: YARN-8198)
 Project: Hadoop Common  (was: Hadoop YARN)

> Add Security-Related HTTP Response Header in Yarn WEBUIs.
> -
>
> Key: HADOOP-15457
> URL: https://issues.apache.org/jira/browse/HADOOP-15457
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kanwaljeet Sachdev
>Priority: Major
>  Labels: security
> Attachments: YARN-8198.001.patch, YARN-8198.002.patch, 
> YARN-8198.003.patch, YARN-8198.004.patch, YARN-8198.005.patch
>
>
> As of today, the YARN web UI lacks certain security-related HTTP response 
> headers. We plan to add a few default ones and also add support for extra 
> headers via XML config. We plan to make the following two the defaults:
>  * X-XSS-Protection: 1; mode=block
>  * X-Content-Type-Options: nosniff
>  
> Support for headers via config properties in core-site.xml will be along the 
> following lines:
> {code:xml}
> <property>
>   <name>hadoop.http.header.Strict_Transport_Security</name>
>   <value>valHSTSFromXML</value>
> </property>
> {code}
>  
> A regex matcher will lift these properties and add them to the response 
> headers when Jetty prepares the response.
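The prefix-lifting approach described above can be sketched as follows. This is an illustrative model, not the actual patch: the prefix constant, the class name, and the underscore-to-dash mapping (so the XML-friendly key Strict_Transport_Security yields the header Strict-Transport-Security) are assumptions of this sketch.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch of lifting "hadoop.http.header.*" config entries into
// HTTP response headers. The underscore-to-dash mapping is an assumption of
// this sketch, not confirmed behavior of the patch.
public class HttpHeaderConfigSketch {
  static final String PREFIX = "hadoop.http.header.";

  // Collect every configured header; non-matching keys are ignored.
  static Map<String, String> headersFrom(Map<String, String> conf) {
    Map<String, String> headers = new LinkedHashMap<>();
    for (Map.Entry<String, String> e : conf.entrySet()) {
      if (e.getKey().startsWith(PREFIX)) {
        String headerName =
            e.getKey().substring(PREFIX.length()).replace('_', '-');
        headers.put(headerName, e.getValue());
      }
    }
    return headers;
  }
}
```

In this model, a servlet filter would call {{headersFrom}} once at startup and apply the resulting map to every response Jetty prepares.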



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15455) Incorrect debug message in KMSACL#hasAccess

2018-05-10 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reassigned HADOOP-15455:
--

Assignee: Yuen-Kuei Hsueh

> Incorrect debug message in KMSACL#hasAccess
> ---
>
> Key: HADOOP-15455
> URL: https://issues.apache.org/jira/browse/HADOOP-15455
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Yuen-Kuei Hsueh
>Priority: Trivial
>
> If the user is in the blacklist "foo bar", it prints "user is not in foo bar";
> otherwise, it prints "user is in foo bar".
> {code:title=KMSACLs#hasAccess()}
> if (access) {
>   AccessControlList blacklist = blacklistedAcls.get(type);
>   access = (blacklist == null) || !blacklist.isUserInList(ugi);
>   if (LOG.isDebugEnabled()) {
> if (blacklist == null) {
>   LOG.debug("No blacklist for {}", type.toString());
> } else if (access) {
>   LOG.debug("user is in {}" , blacklist.getAclString());
> } else {
>   LOG.debug("user is not in {}" , blacklist.getAclString());
> }
>   }
> }
> {code}
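Reading the branch above: when {{access}} is true, the blacklist check passed, meaning the user is *not* in the blacklist, so the two messages are inverted. A minimal sketch of the corrected selection, modeled as a pure function for illustration (the names are ours, and the real fix lives inside KMSACLs#hasAccess):

```java
// Sketch of the corrected debug-message selection. When access is true the
// user is NOT in the blacklist, so the strings in the quoted snippet must
// be swapped. Pure function for illustration only; not Hadoop's code.
public class BlacklistMessageSketch {
  static String message(boolean hasBlacklist, boolean access, String acl) {
    if (!hasBlacklist) {
      return "No blacklist for " + acl;
    } else if (access) {
      return "user is not in " + acl;  // access granted => not blacklisted
    } else {
      return "user is in " + acl;      // access denied => blacklisted
    }
  }
}
```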






[jira] [Commented] (HADOOP-15455) Incorrect debug message in KMSACL#hasAccess

2018-05-10 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471472#comment-16471472
 ] 

Akira Ajisaka commented on HADOOP-15455:


Assigned. Thanks

> Incorrect debug message in KMSACL#hasAccess
> ---
>
> Key: HADOOP-15455
> URL: https://issues.apache.org/jira/browse/HADOOP-15455
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Yuen-Kuei Hsueh
>Priority: Trivial
>
> If the user is in the blacklist "foo bar", it prints "user is not in foo bar";
> otherwise, it prints "user is in foo bar".
> {code:title=KMSACLs#hasAccess()}
> if (access) {
>   AccessControlList blacklist = blacklistedAcls.get(type);
>   access = (blacklist == null) || !blacklist.isUserInList(ugi);
>   if (LOG.isDebugEnabled()) {
> if (blacklist == null) {
>   LOG.debug("No blacklist for {}", type.toString());
> } else if (access) {
>   LOG.debug("user is in {}" , blacklist.getAclString());
> } else {
>   LOG.debug("user is not in {}" , blacklist.getAclString());
> }
>   }
> }
> {code}






[jira] [Commented] (HADOOP-15392) S3A Metrics in S3AInstrumentation Cause Memory Leaks in HBase Export

2018-05-10 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471330#comment-16471330
 ] 

Wangda Tan commented on HADOOP-15392:
-

Since this is marked as a blocker for 3.1.1, which we plan to release soon: 
[~mackrorysd], [~fabbri], [~Krizek], could you update the status of this 
Jira?

> S3A Metrics in S3AInstrumentation Cause Memory Leaks in HBase Export
> 
>
> Key: HADOOP-15392
> URL: https://issues.apache.org/jira/browse/HADOOP-15392
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Voyta
>Priority: Blocker
>
> While using the HBase S3A Export Snapshot utility we started to experience 
> memory leaks in the process after a version upgrade.
> By running code analysis we traced the cause to revision 
> 6555af81a26b0b72ec3bee7034e01f5bd84b1564, which added the following static 
> reference (singleton):
> {code:java}
> private static MetricsSystem metricsSystem = null;
> {code}
> When an application uses an S3AFileSystem instance that is not closed 
> immediately, metrics accumulate in this instance and memory grows without 
> any limit.
>  
> Expectation:
>  * It would be nice to have an option to disable metrics completely, as this 
> is not needed for the Export Snapshot utility.
>  * Usage of S3AFileSystem should not involve any static object that can grow 
> indefinitely.
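The growth pattern described here can be illustrated with a toy model: a static registry keyed per file-system instance that only shrinks when close() unregisters. All names below are hypothetical; this is not Hadoop's code.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Toy model of the reported leak: a static map holds one metrics entry per
// file-system instance. Instances that are never closed leave their entry
// behind, so the map grows without bound. Unregistering on close() bounds it.
public class StaticMetricsSketch {
  static final Map<String, long[]> REGISTRY = new ConcurrentHashMap<>();

  static void open(String fsId) { REGISTRY.put(fsId, new long[8]); }

  // The fix direction the issue suggests: drop per-instance state on close.
  static void close(String fsId) { REGISTRY.remove(fsId); }

  static int registeredInstances() { return REGISTRY.size(); }
}
```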






[jira] [Commented] (HADOOP-15398) StagingTestBase uses methods not available in Mockito 1.8.5

2018-05-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471313#comment-16471313
 ] 

genericqa commented on HADOOP-15398:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 21m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
86m 21s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
36s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 32m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 32m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 21m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 59s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
32s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}138m 35s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}330m 53s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeUUID |
|   | hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized |
|   | 
hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainersLauncher 
|
|   | 
hadoop.yarn.server.nodemanager.containermanager.scheduler.TestContainerSchedulerQueuing
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HADOOP-15398 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12922891/HADOOP-15398.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  |
| uname | Linux 49a60da98b78 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7482963 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14610/artifact/out/patch-unit-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14610/testReport/ |
| Max. process+thread count | 3418 (vs. ulimit of 1) |
| modules | C: 

[jira] [Commented] (HADOOP-15450) Avoid fsync storm triggered by DiskChecker and handle disk full situation

2018-05-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471277#comment-16471277
 ] 

genericqa commented on HADOOP-15450:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  2m 
30s{color} | {color:red} hadoop-common in trunk failed. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 28m 
19s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 50s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 7 new + 28 unchanged - 0 fixed = 35 total (was 28) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 7 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 10s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m  5s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}127m 50s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.shell.TestCopyPreserveFlag |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HADOOP-15450 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12922905/HADOOP-15450.02.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b1d318cb92ef 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 48d0b54 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| mvnsite | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14611/artifact/out/branch-mvnsite-hadoop-common-project_hadoop-common.txt
 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14611/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| whitespace | 

[jira] [Commented] (HADOOP-15455) Incorrect debug message in KMSACL#hasAccess

2018-05-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471238#comment-16471238
 ] 

ASF GitHub Bot commented on HADOOP-15455:
-

Github user phstudy commented on the issue:

https://github.com/apache/hadoop/pull/385
  
@jojochuang My Apache Jira username is `study`. Thanks.


> Incorrect debug message in KMSACL#hasAccess
> ---
>
> Key: HADOOP-15455
> URL: https://issues.apache.org/jira/browse/HADOOP-15455
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Priority: Trivial
>
> If the user is in the blacklist "foo bar", it prints "user is not in foo bar";
> otherwise, it prints "user is in foo bar".
> {code:title=KMSACLs#hasAccess()}
> if (access) {
>   AccessControlList blacklist = blacklistedAcls.get(type);
>   access = (blacklist == null) || !blacklist.isUserInList(ugi);
>   if (LOG.isDebugEnabled()) {
> if (blacklist == null) {
>   LOG.debug("No blacklist for {}", type.toString());
> } else if (access) {
>   LOG.debug("user is in {}" , blacklist.getAclString());
> } else {
>   LOG.debug("user is not in {}" , blacklist.getAclString());
> }
>   }
> }
> {code}






[jira] [Updated] (HADOOP-15455) Incorrect debug message in KMSACL#hasAccess

2018-05-10 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-15455:
-
Status: Patch Available  (was: Open)

Submitting the patch for the precommit check. 

> Incorrect debug message in KMSACL#hasAccess
> ---
>
> Key: HADOOP-15455
> URL: https://issues.apache.org/jira/browse/HADOOP-15455
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Priority: Trivial
>
> If the user is in the blacklist "foo bar", it prints "user is not in foo bar";
> otherwise, it prints "user is in foo bar".
> {code:title=KMSACLs#hasAccess()}
> if (access) {
>   AccessControlList blacklist = blacklistedAcls.get(type);
>   access = (blacklist == null) || !blacklist.isUserInList(ugi);
>   if (LOG.isDebugEnabled()) {
> if (blacklist == null) {
>   LOG.debug("No blacklist for {}", type.toString());
> } else if (access) {
>   LOG.debug("user is in {}" , blacklist.getAclString());
> } else {
>   LOG.debug("user is not in {}" , blacklist.getAclString());
> }
>   }
> }
> {code}






[jira] [Commented] (HADOOP-15455) Incorrect debug message in KMSACL#hasAccess

2018-05-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471203#comment-16471203
 ] 

ASF GitHub Bot commented on HADOOP-15455:
-

Github user jojochuang commented on the issue:

https://github.com/apache/hadoop/pull/385
  
+1
It appears you don't have an Apache Jira account. If you have one, let me 
know and I can assign the Jira to you.


> Incorrect debug message in KMSACL#hasAccess
> ---
>
> Key: HADOOP-15455
> URL: https://issues.apache.org/jira/browse/HADOOP-15455
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Priority: Trivial
>
> If the user is in the blacklist "foo bar", it prints "user is not in foo bar";
> otherwise, it prints "user is in foo bar".
> {code:title=KMSACLs#hasAccess()}
> if (access) {
>   AccessControlList blacklist = blacklistedAcls.get(type);
>   access = (blacklist == null) || !blacklist.isUserInList(ugi);
>   if (LOG.isDebugEnabled()) {
> if (blacklist == null) {
>   LOG.debug("No blacklist for {}", type.toString());
> } else if (access) {
>   LOG.debug("user is in {}" , blacklist.getAclString());
> } else {
>   LOG.debug("user is not in {}" , blacklist.getAclString());
> }
>   }
> }
> {code}






[jira] [Commented] (HADOOP-15450) Avoid fsync storm triggered by DiskChecker and handle disk full situation

2018-05-10 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471047#comment-16471047
 ] 

Arpit Agarwal commented on HADOOP-15450:


The v2 patch refactors the tests, since TestBasicDiskValidator derives from 
TestDiskChecker. Moved a subset of the tests to a separate class.

> Avoid fsync storm triggered by DiskChecker and handle disk full situation
> -
>
> Key: HADOOP-15450
> URL: https://issues.apache.org/jira/browse/HADOOP-15450
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Arpit Agarwal
>Priority: Blocker
> Attachments: HADOOP-15450.01.patch, HADOOP-15450.02.patch
>
>
> Fix the disk checker issues reported by [~kihwal] in HADOOP-13738.
> There are non-HDFS users of DiskChecker who use it proactively, not just on 
> failures. This was fine before, but now it incurs heavy I/O due to the 
> introduction of fsync() in the code.
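One way to avoid an fsync storm from proactive callers, sketched here as an assumption about the fix's direction rather than the actual patch, is to cache the last check result for a minimum interval:

```java
import java.util.function.BooleanSupplier;

// Hypothetical throttle: proactive disk checks reuse the last result while
// it is fresh, so the expensive fsync-backed probe runs at most once per
// interval. Illustration of the problem statement, not the patch.
public class ThrottledDiskCheckSketch {
  private final long minIntervalMs;
  private long lastCheckMs = Long.MIN_VALUE / 2;  // force the first probe
  private boolean lastResult;
  private int probeCount = 0;

  ThrottledDiskCheckSketch(long minIntervalMs) {
    this.minIntervalMs = minIntervalMs;
  }

  synchronized boolean check(long nowMs, BooleanSupplier probe) {
    if (nowMs - lastCheckMs < minIntervalMs) {
      return lastResult;            // fresh enough: skip the probe entirely
    }
    lastCheckMs = nowMs;
    lastResult = probe.getAsBoolean();
    probeCount++;
    return lastResult;
  }

  synchronized int probes() { return probeCount; }
}
```

With a 1-second interval, fifty proactive checks spread over five seconds would trigger only five real probes.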






[jira] [Updated] (HADOOP-15450) Avoid fsync storm triggered by DiskChecker and handle disk full situation

2018-05-10 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-15450:
---
Attachment: HADOOP-15450.02.patch

> Avoid fsync storm triggered by DiskChecker and handle disk full situation
> -
>
> Key: HADOOP-15450
> URL: https://issues.apache.org/jira/browse/HADOOP-15450
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Arpit Agarwal
>Priority: Blocker
> Attachments: HADOOP-15450.01.patch, HADOOP-15450.02.patch
>
>
> Fix the disk checker issues reported by [~kihwal] in HADOOP-13738.
> There are non-HDFS users of DiskChecker who use it proactively, not just on 
> failures. This was fine before, but now it incurs heavy I/O due to the 
> introduction of fsync() in the code.






[jira] [Commented] (HADOOP-15441) After HADOOP-14987, encryption zone operations print unnecessary INFO logs

2018-05-10 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471012#comment-16471012
 ] 

Rushabh S Shah commented on HADOOP-15441:
-

{quote}Since log message is parametrized it will not be evaluated if logging 
level is not DEBUG. 
{quote}
Thanks [~ajayydv] !

> After HADOOP-14987, encryption zone operations print unnecessary INFO logs
> --
>
> Key: HADOOP-15441
> URL: https://issues.apache.org/jira/browse/HADOOP-15441
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HADOOP-15441.001.patch, HADOOP-15441.002.patch
>
>
> It looks like after HADOOP-14987, any encryption zone operation prints extra 
> INFO log messages as follows:
> {code:java}
> $ hdfs dfs -copyFromLocal /etc/krb5.conf /scale/
> 18/05/02 11:54:55 INFO kms.KMSClientProvider: KMSClientProvider for KMS url: 
> https://hadoop3-1.example.com:16000/kms/v1/ delegation token service: 
> kms://ht...@hadoop3-1.example.com:16000/kms created.
> {code}
> It might make sense to make it a DEBUG message instead.






[jira] [Commented] (HADOOP-15450) Avoid fsync storm triggered by DiskChecker and handle disk full situation

2018-05-10 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471005#comment-16471005
 ] 

Ajay Kumar commented on HADOOP-15450:
-

[~arpitagarwal] thanks for submitting the patch. The test failure in 
TestDiskChecker is related, as we have changed the method signature of 
{{TestDiskChecker#checkDisks}}. +1 with that addressed.

> Avoid fsync storm triggered by DiskChecker and handle disk full situation
> -
>
> Key: HADOOP-15450
> URL: https://issues.apache.org/jira/browse/HADOOP-15450
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Arpit Agarwal
>Priority: Blocker
> Attachments: HADOOP-15450.01.patch
>
>
> Fix the disk checker issues reported by [~kihwal] in HADOOP-13738.
> There are non-HDFS users of DiskChecker who use it proactively, not just on 
> failures. This was fine before, but now it incurs heavy I/O due to the 
> introduction of fsync() in the code.






[jira] [Commented] (HADOOP-15441) After HADOOP-14987, encryption zone operations print unnecessary INFO logs

2018-05-10 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16470960#comment-16470960
 ] 

Ajay Kumar commented on HADOOP-15441:
-

Since the log message is parameterized, it will not be constructed if the 
logging level is not DEBUG. 
https://www.slf4j.org/faq.html#logging_performance 
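The point the FAQ makes can be demonstrated without slf4j itself. Below is a minimal stand-in (not the slf4j API) where formatting is counted, showing it is skipped when the level is disabled; note the argument expression is still evaluated at the call site, and what is deferred is the message construction:

```java
// Minimal stand-in for slf4j-style parameterized logging: the message
// template is only expanded when the level is enabled, so disabled DEBUG
// calls pay no formatting cost.
public class ParamLogSketch {
  static int formats = 0;

  static String expand(String template, Object arg) {
    formats++;
    return template.replace("{}", String.valueOf(arg));
  }

  static void debug(boolean debugEnabled, String template, Object arg) {
    if (debugEnabled) {
      System.out.println(expand(template, arg));  // formatted only here
    }
  }
}
```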

> After HADOOP-14987, encryption zone operations print unnecessary INFO logs
> --
>
> Key: HADOOP-15441
> URL: https://issues.apache.org/jira/browse/HADOOP-15441
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HADOOP-15441.001.patch, HADOOP-15441.002.patch
>
>
> It looks like after HADOOP-14987, any encryption zone operation prints extra 
> INFO log messages as follows:
> {code:java}
> $ hdfs dfs -copyFromLocal /etc/krb5.conf /scale/
> 18/05/02 11:54:55 INFO kms.KMSClientProvider: KMSClientProvider for KMS url: 
> https://hadoop3-1.example.com:16000/kms/v1/ delegation token service: 
> kms://ht...@hadoop3-1.example.com:16000/kms created.
> {code}
> It might make sense to make it a DEBUG message instead.






[jira] [Updated] (HADOOP-15398) StagingTestBase uses methods not available in Mockito 1.8.5

2018-05-10 Thread Mohammad Arshad (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mohammad Arshad updated HADOOP-15398:
-
Attachment: HADOOP-15398.002.patch

> StagingTestBase uses methods not available in Mockito 1.8.5
> ---
>
> Key: HADOOP-15398
> URL: https://issues.apache.org/jira/browse/HADOOP-15398
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Mohammad Arshad
>Assignee: Mohammad Arshad
>Priority: Major
> Attachments: HADOOP-15398.001.patch, HADOOP-15398.002.patch
>
>
> *Problem:* Hadoop trunk compilation is failing.
>  *Root Cause:*
>  The compilation error comes from 
> {{org.apache.hadoop.fs.s3a.commit.staging.StagingTestBase}}: "The method 
> getArgumentAt(int, Class) is undefined for the type InvocationOnMock".
> StagingTestBase uses the getArgumentAt(int, Class) method, which is not 
> available in mockito-all 1.8.5; it is available only from version 
> 2.0.0-beta.
> *Expectations:*
>  Either the mockito-all version should be upgraded, or the test case should 
> be written using only methods available in 1.8.5.
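For the second option, the usual 1.8.5-compatible idiom is to index into the array returned by {{InvocationOnMock#getArguments()}} and cast. Sketched below against a plain Object[] so it runs without Mockito on the classpath; the helper name is ours:

```java
// 1.8.5-compatible replacement for getArgumentAt(int, Class): fetch the
// invocation's argument array and cast by index. getArguments() exists in
// Mockito 1.8.5; this sketch models it with a plain Object[].
public class ArgumentAtSketch {
  static <T> T argumentAt(Object[] invocationArgs, int index, Class<T> type) {
    return type.cast(invocationArgs[index]);
  }
}
```

In a test, `argumentAt(invocation.getArguments(), 0, String.class)` would stand in for `invocation.getArgumentAt(0, String.class)`.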






[jira] [Commented] (HADOOP-15455) Incorrect debug message in KMSACL#hasAccess

2018-05-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16470906#comment-16470906
 ] 

ASF GitHub Bot commented on HADOOP-15455:
-

GitHub user phstudy opened a pull request:

https://github.com/apache/hadoop/pull/385

HADOOP-15455. Incorrect debug message in KMSACL#hasAccess

Fix debug message.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/phstudy/hadoop patch-1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/385.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #385


commit a7a0aa7aef2c47f289fd8c5b7c6a5e4c6cecf1c0
Author: Yuen-Kuei Hsueh 
Date:   2018-05-10T18:23:41Z

HADOOP-15455. Incorrect debug message in KMSACL#hasAccess

Fix debug message.




> Incorrect debug message in KMSACL#hasAccess
> ---
>
> Key: HADOOP-15455
> URL: https://issues.apache.org/jira/browse/HADOOP-15455
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Priority: Trivial
>
> If the user is in the blacklist "foo bar", it prints "user is not in foo bar";
> otherwise, it prints "user is in foo bar".
> {code:title=KMSACLs#hasAccess()}
> if (access) {
>   AccessControlList blacklist = blacklistedAcls.get(type);
>   access = (blacklist == null) || !blacklist.isUserInList(ugi);
>   if (LOG.isDebugEnabled()) {
> if (blacklist == null) {
>   LOG.debug("No blacklist for {}", type.toString());
> } else if (access) {
>   LOG.debug("user is in {}" , blacklist.getAclString());
> } else {
>   LOG.debug("user is not in {}" , blacklist.getAclString());
> }
>   }
> }
> {code}






[jira] [Updated] (HADOOP-15446) WASB: PageBlobInputStream.skip breaks HBASE replication

2018-05-10 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated HADOOP-15446:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Resolving this. Please re-open if you need to backport to a different branch.

> WASB: PageBlobInputStream.skip breaks HBASE replication
> ---
>
> Key: HADOOP-15446
> URL: https://issues.apache.org/jira/browse/HADOOP-15446
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 2.9.0, 3.0.2
>Reporter: Thomas Marquardt
>Assignee: Thomas Marquardt
>Priority: Major
> Fix For: 2.10.0, 3.2.0
>
> Attachments: HADOOP-15446-001.patch, HADOOP-15446-002.patch, 
> HADOOP-15446-003.patch, HADOOP-15446-branch-2.001.patch
>
>
> Page Blobs are primarily used by HBASE.  HBASE replication, which apparently 
> has not been used with WASB until recently, performs non-sequential reads on 
> log files using PageBlobInputStream.  There are bugs in this stream 
> implementation which prevent skip and seek from working properly, and 
> eventually the stream state becomes corrupt and unusable.
> I believe this bug affects all releases of WASB/HADOOP.  It appears to be a 
> day-0 bug in PageBlobInputStream.  There were similar bugs opened in the past 
> (HADOOP-15042) but the issue was not properly fixed, and no test coverage was 
> added.
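The invariant the fix has to restore can be stated independently of WASB: after skip(n), the stream position advances by exactly the value skip() returned, and the next read resumes at that offset. A small illustration of this contract using ByteArrayInputStream as a stand-in (the real regression test would run against PageBlobInputStream):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;

public class SkipContractDemo {
    public static void main(String[] args) throws IOException {
        // Buffer where each byte equals its own offset, so reads reveal position.
        byte[] data = new byte[100];
        for (int i = 0; i < data.length; i++) {
            data[i] = (byte) i;
        }
        ByteArrayInputStream in = new ByteArrayInputStream(data);

        long skipped = in.skip(10);
        // The next read must return the byte at offset 'skipped';
        // a stream that mis-tracks its position after skip breaks this.
        System.out.println(skipped);   // 10
        System.out.println(in.read()); // 10
    }
}
```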






[jira] [Updated] (HADOOP-15446) WASB: PageBlobInputStream.skip breaks HBASE replication

2018-05-10 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated HADOOP-15446:
-
Fix Version/s: 3.2.0
   2.10.0

> WASB: PageBlobInputStream.skip breaks HBASE replication
> ---
>
> Key: HADOOP-15446
> URL: https://issues.apache.org/jira/browse/HADOOP-15446
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 2.9.0, 3.0.2
>Reporter: Thomas Marquardt
>Assignee: Thomas Marquardt
>Priority: Major
> Fix For: 2.10.0, 3.2.0
>
> Attachments: HADOOP-15446-001.patch, HADOOP-15446-002.patch, 
> HADOOP-15446-003.patch, HADOOP-15446-branch-2.001.patch
>
>
> Page Blobs are primarily used by HBASE.  HBASE replication, which apparently 
> has not been used with WASB until recently, performs non-sequential reads on 
> log files using PageBlobInputStream.  There are bugs in this stream 
> implementation which prevent skip and seek from working properly, and 
> eventually the stream state becomes corrupt and unusable.
> I believe this bug affects all releases of WASB/HADOOP.  It appears to be a 
> day-0 bug in PageBlobInputStream.  There were similar bugs opened in the past 
> (HADOOP-15042) but the issue was not properly fixed, and no test coverage was 
> added.






[jira] [Commented] (HADOOP-15446) WASB: PageBlobInputStream.skip breaks HBASE replication

2018-05-10 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16470848#comment-16470848
 ] 

Arun Suresh commented on HADOOP-15446:
--

Committed this to branch-2 and branch-2.9 as well (I didn't see a difference in 
the code between branch-2 and trunk - so, instead of applying the provided 
patch, I just cherry-picked the trunk commit and ran the tests - which passed)

> WASB: PageBlobInputStream.skip breaks HBASE replication
> ---
>
> Key: HADOOP-15446
> URL: https://issues.apache.org/jira/browse/HADOOP-15446
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 2.9.0, 3.0.2
>Reporter: Thomas Marquardt
>Assignee: Thomas Marquardt
>Priority: Major
> Fix For: 2.10.0, 3.2.0
>
> Attachments: HADOOP-15446-001.patch, HADOOP-15446-002.patch, 
> HADOOP-15446-003.patch, HADOOP-15446-branch-2.001.patch
>
>
> Page Blobs are primarily used by HBASE.  HBASE replication, which apparently 
> has not been used with WASB until recently, performs non-sequential reads on 
> log files using PageBlobInputStream.  There are bugs in this stream 
> implementation which prevent skip and seek from working properly, and 
> eventually the stream state becomes corrupt and unusable.
> I believe this bug affects all releases of WASB/HADOOP.  It appears to be a 
> day-0 bug in PageBlobInputStream.  There were similar bugs opened in the past 
> (HADOOP-15042) but the issue was not properly fixed, and no test coverage was 
> added.






[jira] [Commented] (HADOOP-15454) TestRollingFileSystemSinkWithLocal fails on Windows

2018-05-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16470770#comment-16470770
 ] 

Hudson commented on HADOOP-15454:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14159 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14159/])
HADOOP-15454. TestRollingFileSystemSinkWithLocal fails on Windows. (inigoiri: 
rev 1da8d4190d6e574347ab9d3380513e9401569573)
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/metrics2/sink/TestRollingFileSystemSinkWithLocal.java


> TestRollingFileSystemSinkWithLocal fails on Windows
> ---
>
> Key: HADOOP-15454
> URL: https://issues.apache.org/jira/browse/HADOOP-15454
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.3
>
> Attachments: HADOOP-15454-branch-2.000.patch, HADOOP-15454.000.patch
>
>
> org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithLocal fails on 
> Windows,
> *Error message:*
> Illegal character in opaque part at index 2: 
> D:\_work\8\s\hadoop-common-project\hadoop-common\target\test\data\4\RollingFileSystemSinkTest\testSilentExistingWrite
> *Stack trace:*
> java.io.IOException: All datanodes 
> [DatanodeInfoWithStorage[127.0.0.1:53235,DS-4d6119ac-31cc-4f48-8a5b-7f35b36a1c55,DISK]]
>  are bad. Aborting... at 
> org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1538) 
> at 
> org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
>  at 
> org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
>  at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
> The drive letter in an absolute Windows path fails the underlying URI check.
> Another issue: the failed-write test case uses 
> java.io.File#setWritable(boolean, boolean), which does not work as expected 
> on Windows and should be replaced by org.apache.hadoop.fs.FileUtil#setWritable.
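The URI failure is easy to reproduce outside the test: passing a raw Windows path straight to java.net.URI treats "D:" as a scheme and the backslash as an illegal character, whereas File#toURI builds a proper hierarchical file: URI on any platform. A small demonstration (the path is made up for illustration):

```java
import java.io.File;
import java.net.URI;
import java.net.URISyntaxException;

public class WinUriDemo {
    public static void main(String[] args) {
        String winPath = "D:\\work\\data\\RollingFileSystemSinkTest";

        // A raw Windows path is not a valid URI: "D:" parses as a scheme and
        // '\' is an illegal character, producing the "Illegal character in
        // opaque part" style of error seen in the bug report.
        boolean failed = false;
        try {
            new URI(winPath);
        } catch (URISyntaxException e) {
            failed = true;
        }
        System.out.println(failed); // true

        // File#toURI escapes the path into a hierarchical file: URI instead.
        URI ok = new File(winPath).toURI();
        System.out.println(ok.getScheme()); // file
    }
}
```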






[jira] [Updated] (HADOOP-15454) TestRollingFileSystemSinkWithLocal fails on Windows

2018-05-10 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HADOOP-15454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HADOOP-15454:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.3
   2.9.2
   3.1.1
   3.2.0
   2.10.0
   Status: Resolved  (was: Patch Available)

Thank you [~surmountian] for the patches.
Committed to trunk, branch-3.1, branch-3.0, branch-2, and branch-2.9.

> TestRollingFileSystemSinkWithLocal fails on Windows
> ---
>
> Key: HADOOP-15454
> URL: https://issues.apache.org/jira/browse/HADOOP-15454
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.3
>
> Attachments: HADOOP-15454-branch-2.000.patch, HADOOP-15454.000.patch
>
>
> org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithLocal fails on 
> Windows,
> *Error message:*
> Illegal character in opaque part at index 2: 
> D:\_work\8\s\hadoop-common-project\hadoop-common\target\test\data\4\RollingFileSystemSinkTest\testSilentExistingWrite
> *Stack trace:*
> java.io.IOException: All datanodes 
> [DatanodeInfoWithStorage[127.0.0.1:53235,DS-4d6119ac-31cc-4f48-8a5b-7f35b36a1c55,DISK]]
>  are bad. Aborting... at 
> org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1538) 
> at 
> org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
>  at 
> org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
>  at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
> The drive letter in an absolute Windows path fails the underlying URI check.
> Another issue: the failed-write test case uses 
> java.io.File#setWritable(boolean, boolean), which does not work as expected 
> on Windows and should be replaced by org.apache.hadoop.fs.FileUtil#setWritable.






[jira] [Commented] (HADOOP-15454) TestRollingFileSystemSinkWithLocal fails on Windows

2018-05-10 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HADOOP-15454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16470685#comment-16470685
 ] 

Íñigo Goiri commented on HADOOP-15454:
--

Not sure what the first Yetus run was.
 [^HADOOP-15454.000.patch] looks good to me.
The latest windows 
[report|https://builds.apache.org/job/hadoop-trunk-win/460/testReport/org.apache.hadoop.metrics2.sink/TestRollingFileSystemSinkWithLocal/]
 also shows 6 failures.
They should go away with these two.
+1
Committing all the way to branch-2.9.

> TestRollingFileSystemSinkWithLocal fails on Windows
> ---
>
> Key: HADOOP-15454
> URL: https://issues.apache.org/jira/browse/HADOOP-15454
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HADOOP-15454-branch-2.000.patch, HADOOP-15454.000.patch
>
>
> org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithLocal fails on 
> Windows,
> *Error message:*
> Illegal character in opaque part at index 2: 
> D:\_work\8\s\hadoop-common-project\hadoop-common\target\test\data\4\RollingFileSystemSinkTest\testSilentExistingWrite
> *Stack trace:*
> java.io.IOException: All datanodes 
> [DatanodeInfoWithStorage[127.0.0.1:53235,DS-4d6119ac-31cc-4f48-8a5b-7f35b36a1c55,DISK]]
>  are bad. Aborting... at 
> org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1538) 
> at 
> org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
>  at 
> org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
>  at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
> The drive letter in an absolute Windows path fails the underlying URI check.
> Another issue: the failed-write test case uses 
> java.io.File#setWritable(boolean, boolean), which does not work as expected 
> on Windows and should be replaced by org.apache.hadoop.fs.FileUtil#setWritable.






[jira] [Commented] (HADOOP-15456) create base image for running secure ozone cluster

2018-05-10 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16470633#comment-16470633
 ] 

Ajay Kumar commented on HADOOP-15456:
-

Attached a tar file containing the initial Dockerfile and scripts to create the 
base image. We have [HDDS-10] for the corresponding compose file. This jira was 
created to publish the image in the official docker repo; once that is done we 
will replace the image mentioned in the compose file with the official one.

> create base image for running secure ozone cluster
> --
>
> Key: HADOOP-15456
> URL: https://issues.apache.org/jira/browse/HADOOP-15456
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: secure-ozone.tar
>
>
> Create docker image to run secure ozone cluster.






[jira] [Updated] (HADOOP-15456) create base image for running secure ozone cluster

2018-05-10 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-15456:

Description: Create docker image to run secure ozone cluster.

> create base image for running secure ozone cluster
> --
>
> Key: HADOOP-15456
> URL: https://issues.apache.org/jira/browse/HADOOP-15456
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: secure-ozone.tar
>
>
> Create docker image to run secure ozone cluster.






[jira] [Updated] (HADOOP-15456) create base image for running secure ozone cluster

2018-05-10 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-15456:

Attachment: secure-ozone.tar

> create base image for running secure ozone cluster
> --
>
> Key: HADOOP-15456
> URL: https://issues.apache.org/jira/browse/HADOOP-15456
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: secure-ozone.tar
>
>







[jira] [Updated] (HADOOP-15456) create base image for running secure ozone cluster

2018-05-10 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-15456:

Attachment: HADOOP-15456-docker-hadoop-runner.00.patch

> create base image for running secure ozone cluster
> --
>
> Key: HADOOP-15456
> URL: https://issues.apache.org/jira/browse/HADOOP-15456
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>







[jira] [Updated] (HADOOP-15456) create base image for running secure ozone cluster

2018-05-10 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-15456:

Attachment: (was: HADOOP-15456-docker-hadoop-runner.00.patch)

> create base image for running secure ozone cluster
> --
>
> Key: HADOOP-15456
> URL: https://issues.apache.org/jira/browse/HADOOP-15456
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>







[jira] [Commented] (HADOOP-15441) After HADOOP-14987, encryption zone operations print unnecessary INFO logs

2018-05-10 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16470599#comment-16470599
 ] 

Rushabh S Shah commented on HADOOP-15441:
-

Sorry for not committing yesterday.
bq. Since we are using slf4j, we don't really need if(LOG.isDebugEnabled()).
[~xyao]: can you please elaborate on why the {{isDebugEnabled}} check is not 
needed?
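For context, slf4j's parameterized logging defers message formatting (including the arguments' toString()) until after the level check, which is why the explicit guard is usually redundant; the guard still pays off only when *computing* an argument is expensive. A minimal stand-in illustrating the deferral (not actual slf4j code):

```java
public class ParamLogDemo {
    static int formatCalls = 0;

    // Stand-in for an object whose toString() is costly.
    static class Lazy {
        @Override
        public String toString() {
            formatCalls++;
            return "expensive";
        }
    }

    static final boolean DEBUG_ENABLED = false;

    // slf4j-style parameterized logging: the placeholder is only substituted
    // (and the argument's toString() invoked) when the level is enabled.
    static void debug(String msg, Object arg) {
        if (DEBUG_ENABLED) {
            System.out.println(msg.replace("{}", arg.toString()));
        }
    }

    public static void main(String[] args) {
        debug("value: {}", new Lazy());  // toString() never invoked
        System.out.println(formatCalls); // 0
    }
}
```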

> After HADOOP-14987, encryption zone operations print unnecessary INFO logs
> --
>
> Key: HADOOP-15441
> URL: https://issues.apache.org/jira/browse/HADOOP-15441
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HADOOP-15441.001.patch, HADOOP-15441.002.patch
>
>
> It looks like after HADOOP-14987, any encryption zone operations prints extra 
> INFO log messages as follows:
> {code:java}
> $ hdfs dfs -copyFromLocal /etc/krb5.conf /scale/
> 18/05/02 11:54:55 INFO kms.KMSClientProvider: KMSClientProvider for KMS url: 
> https://hadoop3-1.example.com:16000/kms/v1/ delegation token service: 
> kms://ht...@hadoop3-1.example.com:16000/kms created.
> {code}
> It might make sense to make it a DEBUG message instead.






[jira] [Created] (HADOOP-15456) create base image for running secure ozone cluster

2018-05-10 Thread Ajay Kumar (JIRA)
Ajay Kumar created HADOOP-15456:
---

 Summary: create base image for running secure ozone cluster
 Key: HADOOP-15456
 URL: https://issues.apache.org/jira/browse/HADOOP-15456
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Ajay Kumar
Assignee: Ajay Kumar









[jira] [Commented] (HADOOP-14444) New implementation of ftp and sftp filesystems

2018-05-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-1?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16470466#comment-16470466
 ] 

genericqa commented on HADOOP-1:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
30s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 35 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
43s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 37s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-tools {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
42s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 27m 
15s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 19s{color} | {color:orange} root: The patch generated 8 new + 7 unchanged - 
0 fixed = 15 total (was 7) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
27s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
5s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 23s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-tools {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
25s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 
22s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 10m 39s{color} 
| {color:red} hadoop-ftp in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 64m 19s{color} 
| {color:red} hadoop-tools in the patch failed. {color} |
| 

[jira] [Updated] (HADOOP-15455) Incorrect debug message in KMSACL#hasAccess

2018-05-10 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-15455:
-
Description: 
If the user is in the blacklist "foo bar", it prints "user is not in foo bar";
otherwise, it prints "user is in foo bar".

{code:title=KMSACLs#hasAccess()}
if (access) {
  AccessControlList blacklist = blacklistedAcls.get(type);
  access = (blacklist == null) || !blacklist.isUserInList(ugi);
  if (LOG.isDebugEnabled()) {
if (blacklist == null) {
  LOG.debug("No blacklist for {}", type.toString());
} else if (access) {
  LOG.debug("user is in {}" , blacklist.getAclString());
} else {
  LOG.debug("user is not in {}" , blacklist.getAclString());
}
  }
}
{code}

> Incorrect debug message in KMSACL#hasAccess
> ---
>
> Key: HADOOP-15455
> URL: https://issues.apache.org/jira/browse/HADOOP-15455
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Priority: Trivial
>
> If the user is in the blacklist "foo bar", it prints "user is not in foo bar";
> otherwise, it prints "user is in foo bar".
> {code:title=KMSACLs#hasAccess()}
> if (access) {
>   AccessControlList blacklist = blacklistedAcls.get(type);
>   access = (blacklist == null) || !blacklist.isUserInList(ugi);
>   if (LOG.isDebugEnabled()) {
> if (blacklist == null) {
>   LOG.debug("No blacklist for {}", type.toString());
> } else if (access) {
>   LOG.debug("user is in {}" , blacklist.getAclString());
> } else {
>   LOG.debug("user is not in {}" , blacklist.getAclString());
> }
>   }
> }
> {code}






[jira] [Created] (HADOOP-15455) Incorrect debug message in KMSACL#hasAccess

2018-05-10 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HADOOP-15455:


 Summary: Incorrect debug message in KMSACL#hasAccess
 Key: HADOOP-15455
 URL: https://issues.apache.org/jira/browse/HADOOP-15455
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Wei-Chiu Chuang









[jira] [Commented] (HADOOP-15454) TestRollingFileSystemSinkWithLocal fails on Windows

2018-05-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16470300#comment-16470300
 ] 

genericqa commented on HADOOP-15454:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 18s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 26m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 26m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 14s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
10s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}121m 12s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HADOOP-15454 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12922813/HADOOP-15454.000.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 0f01bb23e5c8 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ba051b0 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14608/testReport/ |
| Max. process+thread count | 1528 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14608/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestRollingFileSystemSinkWithLocal fails on Windows
> ---
>
> Key: HADOOP-15454
>  

[jira] [Updated] (HADOOP-14444) New implementation of ftp and sftp filesystems

2018-05-10 Thread Lukas Waldmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-1?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lukas Waldmann updated HADOOP-1:

Attachment: HADOOP-1.15.patch

> New implementation of ftp and sftp filesystems
> --
>
> Key: HADOOP-1
> URL: https://issues.apache.org/jira/browse/HADOOP-1
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Lukas Waldmann
>Assignee: Lukas Waldmann
>Priority: Major
> Attachments: HADOOP-1.10.patch, HADOOP-1.11.patch, 
> HADOOP-1.12.patch, HADOOP-1.13.patch, HADOOP-1.14.patch, 
> HADOOP-1.15.patch, HADOOP-1.2.patch, HADOOP-1.3.patch, 
> HADOOP-1.4.patch, HADOOP-1.5.patch, HADOOP-1.6.patch, 
> HADOOP-1.7.patch, HADOOP-1.8.patch, HADOOP-1.9.patch, 
> HADOOP-1.patch
>
>
> The current implementations of the FTP and SFTP filesystems have severe 
> limitations and performance issues when dealing with a high number of files. 
> My patch solves those issues and integrates both filesystems in such a way 
> that most of the core functionality is shared, thereby simplifying 
> maintainability.
> The core features:
>  * Support for HTTP/SOCKS proxies
>  * Support for passive FTP
>  * Support for explicit FTPS (SSL/TLS)
>  * Support of connection pooling - new connection is not created for every 
> single command but reused from the pool.
>  For huge number of files it shows order of magnitude performance improvement 
> over not pooled connections.
>  * Caching of directory trees. For ftp you always need to list whole 
> directory whenever you ask information about particular file.
>  Again for huge number of files it shows order of magnitude performance 
> improvement over not cached connections.
>  * Support of keep alive (NOOP) messages to avoid connection drops
>  * Support for Unix style or regexp wildcard glob - useful for listing a 
> particular files across whole directory tree
>  * Support for reestablishing broken ftp data transfers - can happen 
> surprisingly often
>  * Support for sftp private keys (including pass phrase)
>  * Support for keeping passwords, private keys and pass phrase in the jceks 
> key stores
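The pooling idea described above (reuse a live session instead of opening a new connection per command) can be sketched generically. This is an illustrative stand-in, not the patch's actual code: `ChannelPool` and `Channel` are hypothetical names, and `Channel` stands in for whatever wraps the FTP/SFTP session.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class ChannelPool {
    static class Channel {
        // In the real filesystems this would wrap an FTP or SFTP session.
        boolean open = true;
    }

    private final Deque<Channel> idle = new ArrayDeque<>();
    private int created = 0;

    // Reuse an idle channel when one exists; connect only when the pool is empty.
    public synchronized Channel borrow() {
        Channel c = idle.poll();
        if (c == null) {
            c = new Channel();
            created++;
        }
        return c;
    }

    // Return the channel to the pool so the next command can reuse it.
    public synchronized void release(Channel c) {
        if (c.open) {
            idle.push(c);
        }
    }

    public synchronized int connectionsCreated() {
        return created;
    }

    public static void main(String[] args) {
        ChannelPool pool = new ChannelPool();
        // Ten sequential "commands" end up sharing one physical connection.
        for (int i = 0; i < 10; i++) {
            Channel c = pool.borrow();
            pool.release(c);
        }
        System.out.println(pool.connectionsCreated()); // 1
    }
}
```

Without pooling, the same loop would open ten connections; with many small files, that connection-setup cost dominates, which is where the reported order-of-magnitude improvement comes from.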



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14444) New implementation of ftp and sftp filesystems

2018-05-10 Thread Lukas Waldmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lukas Waldmann updated HADOOP-14444:

Attachment: (was: HADOOP-14444.15.patch)

> New implementation of ftp and sftp filesystems
> --
>
> Key: HADOOP-14444
> URL: https://issues.apache.org/jira/browse/HADOOP-14444
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Lukas Waldmann
>Assignee: Lukas Waldmann
>Priority: Major
> Attachments: HADOOP-14444.10.patch, HADOOP-14444.11.patch, 
> HADOOP-14444.12.patch, HADOOP-14444.13.patch, HADOOP-14444.14.patch, 
> HADOOP-14444.15.patch, HADOOP-14444.2.patch, HADOOP-14444.3.patch, 
> HADOOP-14444.4.patch, HADOOP-14444.5.patch, HADOOP-14444.6.patch, 
> HADOOP-14444.7.patch, HADOOP-14444.8.patch, HADOOP-14444.9.patch, 
> HADOOP-14444.patch
>
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15454) TestRollingFileSystemSinkWithLocal fails on Windows

2018-05-10 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16470181#comment-16470181
 ] 

genericqa commented on HADOOP-15454:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m 
46s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
16s{color} | {color:red} root in trunk failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
12s{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
50s{color} | {color:red} hadoop-common in trunk failed. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  1m 
12s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
11s{color} | {color:red} hadoop-common in trunk failed. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
12s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
13s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 13s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
14s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  0m 
10s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
11s{color} | {color:red} hadoop-common in the patch failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 11s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 18m  4s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:f667ef1 |
| JIRA Issue | HADOOP-15454 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12922813/HADOOP-15454.000.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c323ce3c4e1b 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ba051b0 |
| maven | version: Apache Maven 3.3.9 
(bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T16:41:47+00:00) |
| Default Java | 1.7.0_171 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14607/artifact/out/branch-mvninstall-root.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14607/artifact/out/branch-compile-root.txt
 |
| mvnsite | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14607/artifact/out/branch-mvnsite-hadoop-common-project_hadoop-common.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14607/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common.txt
 |
| mvninstall | 

[jira] [Commented] (HADOOP-15354) hadoop-aliyun & hadoop-azure modules to mark hadoop-common as provided

2018-05-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16470172#comment-16470172
 ] 

Hudson commented on HADOOP-15354:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14157 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14157/])
HADOOP-15354. hadoop-aliyun & hadoop-azure modules to mark hadoop-common 
(aajisaka: rev ba051b0686ad5190b43ce64552de5f14d1b1461d)
* (edit) hadoop-tools/hadoop-aliyun/pom.xml
* (edit) hadoop-tools/hadoop-azure/pom.xml


> hadoop-aliyun & hadoop-azure modules to mark hadoop-common as provided
> --
>
> Key: HADOOP-15354
> URL: https://issues.apache.org/jira/browse/HADOOP-15354
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, fs/azure, fs/oss
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-15351-branch-3.1.001.patch
>
>
> Although the aws/openstack and adl modules now declare hadoop-common as 
> "provided", the hadoop-aliyun and hadoop-azure modules don't, so it gets into 
> the set of dependencies passed on through hadoop-cloud-storage. It should be 
> switched to provided in the POMs of these modules
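The fix described above is a one-line scope switch. A sketch of what the dependency entry would look like in the two POMs; the coordinates are the standard Hadoop ones, and the version is assumed to be inherited from the parent POM:

```xml
<!-- In hadoop-tools/hadoop-aliyun/pom.xml and hadoop-tools/hadoop-azure/pom.xml.
     "provided" means hadoop-common is available at compile time but is NOT
     propagated as a transitive dependency, so hadoop-cloud-storage no longer
     drags it in. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <scope>provided</scope>
</dependency>
```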



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15454) TestRollingFileSystemSinkWithLocal fails on Windows

2018-05-10 Thread Xiao Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16470169#comment-16470169
 ] 

Xiao Liang commented on HADOOP-15454:
-

Current test result on Windows (before patch):

{color:#d04437}[INFO] Results:{color}
{color:#d04437}[INFO]{color}
{color:#d04437}[ERROR] Failures:{color}
{color:#d04437}[ERROR] 
TestRollingFileSystemSinkWithLocal.testSilentFailedWrite:145 An exception was 
generated while writing metrics when the target directory was not writable, 
even though the sink is set to ignore errors{color}
{color:#d04437}[ERROR] Errors:{color}
{color:#d04437}[ERROR] 
TestRollingFileSystemSinkWithLocal.testExistingWrite:67->RollingFileSystemSinkTestBase.doAppendTest:389->RollingFileSystemSinkTestBase.preCreateLogFile:412->RollingFileSystemSinkTestBase.preCreateLogFile:434
 » URISyntax{color}
{color:#d04437}[ERROR] 
TestRollingFileSystemSinkWithLocal.testExistingWrite2:81->RollingFileSystemSinkTestBase.preCreateLogFile:434
 » URISyntax{color}
{color:#d04437}[ERROR] 
TestRollingFileSystemSinkWithLocal.testSilentExistingWrite:96->RollingFileSystemSinkTestBase.doAppendTest:389->RollingFileSystemSinkTestBase.preCreateLogFile:412->RollingFileSystemSinkTestBase.preCreateLogFile:434
 » URISyntax{color}
{color:#d04437}[ERROR] 
TestRollingFileSystemSinkWithLocal.testSilentWrite:55->RollingFileSystemSinkTestBase.doWriteTest:231->RollingFileSystemSinkTestBase.readLogFile:250
 » URISyntax{color}
{color:#d04437}[ERROR] 
TestRollingFileSystemSinkWithLocal.testWrite:42->RollingFileSystemSinkTestBase.doWriteTest:231->RollingFileSystemSinkTestBase.readLogFile:250
 » URISyntax{color}
{color:#d04437}[INFO]{color}
{color:#d04437}[ERROR] Tests run: 7, Failures: 1, Errors: 5, Skipped: 0{color}

With the patch:

{color:#14892c}[INFO] 
---{color}
{color:#14892c}[INFO] T E S T S{color}
{color:#14892c}[INFO] 
---{color}
{color:#14892c}[INFO] Running 
org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithLocal{color}
{color:#14892c}[INFO] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time 
elapsed: 22.032 s - in 
org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithLocal{color}
{color:#14892c}[INFO]{color}
{color:#14892c}[INFO] Results:{color}
{color:#14892c}[INFO]{color}
{color:#14892c}[INFO] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0{color}

> TestRollingFileSystemSinkWithLocal fails on Windows
> ---
>
> Key: HADOOP-15454
> URL: https://issues.apache.org/jira/browse/HADOOP-15454
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HADOOP-15454-branch-2.000.patch, HADOOP-15454.000.patch
>
>
> org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithLocal fails on 
> Windows:
> *Error message:*
> Illegal character in opaque part at index 2: 
> D:\_work\8\s\hadoop-common-project\hadoop-common\target\test\data\4\RollingFileSystemSinkTest\testSilentExistingWrite
> *Stack trace:*
> java.io.IOException: All datanodes 
> [DatanodeInfoWithStorage[127.0.0.1:53235,DS-4d6119ac-31cc-4f48-8a5b-7f35b36a1c55,DISK]]
>  are bad. Aborting... at 
> org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1538) 
> at 
> org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
>  at 
> org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
>  at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)
> A drive letter in an absolute Windows path fails the underlying URI check.
> Another issue is that the failed-write test case uses 
> java.io.File#setWritable(boolean, boolean), which does not work as 
> expected on Windows and should be replaced by 
> org.apache.hadoop.fs.FileUtil#setWritable.
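The "Illegal character in opaque part at index 2" failure above is easy to reproduce with plain JDK calls: the URI parser treats the drive letter "D:" as a scheme and the rest of the path as an opaque part, where the backslash is illegal. The path below is an illustrative stand-in, not the truncated one from the report:

```java
import java.io.File;
import java.net.URI;
import java.net.URISyntaxException;

public class WindowsUriCheck {
    // Returns true iff the raw string parses as a URI. A bare Windows path
    // like "D:\work\x" does not: "D:" becomes the scheme and the backslash
    // is an illegal character in the opaque part that follows it.
    public static boolean isValidRawUri(String path) {
        try {
            new URI(path);
            return true;
        } catch (URISyntaxException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        String winPath = "D:\\work\\RollingFileSystemSinkTest\\testWrite";
        System.out.println(isValidRawUri(winPath)); // false
        // File#toURI percent-encodes the path and always yields a valid
        // file: URI, which is the usual way to turn a local path into a URI.
        System.out.println(new File(winPath).toURI().getScheme()); // file
    }
}
```

Going through `File#toURI` (or Hadoop's `Path`, which normalizes Windows paths) instead of handing the raw string to `URI` avoids the failure.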



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15454) TestRollingFileSystemSinkWithLocal fails on Windows

2018-05-10 Thread Xiao Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated HADOOP-15454:

Attachment: HADOOP-15454.000.patch

> TestRollingFileSystemSinkWithLocal fails on Windows
> ---
>
> Key: HADOOP-15454
> URL: https://issues.apache.org/jira/browse/HADOOP-15454
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HADOOP-15454-branch-2.000.patch, HADOOP-15454.000.patch
>
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15454) TestRollingFileSystemSinkWithLocal fails on Windows

2018-05-10 Thread Xiao Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated HADOOP-15454:

Status: Patch Available  (was: Open)

> TestRollingFileSystemSinkWithLocal fails on Windows
> ---
>
> Key: HADOOP-15454
> URL: https://issues.apache.org/jira/browse/HADOOP-15454
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HADOOP-15454-branch-2.000.patch
>
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15454) TestRollingFileSystemSinkWithLocal fails on Windows

2018-05-10 Thread Xiao Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated HADOOP-15454:

Attachment: HADOOP-15454-branch-2.000.patch

> TestRollingFileSystemSinkWithLocal fails on Windows
> ---
>
> Key: HADOOP-15454
> URL: https://issues.apache.org/jira/browse/HADOOP-15454
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HADOOP-15454-branch-2.000.patch
>
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15454) TestRollingFileSystemSinkWithLocal fails on Windows

2018-05-10 Thread Xiao Liang (JIRA)
Xiao Liang created HADOOP-15454:
---

 Summary: TestRollingFileSystemSinkWithLocal fails on Windows
 Key: HADOOP-15454
 URL: https://issues.apache.org/jira/browse/HADOOP-15454
 Project: Hadoop Common
  Issue Type: Test
Reporter: Xiao Liang
Assignee: Xiao Liang


org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithLocal fails on 
Windows:

*Error message:*

Illegal character in opaque part at index 2: 
D:\_work\8\s\hadoop-common-project\hadoop-common\target\test\data\4\RollingFileSystemSinkTest\testSilentExistingWrite

*Stack trace:*

java.io.IOException: All datanodes 
[DatanodeInfoWithStorage[127.0.0.1:53235,DS-4d6119ac-31cc-4f48-8a5b-7f35b36a1c55,DISK]]
 are bad. Aborting... at 
org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1538) 
at 
org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1472)
 at 
org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1244)
 at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:663)

A drive letter in an absolute Windows path fails the underlying URI check.

Another issue is that the failed-write test case uses 
java.io.File#setWritable(boolean, boolean), which does not work as expected 
on Windows and should be replaced by org.apache.hadoop.fs.FileUtil#setWritable.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15354) hadoop-aliyun & hadoop-azure modules to mark hadoop-common as provided

2018-05-10 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15354:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.2.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk. Thanks [~ste...@apache.org]!

> hadoop-aliyun & hadoop-azure modules to mark hadoop-common as provided
> --
>
> Key: HADOOP-15354
> URL: https://issues.apache.org/jira/browse/HADOOP-15354
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, fs/azure, fs/oss
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-15351-branch-3.1.001.patch
>
>
> Although the aws/openstack and adl modules now declare hadoop-common as 
> "provided", the hadoop-aliyun and hadoop-azure modules don't, so it gets into 
> the set of dependencies passed on through hadoop-cloud-storage. It should be 
> switched to provided in the POMs of these modules



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org