[jira] [Commented] (HADOOP-14706) Adding a helper method to determine whether a log is Log4j implement

2017-08-03 Thread Wenxin He (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16113984#comment-16113984
 ] 

Wenxin He commented on HADOOP-14706:


Thanks for your review and commit, [~ajisakaa]. The pull request is closed.

> Adding a helper method to determine whether a log is Log4j implement
> 
>
> Key: HADOOP-14706
> URL: https://issues.apache.org/jira/browse/HADOOP-14706
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Wenxin He
>Assignee: Wenxin He
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HADOOP-14706.001.patch, HADOOP-14706-branch-2.001.patch, 
> HADOOP-14706-branch-2.002.patch
>
>
> Based on the comments in YARN-6873, we'd like to add a helper method to
> determine whether a log is a Log4j implementation.
> Using this helper method, we don't have to care whether it's
> org.apache.commons.logging or org.slf4j.Logger that is used in our system.
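
For context, a minimal sketch of what such a helper can look like follows. The class and method names here are illustrative, not the committed API; it assumes the commons-logging Log4j wrapper and the slf4j-log4j12 binding are on the classpath:
{code}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.impl.Log4JLogger;
import org.slf4j.Logger;
import org.slf4j.impl.Log4jLoggerAdapter;

// Illustrative sketch only; the committed method name and location may differ.
public final class LogImplCheck {
  private LogImplCheck() {}

  // True when the commons-logging Log is backed by Log4j 1.x.
  public static boolean isLog4jLogger(Log log) {
    return log instanceof Log4JLogger;
  }

  // True when the SLF4J Logger is backed by Log4j 1.x.
  public static boolean isLog4jLogger(Logger logger) {
    return logger instanceof Log4jLoggerAdapter;
  }
}
{code}
With a helper like this, callers can branch on the logging backend without hard-coding which logging facade produced the logger.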






[jira] [Commented] (HADOOP-14471) Upgrade Jetty to latest 9.3 version

2017-08-03 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16113983#comment-16113983
 ] 

John Zhuge commented on HADOOP-14471:
-

Thanks [~steve_l] and [~ajisakaa]. Will commit tomorrow if no objection.

> Upgrade Jetty to latest 9.3 version
> ---
>
> Key: HADOOP-14471
> URL: https://issues.apache.org/jira/browse/HADOOP-14471
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha4
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HADOOP-14471.001.patch
>
>
> The current Jetty version is {{9.3.11.v20160721}}. Should we upgrade it to
> the latest 9.3.x, which is {{9.3.19.v20170502}}? Or 9.4?
> 9.3.x changes: 
> https://github.com/eclipse/jetty.project/blob/jetty-9.3.x/VERSION.txt
> 9.4.x changes:
> https://github.com/eclipse/jetty.project/blob/jetty-9.4.x/VERSION.txt






[jira] [Commented] (HADOOP-14706) Adding a helper method to determine whether a log is Log4j implement

2017-08-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16113981#comment-16113981
 ] 

ASF GitHub Bot commented on HADOOP-14706:
-

Github user aajisaka commented on the issue:

https://github.com/apache/hadoop/pull/258
  
Thanks!


> Adding a helper method to determine whether a log is Log4j implement
> 
>
> Key: HADOOP-14706
> URL: https://issues.apache.org/jira/browse/HADOOP-14706
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Wenxin He
>Assignee: Wenxin He
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HADOOP-14706.001.patch, HADOOP-14706-branch-2.001.patch, 
> HADOOP-14706-branch-2.002.patch
>
>
> Based on the comments in YARN-6873, we'd like to add a helper method to
> determine whether a log is a Log4j implementation.
> Using this helper method, we don't have to care whether it's
> org.apache.commons.logging or org.slf4j.Logger that is used in our system.






[jira] [Commented] (HADOOP-14706) Adding a helper method to determine whether a log is Log4j implement

2017-08-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16113977#comment-16113977
 ] 

ASF GitHub Bot commented on HADOOP-14706:
-

Github user wenxinhe closed the pull request at:

https://github.com/apache/hadoop/pull/258


> Adding a helper method to determine whether a log is Log4j implement
> 
>
> Key: HADOOP-14706
> URL: https://issues.apache.org/jira/browse/HADOOP-14706
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Wenxin He
>Assignee: Wenxin He
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HADOOP-14706.001.patch, HADOOP-14706-branch-2.001.patch, 
> HADOOP-14706-branch-2.002.patch
>
>
> Based on the comments in YARN-6873, we'd like to add a helper method to
> determine whether a log is a Log4j implementation.
> Using this helper method, we don't have to care whether it's
> org.apache.commons.logging or org.slf4j.Logger that is used in our system.






[jira] [Commented] (HADOOP-14706) Adding a helper method to determine whether a log is Log4j implement

2017-08-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16113972#comment-16113972
 ] 

ASF GitHub Bot commented on HADOOP-14706:
-

Github user aajisaka commented on the issue:

https://github.com/apache/hadoop/pull/258
  
Hi @wenxinhe, would you close this PR?


> Adding a helper method to determine whether a log is Log4j implement
> 
>
> Key: HADOOP-14706
> URL: https://issues.apache.org/jira/browse/HADOOP-14706
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Wenxin He
>Assignee: Wenxin He
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HADOOP-14706.001.patch, HADOOP-14706-branch-2.001.patch, 
> HADOOP-14706-branch-2.002.patch
>
>
> Based on the comments in YARN-6873, we'd like to add a helper method to
> determine whether a log is a Log4j implementation.
> Using this helper method, we don't have to care whether it's
> org.apache.commons.logging or org.slf4j.Logger that is used in our system.






[jira] [Updated] (HADOOP-14706) Adding a helper method to determine whether a log is Log4j implement

2017-08-03 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-14706:
---
       Resolution: Fixed
     Hadoop Flags: Reviewed
    Fix Version/s: 3.0.0-beta1
                   2.9.0
           Status: Resolved  (was: Patch Available)

Committed this to trunk and branch-2. Thanks [~vincent he] for the contribution!

> Adding a helper method to determine whether a log is Log4j implement
> 
>
> Key: HADOOP-14706
> URL: https://issues.apache.org/jira/browse/HADOOP-14706
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Wenxin He
>Assignee: Wenxin He
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HADOOP-14706.001.patch, HADOOP-14706-branch-2.001.patch, 
> HADOOP-14706-branch-2.002.patch
>
>
> Based on the comments in YARN-6873, we'd like to add a helper method to
> determine whether a log is a Log4j implementation.
> Using this helper method, we don't have to care whether it's
> org.apache.commons.logging or org.slf4j.Logger that is used in our system.






[jira] [Commented] (HADOOP-14706) Adding a helper method to determine whether a log is Log4j implement

2017-08-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16113966#comment-16113966
 ] 

ASF GitHub Bot commented on HADOOP-14706:
-

Github user asfgit closed the pull request at:

https://github.com/apache/hadoop/pull/257


> Adding a helper method to determine whether a log is Log4j implement
> 
>
> Key: HADOOP-14706
> URL: https://issues.apache.org/jira/browse/HADOOP-14706
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Wenxin He
>Assignee: Wenxin He
>Priority: Minor
> Attachments: HADOOP-14706.001.patch, HADOOP-14706-branch-2.001.patch, 
> HADOOP-14706-branch-2.002.patch
>
>
> Based on the comments in YARN-6873, we'd like to add a helper method to
> determine whether a log is a Log4j implementation.
> Using this helper method, we don't have to care whether it's
> org.apache.commons.logging or org.slf4j.Logger that is used in our system.






[jira] [Commented] (HADOOP-14471) Upgrade Jetty to latest 9.3 version

2017-08-03 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16113961#comment-16113961
 ] 

Akira Ajisaka commented on HADOOP-14471:


+1

> Upgrade Jetty to latest 9.3 version
> ---
>
> Key: HADOOP-14471
> URL: https://issues.apache.org/jira/browse/HADOOP-14471
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha4
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HADOOP-14471.001.patch
>
>
> The current Jetty version is {{9.3.11.v20160721}}. Should we upgrade it to
> the latest 9.3.x, which is {{9.3.19.v20170502}}? Or 9.4?
> 9.3.x changes: 
> https://github.com/eclipse/jetty.project/blob/jetty-9.3.x/VERSION.txt
> 9.4.x changes:
> https://github.com/eclipse/jetty.project/blob/jetty-9.4.x/VERSION.txt






[jira] [Commented] (HADOOP-12077) Provide a multi-URI replication Inode for ViewFs

2017-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16113949#comment-16113949
 ] 

Hadoop QA commented on HADOOP-12077:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
52s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
2s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 9 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
4s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
35s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 15m 35s{color} 
| {color:red} root generated 1 new + 1418 unchanged - 0 fixed = 1419 total (was 
1418) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 16s{color} | {color:orange} root: The patch generated 12 new + 159 unchanged 
- 5 fixed = 171 total (was 164) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
14s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 93m 28s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}200m 56s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-12077 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12880323/HADOOP-12077.007.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux e1654039e1a9 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f4c6b00 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12949/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| javac | 

[jira] [Commented] (HADOOP-14727) Socket not closed properly when reading Configurations with BlockReaderRemote

2017-08-03 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16113931#comment-16113931
 ] 

Xiao Chen commented on HADOOP-14727:


I also looked into the local repro when backing out the 4 mentioned jiras.
This is how that same {{BlockReaderRemote}} created from the above is closed: 
{noformat}
2017-08-03 21:10:53,052 INFO 
org.apache.hadoop.hdfs.client.impl.BlockReaderRemote:  closing 
blockreaderremote org.apache.hadoop.hdfs.client.impl.BlockReaderRemote@80ceea3
java.lang.Exception: 
at 
org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.close(BlockReaderRemote.java:310)
at 
org.apache.hadoop.hdfs.DFSInputStream.closeCurrentBlockReaders(DFSInputStream.java:1572)
at org.apache.hadoop.hdfs.DFSInputStream.close(DFSInputStream.java:664)
at java.io.FilterInputStream.close(FilterInputStream.java:181)
at 
org.apache.xerces.impl.XMLEntityManager$RewindableInputStream.close(Unknown 
Source)
at org.apache.xerces.impl.io.UTF8Reader.close(Unknown Source)
at org.apache.xerces.impl.XMLEntityManager.endEntity(Unknown Source)
at org.apache.xerces.impl.XMLEntityScanner.load(Unknown Source)
at org.apache.xerces.impl.XMLEntityScanner.skipSpaces(Unknown Source)
at 
org.apache.xerces.impl.XMLDocumentScannerImpl$TrailingMiscDispatcher.dispatch(Unknown
 Source)
at 
org.apache.xerces.impl.XMLDocumentFragmentScannerImpl.scanDocument(Unknown 
Source)
at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
at org.apache.xerces.parsers.DOMParser.parse(Unknown Source)
at org.apache.xerces.jaxp.DocumentBuilderImpl.parse(Unknown Source)
at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:121)
at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2645)
at 
org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2713)
at 
org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2662)
at 
org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2540)
at org.apache.hadoop.conf.Configuration.get(Configuration.java:1071)
at 
org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:1121)
at org.apache.hadoop.conf.Configuration.getInt(Configuration.java:1339)
at org.apache.hadoop.mapreduce.counters.Limits.init(Limits.java:45)
at org.apache.hadoop.mapreduce.counters.Limits.reset(Limits.java:130)
at 
org.apache.hadoop.mapreduce.v2.hs.CompletedJob.loadFullHistoryData(CompletedJob.java:363)
at 
org.apache.hadoop.mapreduce.v2.hs.CompletedJob.<init>(CompletedJob.java:105)
at 
org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager$HistoryFileInfo.loadJob(HistoryFileManager.java:473)
at 
org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.loadJob(CachedHistoryStorage.java:180)
at 
org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.access$000(CachedHistoryStorage.java:52)
at 
org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage$1.load(CachedHistoryStorage.java:103)
at 
org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage$1.load(CachedHistoryStorage.java:100)
at 
com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3568)
at 
com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2350)
at 
com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2313)
at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2228)
at com.google.common.cache.LocalCache.get(LocalCache.java:3965)
at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3969)
at 
com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4829)
at 
com.google.common.cache.LocalCache$LocalManualCache.getUnchecked(LocalCache.java:4834)
at 
org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.getFullJob(CachedHistoryStorage.java:193)
{noformat}
where
{code}
2712  } else if (resource instanceof InputStream) {
2713doc = parse(builder, (InputStream) resource, null);
2714returnCachedProperties = true;
2715  } else if (resource instanceof Properties) {
{code}

Although I'm naturally inclined to agree with what patch 1 does, it seems the
existing behavior is to close the stream regardless.
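
As a condensed illustration of that close-regardless behavior (not the actual Configuration.loadResource code; note the parse itself may also close the stream, as the Xerces frames in the trace above show):
{code}
import java.io.IOException;
import java.io.InputStream;
import javax.xml.parsers.DocumentBuilder;
import org.w3c.dom.Document;
import org.xml.sax.SAXException;

// Condensed illustration; not the actual Configuration.loadResource code.
final class CloseRegardlessSketch {
  static Document parseAndClose(DocumentBuilder builder, InputStream in)
      throws IOException, SAXException {
    try {
      // The DOM parse reads to end-of-document; Xerces can close the
      // underlying stream itself, as the stack trace above shows.
      return builder.parse(in);
    } finally {
      in.close(); // existing behavior: the stream is closed regardless
    }
  }
}
{code}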

> Socket not closed properly when reading Configurations with BlockReaderRemote
> -
>
> Key: HADOOP-14727
> URL: https://issues.apache.org/jira/browse/HADOOP-14727
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf

[jira] [Commented] (HADOOP-14628) Upgrade maven enforcer plugin to 3.0.0

2017-08-03 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16113922#comment-16113922
 ] 

Akira Ajisaka commented on HADOOP-14628:


Umm. HDFS and YARN tests did not run. I'll try to run the tests locally.

> Upgrade maven enforcer plugin to 3.0.0
> --
>
> Key: HADOOP-14628
> URL: https://issues.apache.org/jira/browse/HADOOP-14628
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Attachments: HADOOP-14626-testing.02.patch, 
> HADOOP-14626-testing.03.patch, HADOOP-14626.testing.patch, 
> HADOOP-14628.001.patch, HADOOP-14628.001-tests.patch
>
>
> Maven enforcer plugin fails after Java 9 build 175 (MENFORCER-274). Let's 
> upgrade the version to 3.0.0 when released.






[jira] [Commented] (HADOOP-14727) Socket not closed properly when reading Configurations with BlockReaderRemote

2017-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16113825#comment-16113825
 ] 

Hadoop QA commented on HADOOP-14727:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
30s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
58s{color} | {color:green} branch-2 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
44s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
38s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} branch-2 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
35s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
38s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
38s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 26s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 1 new + 147 unchanged - 1 fixed = 148 total (was 148) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 54s{color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_131 Failed junit tests | hadoop.net.TestDNS |
| JDK v1.7.0_131 Failed junit tests | hadoop.net.TestDNS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:5e40efe |
| JIRA Issue | HADOOP-14727 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12880297/HADOOP-14727.001-branch-2.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 722ab85978b7 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| 

[jira] [Commented] (HADOOP-14730) hasAcl property always set to false, regardless of FsPermission higher bit order

2017-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16113801#comment-16113801
 ] 

Hadoop QA commented on HADOOP-14730:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 16s{color} 
| {color:red} hadoop-tools_hadoop-azure-datalake generated 4 new + 5 unchanged 
- 6 fixed = 9 total (was 11) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 11s{color} | {color:orange} hadoop-tools/hadoop-azure-datalake: The patch 
generated 4 new + 16 unchanged - 4 fixed = 20 total (was 20) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
38s{color} | {color:green} hadoop-azure-datalake in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 26m 30s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14730 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12880286/HADOOP-14730.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux e3ae722c8526 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f4c6b00 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| javac | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12948/artifact/patchprocess/diff-compile-javac-hadoop-tools_hadoop-azure-datalake.txt
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12948/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-azure-datalake.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12948/testReport/ |
| modules | C: hadoop-tools/hadoop-azure-datalake U: 
hadoop-tools/hadoop-azure-datalake |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12948/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> hasAcl property always set to false, regardless of FsPermission higher bit 
> order 
> -
>
> Key: 

[jira] [Commented] (HADOOP-14722) Azure: BlockBlobInputStream position incorrect after seek

2017-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16113796#comment-16113796
 ] 

Hadoop QA commented on HADOOP-14722:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 11s{color} | {color:orange} hadoop-tools/hadoop-azure: The patch generated 2 
new + 27 unchanged - 0 fixed = 29 total (was 27) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
15s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m  6s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14722 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12880328/HADOOP-14722-003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 37a836c57f0a 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f4c6b00 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12950/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-azure.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12950/testReport/ |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12950/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Azure: BlockBlobInputStream position incorrect after seek
> -
>
> Key: HADOOP-14722
> URL: https://issues.apache.org/jira/browse/HADOOP-14722
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: Thomas Marquardt
>Assignee: Thomas Marquardt
> Attachments: HADOOP-14722-001.patch, 

[jira] [Commented] (HADOOP-14715) TestWasbRemoteCallHelper failing

2017-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16113786#comment-16113786
 ] 

Hadoop QA commented on HADOOP-14715:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
15s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
18s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m 28s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14715 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12880267/HADOOP-14715-001.patch
 |
| Optional Tests |  asflicense  unit  xml  |
| uname | Linux 1fa53dd9f117 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f4c6b00 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12946/testReport/ |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12946/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestWasbRemoteCallHelper failing
> 
>
> Key: HADOOP-14715
> URL: https://issues.apache.org/jira/browse/HADOOP-14715
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.9.0, 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-14715-001.patch
>
>
> {{org.apache.hadoop.fs.azure.TestWasbRemoteCallHelper.testWhenOneInstanceIsDown}}
>  is failing for me on trunk






[jira] [Commented] (HADOOP-14722) Azure: BlockBlobInputStream position incorrect after seek

2017-08-03 Thread Shane Mainali (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16113776#comment-16113776
 ] 

Shane Mainali commented on HADOOP-14722:


Thanks [~tmarquardt] for taking care of the feedback and adding the additional
tests; it's good they caught something. I think this change is in a good state
now, +1 from my side.

> Azure: BlockBlobInputStream position incorrect after seek
> -
>
> Key: HADOOP-14722
> URL: https://issues.apache.org/jira/browse/HADOOP-14722
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: Thomas Marquardt
>Assignee: Thomas Marquardt
> Attachments: HADOOP-14722-001.patch, HADOOP-14722-002.patch, 
> HADOOP-14722-003.patch
>
>
> The seek, skip, and getPos methods of BlockBlobInputStream do not correctly 
> account for the stream's  internal buffer.  This results in invalid stream 
> positions. 






[jira] [Updated] (HADOOP-14627) Support MSI and DeviceCode token provider

2017-08-03 Thread Atul Sikaria (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Atul Sikaria updated HADOOP-14627:
--
Attachment: HADOOP-14627.002.patch

> Support MSI and DeviceCode token provider
> -
>
> Key: HADOOP-14627
> URL: https://issues.apache.org/jira/browse/HADOOP-14627
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/adl
> Environment: MSI Change applies only to Hadoop running in an Azure VM
>Reporter: Atul Sikaria
>Assignee: Atul Sikaria
> Attachments: HADOOP-14627-001.patch, HADOOP-14627.002.patch
>
>
> This change is to upgrade the Hadoop ADLS connector to enable new auth 
> features exposed by the ADLS Java SDK.
> Specifically:
> MSI Tokens: MSI (Managed Service Identity) is a way to provide an identity to 
> an Azure Service. In the case of VMs, they can be used to give an identity to 
> a VM deployment. This simplifies managing Service Principals, since the creds 
> don’t have to be managed in core-site files anymore. The way this works is 
> that during VM deployment, the ARM (Azure Resource Manager) template needs to 
> be modified to enable MSI. Once deployed, the MSI extension runs a service on 
> the VM that exposes a token endpoint to http://localhost at a port specified 
> in the template. The SDK has a new TokenProvider to fetch the token from this 
> local endpoint. This change would expose that TokenProvider as an auth option.
> DeviceCode auth: This enables a token to be obtained from an interactive 
> login. The user is given a URL and a token to use on the login screen. User 
> can use the token to login from any device. Once the login is done, the token 
> that is obtained is in the name of the user who logged in. Note that because 
> of the interactive login involved, this is not very suitable for job 
> scenarios, but can work for ad-hoc scenarios like running “hdfs dfs” commands.
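
As a purely hypothetical illustration of the localhost token endpoint described above (the path, query parameter, and Metadata header here are assumptions about the VM extension, not the SDK's actual TokenProvider implementation):
{code}
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Hypothetical sketch of calling a localhost MSI token endpoint; the real
// work is done by the ADLS SDK's token provider, not code like this.
final class MsiTokenFetchSketch {
  static String fetchTokenJson(int msiPort, String resource) throws Exception {
    // msiPort is the port specified in the ARM template, per the description
    // above; the path and request shape are assumptions for illustration.
    URL url = new URL("http://localhost:" + msiPort + "/oauth2/token?resource="
        + URLEncoder.encode(resource, "UTF-8"));
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestProperty("Metadata", "true");
    try (InputStream in = conn.getInputStream();
         ByteArrayOutputStream out = new ByteArrayOutputStream()) {
      byte[] buf = new byte[4096];
      int n;
      while ((n = in.read(buf)) > 0) {
        out.write(buf, 0, n);
      }
      return new String(out.toByteArray(), StandardCharsets.UTF_8);
    }
  }
}
{code}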






[jira] [Updated] (HADOOP-14627) Support MSI and DeviceCode token provider

2017-08-03 Thread Atul Sikaria (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Atul Sikaria updated HADOOP-14627:
--
Attachment: (was: HADOOP-14627-002.patch)

> Support MSI and DeviceCode token provider
> -
>
> Key: HADOOP-14627
> URL: https://issues.apache.org/jira/browse/HADOOP-14627
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/adl
> Environment: MSI Change applies only to Hadoop running in an Azure VM
>Reporter: Atul Sikaria
>Assignee: Atul Sikaria
> Attachments: HADOOP-14627-001.patch, HADOOP-14627.002.patch
>
>
> This change is to upgrade the Hadoop ADLS connector to enable new auth 
> features exposed by the ADLS Java SDK.
> Specifically:
> MSI Tokens: MSI (Managed Service Identity) is a way to provide an identity to 
> an Azure Service. In the case of VMs, they can be used to give an identity to 
> a VM deployment. This simplifies managing Service Principals, since the creds 
> don’t have to be managed in core-site files anymore. The way this works is 
> that during VM deployment, the ARM (Azure Resource Manager) template needs to 
> be modified to enable MSI. Once deployed, the MSI extension runs a service on 
> the VM that exposes a token endpoint to http://localhost at a port specified 
> in the template. The SDK has a new TokenProvider to fetch the token from this 
> local endpoint. This change would expose that TokenProvider as an auth option.
> DeviceCode auth: This enables a token to be obtained from an interactive 
> login. The user is given a URL and a token to use on the login screen. User 
> can use the token to login from any device. Once the login is done, the token 
> that is obtained is in the name of the user who logged in. Note that because 
> of the interactive login involved, this is not very suitable for job 
> scenarios, but can work for ad-hoc scenarios like running “hdfs dfs” commands.






[jira] [Commented] (HADOOP-14627) Support MSI and DeviceCode token provider

2017-08-03 Thread Atul Sikaria (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16113774#comment-16113774
 ] 

Atul Sikaria commented on HADOOP-14627:
---

[~jzhuge], [~steve_l]: addressed all your concerns in the comments above. 

Also updating patch to not be dependent on the preview version of SDK (since 
released version 2.2.1 is now available).

+[~chris.douglas] as FYI

> Support MSI and DeviceCode token provider
> -
>
> Key: HADOOP-14627
> URL: https://issues.apache.org/jira/browse/HADOOP-14627
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/adl
> Environment: MSI Change applies only to Hadoop running in an Azure VM
>Reporter: Atul Sikaria
>Assignee: Atul Sikaria
> Attachments: HADOOP-14627-001.patch, HADOOP-14627-002.patch
>
>
> This change is to upgrade the Hadoop ADLS connector to enable new auth 
> features exposed by the ADLS Java SDK.
> Specifically:
> MSI Tokens: MSI (Managed Service Identity) is a way to provide an identity to 
> an Azure Service. In the case of VMs, they can be used to give an identity to 
> a VM deployment. This simplifies managing Service Principals, since the creds 
> don’t have to be managed in core-site files anymore. The way this works is 
> that during VM deployment, the ARM (Azure Resource Manager) template needs to 
> be modified to enable MSI. Once deployed, the MSI extension runs a service on 
> the VM that exposes a token endpoint to http://localhost at a port specified 
> in the template. The SDK has a new TokenProvider to fetch the token from this 
> local endpoint. This change would expose that TokenProvider as an auth option.
> DeviceCode auth: This enables a token to be obtained from an interactive 
> login. The user is given a URL and a token to use on the login screen. User 
> can use the token to login from any device. Once the login is done, the token 
> that is obtained is in the name of the user who logged in. Note that because 
> of the interactive login involved, this is not very suitable for job 
> scenarios, but can work for ad-hoc scenarios like running “hdfs dfs” commands.






[jira] [Updated] (HADOOP-14627) Support MSI and DeviceCode token provider

2017-08-03 Thread Atul Sikaria (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Atul Sikaria updated HADOOP-14627:
--
Attachment: HADOOP-14627-002.patch

> Support MSI and DeviceCode token provider
> -
>
> Key: HADOOP-14627
> URL: https://issues.apache.org/jira/browse/HADOOP-14627
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/adl
> Environment: MSI Change applies only to Hadoop running in an Azure VM
>Reporter: Atul Sikaria
>Assignee: Atul Sikaria
> Attachments: HADOOP-14627-001.patch, HADOOP-14627-002.patch
>
>
> This change is to upgrade the Hadoop ADLS connector to enable new auth 
> features exposed by the ADLS Java SDK.
> Specifically:
> MSI Tokens: MSI (Managed Service Identity) is a way to provide an identity to 
> an Azure Service. In the case of VMs, they can be used to give an identity to 
> a VM deployment. This simplifies managing Service Principals, since the creds 
> don’t have to be managed in core-site files anymore. The way this works is 
> that during VM deployment, the ARM (Azure Resource Manager) template needs to 
> be modified to enable MSI. Once deployed, the MSI extension runs a service on 
> the VM that exposes a token endpoint to http://localhost at a port specified 
> in the template. The SDK has a new TokenProvider to fetch the token from this 
> local endpoint. This change would expose that TokenProvider as an auth option.
> DeviceCode auth: This enables a token to be obtained from an interactive 
> login. The user is given a URL and a token to use on the login screen. User 
> can use the token to login from any device. Once the login is done, the token 
> that is obtained is in the name of the user who logged in. Note that because 
> of the interactive login involved, this is not very suitable for job 
> scenarios, but can work for ad-hoc scenarios like running “hdfs dfs” commands.






[jira] [Updated] (HADOOP-14627) Support MSI and DeviceCode token provider

2017-08-03 Thread Atul Sikaria (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Atul Sikaria updated HADOOP-14627:
--
Status: Patch Available  (was: Open)

> Support MSI and DeviceCode token provider
> -
>
> Key: HADOOP-14627
> URL: https://issues.apache.org/jira/browse/HADOOP-14627
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/adl
> Environment: MSI Change applies only to Hadoop running in an Azure VM
>Reporter: Atul Sikaria
>Assignee: Atul Sikaria
> Attachments: HADOOP-14627-001.patch, HADOOP-14627-002.patch
>
>
> This change is to upgrade the Hadoop ADLS connector to enable new auth 
> features exposed by the ADLS Java SDK.
> Specifically:
> MSI Tokens: MSI (Managed Service Identity) is a way to provide an identity to 
> an Azure Service. In the case of VMs, they can be used to give an identity to 
> a VM deployment. This simplifies managing Service Principals, since the creds 
> don’t have to be managed in core-site files anymore. The way this works is 
> that during VM deployment, the ARM (Azure Resource Manager) template needs to 
> be modified to enable MSI. Once deployed, the MSI extension runs a service on 
> the VM that exposes a token endpoint to http://localhost at a port specified 
> in the template. The SDK has a new TokenProvider to fetch the token from this 
> local endpoint. This change would expose that TokenProvider as an auth option.
> DeviceCode auth: This enables a token to be obtained from an interactive 
> login. The user is given a URL and a token to use on the login screen. User 
> can use the token to login from any device. Once the login is done, the token 
> that is obtained is in the name of the user who logged in. Note that because 
> of the interactive login involved, this is not very suitable for job 
> scenarios, but can work for ad-hoc scenarios like running “hdfs dfs” commands.






[jira] [Commented] (HADOOP-14731) Update gitignore to exclude output of site build

2017-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16113772#comment-16113772
 ] 

Hadoop QA commented on HADOOP-14731:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}  0m 55s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14731 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12880312/HADOOP-14731.001.patch
 |
| Optional Tests |  asflicense  |
| uname | Linux 2f6d3129b1b8 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f4c6b00 |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12947/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Update gitignore to exclude output of site build
> 
>
> Key: HADOOP-14731
> URL: https://issues.apache.org/jira/browse/HADOOP-14731
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, site
>Affects Versions: 3.0.0-alpha3
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: HADOOP-14731.001.patch
>
>
> Site build generates a bunch of files that aren't caught by gitignore, let's 
> update.






[jira] [Commented] (HADOOP-14722) Azure: BlockBlobInputStream position incorrect after seek

2017-08-03 Thread Thomas Marquardt (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16113754#comment-16113754
 ] 

Thomas Marquardt commented on HADOOP-14722:
---

Let me also add the following responses to Esfandiar's feedback:

>>> BlockBlobInputStream.java: L92-94: streamPosition - streamBufferLength + 
>>> streamBufferPosition, can this become negative?  

It cannot be negative.

>>> BlockBlobInputStream.java: L133: don't we need to nullify streamBuffer too?

The buffer is reusable, to avoid frequent memory allocations of a large buffer. 
 I added a resetStreamBuffer function to set the position and length to zero, 
to help clarify.

>>> BlockBlobInputStream.java: L321-323: Why don't you throw the exception
>>> right at the beginning?

A goal of this change was to keep the existing blob input stream functionality 
up until a reverse seek operation is performed, at which point it switches to 
the new behavior.  The exception was not thrown at the beginning to reduce the 
likelihood of regression.

>>> BlockBlobInputStream.java: L314: Overall I am not a big fan of having
>>> nested ifs and elses because it makes the code more complicated than
>>> needed. Let's just return instead of creating an else.

I agree, although for this bug fix it is desirable to minimize the code change.

>>> BlockBlobInputStream.java: L330: I'd suggest creating a private method
>>> which clears the buffer and getting rid of all the custom
>>> streamBufferPosition = 0; streamBufferLength = 0, etc.

I added the resetStreamBuffer function for this.  See my earlier comment about 
re-using the buffer.
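
A minimal sketch of the position accounting under discussion, using the field names from the review comments above (illustration only, not the patch itself):
{code}
// Illustration of buffer-aware position accounting; field names follow the
// review thread, not the actual BlockBlobInputStream implementation.
final class BufferedPositionSketch {
  private long streamPosition;      // position of the underlying store stream
  private int streamBufferLength;   // bytes currently held in the buffer
  private int streamBufferPosition; // next unread offset within the buffer

  // Caller-visible position: the store position minus the buffered bytes
  // that have not yet been consumed. It cannot become negative because the
  // buffer was filled from the stream, so streamPosition >= streamBufferLength.
  long getPos() {
    return streamPosition - streamBufferLength + streamBufferPosition;
  }

  // Mark the reusable buffer empty instead of reallocating it
  // (the resetStreamBuffer idea discussed above).
  void resetStreamBuffer() {
    streamBufferPosition = 0;
    streamBufferLength = 0;
  }
}
{code}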

> Azure: BlockBlobInputStream position incorrect after seek
> -
>
> Key: HADOOP-14722
> URL: https://issues.apache.org/jira/browse/HADOOP-14722
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: Thomas Marquardt
>Assignee: Thomas Marquardt
> Attachments: HADOOP-14722-001.patch, HADOOP-14722-002.patch, 
> HADOOP-14722-003.patch
>
>
> The seek, skip, and getPos methods of BlockBlobInputStream do not correctly 
> account for the stream's  internal buffer.  This results in invalid stream 
> positions. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14732) ProtobufRpcEngine should use Time.monotonicNow to measure durations

2017-08-03 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HADOOP-14732:
---

 Summary: ProtobufRpcEngine should use Time.monotonicNow to measure 
durations
 Key: HADOOP-14732
 URL: https://issues.apache.org/jira/browse/HADOOP-14732
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Hanisha Koneru
Assignee: Hanisha Koneru






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14722) Azure: BlockBlobInputStream position incorrect after seek

2017-08-03 Thread Thomas Marquardt (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Marquardt updated HADOOP-14722:
--
Attachment: HADOOP-14722-003.patch

Attaching patch HADOOP-14722-003.patch.

This addresses the feedback, fixes an issue with the BlockBlobInputStream.skip 
implementation, and adds additional test coverage so all the code paths are 
exercised for seek and skip.

All hadoop-tools/hadoop-azure *tests are passing* with this patch, except for 
TestWasbRemoteCallHelper which is a known issue tracked by HADOOP-14715.  I 
tested against my endpoint tmarql3.
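To illustrate the kind of coverage added, a hypothetical test along these 
lines ({{fs}} and {{testPath}} are placeholders a real harness would set up; 
the actual tests are in the patch):

{code:java}
import static org.junit.Assert.assertEquals;

import java.io.IOException;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.junit.Test;

public class BlockBlobSeekSketch {
  private FileSystem fs;    // initialized by the test harness (assumption)
  private Path testPath;    // pre-created test blob (assumption)

  @Test
  public void testGetPosAfterReverseSeek() throws IOException {
    try (FSDataInputStream in = fs.open(testPath)) {
      in.seek(1024);                 // forward seek
      in.read();                     // fill the internal buffer
      in.seek(0);                    // reverse seek switches to the new mode
      assertEquals(0, in.getPos());  // getPos must account for the buffer
    }
  }
}
{code}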

> Azure: BlockBlobInputStream position incorrect after seek
> -
>
> Key: HADOOP-14722
> URL: https://issues.apache.org/jira/browse/HADOOP-14722
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: Thomas Marquardt
>Assignee: Thomas Marquardt
> Attachments: HADOOP-14722-001.patch, HADOOP-14722-002.patch, 
> HADOOP-14722-003.patch
>
>
> The seek, skip, and getPos methods of BlockBlobInputStream do not correctly 
> account for the stream's  internal buffer.  This results in invalid stream 
> positions. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14715) TestWasbRemoteCallHelper failing

2017-08-03 Thread Thomas Marquardt (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113738#comment-16113738
 ] 

Thomas Marquardt commented on HADOOP-14715:
---

Esfandiar, thanks for finding the regression!  In addition to fixing the 
regression so that the test passes with *fs.azure.secure.mode* set to *true* or 
*false*, we should also change the test configuration back to how it was, so 
that fs.azure.secure.mode is false (which is what Steve did in 
HADOOP-14715-001.patch).  The default should be false because that is the most 
common usage.
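For anyone reproducing this locally, a minimal sketch of pinning the setting 
in a test configuration (key name as given above):

{code:java}
import org.apache.hadoop.conf.Configuration;

// Pin fs.azure.secure.mode to the common default so the test exercises the
// non-secure path; flip it to true for a secure-mode run.
Configuration conf = new Configuration();
conf.setBoolean("fs.azure.secure.mode", false);
{code}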

> TestWasbRemoteCallHelper failing
> 
>
> Key: HADOOP-14715
> URL: https://issues.apache.org/jira/browse/HADOOP-14715
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.9.0, 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-14715-001.patch
>
>
> {{org.apache.hadoop.fs.azure.TestWasbRemoteCallHelper.testWhenOneInstanceIsDown}}
>  is failing for me on trunk



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12077) Provide a multi-URI replication Inode for ViewFs

2017-08-03 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HADOOP-12077:
---
Attachment: HADOOP-12077.007.patch

Fixed some warnings. The javac warning depends on HADOOP-13065, particularly 
HADOOP-13032.

I didn't do any testing with this patch; I only rebased it and ran the unit tests. 
[~jira.shegalov], do you have cycles to verify that it works as designed?

> Provide a multi-URI replication Inode for ViewFs
> 
>
> Key: HADOOP-12077
> URL: https://issues.apache.org/jira/browse/HADOOP-12077
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
> Attachments: HADOOP-12077.001.patch, HADOOP-12077.002.patch, 
> HADOOP-12077.003.patch, HADOOP-12077.004.patch, HADOOP-12077.005.patch, 
> HADOOP-12077.006.patch, HADOOP-12077.007.patch
>
>
> This JIRA is to provide simple "replication" capabilities for applications 
> that maintain logically equivalent paths in multiple locations for caching or 
> failover (e.g., S3 and HDFS). We noticed a simple common HDFS usage pattern 
> in our applications. They host their data on some logical cluster C. There 
> are corresponding HDFS clusters in multiple datacenters. When the application 
> runs in DC1, it prefers to read from C in DC1, and the applications prefers 
> to failover to C in DC2 if the application is migrated to DC2 or when C in 
> DC1 is unavailable. New application data versions are created 
> periodically/relatively infrequently. 
> In order to address many common scenarios in a general fashion, and to avoid 
> unnecessary code duplication, we implement this functionality in ViewFs (our 
> default FileSystem spanning all clusters in all datacenters) in a project 
> code-named Nfly (N as in N datacenters). Currently each ViewFs Inode points 
> to a single URI via ChRootedFileSystem. Consequently, we introduce a new type 
> of links that points to a list of URIs that are each going to be wrapped in 
> ChRootedFileSystem. A typical usage: 
> /nfly/C/user->/DC1/C/user,/DC2/C/user,... This collection of 
> ChRootedFileSystem instances is fronted by the Nfly filesystem object that is 
> actually used for the mount point/Inode. The Nfly filesystem backs a single 
> logical path /nfly/C/user//path with multiple physical paths.
> The Nfly filesystem supports setting minReplication. As long as the number of 
> URIs on which an update has succeeded is greater than or equal to 
> minReplication, exceptions are only logged, not thrown. Each update 
> operation is currently executed serially (client-bandwidth driven parallelism 
> will be added later). 
> A file create/write: 
> # Creates a temporary invisible _nfly_tmp_file in the intended chrooted 
> filesystem. 
> # Returns an FSDataOutputStream that wraps the output streams returned by step 1
> # All writes are forwarded to each output stream.
> # On close of stream created by 2, all n streams are closed, and the files 
> are renamed from _nfly_tmp_file to file. All files receive the same mtime 
> corresponding to the client system time as of beginning of this step. 
> # If at least minReplication destinations have gone through steps 1-4 without 
> failures, the transaction is considered logically committed; otherwise a 
> best-effort attempt is made to clean up the temporary files.
> As for reads, we support a notion of locality similar to HDFS  /DC/rack/node. 
> We sort Inode URIs using NetworkTopology by their authorities. These are 
> typically host names in simple HDFS URIs. If the authority is missing as is 
> the case with the local file:/// the local host name is assumed 
> InetAddress.getLocalHost(). This makes sure that the local file system is 
> always the closest one to the reader in this approach. For our Hadoop 2 hdfs 
> URIs that are based on nameservice ids instead of hostnames it is very easy 
> to adjust the topology script since our nameservice ids already contain the 
> datacenter. As for rack and node we can simply output any string such as 
> /DC/rack-nsid/node-nsid, since we only care about datacenter-locality for 
> such filesystem clients.
> There are 2 policies/additions to the read call path that makes it more 
> expensive, but improve user experience:
> - readMostRecent - when this policy is enabled, Nfly first checks mtime for 
> the path under all URIs, sorts them from most recent to least recent. Nfly 
> then sorts the set of most recent URIs topologically in the same manner as 
> described above.
> - repairOnRead - when readMostRecent is enabled Nfly already has to RPC all 
> underlying destinations. With repairOnRead, Nfly filesystem would 
> additionally attempt to refresh destinations with the path missing or a stale 
> version of the path using the nearest 

[jira] [Comment Edited] (HADOOP-13952) tools dependency hooks are throwing errors

2017-08-03 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113722#comment-16113722
 ] 

Sean Mackrory edited comment on HADOOP-13952 at 8/4/17 12:10 AM:
-

So: hadoop-tool-dist is including these dependencies in its share/tools/lib 
directory, but dist-layout-stitching is not copying them over because the 
findfileindir function reports that one copy is already included (under hdfs). 
But then when we're checking, we're not looking under hdfs, so we report them as 
missing. One of these should change. I think my preference would be to have 
findfileindir not look in hdfs (at least not for tools - maybe not for 
some other things too), because anyone not using HDFS for storage shouldn't be 
pulling HDFS-specific things into their classpath. [~aw] - any 
thoughts on that?


was (Author: mackrorysd):
So: hadoop-tool-dist is including these dependencies in its share/tools/lib 
directory, but dist-layout-stitching is not copying it over because the 
findfileindir function reports that 1 copy is already included (under hdfs). 
But then when we're checking, we're not looking under hdfs, so we report it as 
missing. One of these should change. I think my preference would be to have 
findfileindir not be looking in hdfs, because anyone not using HDFS shouldn't 
be pulling in other HDFS-specific things into their classpath. 

> tools dependency hooks are throwing errors
> --
>
> Key: HADOOP-13952
> URL: https://issues.apache.org/jira/browse/HADOOP-13952
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Sean Mackrory
>Priority: Critical
> Attachments: HADOOP-13952.preview.patch
>
>
> During build, we are throwing these errors:
> {code}
> ERROR: hadoop-aliyun has missing dependencies: jasper-compiler-5.5.23.jar
> ERROR: hadoop-aliyun has missing dependencies: json-lib-jdk15.jar
> ERROR: hadoop-archive-logs has missing dependencies: 
> jasper-compiler-5.5.23.jar
> ERROR: hadoop-archives has missing dependencies: jasper-compiler-5.5.23.jar
> ERROR: hadoop-aws has missing dependencies: jasper-compiler-5.5.23.jar
> ERROR: hadoop-azure has missing dependencies: 
> jetty-util-ajax-9.3.11.v20160721.jar
> ERROR: hadoop-azure-datalake has missing dependencies: okhttp-2.4.0.jar
> ERROR: hadoop-azure-datalake has missing dependencies: okio-1.4.0.jar
> ERROR: hadoop-extras has missing dependencies: jasper-compiler-5.5.23.jar
> ERROR: hadoop-gridmix has missing dependencies: jasper-compiler-5.5.23.jar
> ERROR: hadoop-kafka has missing dependencies: lz4-1.2.0.jar
> ERROR: hadoop-kafka has missing dependencies: kafka-clients-0.8.2.1.jar
> ERROR: hadoop-openstack has missing dependencies: commons-httpclient-3.1.jar
> ERROR: hadoop-rumen has missing dependencies: jasper-compiler-5.5.23.jar
> ERROR: hadoop-sls has missing dependencies: jasper-compiler-5.5.23.jar
> ERROR: hadoop-sls has missing dependencies: metrics-core-3.0.1.jar
> ERROR: hadoop-streaming has missing dependencies: jasper-compiler-5.5.23.jar
> {code}
> Likely a variety of reasons for the failures.  Kafka is HADOOP-12556, but 
> others need to be investigated.  Probably just need to look at more than just 
> common/lib in dist-tools-hooks-maker now that shading has gone in.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13952) tools dependency hooks are throwing errors

2017-08-03 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113722#comment-16113722
 ] 

Sean Mackrory commented on HADOOP-13952:


So: hadoop-tool-dist is including these dependencies in its share/tools/lib 
directory, but dist-layout-stitching is not copying it over because the 
findfileindir function reports that 1 copy is already included (under hdfs). 
But then when we're checking, we're not looking under hdfs, so we report it as 
missing. One of these should change. I think my preference would be to have 
findfileindir not be looking in hdfs, because anyone not using HDFS shouldn't 
be pulling in other HDFS-specific things into their classpath. 

> tools dependency hooks are throwing errors
> --
>
> Key: HADOOP-13952
> URL: https://issues.apache.org/jira/browse/HADOOP-13952
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Sean Mackrory
>Priority: Critical
> Attachments: HADOOP-13952.preview.patch
>
>
> During build, we are throwing these errors:
> {code}
> ERROR: hadoop-aliyun has missing dependencies: jasper-compiler-5.5.23.jar
> ERROR: hadoop-aliyun has missing dependencies: json-lib-jdk15.jar
> ERROR: hadoop-archive-logs has missing dependencies: 
> jasper-compiler-5.5.23.jar
> ERROR: hadoop-archives has missing dependencies: jasper-compiler-5.5.23.jar
> ERROR: hadoop-aws has missing dependencies: jasper-compiler-5.5.23.jar
> ERROR: hadoop-azure has missing dependencies: 
> jetty-util-ajax-9.3.11.v20160721.jar
> ERROR: hadoop-azure-datalake has missing dependencies: okhttp-2.4.0.jar
> ERROR: hadoop-azure-datalake has missing dependencies: okio-1.4.0.jar
> ERROR: hadoop-extras has missing dependencies: jasper-compiler-5.5.23.jar
> ERROR: hadoop-gridmix has missing dependencies: jasper-compiler-5.5.23.jar
> ERROR: hadoop-kafka has missing dependencies: lz4-1.2.0.jar
> ERROR: hadoop-kafka has missing dependencies: kafka-clients-0.8.2.1.jar
> ERROR: hadoop-openstack has missing dependencies: commons-httpclient-3.1.jar
> ERROR: hadoop-rumen has missing dependencies: jasper-compiler-5.5.23.jar
> ERROR: hadoop-sls has missing dependencies: jasper-compiler-5.5.23.jar
> ERROR: hadoop-sls has missing dependencies: metrics-core-3.0.1.jar
> ERROR: hadoop-streaming has missing dependencies: jasper-compiler-5.5.23.jar
> {code}
> Likely a variety of reasons for the failures.  Kafka is HADOOP-12556, but 
> others need to be investigated.  Probably just need to look at more than just 
> common/lib in dist-tools-hooks-maker now that shading has gone in.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13578) Add Codec for ZStandard Compression

2017-08-03 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113710#comment-16113710
 ] 

Andrew Wang commented on HADOOP-13578:
--

Believe it's fine, see this mailing list thread:

https://lists.apache.org/thread.html/55e4ebf60ef0d109c9f74eac92f58a01b2efc551ae321e5444e05772@%3Ccommon-dev.hadoop.apache.org%3E

>From Jason:

{quote}
I think we are OK to leave support for the zstd codec in the Hadoop code base. 
I asked Chris Mattman for clarification, noting that the support for the zstd 
codec requires the user to install the zstd headers and libraries and then 
configure it to be included in the native Hadoop build. The Hadoop releases are 
not shipping any zstd code (e.g. headers or libraries), nor do they require 
zstd as a mandatory dependency. Here's what he said:
{quote}

> Add Codec for ZStandard Compression
> ---
>
> Key: HADOOP-13578
> URL: https://issues.apache.org/jira/browse/HADOOP-13578
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: churro morales
>Assignee: churro morales
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13578-branch-2.v9.patch, HADOOP-13578.patch, 
> HADOOP-13578.v1.patch, HADOOP-13578.v2.patch, HADOOP-13578.v3.patch, 
> HADOOP-13578.v4.patch, HADOOP-13578.v5.patch, HADOOP-13578.v6.patch, 
> HADOOP-13578.v7.patch, HADOOP-13578.v8.patch, HADOOP-13578.v9.patch
>
>
> ZStandard: https://github.com/facebook/zstd has been used in production for 6 
> months by facebook now.  v1.0 was recently released.  Create a codec for this 
> library.  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14727) Socket not closed properly when reading Configurations with BlockReaderRemote

2017-08-03 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113704#comment-16113704
 ] 

Xiao Chen commented on HADOOP-14727:


Thanks [~jeagles] for quickly getting to this! The fix looks to be the correct 
direction to me.

Could you add a unit test?

And what confuses me is, for the 3.0.0 reproduction stack trace I pasted, this 
leak is actually coming from the {{resource instanceof InputStream}} code 
block. The stack's 
{{org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2697)}} 
points to [this 
line|https://github.com/apache/hadoop/blob/branch-3.0.0-alpha4/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java#L2696].
 (I added 1 line for debug logging locally). This is the source of my initial 
confusion about who is responsible for closing the streams.
Hand-verified that backporting patch 1 to the internal cluster doesn't make the 
{{CLOSE_WAIT}} sockets go away. Thoughts?

> Socket not closed properly when reading Configurations with BlockReaderRemote
> -
>
> Key: HADOOP-14727
> URL: https://issues.apache.org/jira/browse/HADOOP-14727
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.9.0, 3.0.0-alpha4
>Reporter: Xiao Chen
>Assignee: Jonathan Eagles
>Priority: Blocker
> Attachments: HADOOP-14727.001-branch-2.patch, HADOOP-14727.001.patch
>
>
> This is caught by Cloudera's internal testing over the alpha4 release.
> We got reports that some hosts ran out of FDs. Triaging that, found out both 
> oozie server and Yarn JobHistoryServer have tons of sockets on {{CLOSE_WAIT}} 
> state.
> [~haibochen] helped narrow down to a consistent reproduction by simply 
> visiting the JHS web UI, and clicking through a job and its logs.
> I then look at the {{BlockReaderRemote}} and related code, and didn't spot 
> any leaks in the implementation. After adding a debug log whenever a {{Peer}} 
> is created/closed/in/out {{PeerCache}}, it looks like all the {{CLOSE_WAIT}} 
> sockets are created from this call stack:
> {noformat}
> 2017-08-02 13:58:59,901 INFO 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory:  associated peer 
> NioInetPeer(Socket[addr=/10.17.196.28,port=20002,localport=42512]) with 
> blockreader org.apache.hadoop.hdfs.client.impl.BlockReaderRemote@717ce109
> java.lang.Exception: test
> at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:745)
> at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.build(BlockReaderFactory.java:385)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:636)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:566)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:749)
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:807)
> at java.io.DataInputStream.read(DataInputStream.java:149)
> at 
> com.ctc.wstx.io.StreamBootstrapper.ensureLoaded(StreamBootstrapper.java:482)
> at 
> com.ctc.wstx.io.StreamBootstrapper.resolveStreamEncoding(StreamBootstrapper.java:306)
> at 
> com.ctc.wstx.io.StreamBootstrapper.bootstrapInput(StreamBootstrapper.java:167)
> at 
> com.ctc.wstx.stax.WstxInputFactory.doCreateSR(WstxInputFactory.java:573)
> at 
> com.ctc.wstx.stax.WstxInputFactory.createSR(WstxInputFactory.java:633)
> at 
> com.ctc.wstx.stax.WstxInputFactory.createSR(WstxInputFactory.java:647)
> at 
> com.ctc.wstx.stax.WstxInputFactory.createXMLStreamReader(WstxInputFactory.java:366)
> at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2649)
> at 
> org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2697)
> at 
> org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2662)
> at 
> org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2545)
> at org.apache.hadoop.conf.Configuration.get(Configuration.java:1076)
> at 
> org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:1126)
> at 
> org.apache.hadoop.conf.Configuration.getInt(Configuration.java:1344)
> at org.apache.hadoop.mapreduce.counters.Limits.init(Limits.java:45)
> at org.apache.hadoop.mapreduce.counters.Limits.reset(Limits.java:130)
> at 
> org.apache.hadoop.mapreduce.v2.hs.CompletedJob.loadFullHistoryData(CompletedJob.java:363)
> at 
> org.apache.hadoop.mapreduce.v2.hs.CompletedJob.(CompletedJob.java:105)
> at 
> org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager$HistoryFileInfo.loadJob(HistoryFileManager.java:473)
> 

[jira] [Commented] (HADOOP-14089) Shaded Hadoop client runtime includes non-shaded classes

2017-08-03 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113700#comment-16113700
 ] 

Andrew Wang commented on HADOOP-14089:
--

This looks fine to me too. Is it expected that the build fails with the patch 
applied? It's flagging some javax, okio, microsoft files, among others.

> Shaded Hadoop client runtime includes non-shaded classes
> 
>
> Key: HADOOP-14089
> URL: https://issues.apache.org/jira/browse/HADOOP-14089
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha2
>Reporter: David Phillips
>Assignee: Sean Busbey
>Priority: Critical
> Attachments: HADOOP-14089.WIP.0.patch
>
>
> The jar includes things like {{assets}}, {{okio}}, {{javax/annotation}}, 
> {{javax/ws}}, {{mozilla}}, etc.
> An easy way to verify this is to look at the contents of the jar:
> {code}
> jar tf hadoop-client-runtime-xxx.jar | sort | grep -v '^org/apache/hadoop'
> {code}
> For standard dependencies, such as the JSR 305 {{javax.annotation}} or JAX-RS 
> {{javax.ws}}, it makes sense for those to be normal dependencies in the POM 
> -- they are standard, so version conflicts shouldn't be a problem. The JSR 
> 305 annotations can be {{true}} since they aren't needed 
> at runtime (this is what Guava does).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14598) Wasb connection failing: FsUrlConnection cannot be cast to HttpURLConnection

2017-08-03 Thread Esfandiar Manii (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113688#comment-16113688
 ] 

Esfandiar Manii commented on HADOOP-14598:
--

FsUrlStreamHandlerFactory L73-74: could you please add a few lines of comments 
explaining why the protocols are added there, so the reason won't be forgotten 
in the future?
FsUrlStreamHandlerFactory L73-74: I would create a private utility method that 
takes the factory and calls put for the whole list of protocols.
TestUrlStreamHandler.java, do we also need to include a test for invalid 
protocols?
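A hypothetical shape for that helper (names are illustrative only, not the 
actual FsUrlStreamHandlerFactory internals):

{code:java}
import java.util.Map;

// Register every protocol the factory must leave to the JVM's default
// handler in one place, so the reasoning lives in a single comment.
private static void registerDefaultProtocols(Map<String, Boolean> protocols,
    String... names) {
  for (String name : names) {
    protocols.put(name, Boolean.FALSE); // FALSE: don't wrap with our handler
  }
}
{code}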

> Wasb connection failing: FsUrlConnection cannot be cast to HttpURLConnection
> 
>
> Key: HADOOP-14598
> URL: https://issues.apache.org/jira/browse/HADOOP-14598
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure, test
>Affects Versions: 2.9.0, 3.0.0-alpha4
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-14598-002.patch, HADOOP-14598-003.patch, 
> HADOOP-14598-004.patch
>
>
> my downstream-of-spark cloud integration tests (where I haven't been running 
> the azure ones for a while) now have a few of the tests failing
> {code}
>  org.apache.hadoop.fs.azure.AzureException: 
> com.microsoft.azure.storage.StorageException: 
> org.apache.hadoop.fs.FsUrlConnection cannot be cast to 
> java.net.HttpURLConnection
> {code}
> No obvious cause, and it's only apparently happening in some of the 
> (scalatest) tests.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13917) Ensure nightly builds run the integration tests for the shaded client

2017-08-03 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113649#comment-16113649
 ] 

Andrew Wang commented on HADOOP-13917:
--

Hey Sean, is this still planned for beta1?

> Ensure nightly builds run the integration tests for the shaded client
> -
>
> Key: HADOOP-13917
> URL: https://issues.apache.org/jira/browse/HADOOP-13917
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, test
>Affects Versions: 3.0.0-alpha2
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
>
> Either QBT or a different jenkins job should run our integration tests, 
> specifically the ones added for the shaded client.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13578) Add Codec for ZStandard Compression

2017-08-03 Thread Adam Kennedy (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113647#comment-16113647
 ] 

Adam Kennedy commented on HADOOP-13578:
---

Is this impacted by LEGAL-303?

> Add Codec for ZStandard Compression
> ---
>
> Key: HADOOP-13578
> URL: https://issues.apache.org/jira/browse/HADOOP-13578
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: churro morales
>Assignee: churro morales
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13578-branch-2.v9.patch, HADOOP-13578.patch, 
> HADOOP-13578.v1.patch, HADOOP-13578.v2.patch, HADOOP-13578.v3.patch, 
> HADOOP-13578.v4.patch, HADOOP-13578.v5.patch, HADOOP-13578.v6.patch, 
> HADOOP-13578.v7.patch, HADOOP-13578.v8.patch, HADOOP-13578.v9.patch
>
>
> ZStandard: https://github.com/facebook/zstd has been used in production for 6 
> months by facebook now.  v1.0 was recently released.  Create a codec for this 
> library.  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14731) Update gitignore to exclude output of site build

2017-08-03 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113628#comment-16113628
 ] 

Andrew Wang commented on HADOOP-14731:
--

Do you want to take that up instead? This is definitely the quick fix.

> Update gitignore to exclude output of site build
> 
>
> Key: HADOOP-14731
> URL: https://issues.apache.org/jira/browse/HADOOP-14731
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, site
>Affects Versions: 3.0.0-alpha3
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: HADOOP-14731.001.patch
>
>
> Site build generates a bunch of files that aren't caught by gitignore, let's 
> update.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14731) Update gitignore to exclude output of site build

2017-08-03 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113620#comment-16113620
 ] 

Allen Wittenauer commented on HADOOP-14731:
---

I keep meaning to modify the mvn site build to actually generate and fetch them 
from target but keep forgetting.

> Update gitignore to exclude output of site build
> 
>
> Key: HADOOP-14731
> URL: https://issues.apache.org/jira/browse/HADOOP-14731
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, site
>Affects Versions: 3.0.0-alpha3
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: HADOOP-14731.001.patch
>
>
> Site build generates a bunch of files that aren't caught by gitignore, let's 
> update.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-9747) Reduce unnecessary UGI synchronization

2017-08-03 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113594#comment-16113594
 ] 

Daryn Sharp commented on HADOOP-9747:
-

I'd rather it not be moved out.  It's not exactly a "big and risky 
feature/improvement".  It does offer improvements by eliminating 
synchronization (all calls to getCurrentUser, getLoginUser, and relogin are 
class-synchronized), and it incidentally fixes an esoteric, 
skeleton-in-the-closet potential privilege escalation.  Which is more risky?  
Faster and correct, or slower and vulnerable?

The basic premise is that all calls to getCurrentUser, getLoginUser, and 
relogin do not need to be class-synchronized.  A few marketing points, since my 
prior bullets were design-oriented:
# A UGI identity is truly immutable after inception, as originally intended, 
i.e. what was the principal?  From a keytab or a ticket cache?
# Removes instance-level synchronization since it's generally worthless 
(multiple UGIs share the same Subject)
# Removes class-level synchronization by moving class static principal/keytab 
into the Subject
# Add synchronization only where necessary to fix races with relogins 
corrupting the Subject
# Incidentally fixes the root cause of the issue that inspired the completely 
broken "external ugi" hack
# Multiple logged-in UGIs actually work correctly due to the elimination of 
class statics.
# Incidentally prevents a relogin of one UGI from causing another UGI to morph 
(see the linked jira)

It's really not that bad.  About 50% of the patch is adding lots of great tests 
since UGI tests are sparse.  I've been waiting 4 years to integrate this patch. 
 I gave up and added workarounds in the IPC layer and NN.  But then along came 
EZ file EDEK fetching causing high UGI contention...
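For readers outside the thread, a toy illustration (not UGI's actual code) of 
why class-level synchronization serializes unrelated callers:

{code:java}
// Toy example only. A single class lock serializes all threads, so one slow
// relogin blocks every getLoginUser call in the process, even for callers
// operating on unrelated Subjects.
class AuthExample {
  private static String loginUser;

  static synchronized String getLoginUser() {
    return loginUser;
  }

  static synchronized void relogin() {
    loginUser = "refreshed"; // stands in for an expensive Kerberos relogin
  }
}
{code}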

> Reduce unnecessary UGI synchronization
> --
>
> Key: HADOOP-9747
> URL: https://issues.apache.org/jira/browse/HADOOP-9747
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0-alpha1
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HADOOP-9747.2.branch-2.patch, HADOOP-9747.2.trunk.patch, 
> HADOOP-9747.branch-2.patch, HADOOP-9747.trunk.patch
>
>
> Jstacks of heavily loaded NNs show up to dozens of threads blocking in the 
> UGI.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14731) Update gitignore to exclude output of site build

2017-08-03 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-14731:
-
Attachment: HADOOP-14731.001.patch

Patch attached; git status is now clean after a site build.

> Update gitignore to exclude output of site build
> 
>
> Key: HADOOP-14731
> URL: https://issues.apache.org/jira/browse/HADOOP-14731
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, site
>Affects Versions: 3.0.0-alpha3
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: HADOOP-14731.001.patch
>
>
> Site build generates a bunch of files that aren't caught by gitignore, let's 
> update.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14731) Update gitignore to exclude output of site build

2017-08-03 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-14731:
-
Status: Patch Available  (was: Open)

> Update gitignore to exclude output of site build
> 
>
> Key: HADOOP-14731
> URL: https://issues.apache.org/jira/browse/HADOOP-14731
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, site
>Affects Versions: 3.0.0-alpha3
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: HADOOP-14731.001.patch
>
>
> Site build generates a bunch of files that aren't caught by gitignore, let's 
> update.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14731) Update gitignore to exclude output of site build

2017-08-03 Thread Andrew Wang (JIRA)
Andrew Wang created HADOOP-14731:


 Summary: Update gitignore to exclude output of site build
 Key: HADOOP-14731
 URL: https://issues.apache.org/jira/browse/HADOOP-14731
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build, site
Affects Versions: 3.0.0-alpha3
Reporter: Andrew Wang
Assignee: Andrew Wang


Site build generates a bunch of files that aren't caught by gitignore, let's 
update.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13952) tools dependency hooks are throwing errors

2017-08-03 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113523#comment-16113523
 ] 

Sean Mackrory edited comment on HADOOP-13952 at 8/3/17 9:29 PM:


I couldn't find much official documentation on the formatting of 
dependency:list's output, so this is based on a few assumptions I don't 100% 
like, but it works well right now and I don't see better options.

There are 5 fields by default, 6 if there's a classifier, so I'm assembling 
the JAR name from those fields based on the field count. Including the 
classifier when there are 6 fields appears to fix both the json-lib issue as 
well as the jetty-util-ajax issue (though that one surprises me).

I'm also exiting in the event there are NOT 5 or 6 fields, and exiting if there 
are other errors. Still have to fix the okio and okhttp issues, though.


was (Author: mackrorysd):
I couldn't find much official documentation on the formatting of 
dependency:list's output so this is based on a few assumptions I don't 100% 
like, but it works well right now and I don't see better options.

There are 5 fields by default, 6 if there's a classifier. So I'm constructing 
those fields into the JAR name based on the number of fields. Including the 
classifier when there are 6 fields appears to fix both the json-lib issue as 
well as the jetty-util-ajax issue.

I'm also exiting in the event there are NOT 5 or 6 fields, and exiting if there 
are other errors. Still have to fix the okio and okhttp issues, though.

> tools dependency hooks are throwing errors
> --
>
> Key: HADOOP-13952
> URL: https://issues.apache.org/jira/browse/HADOOP-13952
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Sean Mackrory
>Priority: Critical
> Attachments: HADOOP-13952.preview.patch
>
>
> During build, we are throwing these errors:
> {code}
> ERROR: hadoop-aliyun has missing dependencies: jasper-compiler-5.5.23.jar
> ERROR: hadoop-aliyun has missing dependencies: json-lib-jdk15.jar
> ERROR: hadoop-archive-logs has missing dependencies: 
> jasper-compiler-5.5.23.jar
> ERROR: hadoop-archives has missing dependencies: jasper-compiler-5.5.23.jar
> ERROR: hadoop-aws has missing dependencies: jasper-compiler-5.5.23.jar
> ERROR: hadoop-azure has missing dependencies: 
> jetty-util-ajax-9.3.11.v20160721.jar
> ERROR: hadoop-azure-datalake has missing dependencies: okhttp-2.4.0.jar
> ERROR: hadoop-azure-datalake has missing dependencies: okio-1.4.0.jar
> ERROR: hadoop-extras has missing dependencies: jasper-compiler-5.5.23.jar
> ERROR: hadoop-gridmix has missing dependencies: jasper-compiler-5.5.23.jar
> ERROR: hadoop-kafka has missing dependencies: lz4-1.2.0.jar
> ERROR: hadoop-kafka has missing dependencies: kafka-clients-0.8.2.1.jar
> ERROR: hadoop-openstack has missing dependencies: commons-httpclient-3.1.jar
> ERROR: hadoop-rumen has missing dependencies: jasper-compiler-5.5.23.jar
> ERROR: hadoop-sls has missing dependencies: jasper-compiler-5.5.23.jar
> ERROR: hadoop-sls has missing dependencies: metrics-core-3.0.1.jar
> ERROR: hadoop-streaming has missing dependencies: jasper-compiler-5.5.23.jar
> {code}
> Likely a variety of reasons for the failures.  Kafka is HADOOP-12556, but 
> others need to be investigated.  Probably just need to look at more than just 
> common/lib in dist-tools-hooks-maker now that shading has gone in.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13952) tools dependency hooks are throwing errors

2017-08-03 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-13952:
---
Attachment: HADOOP-13952.preview.patch

I couldn't find much official documentation on the formatting of 
dependency:list's output so this is based on a few assumptions I don't 100% 
like, but it works well right now and I don't see better options.

There are 5 fields by default, 6 if there's a classifier. So I'm constructing 
those fields into the JAR name based on the number of fields. Including the 
classifier when there are 6 fields appears to fix both the json-lib issue as 
well as the jetty-util-ajax issue.

I'm also exiting in the event there are NOT 5 or 6 fields, and exiting if there 
are other errors. Still have to fix the okio and okhttp issues, though.
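A sketch of the field-count heuristic described above (it assumes the line 
format group:artifact:type[:classifier]:version:scope; illustrative, not the 
patch itself):

{code:java}
// Derive the jar file name from one line of mvn dependency:list output.
static String jarName(String line) {
  String[] f = line.trim().split(":");
  if (f.length == 5) {
    // group:artifact:type:version:scope
    return f[1] + "-" + f[3] + ".jar";
  } else if (f.length == 6) {
    // group:artifact:type:classifier:version:scope
    return f[1] + "-" + f[4] + "-" + f[3] + ".jar";
  }
  throw new IllegalArgumentException("unexpected field count: " + f.length);
}
{code}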

> tools dependency hooks are throwing errors
> --
>
> Key: HADOOP-13952
> URL: https://issues.apache.org/jira/browse/HADOOP-13952
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Priority: Critical
> Attachments: HADOOP-13952.preview.patch
>
>
> During build, we are throwing these errors:
> {code}
> ERROR: hadoop-aliyun has missing dependencies: jasper-compiler-5.5.23.jar
> ERROR: hadoop-aliyun has missing dependencies: json-lib-jdk15.jar
> ERROR: hadoop-archive-logs has missing dependencies: 
> jasper-compiler-5.5.23.jar
> ERROR: hadoop-archives has missing dependencies: jasper-compiler-5.5.23.jar
> ERROR: hadoop-aws has missing dependencies: jasper-compiler-5.5.23.jar
> ERROR: hadoop-azure has missing dependencies: 
> jetty-util-ajax-9.3.11.v20160721.jar
> ERROR: hadoop-azure-datalake has missing dependencies: okhttp-2.4.0.jar
> ERROR: hadoop-azure-datalake has missing dependencies: okio-1.4.0.jar
> ERROR: hadoop-extras has missing dependencies: jasper-compiler-5.5.23.jar
> ERROR: hadoop-gridmix has missing dependencies: jasper-compiler-5.5.23.jar
> ERROR: hadoop-kafka has missing dependencies: lz4-1.2.0.jar
> ERROR: hadoop-kafka has missing dependencies: kafka-clients-0.8.2.1.jar
> ERROR: hadoop-openstack has missing dependencies: commons-httpclient-3.1.jar
> ERROR: hadoop-rumen has missing dependencies: jasper-compiler-5.5.23.jar
> ERROR: hadoop-sls has missing dependencies: jasper-compiler-5.5.23.jar
> ERROR: hadoop-sls has missing dependencies: metrics-core-3.0.1.jar
> ERROR: hadoop-streaming has missing dependencies: jasper-compiler-5.5.23.jar
> {code}
> Likely a variety of reasons for the failures.  Kafka is HADOOP-12556, but 
> others need to be investigated.  Probably just need to look at more than just 
> common/lib in dist-tools-hooks-maker now that shading has gone in.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-13952) tools dependency hooks are throwing errors

2017-08-03 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory reassigned HADOOP-13952:
--

Assignee: Sean Mackrory

> tools dependency hooks are throwing errors
> --
>
> Key: HADOOP-13952
> URL: https://issues.apache.org/jira/browse/HADOOP-13952
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Sean Mackrory
>Priority: Critical
> Attachments: HADOOP-13952.preview.patch
>
>
> During build, we are throwing these errors:
> {code}
> ERROR: hadoop-aliyun has missing dependencies: jasper-compiler-5.5.23.jar
> ERROR: hadoop-aliyun has missing dependencies: json-lib-jdk15.jar
> ERROR: hadoop-archive-logs has missing dependencies: 
> jasper-compiler-5.5.23.jar
> ERROR: hadoop-archives has missing dependencies: jasper-compiler-5.5.23.jar
> ERROR: hadoop-aws has missing dependencies: jasper-compiler-5.5.23.jar
> ERROR: hadoop-azure has missing dependencies: 
> jetty-util-ajax-9.3.11.v20160721.jar
> ERROR: hadoop-azure-datalake has missing dependencies: okhttp-2.4.0.jar
> ERROR: hadoop-azure-datalake has missing dependencies: okio-1.4.0.jar
> ERROR: hadoop-extras has missing dependencies: jasper-compiler-5.5.23.jar
> ERROR: hadoop-gridmix has missing dependencies: jasper-compiler-5.5.23.jar
> ERROR: hadoop-kafka has missing dependencies: lz4-1.2.0.jar
> ERROR: hadoop-kafka has missing dependencies: kafka-clients-0.8.2.1.jar
> ERROR: hadoop-openstack has missing dependencies: commons-httpclient-3.1.jar
> ERROR: hadoop-rumen has missing dependencies: jasper-compiler-5.5.23.jar
> ERROR: hadoop-sls has missing dependencies: jasper-compiler-5.5.23.jar
> ERROR: hadoop-sls has missing dependencies: metrics-core-3.0.1.jar
> ERROR: hadoop-streaming has missing dependencies: jasper-compiler-5.5.23.jar
> {code}
> Likely a variety of reasons for the failures.  Kafka is HADOOP-12556, but 
> others need to be investigated.  Probably just need to look at more than just 
> common/lib in dist-tools-hooks-maker now that shading has gone in.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14715) TestWasbRemoteCallHelper failing

2017-08-03 Thread Esfandiar Manii (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113462#comment-16113462
 ] 

Esfandiar Manii commented on HADOOP-14715:
--

This is a regression; I am preparing a fix.

> TestWasbRemoteCallHelper failing
> 
>
> Key: HADOOP-14715
> URL: https://issues.apache.org/jira/browse/HADOOP-14715
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.9.0, 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-14715-001.patch
>
>
> {{org.apache.hadoop.fs.azure.TestWasbRemoteCallHelper.testWhenOneInstanceIsDown}}
>  is failing for me on trunk



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14730) hasAcl property always set to false, regardless of FsPermission higher bit order

2017-08-03 Thread Vishwajeet Dusane (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113441#comment-16113441
 ] 

Vishwajeet Dusane commented on HADOOP-14730:


Thanks [~chris.douglas] for Patch 003 and +1 on the change.

Had an offline sync with [~chris.douglas]. We agree that HDFS-6984 is not a 
backward-compatible change, so {{hadoop-azure-datalake.jar}} built with Patch 
003 would not link against the Hadoop 2.x common jar, mainly because of the new 
{{FileStatus}} constructor introduced by HDFS-6984. Patching a Hadoop 2.x 
cluster with the Hadoop ADL jar will not work.

CC: [~liuml07] and [~jzhuge]
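A hedged sketch of the direction discussed, deriving the flag from the 
permission bits instead of hard-coding {{false}} (whether Patch 003 does 
exactly this is best checked in the attachment; it assumes 
{{FsPermission#getAclBit}} is available):

{code:java}
// Delegate with hasAcl derived from the permission's ACL bit rather than a
// hard-coded false.
this(length, isdir, block_replication, blocksize, modification_time,
    access_time, permission, owner, group, symlink, path,
    permission != null && permission.getAclBit(), false, false);
{code}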



> hasAcl property always set to false, regardless of FsPermission higher bit 
> order 
> -
>
> Key: HADOOP-14730
> URL: https://issues.apache.org/jira/browse/HADOOP-14730
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
>Reporter: Vishwajeet Dusane
>Assignee: Chris Douglas
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14730.001.patch, HADOOP-14730.002.patch, 
> HADOOP-14730.003.patch
>
>
> 2 Unit Test cases are failing  [Azure-data-lake Module 
> |https://github.com/apache/hadoop/blob/4966a6e26e45d7dc36e0b270066ff7c87bcd00cc/hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/TestGetFileStatus.java#L44-L44],
>  caused after HDFS-6984 commit.
> Issue seems to be {{hasAcl}} is hard coded to {{false}}. 
> {code:java}
> public FileStatus(long length, boolean isdir,
> int block_replication,
> long blocksize, long modification_time, long access_time,
> FsPermission permission, String owner, String group, 
> Path symlink,
> Path path) {
> this(length, isdir, block_replication, blocksize, modification_time,
> access_time, permission, owner, group, symlink, path,
> false, false, false);
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14727) Socket not closed properly when reading Configurations with BlockReaderRemote

2017-08-03 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated HADOOP-14727:
-
Status: Patch Available  (was: Open)

Verified that CLOSE_WAIT sockets were being leaked on a branch-2 JobHistory 
server with a simple lsof | grep CLOSE_WAIT while reloading a specific 
MapReduce job configuration. With the patch, no CLOSE_WAIT sockets are left. 
The fix was to flag InputStreams as auto-close if they are opened by 
Configuration itself, and to leave them as-is if the InputStream was passed in 
as a resource, to avoid closing an InputStream opened by the user.
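A minimal sketch of that ownership rule (simplified; {{resource}}, {{url}}, 
and {{parse}} stand in for the real Configuration internals):

{code:java}
// Close only the streams Configuration opened itself; a stream handed in by
// the caller stays open, since the caller owns it.
InputStream in;
boolean autoClose;
if (resource instanceof InputStream) {
  in = (InputStream) resource;   // user-supplied: the caller closes it
  autoClose = false;
} else {
  in = url.openStream();         // opened here: we must close it
  autoClose = true;
}
try {
  parse(in);
} finally {
  if (autoClose) {
    in.close();
  }
}
{code}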

> Socket not closed properly when reading Configurations with BlockReaderRemote
> -
>
> Key: HADOOP-14727
> URL: https://issues.apache.org/jira/browse/HADOOP-14727
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 3.0.0-alpha4, 2.9.0
>Reporter: Xiao Chen
>Assignee: Jonathan Eagles
>Priority: Blocker
> Attachments: HADOOP-14727.001-branch-2.patch, HADOOP-14727.001.patch
>
>
> This is caught by Cloudera's internal testing over the alpha4 release.
> We got reports that some hosts ran out of FDs. Triaging that, found out both 
> oozie server and Yarn JobHistoryServer have tons of sockets on {{CLOSE_WAIT}} 
> state.
> [~haibochen] helped narrow down to a consistent reproduction by simply 
> visiting the JHS web UI, and clicking through a job and its logs.
> I then look at the {{BlockReaderRemote}} and related code, and didn't spot 
> any leaks in the implementation. After adding a debug log whenever a {{Peer}} 
> is created/closed/in/out {{PeerCache}}, it looks like all the {{CLOSE_WAIT}} 
> sockets are created from this call stack:
> {noformat}
> 2017-08-02 13:58:59,901 INFO 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory:  associated peer 
> NioInetPeer(Socket[addr=/10.17.196.28,port=20002,localport=42512]) with 
> blockreader org.apache.hadoop.hdfs.client.impl.BlockReaderRemote@717ce109
> java.lang.Exception: test
> at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:745)
> at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.build(BlockReaderFactory.java:385)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:636)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:566)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:749)
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:807)
> at java.io.DataInputStream.read(DataInputStream.java:149)
> at 
> com.ctc.wstx.io.StreamBootstrapper.ensureLoaded(StreamBootstrapper.java:482)
> at 
> com.ctc.wstx.io.StreamBootstrapper.resolveStreamEncoding(StreamBootstrapper.java:306)
> at 
> com.ctc.wstx.io.StreamBootstrapper.bootstrapInput(StreamBootstrapper.java:167)
> at 
> com.ctc.wstx.stax.WstxInputFactory.doCreateSR(WstxInputFactory.java:573)
> at 
> com.ctc.wstx.stax.WstxInputFactory.createSR(WstxInputFactory.java:633)
> at 
> com.ctc.wstx.stax.WstxInputFactory.createSR(WstxInputFactory.java:647)
> at 
> com.ctc.wstx.stax.WstxInputFactory.createXMLStreamReader(WstxInputFactory.java:366)
> at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2649)
> at 
> org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2697)
> at 
> org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2662)
> at 
> org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2545)
> at org.apache.hadoop.conf.Configuration.get(Configuration.java:1076)
> at 
> org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:1126)
> at 
> org.apache.hadoop.conf.Configuration.getInt(Configuration.java:1344)
> at org.apache.hadoop.mapreduce.counters.Limits.init(Limits.java:45)
> at org.apache.hadoop.mapreduce.counters.Limits.reset(Limits.java:130)
> at 
> org.apache.hadoop.mapreduce.v2.hs.CompletedJob.loadFullHistoryData(CompletedJob.java:363)
> at 
> org.apache.hadoop.mapreduce.v2.hs.CompletedJob.(CompletedJob.java:105)
> at 
> org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager$HistoryFileInfo.loadJob(HistoryFileManager.java:473)
> at 
> org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.loadJob(CachedHistoryStorage.java:180)
> at 
> org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.access$000(CachedHistoryStorage.java:52)
> at 
> org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage$1.load(CachedHistoryStorage.java:103)
> at 
> 

[jira] [Updated] (HADOOP-14727) Socket not closed properly when reading Configurations with BlockReaderRemote

2017-08-03 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated HADOOP-14727:
-
Attachment: HADOOP-14727.001-branch-2.patch
HADOOP-14727.001.patch

> Socket not closed properly when reading Configurations with BlockReaderRemote
> -
>
> Key: HADOOP-14727
> URL: https://issues.apache.org/jira/browse/HADOOP-14727
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.9.0, 3.0.0-alpha4
>Reporter: Xiao Chen
>Assignee: Jonathan Eagles
>Priority: Blocker
> Attachments: HADOOP-14727.001-branch-2.patch, HADOOP-14727.001.patch
>
>
> This is caught by Cloudera's internal testing over the alpha4 release.
> We got reports that some hosts ran out of FDs. Triaging that, found out both 
> oozie server and Yarn JobHistoryServer have tons of sockets on {{CLOSE_WAIT}} 
> state.
> [~haibochen] helped narrow down to a consistent reproduction by simply 
> visiting the JHS web UI, and clicking through a job and its logs.
> I then look at the {{BlockReaderRemote}} and related code, and didn't spot 
> any leaks in the implementation. After adding a debug log whenever a {{Peer}} 
> is created/closed/in/out {{PeerCache}}, it looks like all the {{CLOSE_WAIT}} 
> sockets are created from this call stack:
> {noformat}
> 2017-08-02 13:58:59,901 INFO 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory:  associated peer 
> NioInetPeer(Socket[addr=/10.17.196.28,port=20002,localport=42512]) with 
> blockreader org.apache.hadoop.hdfs.client.impl.BlockReaderRemote@717ce109
> java.lang.Exception: test
> at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:745)
> at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.build(BlockReaderFactory.java:385)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:636)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:566)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:749)
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:807)
> at java.io.DataInputStream.read(DataInputStream.java:149)
> at 
> com.ctc.wstx.io.StreamBootstrapper.ensureLoaded(StreamBootstrapper.java:482)
> at 
> com.ctc.wstx.io.StreamBootstrapper.resolveStreamEncoding(StreamBootstrapper.java:306)
> at 
> com.ctc.wstx.io.StreamBootstrapper.bootstrapInput(StreamBootstrapper.java:167)
> at 
> com.ctc.wstx.stax.WstxInputFactory.doCreateSR(WstxInputFactory.java:573)
> at 
> com.ctc.wstx.stax.WstxInputFactory.createSR(WstxInputFactory.java:633)
> at 
> com.ctc.wstx.stax.WstxInputFactory.createSR(WstxInputFactory.java:647)
> at 
> com.ctc.wstx.stax.WstxInputFactory.createXMLStreamReader(WstxInputFactory.java:366)
> at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2649)
> at 
> org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2697)
> at 
> org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2662)
> at 
> org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2545)
> at org.apache.hadoop.conf.Configuration.get(Configuration.java:1076)
> at 
> org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:1126)
> at 
> org.apache.hadoop.conf.Configuration.getInt(Configuration.java:1344)
> at org.apache.hadoop.mapreduce.counters.Limits.init(Limits.java:45)
> at org.apache.hadoop.mapreduce.counters.Limits.reset(Limits.java:130)
> at 
> org.apache.hadoop.mapreduce.v2.hs.CompletedJob.loadFullHistoryData(CompletedJob.java:363)
> at 
> org.apache.hadoop.mapreduce.v2.hs.CompletedJob.(CompletedJob.java:105)
> at 
> org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager$HistoryFileInfo.loadJob(HistoryFileManager.java:473)
> at 
> org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.loadJob(CachedHistoryStorage.java:180)
> at 
> org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.access$000(CachedHistoryStorage.java:52)
> at 
> org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage$1.load(CachedHistoryStorage.java:103)
> at 
> org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage$1.load(CachedHistoryStorage.java:100)
> at 
> com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3568)
> at 
> com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2350)
> at 
> com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2313)
> at 
> 
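The {{java.lang.Exception: test}} in the log above comes from the tracing
technique described: logging a synthetic exception alongside the message so the
allocation call stack is captured. A minimal sketch of that idea, assuming a
commons-logging style {{LOG}} and local {{peer}}/{{blockReader}} variables (not
the exact instrumentation patch):

{code:java}
// Record every peer/blockreader association together with a throwaway
// exception; the logged stack trace attributes each CLOSE_WAIT socket
// to the call site that created it.
LOG.info("associated peer " + peer + " with blockreader " + blockReader,
    new Exception("test"));
{code}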

[jira] [Commented] (HADOOP-14726) Remove FileStatus#isDir

2017-08-03 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113415#comment-16113415
 ] 

Chris Douglas commented on HADOOP-14726:


bq.  I think it's too late to remove at this point in the release cycle; if 
we're serious about doing this for Hadoop 4, then let's file JIRAs for these 
downstreams to switch over.
Hrm; seems like it's too late to remove this, ever. So it goes.

bq. It is still incompatible though, since marking isDir final breaks 
out-of-tree FileSystems that override it. Is this necessary?
It's not necessary; it's the most aggressive variant we could push in Hadoop 3. 
I could make something up about JIT efficiency when {{isDir}} is final, but the 
only real argument in favor is to make sure FileSystem implementors override 
these two calls consistently. It's a corner of a corner case, either way.
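
For reference, a minimal sketch of what the v000 variant amounts to (an
assumption based on this discussion, not the patch itself; {{isDirectory()}} is
the existing replacement API):

{code:java}
// Keep the deprecated alias but delegate and mark it final, so out-of-tree
// subclasses cannot override it inconsistently with isDirectory().
@Deprecated
public final boolean isDir() {
  return isDirectory();
}
{code}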

> Remove FileStatus#isDir
> ---
>
> Key: HADOOP-14726
> URL: https://issues.apache.org/jira/browse/HADOOP-14726
> Project: Hadoop Common
>  Issue Type: Task
>  Components: fs
>Reporter: Chris Douglas
>Priority: Minor
> Attachments: HADOOP-14726.000.patch
>
>
> FileStatus#isDir was deprecated in 0.21 (HADOOP-6585).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14696) parallel tests don't work for Windows

2017-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113403#comment-16113403
 ] 

Hadoop QA commented on HADOOP-14696:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
29s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
8s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
7s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
17s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m 28s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}109m 41s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
51s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
49s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}199m 22s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.net.TestDNS |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14696 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12880100/HADOOP-14696.01.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux 665fb6929984 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c5d256c |
| Default Java | 1.8.0_131 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12940/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
| unit | 

[jira] [Commented] (HADOOP-14284) Shade Guava everywhere

2017-08-03 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113390#comment-16113390
 ] 

Junping Du commented on HADOOP-14284:
-

bq. if we really want to get this into beta1, we'll need to really push harder.
We are still discussing a decent solution. From my understanding, this 
shouldn't be a real blocker for beta.

> Shade Guava everywhere
> --
>
> Key: HADOOP-14284
> URL: https://issues.apache.org/jira/browse/HADOOP-14284
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha4
>Reporter: Andrew Wang
>Assignee: Tsuyoshi Ozawa
>Priority: Blocker
> Attachments: HADOOP-14238.pre001.patch, HADOOP-14284.002.patch, 
> HADOOP-14284.004.patch, HADOOP-14284.007.patch, HADOOP-14284.010.patch, 
> HADOOP-14284.012.patch
>
>
> HADOOP-10101 upgraded the guava version for 3.x to 21.
> Guava is broadly used by Java projects that consume our artifacts. 
> Unfortunately, these projects also consume our private artifacts like 
> {{hadoop-hdfs}}. They also are unlikely on the new shaded client introduced 
> by HADOOP-11804, currently only available in 3.0.0-alpha2.
> We should shade Guava everywhere to proactively avoid breaking downstreams. 
> This isn't a requirement for all dependency upgrades, but it's necessary for 
> known-bad dependencies like Guava.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12077) Provide a multi-URI replication Inode for ViewFs

2017-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113385#comment-16113385
 ] 

Hadoop QA commented on HADOOP-12077:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
27s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
34s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
54s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 9 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m  
5s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 12m  5s{color} 
| {color:red} root generated 5 new + 1418 unchanged - 0 fixed = 1423 total (was 
1418) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  2s{color} | {color:orange} root: The patch generated 11 new + 159 unchanged 
- 5 fixed = 170 total (was 164) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
46s{color} | {color:red} hadoop-common-project/hadoop-common generated 5 new + 
0 unchanged - 0 fixed = 5 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 22s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 86m 27s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}184m 20s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-common-project/hadoop-common |
|  |  Boxing/unboxing to parse a primitive 
org.apache.hadoop.fs.viewfs.NflyFSystem.createFileSystem(URI[], Configuration, 
String)  At 
NflyFSystem.java:org.apache.hadoop.fs.viewfs.NflyFSystem.createFileSystem(URI[],
 Configuration, String)  At NflyFSystem.java:[line 933] |
|  |  org.apache.hadoop.fs.viewfs.NflyFSystem$MRNflyNode doesn't override 
org.apache.hadoop.net.NodeBase.equals(Object)  At NflyFSystem.java:At 
NflyFSystem.java:[line 1] |
|  |  org.apache.hadoop.fs.viewfs.NflyFSystem$NflyNode doesn't override 
org.apache.hadoop.net.NodeBase.equals(Object)  At NflyFSystem.java:At 
NflyFSystem.java:[line 1] |
|  |  org.apache.hadoop.fs.viewfs.NflyFSystem$NflyStatus overrides equals in 
org.apache.hadoop.fs.FileStatus and may not be symmetric  At 
NflyFSystem.java:and may not be symmetric  At NflyFSystem.java:[lines 523-526] |
|  |  Class org.apache.hadoop.fs.viewfs.NflyFSystem$NflyStatus defines 
non-transient non-serializable instance field realFs 

[jira] [Commented] (HADOOP-14628) Upgrade maven enforcer plugin to 3.0.0

2017-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113372#comment-16113372
 ] 

Hadoop QA commented on HADOOP-14628:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
28s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 12m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 70m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 12m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  2m  
0s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
59s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 16m 27s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}173m 20s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.sftp.TestSFTPFileSystem |
|   | hadoop.ha.TestZKFailoverController |
|   | hadoop.security.TestKDiag |
|   | hadoop.net.TestDNS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14628 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12880257/HADOOP-14628.001-tests.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux 587d498ed75d 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c5d256c |
| Default Java | 1.8.0_131 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12938/artifact/patchprocess/patch-unit-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12938/testReport/ |
| modules | C: hadoop-build-tools hadoop-project 
hadoop-common-project/hadoop-annotations hadoop-project-dist hadoop-assemblies 
hadoop-maven-plugins hadoop-common-project/hadoop-minikdc 
hadoop-common-project/hadoop-auth hadoop-common-project/hadoop-auth-examples 
hadoop-common-project/hadoop-common hadoop-common-project/hadoop-nfs 
hadoop-common-project/hadoop-kms hadoop-common-project 
hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs 
hadoop-hdfs-project/hadoop-hdfs-native-client 
hadoop-hdfs-project/hadoop-hdfs-httpfs hadoop-hdfs-project/hadoop-hdfs-nfs 
hadoop-hdfs-project hadoop-yarn-project/hadoop-yarn 

[jira] [Commented] (HADOOP-14553) Add (parallelized) integration tests to hadoop-azure

2017-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113308#comment-16113308
 ] 

Hadoop QA commented on HADOOP-14553:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 99 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
33s{color} | {color:green} root generated 0 new + 1416 unchanged - 2 fixed = 
1416 total (was 1418) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 28s{color} | {color:orange} root: The patch generated 150 new + 203 
unchanged - 177 fixed = 353 total (was 380) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
40s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 13 line(s) that end in whitespace. Use 
git apply --whitespace=fix <<patch_file>>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
19s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
59s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
49s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}109m 58s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14553 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12880255/HADOOP-14553-009.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 0a6e579197b7 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 293c74a |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12944/artifact/patchprocess/diff-checkstyle-root.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12944/artifact/patchprocess/whitespace-eol.txt
 |
|  Test Results | 

[jira] [Updated] (HADOOP-14730) hasAcl property always set to false, regardless of FsPermission higher bit order

2017-08-03 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HADOOP-14730:
---
Attachment: HADOOP-14730.003.patch

Alternative patch, if the deprecated {{FsPermission}} behavior is still 
required.

> hasAcl property always set to false, regardless of FsPermission higher bit 
> order 
> -
>
> Key: HADOOP-14730
> URL: https://issues.apache.org/jira/browse/HADOOP-14730
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
>Reporter: Vishwajeet Dusane
>Assignee: Chris Douglas
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14730.001.patch, HADOOP-14730.002.patch, 
> HADOOP-14730.003.patch
>
>
> 2 Unit Test cases are failing  [Azure-data-lake Module 
> |https://github.com/apache/hadoop/blob/4966a6e26e45d7dc36e0b270066ff7c87bcd00cc/hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/TestGetFileStatus.java#L44-L44],
>  caused after HDFS-6984 commit.
> The issue seems to be that {{hasAcl}} is hard-coded to {{false}}. 
> {code:java}
> public FileStatus(long length, boolean isdir,
> int block_replication,
> long blocksize, long modification_time, long access_time,
> FsPermission permission, String owner, String group, 
> Path symlink,
> Path path) {
> this(length, isdir, block_replication, blocksize, modification_time,
> access_time, permission, owner, group, symlink, path,
> false, false, false);
>   }
> {code}
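
A hedged sketch of one possible fix (not necessarily the committed patch): have
this constructor derive the attribute flags from the permission's encoded
high-order bits instead of hard-coding {{false}}:

{code:java}
this(length, isdir, block_replication, blocksize, modification_time,
    access_time, permission, owner, group, symlink, path,
    permission.getAclBit(),           // instead of false
    permission.getEncryptedBit(),     // instead of false
    permission.getErasureCodedBit()); // instead of false
{code}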



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14730) hasAcl property always set to false, regardless of FsPermission higher bit order

2017-08-03 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HADOOP-14730:
---
Attachment: HADOOP-14730.002.patch

Alternative: change the ADL client to call the appropriate constructor, so we 
can remove the ADLPermission class.

> hasAcl property always set to false, regardless of FsPermission higher bit 
> order 
> -
>
> Key: HADOOP-14730
> URL: https://issues.apache.org/jira/browse/HADOOP-14730
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
>Reporter: Vishwajeet Dusane
>Assignee: Chris Douglas
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14730.001.patch, HADOOP-14730.002.patch
>
>
> 2 Unit Test cases are failing  [Azure-data-lake Module 
> |https://github.com/apache/hadoop/blob/4966a6e26e45d7dc36e0b270066ff7c87bcd00cc/hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/TestGetFileStatus.java#L44-L44],
>  caused after HDFS-6984 commit.
> The issue seems to be that {{hasAcl}} is hard-coded to {{false}}. 
> {code:java}
> public FileStatus(long length, boolean isdir,
> int block_replication,
> long blocksize, long modification_time, long access_time,
> FsPermission permission, String owner, String group, 
> Path symlink,
> Path path) {
> this(length, isdir, block_replication, blocksize, modification_time,
> access_time, permission, owner, group, symlink, path,
> false, false, false);
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14126) remove jackson, joda and other transient aws SDK dependencies from hadoop-aws

2017-08-03 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113271#comment-16113271
 ] 

Lei (Eddy) Xu commented on HADOOP-14126:


+1. Thanks for taking care of this, [~ste...@apache.org]!

> remove jackson, joda and other transient aws SDK dependencies from hadoop-aws
> -
>
> Key: HADOOP-14126
> URL: https://issues.apache.org/jira/browse/HADOOP-14126
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-14126-001.patch
>
>
> With HADOOP-14040 in, we can cut out all declarations of dependencies on 
> jackson, joda-time  from the hadoop-aws module, so avoiding it confusing 
> downstream projects.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14726) Remove FileStatus#isDir

2017-08-03 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113253#comment-16113253
 ] 

Andrew Wang commented on HADOOP-14726:
--

I did a grep and there are a lot of hits for "isDir(" in downstream projects: 
Avro, Crunch, HBase, Hive, Hue, Kite, Oozie, Parquet, Pig, Sqoop, Zookeeper. I 
think it's too late to remove at this point in the release cycle; if we're 
serious about doing this for Hadoop 4, then let's file JIRAs for these 
downstreams to switch over.

The idea of v000 seems okay to me too. It is still incompatible though, since 
marking isDir {{final}} breaks out-of-tree FileSystems that override it. Is 
this necessary?

> Remove FileStatus#isDir
> ---
>
> Key: HADOOP-14726
> URL: https://issues.apache.org/jira/browse/HADOOP-14726
> Project: Hadoop Common
>  Issue Type: Task
>  Components: fs
>Reporter: Chris Douglas
>Priority: Minor
> Attachments: HADOOP-14726.000.patch
>
>
> FileStatus#isDir was deprecated in 0.21 (HADOOP-6585).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14223) Extend FileStatus#toString() to include details like Erasure Coding and Encryption

2017-08-03 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113247#comment-16113247
 ] 

Manoj Govindassamy commented on HADOOP-14223:
-

[~vishwajeet.dusane],
  Thanks for reporting the test issues and for the debugging. Much appreciated.

> Extend FileStatus#toString() to include details like Erasure Coding and 
> Encryption
> --
>
> Key: HADOOP-14223
> URL: https://issues.apache.org/jira/browse/HADOOP-14223
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Fix For: 3.0.0-alpha4
>
> Attachments: HADOOP-14223.01.patch, HADOOP-14223.02.patch
>
>
> HDFS-6843 and HADOOP-13715 have enhanced {{FileStatus}} to include details on 
> whether the underlying path is Encrypted and Erasure Coded. The additional 
> details are embedded in the FsPermission high order bits. It would be really 
> helpful for debugging if FileStatus#toString() returns these new bits details 
> along with already existing one. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14726) Remove FileStatus#isDir

2017-08-03 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113216#comment-16113216
 ] 

Chris Douglas commented on HADOOP-14726:


bq. if it's cut, what downstream apps stop building?
No idea; I don't have access to an environment to test that.

v000 seems like the better solution. There's little harm in keeping this call, 
even though it's been deprecated since 2010, since removing it would certainly 
break some applications.

> Remove FileStatus#isDir
> ---
>
> Key: HADOOP-14726
> URL: https://issues.apache.org/jira/browse/HADOOP-14726
> Project: Hadoop Common
>  Issue Type: Task
>  Components: fs
>Reporter: Chris Douglas
>Priority: Minor
> Attachments: HADOOP-14726.000.patch
>
>
> FileStatus#isDir was deprecated in 0.21 (HADOOP-6585).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14565) Azure: Add Authorization support to ADLS

2017-08-03 Thread Atul Sikaria (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113215#comment-16113215
 ] 

Atul Sikaria commented on HADOOP-14565:
---

[~rywater], this is unrelated to your patch. I saw this happen on mine as well, 
after I did a git pull. [~vishwajeet.dusane] is looking into this.

tl;dr (Vishwajeet can provide more details): new ACL checks were introduced by 
commit 4966a6e2, and a recent change is causing these tests to fail.

[~vishwajeet.dusane] has opened HADOOP-14730 for this.


> Azure: Add Authorization support to ADLS
> 
>
> Key: HADOOP-14565
> URL: https://issues.apache.org/jira/browse/HADOOP-14565
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/adl
>Affects Versions: 2.8.0
>Reporter: Ryan Waters
>Assignee: Sivaguru Sankaridurg
> Attachments: 
> HADOOP_14565__Added_authorizer_functionality_to_ADL_driver.patch
>
>
> This task is meant to add an Authorizer interface to be used by the ADLS 
> driver in a similar way to the one used by WASB. The primary difference in 
> functionality being that the implementation of this Authorizer will be 
> provided by an external jar. This class will be specified through 
> configuration using "adl.external.authorization.class". 
> If this configuration is provided, an instance of the provided class will be 
> created and all file system calls will be passed through the authorizer, 
> allowing implementations to determine if the file path and access type 
> (create, open, delete, etc.) being requested is valid. If the requested 
> implementation class is not found or it fails to initialize, it will fail 
> initialization of the ADL driver. If no configuration is provided, calls to 
> the authorizer will be skipped and the driver will behave as it did 
> previously.  
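
A rough illustration of the wiring described above; the config key comes from
the description, while the interface name, method names, and failure handling
are assumptions rather than the actual patch:

{code:java}
// Hedged sketch: load the configured authorizer, if any, during ADL driver
// initialization; any failure here aborts initialization.
Class<?> authClass = conf.getClass("adl.external.authorization.class", null);
if (authClass != null) {
  AdlAuthorizer authorizer =          // hypothetical interface
      (AdlAuthorizer) ReflectionUtils.newInstance(authClass, conf);
  authorizer.init(conf);              // hypothetical init hook
}
{code}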



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14439) regression: secret stripping from S3x URIs breaks some downstream code

2017-08-03 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113214#comment-16113214
 ] 

Ravi Prakash commented on HADOOP-14439:
---

Could you please also set the Target version on the JIRA?

> regression: secret stripping from S3x URIs breaks some downstream code
> --
>
> Key: HADOOP-14439
> URL: https://issues.apache.org/jira/browse/HADOOP-14439
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: Spark 2.1
>Reporter: Steve Loughran
>Assignee: Vinayakumar B
>Priority: Minor
> Attachments: HADOOP-14439-01.patch, HADOOP-14439-02.patch
>
>
> Surfaced in SPARK-20799
> Spark is listing the contents of a path with getFileStatus(path), then 
> looking up the path value doing a lookup of the contents.
> Apparently the lookup is failing to find files if you have a secret in the 
> key, {{s3a://key:secret@bucket/path}}. 
> Presumably this is because the stripped values aren't matching.
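
A hedged illustration of the mismatch being described (made-up path; not the
actual Spark code):

{code:java}
Path requested = new Path("s3a://key:secret@bucket/path");
FileStatus st = fs.getFileStatus(requested);
// If the filesystem strips the secret, st.getPath() comes back as
// s3a://bucket/path, so a lookup keyed on the original URI finds nothing.
boolean sameKey = st.getPath().equals(requested);  // false after stripping
{code}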



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14565) Azure: Add Authorization support to ADLS

2017-08-03 Thread Ryan Waters (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113204#comment-16113204
 ] 

Ryan Waters commented on HADOOP-14565:
--

These tests fail on trunk without my patch as well. 

> Azure: Add Authorization support to ADLS
> 
>
> Key: HADOOP-14565
> URL: https://issues.apache.org/jira/browse/HADOOP-14565
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/adl
>Affects Versions: 2.8.0
>Reporter: Ryan Waters
>Assignee: Sivaguru Sankaridurg
> Attachments: 
> HADOOP_14565__Added_authorizer_functionality_to_ADL_driver.patch
>
>
> This task is meant to add an Authorizer interface to be used by the ADLS 
> driver in a similar way to the one used by WASB. The primary difference in 
> functionality being that the implementation of this Authorizer will be 
> provided by an external jar. This class will be specified through 
> configuration using "adl.external.authorization.class". 
> If this configuration is provided, an instance of the provided class will be 
> created and all file system calls will be passed through the authorizer, 
> allowing implementations to determine if the file path and access type 
> (create, open, delete, etc.) being requested is valid. If the requested 
> implementation class is not found or it fails to initialize, it will fail 
> initialization of the ADL driver. If no configuration is provided, calls to 
> the authorizer will be skipped and the driver will behave as it did 
> previously.  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14715) TestWasbRemoteCallHelper failing

2017-08-03 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113203#comment-16113203
 ] 

Steve Loughran commented on HADOOP-14715:
-

Full tests work. Notice how this test run is 4x slower than the parallel one:
{code}

Results :

Tests run: 773, Failures: 0, Errors: 0, Skipped: 155

[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 41:57 min (Wall Clock)
[INFO] Finished at: 2017-08-03T19:02:32+01:00
{code}

> TestWasbRemoteCallHelper failing
> 
>
> Key: HADOOP-14715
> URL: https://issues.apache.org/jira/browse/HADOOP-14715
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.9.0, 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-14715-001.patch
>
>
> {{org.apache.hadoop.fs.azure.TestWasbRemoteCallHelper.testWhenOneInstanceIsDown}}
>  is failing for me on trunk



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14730) hasAcl property always set to false, regardless of FsPermission higher bit order

2017-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113201#comment-16113201
 ] 

Hadoop QA commented on HADOOP-14730:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
12s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 12m 12s{color} 
| {color:red} root generated 1 new + 1418 unchanged - 0 fixed = 1419 total (was 
1418) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
53s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 67m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14730 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12880204/HADOOP-14730.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 915bd78839d3 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c5d256c |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| javac | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12941/artifact/patchprocess/diff-compile-javac-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12941/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12941/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> hasAcl property always set to false, regardless of FsPermission higher bit 
> order 
> -
>
> Key: HADOOP-14730
> URL: https://issues.apache.org/jira/browse/HADOOP-14730
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 

[jira] [Commented] (HADOOP-14715) TestWasbRemoteCallHelper failing

2017-08-03 Thread Thomas Marquardt (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113195#comment-16113195
 ] 

Thomas Marquardt commented on HADOOP-14715:
---

Thanks Steve, +1 from me.  I will also follow up on TestWasbRemoteCallHelper 
and why it fails when fs.azure.secure.mode is true.

> TestWasbRemoteCallHelper failing
> 
>
> Key: HADOOP-14715
> URL: https://issues.apache.org/jira/browse/HADOOP-14715
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.9.0, 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-14715-001.patch
>
>
> {{org.apache.hadoop.fs.azure.TestWasbRemoteCallHelper.testWhenOneInstanceIsDown}}
>  is failing for me on trunk



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14126) remove jackson, joda and other transient aws SDK dependencies from hadoop-aws

2017-08-03 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113173#comment-16113173
 ] 

Steve Loughran commented on HADOOP-14126:
-

Any reviewers? [~eddyxu]?

> remove jackson, joda and other transient aws SDK dependencies from hadoop-aws
> -
>
> Key: HADOOP-14126
> URL: https://issues.apache.org/jira/browse/HADOOP-14126
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-14126-001.patch
>
>
> With HADOOP-14040 in, we can cut out all declarations of dependencies on 
> jackson, joda-time  from the hadoop-aws module, so avoiding it confusing 
> downstream projects.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14103) Sort out hadoop-aws contract-test-options.xml

2017-08-03 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113169#comment-16113169
 ] 

Steve Loughran commented on HADOOP-14103:
-

What about proposing just setting one option to the value of the other?
{code}
<property>
  <name>fs.contract.test.fs.s3a</name>
  <value>${test.fs.s3a.name}</value>
</property>
{code}


> Sort out hadoop-aws contract-test-options.xml
> -
>
> Key: HADOOP-14103
> URL: https://issues.apache.org/jira/browse/HADOOP-14103
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-14103.001.patch, HADOOP-14103.002.patch, 
> HADOOP-14103.003.patch
>
>
> The doc update of HADOOP-14099 has shown that there's confusion about whether 
> we need a src/test/resources/contract-test-options.xml file.
> It's documented as needed, branch-2 has it in .gitignore; trunk doesn't.
> I think it's needed for the contract tests, which the S3A test base extends 
> (And therefore needs). However, we can just put in an SCM managed one and 
> have it just XInclude auth-keys.xml
> I propose: do that, fix up the testing docs to match



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14722) Azure: BlockBlobInputStream position incorrect after seek

2017-08-03 Thread Esfandiar Manii (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113101#comment-16113101
 ] 

Esfandiar Manii edited comment on HADOOP-14722 at 8/3/17 5:34 PM:
--

BlockBlobInputStream.java: L92-94: streamPosition - streamBufferLength + 
streamBufferPosition, can this become negative?
BlockBlobInputStream.java: L133: don't we need to nullify streamBuffer too?
BlockBlobInputStream.java: L321-323: Why don't you throw the exception right at 
the beginning?
BlockBlobInputStream.java: L314: Overall I am not a big fan of having nested 
ifs and elses, because it makes the code more complicated than needed. Let's 
just return instead of creating an else. For example:
{code:java}
public synchronized long skip(long n) throws IOException {
  checkState();

  if (blobInputStream != null) {
    long skipped = blobInputStream.skip(n);
    streamPosition += skipped;
    return skipped;
  }

  if (n < 0 || n > streamLength - streamPosition) {
    throw new IndexOutOfBoundsException("skip range");
  }

  if (streamBuffer == null) {
    streamPosition += n;
    return n;
  }

  if (n < streamBufferLength - streamBufferPosition) {
    // the skip stays within the buffered data
    streamBufferPosition += (int) n;
  } else {
    // compute the new logical position before clearing the buffer fields,
    // since getPos() accounts for them, then drop the buffer
    streamPosition = getPos() + n;
    streamBufferPosition = 0;
    streamBufferLength = 0;
  }
  return n;
}
{code}

BlockBlobInputStream.java: L330: I'd suggest creating a private method that 
clears the buffer, and getting rid of all the ad-hoc {{streamBufferPosition = 
0; streamBufferLength = 0;}} resets, as sketched below.
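
Something like this minimal sketch (the helper name is an assumption):

{code:java}
// Reset the internal buffer state in one place instead of scattering the
// three field resets across call sites.
private void resetStreamBuffer() {
  streamBuffer = null;
  streamBufferPosition = 0;
  streamBufferLength = 0;
}
{code}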





> Azure: BlockBlobInputStream position incorrect after seek
> -
>
> Key: HADOOP-14722
> URL: https://issues.apache.org/jira/browse/HADOOP-14722
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: Thomas Marquardt
>Assignee: Thomas Marquardt
> Attachments: HADOOP-14722-001.patch, HADOOP-14722-002.patch
>
>
> The seek, skip, and getPos methods of BlockBlobInputStream do not correctly 
> account for the stream's  internal buffer.  This results in invalid stream 
> positions. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14498) HADOOP_OPTIONAL_TOOLS not parsed correctly

2017-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113143#comment-16113143
 ] 

Hadoop QA commented on HADOOP-14498:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 4s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m  
9s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
6s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 20m 46s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14498 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12880256/HADOOP-14498.003.patch
 |
| Optional Tests |  asflicense  mvnsite  unit  shellcheck  shelldocs  |
| uname | Linux b531bc262252 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 293c74a |
| shellcheck | v0.4.6 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12943/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12943/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> HADOOP_OPTIONAL_TOOLS not parsed correctly
> --
>
> Key: HADOOP-14498
> URL: https://issues.apache.org/jira/browse/HADOOP-14498
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha1
>Reporter: Mingliang Liu
>Assignee: Sean Mackrory
>Priority: Critical
> Attachments: HADOOP-14498.001.patch, HADOOP-14498.002.patch, 
> HADOOP-14498.003.patch
>
>
> # This will make hadoop-azure not show up in the hadoop classpath, though 
> both hadoop-aws and hadoop-azure-datalake are in the 
> classpath.{code:title=hadoop-env.sh}
> export HADOOP_OPTIONAL_TOOLS="hadoop-azure,hadoop-aws,hadoop-azure-datalake"
> {code}
> # And if we put only hadoop-azure and hadoop-aws, both of them are shown in 
> the classpath.
> {code:title=hadoop-env.sh}
> export HADOOP_OPTIONAL_TOOLS="hadoop-azure,hadoop-aws"
> {code}
> This makes me guess that, while parsing {{HADOOP_OPTIONAL_TOOLS}}, we 
> assume that hadoop tool modules have a single "-" in their names, so 
> _hadoop-azure-datalake_ overrides _hadoop-azure_. Or are there any other 
> assumptions about the {{${project.artifactId\}}}?
> Ping [~aw].



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-14715) TestWasbRemoteCallHelper failing

2017-08-03 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-14715:
---

 Assignee: Steve Loughran
Affects Version/s: 2.9.0
 Target Version/s: 2.9.0

> TestWasbRemoteCallHelper failing
> 
>
> Key: HADOOP-14715
> URL: https://issues.apache.org/jira/browse/HADOOP-14715
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.9.0, 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-14715-001.patch
>
>
> {{org.apache.hadoop.fs.azure.TestWasbRemoteCallHelper.testWhenOneInstanceIsDown}}
>  is failing for me on trunk



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14715) TestWasbRemoteCallHelper failing

2017-08-03 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14715:

Status: Patch Available  (was: Open)

The cause of this is apparently secure mode being enabled by default. Commenting 
out that property in the XML file fixes things. Patch 001 does that, with a 
description of when to turn it on.

Applying this patch to the parallelised test runs of HADOOP-14553 fixes it 
there with all the other tests completing; applying it to trunk fixes 
{{TestWasbRemoteCallHelper}}. Full test run in progress; endpoint: Azure Ireland.

{code}
---
 T E S T S
---

---
 T E S T S
---
Running org.apache.hadoop.fs.azure.TestWasbRemoteCallHelper
Tests run: 10, Failures: 0, Errors: 0, Skipped: 10, Time elapsed: 2.749 sec - 
in org.apache.hadoop.fs.azure.TestWasbRemoteCallHelper

Results :

Tests run: 10, Failures: 0, Errors: 0, Skipped: 10
{code}


> TestWasbRemoteCallHelper failing
> 
>
> Key: HADOOP-14715
> URL: https://issues.apache.org/jira/browse/HADOOP-14715
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
> Attachments: HADOOP-14715-001.patch
>
>
> {{org.apache.hadoop.fs.azure.TestWasbRemoteCallHelper.testWhenOneInstanceIsDown}}
>  is failing for me on trunk



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14715) TestWasbRemoteCallHelper failing

2017-08-03 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14715:

Attachment: HADOOP-14715-001.patch

> TestWasbRemoteCallHelper failing
> 
>
> Key: HADOOP-14715
> URL: https://issues.apache.org/jira/browse/HADOOP-14715
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
> Attachments: HADOOP-14715-001.patch
>
>
> {{org.apache.hadoop.fs.azure.TestWasbRemoteCallHelper.testWhenOneInstanceIsDown}}
>  is failing for me on trunk



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14439) regression: secret stripping from S3x URIs breaks some downstream code

2017-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113112#comment-16113112
 ] 

Hadoop QA commented on HADOOP-14439:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
26s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 21m 47s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14439 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12880170/HADOOP-14439-02.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 43bbc7bc8fb0 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c5d256c |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12939/testReport/ |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12939/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> regression: secret stripping from S3x URIs breaks some downstream code
> --
>
> Key: HADOOP-14439
> URL: https://issues.apache.org/jira/browse/HADOOP-14439
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: Spark 2.1
>Reporter: Steve Loughran
>Assignee: Vinayakumar B
>Priority: Minor
> Attachments: HADOOP-14439-01.patch, HADOOP-14439-02.patch
>
>
> Surfaced in SPARK-20799
> Spark is listing the contents of a path with getFileStatus(path), then 
> using the path value to look up the contents.
> Apparently the lookup is failing to find files if you have a secret in the 
> key, {{s3a://key:secret@bucket/path}}. 
> Presumably this is because the stripped values aren't matching.

[jira] [Comment Edited] (HADOOP-14696) parallel tests don't work for Windows

2017-08-03 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113031#comment-16113031
 ] 

Allen Wittenauer edited comment on HADOOP-14696 at 8/3/17 5:10 PM:
---

bq. could we just define some property for the path, e.g $separator , then use 
that in the property defs?
bq. there's always the ant  command.

Unfortunately, it's not that simple. We need to be able to create a set of 
directories per thread.  The thread count is defined at runtime.  This means a 
loop.  That eliminated ; I couldn't figure out how to loop other than 
the method that [~cnauroth] used when they wrote the original antrun+JavaScript 
code.  The loop (obviously) needs to take input from maven properties.  These 
properties are calculated based upon standard Maven ones (which are built way 
before it even reads the pom.xml). Maven itself stores as full Windows paths.  
So we're popping these maven properties into the antrun JavaScript. 

The problem is that JavaScript (correctly?) interprets Windows backslashes as 
escapes.  So instead of C:\Tools\Source it gets turned into C:ToolsSource.  Now 
it's possible to switch languages (Groovy, JRuby, Jython, etc.).  This brings 
about three new problems:
* Will they handle the path problems on their own?  How will they deal with 
being given a path that looks like C:\Source\hadoop/target ?
* It adds yet more downloaded dependencies into the build.
* Do we really want to add Yet Another Language to the build system?
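
To make the failure mode concrete, here is a minimal sketch, assuming a 
JSR-223 JavaScript engine such as Java 8's Nashorn (not the actual antrun 
script):
{code}
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;

public class BackslashEscapeDemo {
  public static void main(String[] args) throws Exception {
    // Nashorn ships with Java 8; on newer JDKs this lookup may return null.
    ScriptEngine js = new ScriptEngineManager().getEngineByName("JavaScript");
    // Simulates antrun substituting a raw Windows path straight into the
    // inline script: the JS parser consumes the backslashes as escapes.
    String script = "var dir = 'C:\\Tools\\Source'; dir;";
    System.out.println(js.eval(script));  // prints C:ToolsSource
  }
}
{code}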

I opted for the devil we know and added the new converted properties in a way 
that they are available in all descendant poms.  As we get more modules running 
in parallel (I'm working on rebasing MAPREDUCE-4980), they'll be able to use 
the same converted properties.  

bq. as the AWS patch policy is "always declare the endpoint you've tested 
against", which s3 endpoint have you tested with?

I didn't.  The parallel-tests code is already present in the hadoop-aws 
pom.xml. If hadoop-aws unit tests don't work in parallel on Linux now, then 
that profile shouldn't be there. 


was (Author: aw):

bq. could we just define some property for the path, e.g $separator , then use 
that in the property defs?
bq. there's always the ant  command.

Unfortunately, it's not that simple. We need to be able to create a set of 
directories per thread.  The thread count is defined at runtime.  This means a 
loop.  That eliminated ; I couldn't figure out how to loop other than 
the method that [~cnauroth] used when they wrote the original antrun+JavaScript 
code.  The loop (obviously) needs to take input from maven properties.  These 
properties are calculated based upon standard Maven ones (which are built way 
before it even reads the pom.xml). Maven itself stores as full Windows paths.  
So we're popping these maven properties into the antrun JavaScript. 

The problem is that JavaScript (correctly?) interprets Windows backslashes as 
escapes.  So instead of C:\Tools\Source it gets turned into C:ToolsSource.  Now 
it's possible to switch languages (Groovy, JRuby, Jython, etc.).  This brings 
about three new problems:
* Will they handle the path problems on their own?  How will they deal with 
being given a path that looks like C:\Source\hadoop/target ?
* It adds yet more downloaded dependencies into the build.
* Do we really want to add Yet Another Language to the build system?

I opted for the devil we know and added the new converted properties in a way 
that they are available in all descendant poms.  As we get more modules running 
in parallel (I'm working on rebasing MAPREDUCE-6674), they'll be able to use 
the same converted properties.  

bq. as the AWS patch policy is "always declare the endpoint you've tested 
against", which s3 endpoint have you tested with?

I didn't.  The parallel-tests code is already present in the hadoop-aws 
pom.xml. If hadoop-aws unit tests don't work in parallel on Linux now, then 
that profile shouldn't be there. 

> parallel tests don't work for Windows
> -
>
> Key: HADOOP-14696
> URL: https://issues.apache.org/jira/browse/HADOOP-14696
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 3.0.0-beta1
> Environment: Windows
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Minor
> Attachments: HADOOP-14696.00.patch, HADOOP-14696.01.patch
>
>
> If hadoop-common-project/hadoop-common is run with the -Pparallel-tests flag, 
> it fails in create-parallel-tests-dirs from the pom.xml
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-antrun-plugin:1.7:run 
> (create-parallel-tests-dirs) on project hadoop-common: An Ant BuildException 
> has occured: Directory 
> 

[jira] [Commented] (HADOOP-14696) parallel tests don't work for Windows

2017-08-03 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113103#comment-16113103
 ] 

Steve Loughran commented on HADOOP-14696:
-

# You are right about mkdir and the parallelisation: it's not going to work.
# If the code spans >1 module and it fixes the others, I'll trust you on aws.

Theoretically, the javascript could actually invoke 
org.apache.tools.ant.taskdefs.PathConvert and do the conversion 
programmatically, but I wouldn't rush to do it: it'd be more complex than what 
you've done.

> parallel tests don't work for Windows
> -
>
> Key: HADOOP-14696
> URL: https://issues.apache.org/jira/browse/HADOOP-14696
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 3.0.0-beta1
> Environment: Windows
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Minor
> Attachments: HADOOP-14696.00.patch, HADOOP-14696.01.patch
>
>
> If hadoop-common-project/hadoop-common is run with the -Pparallel-tests flag, 
> it fails in create-parallel-tests-dirs from the pom.xml
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-antrun-plugin:1.7:run 
> (create-parallel-tests-dirs) on project hadoop-common: An Ant BuildException 
> has occured: Directory 
> F:\jenkins\jenkins-slave\workspace\hadoop-trunk-win\s\hadoop-common-project\hadoop-common\jenkinsjenkins-slaveworkspacehadoop-trunk-winshadoop-common-projecthadoop-common
> arget\test\data\1 creation was not successful for an unknown reason
> [ERROR] around Ant part 

[jira] [Commented] (HADOOP-14722) Azure: BlockBlobInputStream position incorrect after seek

2017-08-03 Thread E. Manii (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113101#comment-16113101
 ] 

E. Manii commented on HADOOP-14722:
---

BlockBlobInputStream.java: L92-94: streamPosition - streamBufferLength + 
streamBufferPosition, can this become negative?
BlockBlobInputStream.java: L133: don't we need to nullify streamBuffer too?
BlockBlobInputStream.java: L321-323: Why don't you throw the exception right at 
the beginning?
BlockBlobInputStream.java: L314: Overall I am not a big fan of nested ifs and 
elses because they make the code more complicated than needed. Let's just 
return instead of adding an else.
For example:
{code}
public synchronized long skip(long n) throws IOException {
  checkState();
  if (blobInputStream != null) {
    long skipped = blobInputStream.skip(n);
    streamPosition += skipped;
    return skipped;
  }

  if (n < 0 || n > streamLength - streamPosition) {
    throw new IndexOutOfBoundsException("skip range");
  }

  if (streamBuffer == null) {
    streamPosition += n;
    return n;
  }

  if (n < streamBufferLength - streamBufferPosition) {
    // the skip stays inside the buffered data
    streamBufferPosition += (int) n;
  } else {
    // skipping past the buffer: compute the new logical position before
    // discarding the buffer, since getPos() depends on the buffer fields
    streamPosition = getPos() + n;
    streamBufferPosition = 0;
    streamBufferLength = 0;
  }
  return n;
}
{code}
BlockBlobInputStream.java: L330: I'd suggest a private method which clears the 
buffer, to get rid of all the scattered streamBufferPosition = 0; 
streamBufferLength = 0; etc., e.g. the sketch below.
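
For illustration, such a helper could look like this (a minimal sketch reusing 
the field names from the snippet above, not necessarily what the patch should 
use):
{code}
// Sketch only: clears the read-ahead buffer state in one place.
private void clearStreamBuffer() {
  streamBuffer = null;        // release the buffer so it can be GC'd
  streamBufferPosition = 0;
  streamBufferLength = 0;
}
{code}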


> Azure: BlockBlobInputStream position incorrect after seek
> -
>
> Key: HADOOP-14722
> URL: https://issues.apache.org/jira/browse/HADOOP-14722
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: Thomas Marquardt
>Assignee: Thomas Marquardt
> Attachments: HADOOP-14722-001.patch, HADOOP-14722-002.patch
>
>
> The seek, skip, and getPos methods of BlockBlobInputStream do not correctly 
> account for the stream's  internal buffer.  This results in invalid stream 
> positions. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13952) tools dependency hooks are throwing errors

2017-08-03 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113086#comment-16113086
 ] 

Sean Mackrory commented on HADOOP-13952:


FYI most of these appear to have been fixed. What remains is:
{code}
ERROR: hadoop-azure-datalake has missing dependencies: okhttp-2.4.0.jar
ERROR: hadoop-azure-datalake has missing dependencies: okio-1.4.0.jar
ERROR: hadoop-azure has missing dependencies: 
jetty-util-ajax-9.3.11.v20160721.jar
ERROR: hadoop-aliyun has missing dependencies: json-lib-jdk15.jar
{code}

The Azure dependencies are all in the HDFS directory:
{code}
./hadoop-dist/target/hadoop-3.0.0-beta1-SNAPSHOT/share/hadoop/hdfs/lib/okhttp-2.4.0.jar
./hadoop-dist/target/hadoop-3.0.0-beta1-SNAPSHOT/share/hadoop/hdfs/lib/okio-1.4.0.jar
./hadoop-dist/target/hadoop-3.0.0-beta1-SNAPSHOT/share/hadoop/hdfs/lib/jetty-util-ajax-9.3.11.v20160721.jar
{code}

For the Aliyun dependency, the version ("2.4") appears to be getting stripped 
from the expected filename:

{code}
./hadoop-dist/target/hadoop-3.0.0-beta1-SNAPSHOT/share/hadoop/tools/lib/json-lib-2.4-jdk15.jar
{code}

> tools dependency hooks are throwing errors
> --
>
> Key: HADOOP-13952
> URL: https://issues.apache.org/jira/browse/HADOOP-13952
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Priority: Critical
>
> During build, we are throwing these errors:
> {code}
> ERROR: hadoop-aliyun has missing dependencies: jasper-compiler-5.5.23.jar
> ERROR: hadoop-aliyun has missing dependencies: json-lib-jdk15.jar
> ERROR: hadoop-archive-logs has missing dependencies: 
> jasper-compiler-5.5.23.jar
> ERROR: hadoop-archives has missing dependencies: jasper-compiler-5.5.23.jar
> ERROR: hadoop-aws has missing dependencies: jasper-compiler-5.5.23.jar
> ERROR: hadoop-azure has missing dependencies: 
> jetty-util-ajax-9.3.11.v20160721.jar
> ERROR: hadoop-azure-datalake has missing dependencies: okhttp-2.4.0.jar
> ERROR: hadoop-azure-datalake has missing dependencies: okio-1.4.0.jar
> ERROR: hadoop-extras has missing dependencies: jasper-compiler-5.5.23.jar
> ERROR: hadoop-gridmix has missing dependencies: jasper-compiler-5.5.23.jar
> ERROR: hadoop-kafka has missing dependencies: lz4-1.2.0.jar
> ERROR: hadoop-kafka has missing dependencies: kafka-clients-0.8.2.1.jar
> ERROR: hadoop-openstack has missing dependencies: commons-httpclient-3.1.jar
> ERROR: hadoop-rumen has missing dependencies: jasper-compiler-5.5.23.jar
> ERROR: hadoop-sls has missing dependencies: jasper-compiler-5.5.23.jar
> ERROR: hadoop-sls has missing dependencies: metrics-core-3.0.1.jar
> ERROR: hadoop-streaming has missing dependencies: jasper-compiler-5.5.23.jar
> {code}
> Likely a variety of reasons for the failures.  Kafka is HADOOP-12556, but 
> others need to be investigated.  Probably just need to look at more than just 
> common/lib in dist-tools-hooks-maker now that shading has gone in.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14284) Shade Guava everywhere

2017-08-03 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113084#comment-16113084
 ] 

Wei-Chiu Chuang commented on HADOOP-14284:
--

Hi [~ozawa] and everyone, thanks for the work here.
If we really want to get this into beta1, we'll need to push harder.

> Shade Guava everywhere
> --
>
> Key: HADOOP-14284
> URL: https://issues.apache.org/jira/browse/HADOOP-14284
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha4
>Reporter: Andrew Wang
>Assignee: Tsuyoshi Ozawa
>Priority: Blocker
> Attachments: HADOOP-14238.pre001.patch, HADOOP-14284.002.patch, 
> HADOOP-14284.004.patch, HADOOP-14284.007.patch, HADOOP-14284.010.patch, 
> HADOOP-14284.012.patch
>
>
> HADOOP-10101 upgraded the guava version for 3.x to 21.
> Guava is broadly used by Java projects that consume our artifacts. 
> Unfortunately, these projects also consume our private artifacts like 
> {{hadoop-hdfs}}. They are also unlikely to be using the new shaded client 
> introduced by HADOOP-11804, currently only available in 3.0.0-alpha2.
> We should shade Guava everywhere to proactively avoid breaking downstreams. 
> This isn't a requirement for all dependency upgrades, but it's necessary for 
> known-bad dependencies like Guava.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14599) RPC queue time metrics omit timed out clients

2017-08-03 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113082#comment-16113082
 ] 

Daryn Sharp commented on HADOOP-14599:
--

General implementation issues:
# No need to change UGI.  Revert them.
# Don't change {{RpcProtobufRequest#getRequestHeader}} to convert IOE to an 
illegal arg.
# In {{NamenodeWebHdfsMethods#doAsExternalCall}}, the changed indentation of 
methods like {{getHostInetAddress}} and {{getDeclaringClassProtocolName}} 
violate style guidelines.
# {{WritableRpcEngine#call}} doesn't appear to need the finally clause anymore?
# Is the change in {{Server}} to the deferred response handling necessary?  
It subtly changes the behavior.
# In the finally block that updates the metrics, please update _after_ clearing 
the call and closing the scope.  If for some reason the metrics update blows up, 
the handler will be left in an inconsistent state.

Most importantly: the queue time for skipped calls is recorded = great!; _but 
with a processing time of 0_ = bad.  As the call queue becomes congested with 
timing-out clients, the average processing time will plummet and artificially 
make performance appear great when it's not.  The updates to queue time and 
processing time need to be independent.
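
To illustrate that last point, a hedged sketch of decoupled updates (the 
{{RpcMetrics}} method names match Hadoop's, but the {{Call}} accessors and 
surrounding structure are assumptions for illustration):
{code}
// Sketch only, not the actual Server code.
private void updateMetrics(Call call, long processingStartMs,
    boolean clientTimedOut) {
  // Always record how long the call sat in the queue, including calls
  // whose client timed out while waiting.
  int queueTimeMs = (int) (processingStartMs - call.getEnqueueTimeMs());
  rpcMetrics.addRpcQueueTime(queueTimeMs);

  // Only genuine processing feeds the processing-time average; folding in
  // zeros for skipped calls would skew the average downward.
  if (!clientTimedOut) {
    int processingMs = (int) (Time.monotonicNow() - processingStartMs);
    rpcMetrics.addRpcProcessingTime(processingMs);
  }
}
{code}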

> RPC queue time metrics omit timed out clients
> -
>
> Key: HADOOP-14599
> URL: https://issues.apache.org/jira/browse/HADOOP-14599
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics, rpc-server
>Affects Versions: 2.7.0
>Reporter: Ashwin Ramesh
>Assignee: Ashwin Ramesh
> Attachments: HADOOP-14599.001.patch, HADOOP-14599-002.patch, 
> HADOOP-14599-003.patch, HADOOP-14599-004.patch
>
>
> RPC average queue time metrics will now update even if the client who made 
> the call timed out while the call was in the call queue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-14727) Socket not closed properly when reading Configurations with BlockReaderRemote

2017-08-03 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen reassigned HADOOP-14727:
--

Assignee: Jonathan Eagles

Thanks Steve and Jonathan! Assigning this jira to Jonathan; I can help with 
reviews.

> Socket not closed properly when reading Configurations with BlockReaderRemote
> -
>
> Key: HADOOP-14727
> URL: https://issues.apache.org/jira/browse/HADOOP-14727
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.9.0, 3.0.0-alpha4
>Reporter: Xiao Chen
>Assignee: Jonathan Eagles
>Priority: Blocker
>
> This was caught by Cloudera's internal testing of the alpha4 release.
> We got reports that some hosts ran out of FDs. Triaging that, we found that 
> both the Oozie server and the YARN JobHistoryServer have tons of sockets in 
> {{CLOSE_WAIT}} state.
> [~haibochen] helped narrow down to a consistent reproduction by simply 
> visiting the JHS web UI, and clicking through a job and its logs.
> I then looked at the {{BlockReaderRemote}} and related code, and didn't spot 
> any leaks in the implementation. After adding a debug log whenever a {{Peer}} 
> is created/closed/in/out {{PeerCache}}, it looks like all the {{CLOSE_WAIT}} 
> sockets are created from this call stack:
> {noformat}
> 2017-08-02 13:58:59,901 INFO 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory:  associated peer 
> NioInetPeer(Socket[addr=/10.17.196.28,port=20002,localport=42512]) with 
> blockreader org.apache.hadoop.hdfs.client.impl.BlockReaderRemote@717ce109
> java.lang.Exception: test
> at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:745)
> at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.build(BlockReaderFactory.java:385)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:636)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:566)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:749)
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:807)
> at java.io.DataInputStream.read(DataInputStream.java:149)
> at 
> com.ctc.wstx.io.StreamBootstrapper.ensureLoaded(StreamBootstrapper.java:482)
> at 
> com.ctc.wstx.io.StreamBootstrapper.resolveStreamEncoding(StreamBootstrapper.java:306)
> at 
> com.ctc.wstx.io.StreamBootstrapper.bootstrapInput(StreamBootstrapper.java:167)
> at 
> com.ctc.wstx.stax.WstxInputFactory.doCreateSR(WstxInputFactory.java:573)
> at 
> com.ctc.wstx.stax.WstxInputFactory.createSR(WstxInputFactory.java:633)
> at 
> com.ctc.wstx.stax.WstxInputFactory.createSR(WstxInputFactory.java:647)
> at 
> com.ctc.wstx.stax.WstxInputFactory.createXMLStreamReader(WstxInputFactory.java:366)
> at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2649)
> at 
> org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2697)
> at 
> org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2662)
> at 
> org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2545)
> at org.apache.hadoop.conf.Configuration.get(Configuration.java:1076)
> at 
> org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:1126)
> at 
> org.apache.hadoop.conf.Configuration.getInt(Configuration.java:1344)
> at org.apache.hadoop.mapreduce.counters.Limits.init(Limits.java:45)
> at org.apache.hadoop.mapreduce.counters.Limits.reset(Limits.java:130)
> at 
> org.apache.hadoop.mapreduce.v2.hs.CompletedJob.loadFullHistoryData(CompletedJob.java:363)
> at 
> org.apache.hadoop.mapreduce.v2.hs.CompletedJob.(CompletedJob.java:105)
> at 
> org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager$HistoryFileInfo.loadJob(HistoryFileManager.java:473)
> at 
> org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.loadJob(CachedHistoryStorage.java:180)
> at 
> org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.access$000(CachedHistoryStorage.java:52)
> at 
> org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage$1.load(CachedHistoryStorage.java:103)
> at 
> org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage$1.load(CachedHistoryStorage.java:100)
> at 
> com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3568)
> at 
> com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2350)
> at 
> com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2313)
> at 
> 

[jira] [Updated] (HADOOP-14727) Socket not closed properly when reading Configurations with BlockReaderRemote

2017-08-03 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-14727:
---
Description: 
This was caught by Cloudera's internal testing of the alpha4 release.

We got reports that some hosts ran out of FDs. Triaging that, we found that 
both the Oozie server and the YARN JobHistoryServer have tons of sockets in 
{{CLOSE_WAIT}} state.

[~haibochen] helped narrow down to a consistent reproduction by simply visiting 
the JHS web UI, and clicking through a job and its logs.

I then looked at the {{BlockReaderRemote}} and related code, and didn't spot any 
leaks in the implementation. After adding a debug log whenever a {{Peer}} is 
created/closed/in/out {{PeerCache}}, it looks like all the {{CLOSE_WAIT}} 
sockets are created from this call stack:
{noformat}
2017-08-02 13:58:59,901 INFO 
org.apache.hadoop.hdfs.client.impl.BlockReaderFactory:  associated peer 
NioInetPeer(Socket[addr=/10.17.196.28,port=20002,localport=42512]) with 
blockreader org.apache.hadoop.hdfs.client.impl.BlockReaderRemote@717ce109
java.lang.Exception: test
at 
org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:745)
at 
org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.build(BlockReaderFactory.java:385)
at 
org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:636)
at 
org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:566)
at 
org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:749)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:807)
at java.io.DataInputStream.read(DataInputStream.java:149)
at 
com.ctc.wstx.io.StreamBootstrapper.ensureLoaded(StreamBootstrapper.java:482)
at 
com.ctc.wstx.io.StreamBootstrapper.resolveStreamEncoding(StreamBootstrapper.java:306)
at 
com.ctc.wstx.io.StreamBootstrapper.bootstrapInput(StreamBootstrapper.java:167)
at 
com.ctc.wstx.stax.WstxInputFactory.doCreateSR(WstxInputFactory.java:573)
at 
com.ctc.wstx.stax.WstxInputFactory.createSR(WstxInputFactory.java:633)
at 
com.ctc.wstx.stax.WstxInputFactory.createSR(WstxInputFactory.java:647)
at 
com.ctc.wstx.stax.WstxInputFactory.createXMLStreamReader(WstxInputFactory.java:366)
at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2649)
at 
org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2697)
at 
org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2662)
at 
org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2545)
at org.apache.hadoop.conf.Configuration.get(Configuration.java:1076)
at 
org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:1126)
at org.apache.hadoop.conf.Configuration.getInt(Configuration.java:1344)
at org.apache.hadoop.mapreduce.counters.Limits.init(Limits.java:45)
at org.apache.hadoop.mapreduce.counters.Limits.reset(Limits.java:130)
at 
org.apache.hadoop.mapreduce.v2.hs.CompletedJob.loadFullHistoryData(CompletedJob.java:363)
at 
org.apache.hadoop.mapreduce.v2.hs.CompletedJob.(CompletedJob.java:105)
at 
org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager$HistoryFileInfo.loadJob(HistoryFileManager.java:473)
at 
org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.loadJob(CachedHistoryStorage.java:180)
at 
org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.access$000(CachedHistoryStorage.java:52)
at 
org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage$1.load(CachedHistoryStorage.java:103)
at 
org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage$1.load(CachedHistoryStorage.java:100)
at 
com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3568)
at 
com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2350)
at 
com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2313)
at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2228)
at com.google.common.cache.LocalCache.get(LocalCache.java:3965)
at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3969)
at 
com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4829)
at 
com.google.common.cache.LocalCache$LocalManualCache.getUnchecked(LocalCache.java:4834)
at 
org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.getFullJob(CachedHistoryStorage.java:193)
at 
org.apache.hadoop.mapreduce.v2.hs.JobHistory.getJob(JobHistory.java:220)
at 
org.apache.hadoop.mapreduce.v2.app.webapp.AppController.requireJob(AppController.java:416)
at 
org.apache.hadoop.mapreduce.v2.app.webapp.AppController.attempts(AppController.java:277)
at 

[jira] [Commented] (HADOOP-14696) parallel tests don't work for Windows

2017-08-03 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113031#comment-16113031
 ] 

Allen Wittenauer commented on HADOOP-14696:
---


bq. could we just define some property for the path, e.g $separator , then use 
that in the property defs?
bq. there's always the ant  command.

Unfortunately, it's not that simple. We need to be able to create a set of 
directories per thread.  The thread count is defined at runtime.  This means a 
loop.  That eliminated ; I couldn't figure out how to loop other than 
the method that [~cnauroth] used when they wrote the original antrun+JavaScript 
code.  The loop (obviously) needs to take input from maven properties.  These 
properties are calculated based upon standard Maven ones (which are built way 
before it even reads the pom.xml). Maven itself stores as full Windows paths.  
So we're popping these maven properties into the antrun JavaScript. 

The problem is that JavaScript (correctly?) interprets Windows backslashes as 
escapes.  So instead of C:\Tools\Source it gets turned into C:ToolsSource.  Now 
it's possible to switch languages (Groovy, JRuby, Jython, etc.).  This brings 
about three new problems:
* Will they handle the path problems on their own?  How will they deal with 
being given a path that looks like C:\Source\hadoop/target ?
* It adds yet more downloaded dependencies into the build.
* Do we really want to add Yet Another Language to the build system?

I opted for the devil we know and added the new converted properties in a way 
that they are available in all descendant poms.  As we get more modules running 
in parallel (I'm working on rebasing MAPREDUCE-6674), they'll be able to use 
the same converted properties.  

bq. as the AWS patch policy is "always declare the endpoint you've tested 
against", which s3 endpoint have you tested with?

I didn't.  The parallel-tests code is already present in the hadoop-aws 
pom.xml. If hadoop-aws unit tests don't work in parallel on Linux now, then 
that profile shouldn't be there. 

> parallel tests don't work for Windows
> -
>
> Key: HADOOP-14696
> URL: https://issues.apache.org/jira/browse/HADOOP-14696
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 3.0.0-beta1
> Environment: Windows
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Minor
> Attachments: HADOOP-14696.00.patch, HADOOP-14696.01.patch
>
>
> If hadoop-common-project/hadoop-common is run with the -Pparallel-tests flag, 
> it fails in create-parallel-tests-dirs from the pom.xml
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-antrun-plugin:1.7:run 
> (create-parallel-tests-dirs) on project hadoop-common: An Ant BuildException 
> has occured: Directory 
> F:\jenkins\jenkins-slave\workspace\hadoop-trunk-win\s\hadoop-common-project\hadoop-common\jenkinsjenkins-slaveworkspacehadoop-trunk-winshadoop-common-projecthadoop-common
> arget\test\data\1 creation was not successful for an unknown reason
> [ERROR] around Ant part 

[jira] [Commented] (HADOOP-12077) Provide a multi-URI replication Inode for ViewFs

2017-08-03 Thread Gera Shegalov (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113023#comment-16113023
 ] 

Gera Shegalov commented on HADOOP-12077:


Thank you [~chris.douglas]

> Provide a multi-URI replication Inode for ViewFs
> 
>
> Key: HADOOP-12077
> URL: https://issues.apache.org/jira/browse/HADOOP-12077
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
> Attachments: HADOOP-12077.001.patch, HADOOP-12077.002.patch, 
> HADOOP-12077.003.patch, HADOOP-12077.004.patch, HADOOP-12077.005.patch, 
> HADOOP-12077.006.patch
>
>
> This JIRA is to provide simple "replication" capabilities for applications 
> that maintain logically equivalent paths in multiple locations for caching or 
> failover (e.g., S3 and HDFS). We noticed a simple common HDFS usage pattern 
> in our applications. They host their data on some logical cluster C. There 
> are corresponding HDFS clusters in multiple datacenters. When the application 
> runs in DC1, it prefers to read from C in DC1, and the applications prefers 
> to failover to C in DC2 if the application is migrated to DC2 or when C in 
> DC1 is unavailable. New application data versions are created 
> periodically/relatively infrequently. 
> In order to address many common scenarios in a general fashion, and to avoid 
> unnecessary code duplication, we implement this functionality in ViewFs (our 
> default FileSystem spanning all clusters in all datacenters) in a project 
> code-named Nfly (N as in N datacenters). Currently each ViewFs Inode points 
> to a single URI via ChRootedFileSystem. Consequently, we introduce a new type 
> of links that points to a list of URIs that are each going to be wrapped in 
> ChRootedFileSystem. A typical usage: 
> /nfly/C/user->/DC1/C/user,/DC2/C/user,... This collection of 
> ChRootedFileSystem instances is fronted by the Nfly filesystem object that is 
> actually used for the mount point/Inode. Nfly filesystems backs a single 
> logical path /nfly/C/user//path by multiple physical paths.
> Nfly filesystem supports setting minReplication. As long as the number of 
> URIs on which an update has succeeded is greater than or equal to 
> minReplication exceptions are only logged but not thrown. Each update 
> operation is currently executed serially (client-bandwidth driven parallelism 
> will be added later). 
> A file create/write: 
> # Creates a temporary invisible _nfly_tmp_file in the intended chrooted 
> filesystem. 
> # Returns a FSDataOutputStream that wraps output streams returned by 1
> # All writes are forwarded to each output stream.
> # On close of stream created by 2, all n streams are closed, and the files 
> are renamed from _nfly_tmp_file to file. All files receive the same mtime 
> corresponding to the client system time as of beginning of this step. 
> # If at least minReplication destinations has gone through steps 1-4 without 
> failures the transaction is considered logically committed, otherwise a 
> best-effort attempt of cleaning up the temporary files is attempted.
> As for reads, we support a notion of locality similar to HDFS  /DC/rack/node. 
> We sort Inode URIs using NetworkTopology by their authorities. These are 
> typically host names in simple HDFS URIs. If the authority is missing as is 
> the case with the local file:/// the local host name is assumed 
> InetAddress.getLocalHost(). This makes sure that the local file system is 
> always the closest one to the reader in this approach. For our Hadoop 2 hdfs 
> URIs that are based on nameservice ids instead of hostnames it is very easy 
> to adjust the topology script since our nameservice ids already contain the 
> datacenter. As for rack and node we can simply output any string such as 
> /DC/rack-nsid/node-nsid, since we only care about datacenter-locality for 
> such filesystem clients.
> There are 2 policies/additions to the read call path that makes it more 
> expensive, but improve user experience:
> - readMostRecent - when this policy is enabled, Nfly first checks mtime for 
> the path under all URIs, sorts them from most recent to least recent. Nfly 
> then sorts the set of most recent URIs topologically in the same manner as 
> described above.
> - repairOnRead - when readMostRecent is enabled Nfly already has to RPC all 
> underlying destinations. With repairOnRead, Nfly filesystem would 
> additionally attempt to refresh destinations with the path missing or a stale 
> version of the path using the nearest available most recent destination. 
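
To make the write path above concrete, here is a hedged sketch of the commit 
phase (steps 4-5); all class, method, and field names are illustrative, not 
the patch's actual code:
{code}
import java.io.IOException;
import java.io.OutputStream;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Illustrative sketch only; not the HADOOP-12077 implementation.
class NflyCommitSketch {
  static void commit(FileSystem[] targets, OutputStream[] streams,
      Path finalPath, int minReplication) throws IOException {
    long mtime = System.currentTimeMillis();   // step 4: one shared mtime
    int committed = 0;
    for (int i = 0; i < targets.length; i++) {
      try {
        streams[i].close();                    // step 4: close each stream
        Path tmp = new Path(finalPath.getParent(),
            "_nfly_tmp_" + finalPath.getName());
        if (targets[i].rename(tmp, finalPath)) {
          targets[i].setTimes(finalPath, mtime, -1);  // same mtime everywhere
          committed++;
        }
      } catch (IOException e) {
        // per-destination failures are tolerated if minReplication is met
      }
    }
    if (committed < minReplication) {
      // step 5: logically uncommitted; best-effort cleanup of the
      // temporary files would go here before failing the write
      throw new IOException("Nfly write committed on only " + committed
          + "/" + targets.length + " destinations; minReplication="
          + minReplication);
    }
  }
}
{code}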



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org

[jira] [Updated] (HADOOP-14628) Upgrade maven enforcer plugin to 3.0.0

2017-08-03 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-14628:
---
Attachment: HADOOP-14628.001-tests.patch

001-tests: 001 patch + add empty line to all the pom.xml to run all the tests.

> Upgrade maven enforcer plugin to 3.0.0
> --
>
> Key: HADOOP-14628
> URL: https://issues.apache.org/jira/browse/HADOOP-14628
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Attachments: HADOOP-14626-testing.02.patch, 
> HADOOP-14626-testing.03.patch, HADOOP-14626.testing.patch, 
> HADOOP-14628.001.patch, HADOOP-14628.001-tests.patch
>
>
> Maven enforcer plugin fails after Java 9 build 175 (MENFORCER-274). Let's 
> upgrade the version to 3.0.0 when released.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14498) HADOOP_OPTIONAL_TOOLS not parsed correctly

2017-08-03 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-14498:
---
Attachment: HADOOP-14498.003.patch

The .003 patch addresses shellcheck issues and refactors the loop into a shared 
hadoop_join_array function.

> HADOOP_OPTIONAL_TOOLS not parsed correctly
> --
>
> Key: HADOOP-14498
> URL: https://issues.apache.org/jira/browse/HADOOP-14498
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha1
>Reporter: Mingliang Liu
>Assignee: Sean Mackrory
>Priority: Critical
> Attachments: HADOOP-14498.001.patch, HADOOP-14498.002.patch, 
> HADOOP-14498.003.patch
>
>
> # This will make hadoop-azure not show up in the hadoop classpath, though 
> both hadoop-aws and hadoop-azure-datalake are in the 
> classpath.{code:title=hadoop-env.sh}
> export HADOOP_OPTIONAL_TOOLS="hadoop-azure,hadoop-aws,hadoop-azure-datalake"
> {code}
> # And if we put only hadoop-azure and hadoop-aws, both of them are shown in 
> the classpath.
> {code:title=hadoop-env.sh}
> export HADOOP_OPTIONAL_TOOLS="hadoop-azure,hadoop-aws"
> {code}
> This makes me guess that, while parsing {{HADOOP_OPTIONAL_TOOLS}}, we make
> some assumption that hadoop tool module names contain a single "-", so that
> _hadoop-azure-datalake_ overrides _hadoop-azure_. Or are there any other
> assumptions about {{${project.artifactId\}}}?
> Ping [~aw].



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14553) Add (parallelized) integration tests to hadoop-azure

2017-08-03 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16112972#comment-16112972
 ] 

Steve Loughran commented on HADOOP-14553:
-

Note that as this lifted the parallel test code from hadoop-aws, it will depend 
on the final fix for HADOOP-14696 being copied over to actually work on Windows.

> Add (parallelized) integration tests to hadoop-azure
> 
>
> Key: HADOOP-14553
> URL: https://issues.apache.org/jira/browse/HADOOP-14553
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-14553-001.patch, HADOOP-14553-002.patch, 
> HADOOP-14553-003.patch, HADOOP-14553-004.patch, HADOOP-14553-005.patch, 
> HADOOP-14553-006.patch, HADOOP-14553-007.patch, HADOOP-14553-008.patch, 
> HADOOP-14553-009.patch
>
>
> The Azure tests are slow to run as they are serialized. As they are all 
> called Test*, there's no clear differentiation between unit tests, which 
> Jenkins can run, and integration tests, which it can't.
> Move the azure tests {{Test*}} to integration tests {{ITest*}} and 
> parallelize (which includes having separate paths for every test suite). 
> The code in hadoop-aws's POM shows what to do.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14553) Add (parallelized) integration tests to hadoop-azure

2017-08-03 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14553:

Status: Patch Available  (was: Open)

Tested: Azure Ireland.

The test run without scale tests completes in just under 13 minutes; with 
scale, 21. That's better than before, with scope for further speedup, as much 
of the test time is in those IT tests not running in the parallel phase.
{code}

---
 T E S T S
---
Running org.apache.hadoop.fs.azure.metrics.TestBandwidthGaugeUpdater
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.353 sec - in 
org.apache.hadoop.fs.azure.metrics.TestBandwidthGaugeUpdater
Running 
org.apache.hadoop.fs.azure.metrics.TestNativeAzureFileSystemMetricsSystem
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.753 sec - in 
org.apache.hadoop.fs.azure.metrics.TestNativeAzureFileSystemMetricsSystem
Running org.apache.hadoop.fs.azure.metrics.TestRollingWindowAverage
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.222 sec - in 
org.apache.hadoop.fs.azure.metrics.TestRollingWindowAverage
Running org.apache.hadoop.fs.azure.TestBlobMetadata
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.798 sec - in 
org.apache.hadoop.fs.azure.TestBlobMetadata
Running org.apache.hadoop.fs.azure.TestBlobOperationDescriptor
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.211 sec - in 
org.apache.hadoop.fs.azure.TestBlobOperationDescriptor
Running org.apache.hadoop.fs.azure.TestClientThrottlingAnalyzer
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 35.787 sec - in 
org.apache.hadoop.fs.azure.TestClientThrottlingAnalyzer
Running org.apache.hadoop.fs.azure.TestNativeAzureFileSystemAuthorization
Tests run: 24, Failures: 0, Errors: 0, Skipped: 24, Time elapsed: 4.568 sec - 
in org.apache.hadoop.fs.azure.TestNativeAzureFileSystemAuthorization
Running org.apache.hadoop.fs.azure.TestNativeAzureFileSystemBlockLocations
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.87 sec - in 
org.apache.hadoop.fs.azure.TestNativeAzureFileSystemBlockLocations
Running org.apache.hadoop.fs.azure.TestNativeAzureFileSystemConcurrency
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.04 sec - in 
org.apache.hadoop.fs.azure.TestNativeAzureFileSystemConcurrency
Running org.apache.hadoop.fs.azure.TestNativeAzureFileSystemContractMocked
Tests run: 43, Failures: 0, Errors: 0, Skipped: 5, Time elapsed: 1.315 sec - in 
org.apache.hadoop.fs.azure.TestNativeAzureFileSystemContractMocked
Running org.apache.hadoop.fs.azure.TestNativeAzureFileSystemFileNameCheck
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.81 sec - in 
org.apache.hadoop.fs.azure.TestNativeAzureFileSystemFileNameCheck
Running org.apache.hadoop.fs.azure.TestNativeAzureFileSystemMocked
Tests run: 46, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.246 sec - 
in org.apache.hadoop.fs.azure.TestNativeAzureFileSystemMocked
Running org.apache.hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked
Tests run: 50, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.424 sec - in 
org.apache.hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked
Running org.apache.hadoop.fs.azure.TestNativeAzureFileSystemUploadLogic
Tests run: 3, Failures: 0, Errors: 0, Skipped: 3, Time elapsed: 0.069 sec - in 
org.apache.hadoop.fs.azure.TestNativeAzureFileSystemUploadLogic
Running org.apache.hadoop.fs.azure.TestOutOfBandAzureBlobOperations
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.851 sec - in 
org.apache.hadoop.fs.azure.TestOutOfBandAzureBlobOperations
Running org.apache.hadoop.fs.azure.TestShellDecryptionKeyProvider
Tests run: 2, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 0.107 sec - in 
org.apache.hadoop.fs.azure.TestShellDecryptionKeyProvider
Running org.apache.hadoop.fs.azure.TestWasbFsck
Tests run: 2, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.712 sec - in 
org.apache.hadoop.fs.azure.TestWasbFsck

Results :

Tests run: 214, Failures: 0, Errors: 0, Skipped: 35

[INFO] 
[INFO] --- maven-jar-plugin:2.5:jar (default-jar) @ hadoop-azure ---
[INFO] Building jar: 
/Users/stevel/Projects/hadoop-trunk/hadoop-tools/hadoop-azure/target/hadoop-azure-3.0.0-beta1-SNAPSHOT.jar
[INFO] 
[INFO] --- maven-jar-plugin:2.5:test-jar (default) @ hadoop-azure ---
[INFO] Building jar: 
/Users/stevel/Projects/hadoop-trunk/hadoop-tools/hadoop-azure/target/hadoop-azure-3.0.0-beta1-SNAPSHOT-tests.jar
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ 
hadoop-azure ---
[INFO] 
[INFO] --- maven-failsafe-plugin:2.17:integration-test 
(default-integration-test) @ hadoop-azure ---
[INFO] Failsafe report directory: 
/Users/stevel/Projects/hadoop-trunk/hadoop-tools/hadoop-azure/target/failsafe-reports


[jira] [Commented] (HADOOP-14553) Add (parallelized) integration tests to hadoop-azure

2017-08-03 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16112954#comment-16112954
 ] 

Steve Loughran commented on HADOOP-14553:
-

patch 009
# rebased onto trunk
# added a unified base class, {{AbstractWasbTestWithTimeout}}, which just 
extends Assert with a test timeout and thread naming; see the sketch after 
this list. This is now the base class for all tests, even those which don't 
extend {{AbstractWasbTestBase}} (which itself extends the class)
# {{AbstractWasbTestBase}} structured better for subclassing, including 
creating the configuration for test accounts
# {{AbstractAzureScaleTest}} is a direct child of {{AbstractWasbTestBase}} 
(i.e. cut any intermediate IntegrationTest class)
# {{ITestBlockBlobInputStream}} is a scale test, as is 
{{ITestAzureNativeContractDistCp}}. This moves the core bandwidth-heavy tests 
into the scale group
# Also: misc other cleanups, fix javadocs so yetus won't complain.
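
For reference, a minimal sketch of what such a base class could look like 
(JUnit 4; the timeout value and thread-naming scheme are assumptions, not the 
patch's exact code):
{code}
import org.junit.Assert;
import org.junit.Before;
import org.junit.Rule;
import org.junit.rules.TestName;
import org.junit.rules.Timeout;

// Sketch only: extends Assert so subclasses get assertions for free,
// plus a per-test timeout and a named worker thread for log triage.
public abstract class AbstractWasbTestWithTimeout extends Assert {

  @Rule
  public Timeout testTimeout = new Timeout(10 * 60 * 1000); // fail hung tests

  @Rule
  public TestName methodName = new TestName();

  @Before
  public void nameThread() {
    // label the JUnit worker thread so parallel-run logs are attributable
    Thread.currentThread().setName("JUnit-" + methodName.getMethodName());
  }
}
{code}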

> Add (parallelized) integration tests to hadoop-azure
> 
>
> Key: HADOOP-14553
> URL: https://issues.apache.org/jira/browse/HADOOP-14553
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-14553-001.patch, HADOOP-14553-002.patch, 
> HADOOP-14553-003.patch, HADOOP-14553-004.patch, HADOOP-14553-005.patch, 
> HADOOP-14553-006.patch, HADOOP-14553-007.patch, HADOOP-14553-008.patch, 
> HADOOP-14553-009.patch
>
>
> The Azure tests are slow to run as they are serialized. As they are all 
> called Test*, there's no clear differentiation between unit tests, which 
> Jenkins can run, and integration tests, which it can't.
> Move the azure tests {{Test*}} to integration tests {{ITest*}} and 
> parallelize (which includes having separate paths for every test suite). 
> The code in hadoop-aws's POM shows what to do.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14553) Add (parallelized) integration tests to hadoop-azure

2017-08-03 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14553:

Attachment: HADOOP-14553-009.patch

> Add (parallelized) integration tests to hadoop-azure
> 
>
> Key: HADOOP-14553
> URL: https://issues.apache.org/jira/browse/HADOOP-14553
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-14553-001.patch, HADOOP-14553-002.patch, 
> HADOOP-14553-003.patch, HADOOP-14553-004.patch, HADOOP-14553-005.patch, 
> HADOOP-14553-006.patch, HADOOP-14553-007.patch, HADOOP-14553-008.patch, 
> HADOOP-14553-009.patch
>
>
> The Azure tests are slow to run as they are serialized. As they are all 
> called Test*, there's no clear differentiation between unit tests, which 
> Jenkins can run, and integration tests, which it can't.
> Move the azure tests {{Test*}} to integration tests {{ITest*}} and 
> parallelize (which includes having separate paths for every test suite). 
> The code in hadoop-aws's POM shows what to do.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14553) Add (parallelized) integration tests to hadoop-azure

2017-08-03 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14553:

Status: Open  (was: Patch Available)

> Add (parallelized) integration tests to hadoop-azure
> 
>
> Key: HADOOP-14553
> URL: https://issues.apache.org/jira/browse/HADOOP-14553
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-14553-001.patch, HADOOP-14553-002.patch, 
> HADOOP-14553-003.patch, HADOOP-14553-004.patch, HADOOP-14553-005.patch, 
> HADOOP-14553-006.patch, HADOOP-14553-007.patch, HADOOP-14553-008.patch
>
>
> The Azure tests are slow to run as they are serialized. As they are all 
> called Test*, there's no clear differentiation between unit tests, which 
> Jenkins can run, and integration tests, which it can't.
> Move the azure tests {{Test*}} to integration tests {{ITest*}} and 
> parallelize (which includes having separate paths for every test suite). 
> The code in hadoop-aws's POM shows what to do.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14439) regression: secret stripping from S3x URIs breaks some downstream code

2017-08-03 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16112947#comment-16112947
 ] 

Steve Loughran commented on HADOOP-14439:
-

As usual: which S3 endpoint did you run all the hadoop-aws integration tests 
against?

> regression: secret stripping from S3x URIs breaks some downstream code
> --
>
> Key: HADOOP-14439
> URL: https://issues.apache.org/jira/browse/HADOOP-14439
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: Spark 2.1
>Reporter: Steve Loughran
>Assignee: Vinayakumar B
>Priority: Minor
> Attachments: HADOOP-14439-01.patch, HADOOP-14439-02.patch
>
>
> Surfaced in SPARK-20799
> Spark is listing the contents of a path with getFileStatus(path), then 
> using the path value to look up the contents.
> Apparently the lookup is failing to find files if you have a secret in the 
> key, {{s3a://key:secret@bucket/path}}. 
> Presumably this is because the stripped values aren't matching.
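
To illustrate the mismatch, a hedged sketch (the bucket and key values are 
hypothetical):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SecretStripDemo {
  public static void main(String[] args) throws Exception {
    // Deprecated credentials-in-URI form; values are made up.
    Path requested = new Path("s3a://AKID:secret@bucket/data/part-0000");
    FileSystem fs = requested.getFileSystem(new Configuration());
    FileStatus status = fs.getFileStatus(requested);
    // With secret stripping, status.getPath() comes back as
    // s3a://bucket/data/part-0000, so downstream code that keys a map on
    // the original URI, or compares paths for equality, finds no match:
    System.out.println(status.getPath().equals(requested));  // false
  }
}
{code}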



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-14439) regression: secret stripping from S3x URIs breaks some downstream code

2017-08-03 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-14439:
---

Assignee: Vinayakumar B

> regression: secret stripping from S3x URIs breaks some downstream code
> --
>
> Key: HADOOP-14439
> URL: https://issues.apache.org/jira/browse/HADOOP-14439
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: Spark 2.1
>Reporter: Steve Loughran
>Assignee: Vinayakumar B
>Priority: Minor
> Attachments: HADOOP-14439-01.patch, HADOOP-14439-02.patch
>
>
> Surfaced in SPARK-20799
> Spark is listing the contents of a path with getFileStatus(path), then 
> using the returned path value to look up the contents.
> Apparently the lookup is failing to find files if you have a secret in the 
> key, {{s3a://key:secret@bucket/path}}. 
> Presumably this is because the stripped values aren't matching.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14565) Azure: Add Authorization support to ADLS

2017-08-03 Thread Ryan Waters (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16112930#comment-16112930
 ] 

Ryan Waters commented on HADOOP-14565:
--

Hmm, I didn't get this while running test-patch locally. Will investigate and 
submit another patch. 

> Azure: Add Authorization support to ADLS
> 
>
> Key: HADOOP-14565
> URL: https://issues.apache.org/jira/browse/HADOOP-14565
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/adl
>Affects Versions: 2.8.0
>Reporter: Ryan Waters
>Assignee: Sivaguru Sankaridurg
> Attachments: 
> HADOOP_14565__Added_authorizer_functionality_to_ADL_driver.patch
>
>
> This task is meant to add an Authorizer interface to be used by the ADLS 
> driver in a similar way to the one used by WASB. The primary difference in 
> functionality is that the implementation of this Authorizer will be 
> provided by an external jar. This class will be specified through 
> configuration using "adl.external.authorization.class". 
> If this configuration is provided, an instance of the provided class will be 
> created and all file system calls will be passed through the authorizer, 
> allowing implementations to determine if the file path and access type 
> (create, open, delete, etc.) being requested is valid. If the requested 
> implementation class is not found or it fails to initialize, it will fail 
> initialization of the ADL driver. If no configuration is provided, calls to 
> the authorizer will be skipped and the driver will behave as it did 
> previously.  
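
Since no patch text is quoted here, a hedged sketch of how such a hook could be 
wired up. Only the config key "adl.external.authorization.class" comes from 
this issue; the interface and method names below are illustrative assumptions.

{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.util.ReflectionUtils;

// Illustrative interface; the real one would ship in the external jar.
interface AdlAuthorizer {
  void init(Configuration conf) throws IOException;
  boolean isAuthorized(Path path, String accessType) throws IOException;
}

class AdlAuthorizerLoader {
  static AdlAuthorizer load(Configuration conf) throws IOException {
    String clazz = conf.get("adl.external.authorization.class");
    if (clazz == null) {
      return null; // not configured: the driver behaves as before
    }
    try {
      AdlAuthorizer authorizer = (AdlAuthorizer) ReflectionUtils.newInstance(
          conf.getClassByName(clazz), conf);
      authorizer.init(conf);
      return authorizer;
    } catch (ClassNotFoundException e) {
      // per the description: a missing or broken implementation fails
      // initialization of the ADL driver
      throw new IOException("Cannot load authorizer " + clazz, e);
    }
  }
}
{code}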



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14730) hasAcl property always set to false, regardless of FsPermission higher bit order

2017-08-03 Thread Vishwajeet Dusane (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vishwajeet Dusane updated HADOOP-14730:
---
Status: Patch Available  (was: Open)

> hasAcl property always set to false, regardless of FsPermission higher bit 
> order 
> -
>
> Key: HADOOP-14730
> URL: https://issues.apache.org/jira/browse/HADOOP-14730
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
>Reporter: Vishwajeet Dusane
>Assignee: Chris Douglas
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14730.001.patch
>
>
> Two unit test cases are failing in the [Azure-data-lake 
> module|https://github.com/apache/hadoop/blob/4966a6e26e45d7dc36e0b270066ff7c87bcd00cc/hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/TestGetFileStatus.java#L44-L44],
> caused by the HDFS-6984 commit.
> The issue seems to be that {{hasAcl}} is hard-coded to {{false}}:
> {code:java}
> public FileStatus(long length, boolean isdir,
>     int block_replication,
>     long blocksize, long modification_time, long access_time,
>     FsPermission permission, String owner, String group,
>     Path symlink,
>     Path path) {
>   this(length, isdir, block_replication, blocksize, modification_time,
>       access_time, permission, owner, group, symlink, path,
>       false, false, false);
> }
> {code}
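
A hedged sketch of one possible fix, assuming {{FsPermission}} still exposes 
its ACL and encryption bits via {{getAclBit()}}/{{getEncryptedBit()}} (as it 
did in this era): derive the flags from the permission instead of hard-coding 
them.

{code:java}
public FileStatus(long length, boolean isdir, int block_replication,
    long blocksize, long modification_time, long access_time,
    FsPermission permission, String owner, String group,
    Path symlink, Path path) {
  this(length, isdir, block_replication, blocksize, modification_time,
      access_time, permission, owner, group, symlink, path,
      // propagate the ACL and encryption bits rather than forcing
      // hasAcl to false; isErasureCoded has no pre-existing source in
      // FsPermission, so it stays false here
      permission != null && permission.getAclBit(),
      permission != null && permission.getEncryptedBit(),
      false);
}
{code}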



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14727) Socket not closed properly when reading Configurations with BlockReaderRemote

2017-08-03 Thread Jonathan Eagles (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16112864#comment-16112864
 ] 

Jonathan Eagles commented on HADOOP-14727:
--

Thanks for looking into this. Will try to post a patch today.

> Socket not closed properly when reading Configurations with BlockReaderRemote
> -
>
> Key: HADOOP-14727
> URL: https://issues.apache.org/jira/browse/HADOOP-14727
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.9.0, 3.0.0-alpha4
>Reporter: Xiao Chen
>Priority: Blocker
>
> This is caught by Cloudera's internal testing over the alpha3 release.
> We got reports that some hosts ran out of FDs. Triaging that, we found that 
> both the Oozie server and the YARN JobHistoryServer have tons of sockets in 
> {{CLOSE_WAIT}} state.
> [~haibochen] helped narrow down to a consistent reproduction by simply 
> visiting the JHS web UI, and clicking through a job and its logs.
> I then looked at the {{BlockReaderRemote}} and related code, and didn't spot 
> any leaks in the implementation. After adding a debug log whenever a {{Peer}} 
> is created, closed, or moved in/out of the {{PeerCache}}, it looks like all the {{CLOSE_WAIT}} 
> sockets are created from this call stack:
> {noformat}
> 2017-08-02 13:58:59,901 INFO 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory:  associated peer 
> NioInetPeer(Socket[addr=/10.17.196.28,port=20002,localport=42512]) with 
> blockreader org.apache.hadoop.hdfs.client.impl.BlockReaderRemote@717ce109
> java.lang.Exception: test
> at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:745)
> at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.build(BlockReaderFactory.java:385)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:636)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:566)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:749)
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:807)
> at java.io.DataInputStream.read(DataInputStream.java:149)
> at 
> com.ctc.wstx.io.StreamBootstrapper.ensureLoaded(StreamBootstrapper.java:482)
> at 
> com.ctc.wstx.io.StreamBootstrapper.resolveStreamEncoding(StreamBootstrapper.java:306)
> at 
> com.ctc.wstx.io.StreamBootstrapper.bootstrapInput(StreamBootstrapper.java:167)
> at 
> com.ctc.wstx.stax.WstxInputFactory.doCreateSR(WstxInputFactory.java:573)
> at 
> com.ctc.wstx.stax.WstxInputFactory.createSR(WstxInputFactory.java:633)
> at 
> com.ctc.wstx.stax.WstxInputFactory.createSR(WstxInputFactory.java:647)
> at 
> com.ctc.wstx.stax.WstxInputFactory.createXMLStreamReader(WstxInputFactory.java:366)
> at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2649)
> at 
> org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2697)
> at 
> org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2662)
> at 
> org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2545)
> at org.apache.hadoop.conf.Configuration.get(Configuration.java:1076)
> at 
> org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:1126)
> at 
> org.apache.hadoop.conf.Configuration.getInt(Configuration.java:1344)
> at org.apache.hadoop.mapreduce.counters.Limits.init(Limits.java:45)
> at org.apache.hadoop.mapreduce.counters.Limits.reset(Limits.java:130)
> at 
> org.apache.hadoop.mapreduce.v2.hs.CompletedJob.loadFullHistoryData(CompletedJob.java:363)
> at 
> org.apache.hadoop.mapreduce.v2.hs.CompletedJob.(CompletedJob.java:105)
> at 
> org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager$HistoryFileInfo.loadJob(HistoryFileManager.java:473)
> at 
> org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.loadJob(CachedHistoryStorage.java:180)
> at 
> org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.access$000(CachedHistoryStorage.java:52)
> at 
> org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage$1.load(CachedHistoryStorage.java:103)
> at 
> org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage$1.load(CachedHistoryStorage.java:100)
> at 
> com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3568)
> at 
> com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2350)
> at 
> com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2313)
> at 
> com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2228)
> at 

[jira] [Commented] (HADOOP-14722) Azure: BlockBlobInputStream position incorrect after seek

2017-08-03 Thread Shane Mainali (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16112807#comment-16112807
 ] 

Shane Mainali commented on HADOOP-14722:


Thanks [~tmarquardt]!

Does the test cover all of the seek and read scenarios involving the stream's 
internal buffer? It would also be good to add some comments to the test you 
added. Assuming the tests cover us, +1 here from my side.

> Azure: BlockBlobInputStream position incorrect after seek
> -
>
> Key: HADOOP-14722
> URL: https://issues.apache.org/jira/browse/HADOOP-14722
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: Thomas Marquardt
>Assignee: Thomas Marquardt
> Attachments: HADOOP-14722-001.patch, HADOOP-14722-002.patch
>
>
> The seek, skip, and getPos methods of BlockBlobInputStream do not correctly 
> account for the stream's internal buffer. This results in invalid stream 
> positions. 
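
A minimal sketch of the accounting problem (hypothetical; not the actual 
BlockBlobInputStream code): the position reported to callers has to subtract 
whatever has been downloaded into the internal buffer but not yet consumed.

{code:java}
// Illustration only: a stream that buffers ahead of the caller.
class BufferedPositionDemo {
  private long downloadPos;  // bytes fetched from the service so far
  private int bufferLength;  // valid bytes sitting in the buffer
  private int bufferOffset;  // next buffered byte to hand to the caller

  // Buggy version: reports the service-side position, which can be up
  // to bufferLength bytes ahead of what the caller has actually read.
  long getPosWrong() {
    return downloadPos;
  }

  // Correct version: subtract the buffered-but-unread bytes.
  long getPos() {
    return downloadPos - (bufferLength - bufferOffset);
  }

  // A seek inside the buffered range can be satisfied by moving the
  // offset; anything else must discard the buffer and refetch.
  void seek(long pos) {
    long bufferStart = downloadPos - bufferLength;
    if (pos >= bufferStart && pos < downloadPos) {
      bufferOffset = (int) (pos - bufferStart);
    } else {
      bufferOffset = bufferLength = 0;
      downloadPos = pos;
    }
  }
}
{code}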



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13134) WASB's file delete still throwing Blob not found exception

2017-08-03 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16112622#comment-16112622
 ] 

Steve Loughran commented on HADOOP-13134:
-

Looking into the cause: it's because the parent directory of a deleted object 
doesn't exist. This may be a race condition in the delete, or some ordering 
issue in the delete process.

> WASB's file delete still throwing Blob not found exception
> --
>
> Key: HADOOP-13134
> URL: https://issues.apache.org/jira/browse/HADOOP-13134
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.7.1
>Reporter: Lin Chan
>Assignee: Dushyanth
>
> WASB is still throwing a blob-not-found exception, as shown in the following 
> stack. Need to catch that and convert it to a boolean return code in WASB's 
> delete.
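
A hedged sketch of the suggested handling, assuming the Azure storage SDK 
surfaces the race as a {{StorageException}} with an HTTP 404 status (the class 
and method names are from the SDK of that era; this is not the committed fix):

{code:java}
import java.io.IOException;
import com.microsoft.azure.storage.StorageException;
import com.microsoft.azure.storage.blob.CloudBlockBlob;

class BlobDeleteHelper {
  // Convert "blob vanished while we were deleting it" into a boolean
  // result instead of letting the exception escape from delete().
  static boolean deleteQuietly(CloudBlockBlob blob) throws IOException {
    try {
      return blob.deleteIfExists();
    } catch (StorageException e) {
      if (e.getHttpStatusCode() == 404) {
        return false; // already gone: nothing was deleted by this call
      }
      throw new IOException("Delete failed for " + blob.getUri(), e);
    }
  }
}
{code}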



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13134) WASB's file delete still throwing Blob not found exception

2017-08-03 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16112613#comment-16112613
 ] 

Steve Loughran commented on HADOOP-13134:
-

WORKAROUND: tell the job committer to ignore failures in cleanup

As discussed in [Spark Cloud 
Integration|https://github.com/apache/spark/blob/master/docs/cloud-integration.md],
 you can downgrade failures during cleanup to warnings. I recommend this 
against object stores for a slightly more robust commit, given that directory 
delete is a more complex and brittle operation, and more prone to failures:
{code}
spark.hadoop.mapreduce.fileoutputcommitter.cleanup-failures.ignored true
{code}

> WASB's file delete still throwing Blob not found exception
> --
>
> Key: HADOOP-13134
> URL: https://issues.apache.org/jira/browse/HADOOP-13134
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.7.1
>Reporter: Lin Chan
>Assignee: Dushyanth
>
> WASB is still throwing a blob-not-found exception, as shown in the following 
> stack. Need to catch that and convert it to a boolean return code in WASB's 
> delete.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14598) Wasb connection failing: FsUrlConnection cannot be cast to HttpURLConnection

2017-08-03 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16112510#comment-16112510
 ] 

Steve Loughran commented on HADOOP-14598:
-

Can I get a review of this? I do consider it a blocker for the next releases. 
Thx

> Wasb connection failing: FsUrlConnection cannot be cast to HttpURLConnection
> 
>
> Key: HADOOP-14598
> URL: https://issues.apache.org/jira/browse/HADOOP-14598
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure, test
>Affects Versions: 2.9.0, 3.0.0-alpha4
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-14598-002.patch, HADOOP-14598-003.patch, 
> HADOOP-14598-004.patch
>
>
> My downstream-of-Spark cloud integration tests (where I haven't been running 
> the Azure ones for a while) now have a few tests failing:
> {code}
>  org.apache.hadoop.fs.azure.AzureException: 
> com.microsoft.azure.storage.StorageException: 
> org.apache.hadoop.fs.FsUrlConnection cannot be cast to 
> java.net.HttpURLConnection
> {code}
> No obvious cause, and it's apparently only happening in some of the 
> (scalatest) tests.
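
A hedged sketch of the failure mode (assuming, as the stack suggests, that the 
JVM-wide {{FsUrlStreamHandlerFactory}} has been registered and ends up claiming 
the scheme the Azure SDK connects over):

{code:java}
import java.net.HttpURLConnection;
import java.net.URL;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FsUrlStreamHandlerFactory;

public class CastFailureDemo {
  public static void main(String[] args) throws Exception {
    // Process-wide and irreversible: from here on, URLs for schemes the
    // factory claims resolve to Hadoop's FsUrlConnection.
    URL.setURLStreamHandlerFactory(
        new FsUrlStreamHandlerFactory(new Configuration()));
    URL url = new URL("http://example.blob.core.windows.net/container/blob");
    // The Azure SDK casts like this internally; if the factory claimed
    // the http scheme, this throws "FsUrlConnection cannot be cast to
    // java.net.HttpURLConnection".
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.disconnect();
  }
}
{code}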



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


