[jira] [Created] (HADOOP-15580) ATSv2 HBase tests are failing with ClassNotFoundException

2018-07-03 Thread Rohith Sharma K S (JIRA)
Rohith Sharma K S created HADOOP-15580:
--

 Summary: ATSv2 HBase tests are failing with ClassNotFoundException
 Key: HADOOP-15580
 URL: https://issues.apache.org/jira/browse/HADOOP-15580
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Rohith Sharma K S


Recent QA reports show that the ATSv2 HBase tests are failing with a 
ClassNotFoundException. 

This looks to be a regression from a hadoop-common patch or some other patch. We 
need to figure out which JIRA broke this and fix the test failures. 

{noformat}
[ERROR] 
org.apache.hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageApps
  Time elapsed: 0.102 s  <<< ERROR!
java.lang.NoClassDefFoundError: 
org/apache/hadoop/crypto/key/KeyProviderTokenIssuer
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
at 
java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at 
org.apache.hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageApps.setupBeforeClass(TestHBaseTimelineStorageApps.java:97)
Caused by: java.lang.ClassNotFoundException: 
org.apache.hadoop.crypto.key.KeyProviderTokenIssuer
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
{noformat}
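One way to narrow down a NoClassDefFoundError like the one above is to probe the classpath from inside the failing JVM and see which jar (if any) supplies the missing class. A hypothetical diagnostic sketch (the class name is taken from the trace; ClasspathProbe is not part of Hadoop):

```java
// Probe whether a class is resolvable on the current classpath and,
// if so, which code source (jar/directory) it was loaded from.
public class ClasspathProbe {
    static String probe(String className) {
        try {
            Class<?> c = Class.forName(className);
            java.security.CodeSource src = c.getProtectionDomain().getCodeSource();
            // Bootstrap classes (e.g. java.lang.*) have no CodeSource.
            return "loaded from " + (src == null ? "bootstrap classpath" : src.getLocation());
        } catch (ClassNotFoundException e) {
            return "NOT FOUND on classpath";
        }
    }

    public static void main(String[] args) {
        // The class the HBase tests fail to resolve:
        System.out.println(probe("org.apache.hadoop.crypto.key.KeyProviderTokenIssuer"));
        // Sanity check, always resolvable:
        System.out.println(probe("java.lang.String"));
    }
}
```

If the class is reported as missing while the source tree contains it, the test module is most likely compiled against a newer hadoop-common API than the jar version on its runtime classpath.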



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14624) Add GenericTestUtils.DelayAnswer that accept slf4j logger API

2018-07-03 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16532211#comment-16532211
 ] 

genericqa commented on HADOOP-14624:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 13 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 40s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
50s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 28m  
2s{color} | {color:green} root generated 0 new + 1581 unchanged - 3 fixed = 
1581 total (was 1584) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 21s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 
31s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}121m 36s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
45s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}260m 43s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.server.datanode.TestDataNodeUUID |
|   | hadoop.hdfs.TestReconstructStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HADOOP-14624 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12930199/HADOOP-14624.017.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b4302f922cbb 3.13.0-137-generic #186-Ubuntu SMP Mon Dec 4 

[jira] [Commented] (HADOOP-15531) Use commons-text instead of commons-lang in some classes to fix deprecation warnings

2018-07-03 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16532203#comment-16532203
 ] 

Takanobu Asanuma commented on HADOOP-15531:
---

I confirmed that the failed tests passed in my local environment.

> Use commons-text instead of commons-lang in some classes to fix deprecation 
> warnings
> 
>
> Key: HADOOP-15531
> URL: https://issues.apache.org/jira/browse/HADOOP-15531
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HADOOP-15531.1.patch
>
>
> After upgrading commons-lang from 2.6 to 3.7, some classes such as 
> {{StringEscapeUtils}} and {{WordUtils}} became deprecated and moved to 
> commons-text.
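The migration the description refers to is mostly a package move. A hedged before/after sketch (assumes the org.apache.commons:commons-text artifact is on the classpath; method names per commons-text 1.x):

```java
// Before (deprecated since commons-lang 3.6):
//   import org.apache.commons.lang3.StringEscapeUtils;
//   import org.apache.commons.lang3.text.WordUtils;

// After (requires the commons-text dependency):
import org.apache.commons.text.StringEscapeUtils;
import org.apache.commons.text.WordUtils;

public class EscapeMigration {
    public static void main(String[] args) {
        String input = "tom & jerry";
        System.out.println(StringEscapeUtils.escapeHtml4(input)); // tom &amp; jerry
        System.out.println(WordUtils.capitalize(input));          // Tom & Jerry
    }
}
```

The call sites stay the same; only the imports (and the Maven dependency) change, which is what makes this kind of cleanup low-risk.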






[jira] [Updated] (HADOOP-14624) Add GenericTestUtils.DelayAnswer that accept slf4j logger API

2018-07-03 Thread Ian Pickering (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ian Pickering updated HADOOP-14624:
---
Attachment: HADOOP-14624.017.patch

> Add GenericTestUtils.DelayAnswer that accept slf4j logger API
> -
>
> Key: HADOOP-14624
> URL: https://issues.apache.org/jira/browse/HADOOP-14624
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Wenxin He
>Assignee: Wenxin He
>Priority: Major
> Attachments: HADOOP-14624.001.patch, HADOOP-14624.002.patch, 
> HADOOP-14624.003.patch, HADOOP-14624.004.patch, HADOOP-14624.005.patch, 
> HADOOP-14624.006.patch, HADOOP-14624.007.patch, HADOOP-14624.008.patch, 
> HADOOP-14624.009.patch, HADOOP-14624.010.patch, HADOOP-14624.011.patch, 
> HADOOP-14624.012.patch, HADOOP-14624.013.patch, HADOOP-14624.014.patch, 
> HADOOP-14624.015.patch, HADOOP-14624.016.patch, HADOOP-14624.017.patch
>
>
> Split from HADOOP-14539.
> Currently GenericTestUtils.DelayAnswer only accepts the commons-logging logger 
> API. Since we are migrating the APIs to slf4j, the slf4j logger API should be 
> accepted as well.






[jira] [Commented] (HADOOP-14624) Add GenericTestUtils.DelayAnswer that accept slf4j logger API

2018-07-03 Thread Ian Pickering (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16532062#comment-16532062
 ] 

Ian Pickering commented on HADOOP-14624:


A method in one of the tests was using it, but with the old apache-commons Log 
class. Because I had to change that logger to the slf4j Logger class for 
unrelated reasons, I added a new overload for it.

> Add GenericTestUtils.DelayAnswer that accept slf4j logger API
> -
>
> Key: HADOOP-14624
> URL: https://issues.apache.org/jira/browse/HADOOP-14624
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Wenxin He
>Assignee: Wenxin He
>Priority: Major
> Attachments: HADOOP-14624.001.patch, HADOOP-14624.002.patch, 
> HADOOP-14624.003.patch, HADOOP-14624.004.patch, HADOOP-14624.005.patch, 
> HADOOP-14624.006.patch, HADOOP-14624.007.patch, HADOOP-14624.008.patch, 
> HADOOP-14624.009.patch, HADOOP-14624.010.patch, HADOOP-14624.011.patch, 
> HADOOP-14624.012.patch, HADOOP-14624.013.patch, HADOOP-14624.014.patch, 
> HADOOP-14624.015.patch, HADOOP-14624.016.patch
>
>
> Split from HADOOP-14539.
> Currently GenericTestUtils.DelayAnswer only accepts the commons-logging logger 
> API. Since we are migrating the APIs to slf4j, the slf4j logger API should be 
> accepted as well.






[jira] [Commented] (HADOOP-15571) After HADOOP-13440, multiple filesystems/file-contexts created with the same Configuration object are forced to have the same umask

2018-07-03 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16532059#comment-16532059
 ] 

Xiaoyu Yao commented on HADOOP-15571:
-

+1 for the latest patch. Thanks [~vinodkv] for reporting and fixing it. 

> After HADOOP-13440, multiple filesystems/file-contexts created with the same 
> Configuration object are forced to have the same umask
> ---
>
> Key: HADOOP-15571
> URL: https://issues.apache.org/jira/browse/HADOOP-15571
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Vinod Kumar Vavilapalli
>Priority: Critical
> Attachments: HADOOP-15571.1.txt, HADOOP-15571.txt
>
>
> Ran into a super hard-to-debug issue due to this. [Edit: turns out to be the 
> same issue as YARN-5749 that [~Tao Yang] ran into]
> h4. Issue
> Configuration conf = new Configuration();
>  fc1 = FileContext.getFileContext(uri1, conf);
>  fc2 = FileContext.getFileContext(uri2, conf);
>  fc1.setUMask(umask_for_fc1); // Screws up umask for fc2 also!
> This was not the case before HADOOP-13440.
> h4. Symptoms:
> h5. Scenario I ran into
> When trying to localize a HDFS directory (hdfs:///my/dir/1.txt), NodeManager 
> tries to replicate the directory structure on the local file-system 
> ($yarn-local-dirs/filecache/my/dir/1.txt).
> Now depending on whether NM has ever done a log-aggregation (completely 
> unrelated code that sets umask to be 137 for its own files on HDFS), the 
> directories /my and /my/dir on local-fs may have different permissions. In 
> the specific case where NM did log-aggregation, /my/dir was created with 137 
> umask and so localization of 1.txt completely failed due to absent directory 
> executable permissions!
> h5. Previous scenarios:
> We ran into this before in test-cases and instead of fixing the root-cause, 
> we just fixed the test-cases: YARN-5679 / YARN-5749
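The bug pattern in the description can be reproduced in miniature with any mutable configuration object shared between instances. A simplified sketch (hypothetical Conf/FileContext classes, not the actual Hadoop code):

```java
import java.util.HashMap;
import java.util.Map;

// Minimal analogue of the bug: two "file contexts" that both read their
// umask from a shared configuration map instead of a per-instance field.
public class SharedUmaskDemo {
    static class Conf {
        final Map<String, String> props = new HashMap<>();
    }

    static class FileContext {
        final Conf conf;
        FileContext(Conf conf) { this.conf = conf; }

        // Buggy: writes through to the shared Conf, so every context
        // created from the same Conf object changes too.
        void setUMask(String umask) {
            conf.props.put("fs.permissions.umask-mode", umask);
        }
        String getUMask() {
            return conf.props.getOrDefault("fs.permissions.umask-mode", "022");
        }
    }

    public static void main(String[] args) {
        Conf conf = new Conf();
        FileContext fc1 = new FileContext(conf);
        FileContext fc2 = new FileContext(conf);
        fc1.setUMask("137");
        System.out.println(fc2.getUMask()); // prints "137": fc2's umask changed too
    }
}
```

The fix direction is the obvious one: capture the umask into a per-instance field at construction time instead of reading and writing it through the shared Configuration.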






[jira] [Comment Edited] (HADOOP-14624) Add GenericTestUtils.DelayAnswer that accept slf4j logger API

2018-07-03 Thread Giovanni Matteo Fumarola (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16532058#comment-16532058
 ] 

Giovanni Matteo Fumarola edited comment on HADOOP-14624 at 7/3/18 11:19 PM:


Thanks [~iapicker] for the hard work.
 qq. why do you need {{logStorageContents}}?

and can you change 

* LOG.info("In directory " + curDir); ->  LOG.info("In directory {}", curDir);
* LOG.info(" file " + f.getAbsolutePath() + "; len = " + f.length()); -> 
LOG.info(" file {}; len = {}", f.getAbsolutePath(), f.length());


was (Author: giovanni.fumarola):
Thanks [~iapicker] for the hard work.
qq. why do you need \{{logStorageContents}}?

and can you change 

*  LOG.info("In directory " + curDir); ->  LOG.info("In directory {}", curDir);

* LOG.info(" file {}; len = {}", f.getAbsolutePath() ,f.length());

> Add GenericTestUtils.DelayAnswer that accept slf4j logger API
> -
>
> Key: HADOOP-14624
> URL: https://issues.apache.org/jira/browse/HADOOP-14624
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Wenxin He
>Assignee: Wenxin He
>Priority: Major
> Attachments: HADOOP-14624.001.patch, HADOOP-14624.002.patch, 
> HADOOP-14624.003.patch, HADOOP-14624.004.patch, HADOOP-14624.005.patch, 
> HADOOP-14624.006.patch, HADOOP-14624.007.patch, HADOOP-14624.008.patch, 
> HADOOP-14624.009.patch, HADOOP-14624.010.patch, HADOOP-14624.011.patch, 
> HADOOP-14624.012.patch, HADOOP-14624.013.patch, HADOOP-14624.014.patch, 
> HADOOP-14624.015.patch, HADOOP-14624.016.patch
>
>
> Split from HADOOP-14539.
> Currently GenericTestUtils.DelayAnswer only accepts the commons-logging logger 
> API. Since we are migrating the APIs to slf4j, the slf4j logger API should be 
> accepted as well.






[jira] [Commented] (HADOOP-14624) Add GenericTestUtils.DelayAnswer that accept slf4j logger API

2018-07-03 Thread Giovanni Matteo Fumarola (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16532058#comment-16532058
 ] 

Giovanni Matteo Fumarola commented on HADOOP-14624:
---

Thanks [~iapicker] for the hard work.
qq. why do you need \{{logStorageContents}}?

and can you change 

*  LOG.info("In directory " + curDir); ->  LOG.info("In directory {}", curDir);

* LOG.info(" file {}; len = {}", f.getAbsolutePath(), f.length());
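The parameterized style requested above defers message construction to the logger, so disabled log levels cost nothing. A minimal stand-in for slf4j's {} placeholder substitution (hypothetical helper, not the slf4j API):

```java
public class ParamLog {
    // Tiny stand-in for slf4j-style "{}" placeholder substitution:
    // each "{}" in the template is replaced by the next argument, in order.
    static String format(String template, Object... args) {
        StringBuilder out = new StringBuilder();
        int cursor = 0, argIdx = 0, hit;
        while (argIdx < args.length && (hit = template.indexOf("{}", cursor)) >= 0) {
            out.append(template, cursor, hit).append(args[argIdx++]);
            cursor = hit + 2;
        }
        out.append(template.substring(cursor));
        return out.toString();
    }

    public static void main(String[] args) {
        // Concatenation ("In directory " + curDir) builds the string even
        // when the log level is disabled; the parameterized form lets the
        // logger skip formatting entirely.
        System.out.println(format("In directory {}", "/tmp/data"));
        System.out.println(format(" file {}; len = {}", "/tmp/data/1.txt", 42L));
    }
}
```

A real slf4j Logger only performs this substitution when the corresponding level is enabled, which is why reviewers prefer it over string concatenation in hot paths.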

> Add GenericTestUtils.DelayAnswer that accept slf4j logger API
> -
>
> Key: HADOOP-14624
> URL: https://issues.apache.org/jira/browse/HADOOP-14624
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Wenxin He
>Assignee: Wenxin He
>Priority: Major
> Attachments: HADOOP-14624.001.patch, HADOOP-14624.002.patch, 
> HADOOP-14624.003.patch, HADOOP-14624.004.patch, HADOOP-14624.005.patch, 
> HADOOP-14624.006.patch, HADOOP-14624.007.patch, HADOOP-14624.008.patch, 
> HADOOP-14624.009.patch, HADOOP-14624.010.patch, HADOOP-14624.011.patch, 
> HADOOP-14624.012.patch, HADOOP-14624.013.patch, HADOOP-14624.014.patch, 
> HADOOP-14624.015.patch, HADOOP-14624.016.patch
>
>
> Split from HADOOP-14539.
> Currently GenericTestUtils.DelayAnswer only accepts the commons-logging logger 
> API. Since we are migrating the APIs to slf4j, the slf4j logger API should be 
> accepted as well.






[jira] [Commented] (HADOOP-15571) After HADOOP-13440, multiple filesystems/file-contexts created with the same Configuration object are forced to have the same umask

2018-07-03 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16532057#comment-16532057
 ] 

genericqa commented on HADOOP-15571:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue}  0m  
3s{color} | {color:blue} The patch file was not named according to hadoop's 
naming conventions. Please see https://wiki.apache.org/hadoop/HowToContribute 
for instructions. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
42s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 30m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 28s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 30m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 30m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 40s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 
23s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
28s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}147m 24s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HADOOP-15571 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12930183/HADOOP-15571.1.txt |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 2135b58adbc2 3.13.0-137-generic #186-Ubuntu SMP Mon Dec 4 
19:09:19 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c0ef7e7 |
| 

[jira] [Commented] (HADOOP-15546) ABFS: tune imports & javadocs

2018-07-03 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16532050#comment-16532050
 ] 

genericqa commented on HADOOP-15546:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} HADOOP-15407 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  6m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
 3s{color} | {color:green} HADOOP-15407 passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 22m 
12s{color} | {color:red} root in HADOOP-15407 failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} HADOOP-15407 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
51s{color} | {color:green} HADOOP-15407 passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  5m  
1s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
23s{color} | {color:green} HADOOP-15407 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
33s{color} | {color:green} HADOOP-15407 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 9s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 21m 
35s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 21m 35s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m 
35s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
11s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
21s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}107m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HADOOP-15546 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12930187/HADOOP-15546-HADOOP-15407-002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  findbugs  checkstyle  |
| uname | Linux 7466e802518d 4.4.0-121-generic #145-Ubuntu SMP Fri Apr 13 
13:47:23 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HADOOP-15407 / 538fcf8 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 

[jira] [Commented] (HADOOP-14624) Add GenericTestUtils.DelayAnswer that accept slf4j logger API

2018-07-03 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16532039#comment-16532039
 ] 

genericqa commented on HADOOP-14624:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 13 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 30m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 14s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 28m  
1s{color} | {color:green} root generated 0 new + 1581 unchanged - 3 fixed = 
1581 total (was 1584) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 15s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
33s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 99m  9s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
52s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}238m 50s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HADOOP-14624 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12930173/HADOOP-14624.016.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c2f1cc60d577 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c0ef7e7 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |

[jira] [Commented] (HADOOP-15215) s3guard set-capacity command to fail on read/write of 0

2018-07-03 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16531939#comment-16531939
 ] 

Hudson commented on HADOOP-15215:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14522 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14522/])
HADOOP-15215 s3guard set-capacity command to fail on read/write of 0 (fabbri: 
rev 93ac01cb59b99b84b4f1ff26c089dcb5ce1b7c89)
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardTool.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/AbstractS3GuardToolTestBase.java


> s3guard set-capacity command to fail on read/write of 0
> ---
>
> Key: HADOOP-15215
> URL: https://issues.apache.org/jira/browse/HADOOP-15215
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Minor
> Fix For: 3.2.0
>
> Attachments: HADOOP-15215.001.patch, HADOOP-15215.002.patch
>
>
> The command {{hadoop s3guard set-capacity -read 0 s3a://bucket}} will get 
> all the way to the AWS SDK before it's rejected; if you pass in a value of -1, 
> we fail fast.
> The CLI check should really be failing on <= 0, not < 0.
> You still get a stack trace, so it's not that important.
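The fix described above is a one-line tightening of the CLI validation. As a hedged sketch (class and method names here are illustrative, not the actual S3GuardTool code), the check would look like:

```java
// Hypothetical sketch of the fail-fast check this JIRA asks for: reject
// read/write capacity values <= 0 before any AWS SDK call is made.
public class CapacityCheck {
    /** Throws IllegalArgumentException for a capacity value <= 0. */
    static int validateCapacity(String option, int value) {
        // The original CLI only rejected value < 0, so 0 reached the SDK.
        if (value <= 0) {
            throw new IllegalArgumentException(
                option + " must be > 0, got " + value);
        }
        return value;
    }

    public static void main(String[] args) {
        System.out.println(validateCapacity("-read", 10)); // accepted
        try {
            validateCapacity("-read", 0);                  // now fails fast
        } catch (IllegalArgumentException expected) {
            System.out.println("rejected: " + expected.getMessage());
        }
    }
}
```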



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15546) ABFS: tune imports & javadocs

2018-07-03 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16531927#comment-16531927
 ] 

Steve Loughran commented on HADOOP-15546:
-

Patch 002
* rebased onto the current HADOOP-15407 branch
* review some more files; fix up log statements & a few IDE hints
* for {{AbfsOutputStream}}, try to keep exceptions as detailed as possible, in 
type and in inner causes
* look at some tests
* for {{AbfsClient}}, I moved to static imports of all the constants, which keeps 
the code tighter, though it could be argued that keeping those refs is a good thing.

Testing: I haven't got the abfs tests to work yet. HADOOP-15579 still exists; 
if I bypass that, the tests are skipped, which makes me conclude that some 
assume() calls are getting in the way.

I think I'm going to have to look at the test setup next. I'd like to move 
them all to use the contract test setup; failing that, at least use 
ContractTestUtils for all their FS operations and assertions, both for less 
code and for more useful info on failure. There aren't enough diagnostics right 
now to help a misconfigured user even begin to diagnose what's up, which is 
where I'm stuck right now.

> ABFS: tune imports & javadocs
> -
>
> Key: HADOOP-15546
> URL: https://issues.apache.org/jira/browse/HADOOP-15546
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: HADOOP-15407
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15546-001.patch, 
> HADOOP-15546-HADOOP-15407-001.patch, HADOOP-15546-HADOOP-15407-002.patch
>
>
> Followup on HADOOP-15540 with some initial review tuning
> * ordering of imports
> * rely on azure-auth-keys.xml to store credentials (change imports, 
> docs,.gitignore)
> * log4j -> info
> * add a "." to the first sentence of all the javadocs I noticed.
> * remove @Public annotations except for some constants (which includes some 
> commitment to maintain them).
> * move the AbstractFS declarations out of the src/test/resources XML file 
> into core-default.xml for all to use
> * other IDE-suggested tweaks
> No actual code changes here; just setting things up better for >1 person 
> editing & testing






[jira] [Updated] (HADOOP-15546) ABFS: tune imports & javadocs

2018-07-03 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15546:

Status: Patch Available  (was: Open)







[jira] [Updated] (HADOOP-15546) ABFS: tune imports & javadocs

2018-07-03 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15546:

Attachment: HADOOP-15546-HADOOP-15407-002.patch







[jira] [Updated] (HADOOP-15215) s3guard set-capacity command to fail on read/write of 0

2018-07-03 Thread Aaron Fabbri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Fabbri updated HADOOP-15215:
--
   Resolution: Fixed
Fix Version/s: 3.2.0
   Status: Resolved  (was: Patch Available)

committed to trunk. Thanks for the contribution [~gabor.bota]







[jira] [Updated] (HADOOP-15546) ABFS: tune imports & javadocs

2018-07-03 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15546:

Status: Open  (was: Patch Available)

> ABFS: tune imports & javadocs
> -
>
> Key: HADOOP-15546
> URL: https://issues.apache.org/jira/browse/HADOOP-15546
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: HADOOP-15407
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15546-001.patch, 
> HADOOP-15546-HADOOP-15407-001.patch
>
>
> Followup on HADOOP-15540 with some initial review tuning
> * ordering of imports
> * rely on azure-auth-keys.xml to store credentials (change imports, 
> docs,.gitignore)
> * log4j -> info
> * add a "." to the first sentence of all the javadocs I noticed.
> * remove @Public annotations except for some constants (which includes some 
> commitment to maintain them).
> * move the AbstractFS declarations out of the src/test/resources XML file 
> into core-default.xml for all to use
> * other IDE-suggested tweaks
> No actual code changes here; just setting things up better for >1 person 
> editing & testing






[jira] [Updated] (HADOOP-15571) After HADOOP-13440, multiple filesystems/file-contexts created with the same Configuration object are forced to have the same umask

2018-07-03 Thread Vinod Kumar Vavilapalli (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HADOOP-15571:
-
Status: Open  (was: Patch Available)

Tx for the +1 on the approach, [~xyao] and [~ste...@apache.org].

Uploading a new patch with the test-case. It makes sure that:
 - as long as no explicit FileContext.setUMask() calls are made, conf updates 
are reflected
 - once an explicit API call is made, it takes precedence over any conf updates
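To make the failure mode concrete, here is a minimal self-contained model of the shared-umask behavior the test-case guards against. These are stand-in classes, not Hadoop's actual Configuration or FileContext; the point is only that setUMask() writing through to a shared mutable conf leaks into every other context built from it:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal self-contained model of the HADOOP-13440 regression, NOT Hadoop code:
// two "FileContext" objects share one mutable "Configuration", so a
// setUMask() on one is observed by the other.
public class SharedUmaskDemo {
    static class Configuration {
        private final Map<String, String> props = new HashMap<>();
        String get(String k, String dflt) { return props.getOrDefault(k, dflt); }
        void set(String k, String v) { props.put(k, v); }
    }

    static class FileContext {
        private final Configuration conf;
        FileContext(Configuration conf) { this.conf = conf; }
        // Buggy pattern: setUMask writes through to the shared conf.
        void setUMask(String umask) { conf.set("fs.permissions.umask-mode", umask); }
        String getUMask() { return conf.get("fs.permissions.umask-mode", "022"); }
    }

    public static void main(String[] args) {
        Configuration conf = new Configuration();
        FileContext fc1 = new FileContext(conf);
        FileContext fc2 = new FileContext(conf);
        fc1.setUMask("137");
        // fc2 never asked for 137, yet observes it:
        System.out.println("fc2 umask = " + fc2.getUMask()); // → fc2 umask = 137
    }
}
```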

> After HADOOP-13440, multiple filesystems/file-contexts created with the same 
> Configuration object are forced to have the same umask
> ---
>
> Key: HADOOP-15571
> URL: https://issues.apache.org/jira/browse/HADOOP-15571
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Vinod Kumar Vavilapalli
>Priority: Critical
> Attachments: HADOOP-15571.1.txt, HADOOP-15571.txt
>
>
> Ran into a super hard-to-debug issue due to this. [Edit: Turns out the same 
> issue as YARN-5749 that [~Tao Yang] ran into]
> h4. Issue
> Configuration conf = new Configuration();
>  fc1 = FileContext.getFileContext(uri1, conf);
>  fc2 = FileContext.getFileContext(uri2, conf);
>  fc1.setUMask(umask_for_fc1); // Screws up umask for fc2 also!
> This was not the case before HADOOP-13440.
> h4. Symptoms:
> h5. Scenario I ran into
> When trying to localize a HDFS directory (hdfs:///my/dir/1.txt), NodeManager 
> tries to replicate the directory structure on the local file-system 
> ($yarn-local-dirs/filecache/my/dir/1.txt).
> Now depending on whether NM has ever done a log-aggregation (completely 
> unrelated code that sets umask to be 137 for its own files on HDFS), the 
> directories /my and /my/dir on local-fs may have different permissions. In 
> the specific case where NM did log-aggregation, /my/dir was created with 137 
> umask and so localization of 1.txt completely failed due to absent directory 
> executable permissions!
> h5. Previous scenarios:
> We ran into this before in test-cases and instead of fixing the root-cause, 
> we just fixed the test-cases: YARN-5679 / YARN-5749






[jira] [Updated] (HADOOP-15571) After HADOOP-13440, multiple filesystems/file-contexts created with the same Configuration object are forced to have the same umask

2018-07-03 Thread Vinod Kumar Vavilapalli (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HADOOP-15571:
-
Attachment: HADOOP-15571.1.txt







[jira] [Updated] (HADOOP-15571) After HADOOP-13440, multiple filesystems/file-contexts created with the same Configuration object are forced to have the same umask

2018-07-03 Thread Vinod Kumar Vavilapalli (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HADOOP-15571:
-
Status: Patch Available  (was: Open)







[jira] [Commented] (HADOOP-15359) IPC client hang in kerberized cluster due to JDK deadlock

2018-07-03 Thread Brahma Reddy Battula (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16531858#comment-16531858
 ] 

Brahma Reddy Battula commented on HADOOP-15359:
---

bq.Incidentally, I suspect this is related to HADOOP-15530 and HADOOP-15538.

Looks like all three are related. cc [~yzhangal]

Any chance of *CPU overload*, *OOM*, the JVM being busy with *garbage 
collection*, or *remote debugging* being enabled (i.e. 
-Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=XXX)?

If it's reproducible, can we use "jstack -m" to take a thread dump?
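Besides jstack -m (which also shows native frames), a Java-level deadlock of the kind quoted in this issue can be probed in-process via java.lang.management. A small illustrative sketch, unrelated to the actual Hadoop code:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;
import java.util.Arrays;

public class DeadlockProbe {
    /** Returns ids of Java-level deadlocked threads, or null if none. */
    static long[] findDeadlocks() {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        // Checks both object monitors and ownable synchronizers.
        return mx.findDeadlockedThreads();
    }

    public static void main(String[] args) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        // Dump all stacks: roughly the Java frames "jstack <pid>" shows.
        for (ThreadInfo info : mx.dumpAllThreads(true, true)) {
            System.out.print(info);
        }
        System.out.println("deadlocked threads: " + Arrays.toString(findDeadlocks()));
    }
}
```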

> IPC client hang in kerberized cluster due to JDK deadlock
> -
>
> Key: HADOOP-15359
> URL: https://issues.apache.org/jira/browse/HADOOP-15359
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.6.0, 2.8.0, 3.0.0
>Reporter: Xiao Chen
>Priority: Major
> Attachments: 1.jstack, 2.jstack
>
>
> In a recent internal testing, we have found a DFS client hang. Further 
> inspecting jstack shows the following:
> {noformat}
> "IPC Client (552936351) connection toHOSTNAME:8020 from PRINCIPAL" #7468 
> daemon prio=5 os_prio=0 tid=0x7f6bb306c000 nid=0x1c76e waiting for 
> monitor entry [0x7f6bc2bd6000]
>java.lang.Thread.State: BLOCKED (on object monitor)
> at java.security.Provider.getService(Provider.java:1035)
> - waiting to lock <0x80277040> (a sun.security.provider.Sun)
> at 
> sun.security.jca.ProviderList$ServiceList.tryGet(ProviderList.java:444)
> at 
> sun.security.jca.ProviderList$ServiceList.access$200(ProviderList.java:376)
> at 
> sun.security.jca.ProviderList$ServiceList$1.hasNext(ProviderList.java:486)
> at javax.crypto.Cipher.getInstance(Cipher.java:513)
> at 
> sun.security.krb5.internal.crypto.dk.Des3DkCrypto.getCipher(Des3DkCrypto.java:202)
> at sun.security.krb5.internal.crypto.dk.DkCrypto.dr(DkCrypto.java:484)
> at sun.security.krb5.internal.crypto.dk.DkCrypto.dk(DkCrypto.java:447)
> at 
> sun.security.krb5.internal.crypto.dk.DkCrypto.calculateChecksum(DkCrypto.java:413)
> at 
> sun.security.krb5.internal.crypto.Des3.calculateChecksum(Des3.java:59)
> at 
> sun.security.jgss.krb5.CipherHelper.calculateChecksum(CipherHelper.java:231)
> at 
> sun.security.jgss.krb5.MessageToken.getChecksum(MessageToken.java:466)
> at 
> sun.security.jgss.krb5.MessageToken.verifySignAndSeqNumber(MessageToken.java:374)
> at 
> sun.security.jgss.krb5.WrapToken.getDataFromBuffer(WrapToken.java:284)
> at sun.security.jgss.krb5.WrapToken.getData(WrapToken.java:209)
> at sun.security.jgss.krb5.WrapToken.getData(WrapToken.java:182)
> at sun.security.jgss.krb5.Krb5Context.unwrap(Krb5Context.java:1053)
> at sun.security.jgss.GSSContextImpl.unwrap(GSSContextImpl.java:403)
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Base.unwrap(GssKrb5Base.java:77)
> at 
> org.apache.hadoop.security.SaslRpcClient$WrappedInputStream.readNextRpcPacket(SaslRpcClient.java:617)
> at 
> org.apache.hadoop.security.SaslRpcClient$WrappedInputStream.read(SaslRpcClient.java:583)
> - locked <0x83444878> (a java.nio.HeapByteBuffer)
> at java.io.FilterInputStream.read(FilterInputStream.java:133)
> at 
> org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:553)
> at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
> at java.io.BufferedInputStream.read(BufferedInputStream.java:265)
> - locked <0x834448c0> (a java.io.BufferedInputStream)
> at java.io.DataInputStream.readInt(DataInputStream.java:387)
> at 
> org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1113)
> at org.apache.hadoop.ipc.Client$Connection.run(Client.java:1006)
> {noformat}
> and at the end of jstack:
> {noformat}
> Found one Java-level deadlock:
> =
> "IPC Parameter Sending Thread #29":
>   waiting to lock monitor 0x17ff49f8 (object 0x80277040, a 
> sun.security.provider.Sun),
>   which is held by UNKNOWN_owner_addr=0x50607000
> Java stack information for the threads listed above:
> ===
> "IPC Parameter Sending Thread #29":
> at java.security.Provider.getService(Provider.java:1035)
> - waiting to lock <0x80277040> (a sun.security.provider.Sun)
> at 
> sun.security.jca.ProviderList$ServiceList.tryGet(ProviderList.java:437)
> at 
> sun.security.jca.ProviderList$ServiceList.access$200(ProviderList.java:376)
> at 
> 

[jira] [Commented] (HADOOP-15528) Deprecate ContainerLaunch#link by using FileUtil#SymLink

2018-07-03 Thread Giovanni Matteo Fumarola (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16531835#comment-16531835
 ] 

Giovanni Matteo Fumarola commented on HADOOP-15528:
---

{noformat}
To make it clear, this comes from switching from winutils symlink to 
FileUtil#symLink in ContainerLaunch. Right?{noformat}
Yes. 
{noformat}
The semantic change is a different discussion; that one comes from running the 
code directly before instead of adding it to the cmd/sh that launches the 
container.{noformat}
Yes. I added a bunch of folks; let's see what their feedback is.

> Deprecate ContainerLaunch#link by using FileUtil#SymLink
> 
>
> Key: HADOOP-15528
> URL: https://issues.apache.org/jira/browse/HADOOP-15528
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: HADOOP-15528-HADOOP-15461.v1.patch, 
> HADOOP-15528-HADOOP-15461.v2.patch, HADOOP-15528-HADOOP-15461.v3.patch
>
>
> {{ContainerLaunch}} currently uses its own utility to create links (including 
> winutils).
> This should be deprecated and rely on {{FileUtil#SymLink}} which is already 
> multi-platform and pure Java.






[jira] [Commented] (HADOOP-15528) Deprecate ContainerLaunch#link by using FileUtil#SymLink

2018-07-03 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HADOOP-15528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16531818#comment-16531818
 ] 

Íñigo Goiri commented on HADOOP-15528:
--

bq. I ran this patch on Windows, and I saw a good 35% reduction in latency for 
symlink and 7000 fewer IO ops than before.

To make it clear, this comes from switching from {{winutils symlink}} to 
{{FileUtil#symLink}} in {{ContainerLaunch}}. Right?
The semantic change is a different discussion; that one comes from running the 
code directly before instead of adding it to the cmd/sh that launches the 
container.







[jira] [Commented] (HADOOP-15528) Deprecate ContainerLaunch#link by using FileUtil#SymLink

2018-07-03 Thread Giovanni Matteo Fumarola (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16531802#comment-16531802
 ] 

Giovanni Matteo Fumarola commented on HADOOP-15528:
---

This JIRA changes the way a container starts.

Under the old semantics, we created the directories and all the symlinks *AFTER* 
the container started.

Under the new semantics, we create everything *BEFORE* the container starts. We 
get more control over symlink failures and better performance.

I ran this patch on Windows, and I saw a good 35% reduction in latency for 
symlink and 7000 fewer IO ops than before.

In the future, we can add more retry logic around failures and possibly avoid 
starting a container when we cannot recover from a failure.

cc. [~subru] , [~curino] , [~leftnoteasy] , [~sunilg]
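The before/after ordering can be sketched with plain java.nio standing in for Hadoop's FileUtil#symLink (paths and names here are hypothetical): create the directories and the link first, so any failure surfaces before the container is "started":

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Illustrative sketch of the new ordering: prepare dirs and symlinks
// BEFORE launch, failing fast instead of crashing after the container is up.
public class PreLaunchSymlink {
    static void prepareLink(Path target, Path link) throws IOException {
        Files.createDirectories(link.getParent()); // directories first
        Files.createSymbolicLink(link, target);    // then the link; throws on failure
    }

    public static void main(String[] args) throws IOException {
        Path work = Files.createTempDirectory("container");
        Path target = Files.createFile(work.resolve("1.txt"));
        Path link = work.resolve("filecache/my/dir/1.txt");
        prepareLink(target, link);                 // done before container start
        System.out.println(Files.isSymbolicLink(link)); // true
    }
}
```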







[jira] [Updated] (HADOOP-14624) Add GenericTestUtils.DelayAnswer that accept slf4j logger API

2018-07-03 Thread Ian Pickering (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ian Pickering updated HADOOP-14624:
---
Attachment: HADOOP-14624.016.patch

> Add GenericTestUtils.DelayAnswer that accept slf4j logger API
> -
>
> Key: HADOOP-14624
> URL: https://issues.apache.org/jira/browse/HADOOP-14624
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Wenxin He
>Assignee: Wenxin He
>Priority: Major
> Attachments: HADOOP-14624.001.patch, HADOOP-14624.002.patch, 
> HADOOP-14624.003.patch, HADOOP-14624.004.patch, HADOOP-14624.005.patch, 
> HADOOP-14624.006.patch, HADOOP-14624.007.patch, HADOOP-14624.008.patch, 
> HADOOP-14624.009.patch, HADOOP-14624.010.patch, HADOOP-14624.011.patch, 
> HADOOP-14624.012.patch, HADOOP-14624.013.patch, HADOOP-14624.014.patch, 
> HADOOP-14624.015.patch, HADOOP-14624.016.patch
>
>
> Split from HADOOP-14539.
> Now GenericTestUtils.DelayAnswer only accepts commons-logging logger API. Now 
> we are migrating the APIs to slf4j, slf4j logger API should be accepted as 
> well.
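The shape of the requested change can be sketched with stand-in interfaces (these are not the real commons-logging, slf4j, or GenericTestUtils types): DelayAnswer simply gains an overloaded constructor accepting an slf4j-style Logger alongside the existing commons-logging one:

```java
// Hypothetical sketch of the migration: both logging APIs funnel into one
// internal consumer, so test code can pass either logger type.
public class DelayAnswerSketch {
    interface Log { void info(String msg); }    // commons-logging stand-in
    interface Logger { void info(String msg); } // slf4j stand-in

    static class DelayAnswer {
        private final java.util.function.Consumer<String> log;
        DelayAnswer(Log l) { this.log = l::info; }    // existing constructor
        DelayAnswer(Logger l) { this.log = l::info; } // new slf4j overload
        void waitStarted() { log.accept("waiting for call to proceed"); }
    }

    public static void main(String[] args) {
        StringBuilder sb = new StringBuilder();
        // Explicit cast picks the slf4j-style overload.
        new DelayAnswer((Logger) sb::append).waitStarted();
        System.out.println(sb); // waiting for call to proceed
    }
}
```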






[jira] [Updated] (HADOOP-14624) Add GenericTestUtils.DelayAnswer that accept slf4j logger API

2018-07-03 Thread Ian Pickering (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ian Pickering updated HADOOP-14624:
---
Attachment: HADOOP-14624.015.patch

> Add GenericTestUtils.DelayAnswer that accept slf4j logger API
> -
>
> Key: HADOOP-14624
> URL: https://issues.apache.org/jira/browse/HADOOP-14624
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Wenxin He
>Assignee: Wenxin He
>Priority: Major
> Attachments: HADOOP-14624.001.patch, HADOOP-14624.002.patch, 
> HADOOP-14624.003.patch, HADOOP-14624.004.patch, HADOOP-14624.005.patch, 
> HADOOP-14624.006.patch, HADOOP-14624.007.patch, HADOOP-14624.008.patch, 
> HADOOP-14624.009.patch, HADOOP-14624.010.patch, HADOOP-14624.011.patch, 
> HADOOP-14624.012.patch, HADOOP-14624.013.patch, HADOOP-14624.014.patch, 
> HADOOP-14624.015.patch
>
>
> Split from HADOOP-14539.
> Now GenericTestUtils.DelayAnswer only accepts commons-logging logger API. Now 
> we are migrating the APIs to slf4j, slf4j logger API should be accepted as 
> well.






[jira] [Commented] (HADOOP-14624) Add GenericTestUtils.DelayAnswer that accept slf4j logger API

2018-07-03 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16531786#comment-16531786
 ] 

genericqa commented on HADOOP-14624:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HADOOP-14624 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-14624 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12930172/HADOOP-14624.015.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14854/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add GenericTestUtils.DelayAnswer that accept slf4j logger API
> -
>
> Key: HADOOP-14624
> URL: https://issues.apache.org/jira/browse/HADOOP-14624
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Wenxin He
>Assignee: Wenxin He
>Priority: Major
> Attachments: HADOOP-14624.001.patch, HADOOP-14624.002.patch, 
> HADOOP-14624.003.patch, HADOOP-14624.004.patch, HADOOP-14624.005.patch, 
> HADOOP-14624.006.patch, HADOOP-14624.007.patch, HADOOP-14624.008.patch, 
> HADOOP-14624.009.patch, HADOOP-14624.010.patch, HADOOP-14624.011.patch, 
> HADOOP-14624.012.patch, HADOOP-14624.013.patch, HADOOP-14624.014.patch, 
> HADOOP-14624.015.patch
>
>
> Split from HADOOP-14539.
> Now GenericTestUtils.DelayAnswer only accepts the commons-logging logger API. As 
> we are migrating the APIs to slf4j, the slf4j Logger API should be accepted as 
> well.






[jira] [Commented] (HADOOP-15528) Deprecate ContainerLaunch#link by using FileUtil#SymLink

2018-07-03 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HADOOP-15528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16531784#comment-16531784
 ] 

Íñigo Goiri commented on HADOOP-15528:
--

{quote}
2 possible future improvements that we can do within this Jira or in a future 
jira:
* For each resource, we try to create the directory before creating the symlink. 
We should move the directory creation into ContainerExecutor.
* If the symlink fails, we still start the container and crash afterwards. We 
should avoid starting the container.
{quote}

[^HADOOP-15528-HADOOP-15461.v3.patch] is changing the way containers are 
launched.
I'm not sure this is OK.
[~jlowe], [~ste...@apache.org], what are your thoughts on this?
I would be inclined towards keeping the old semantics.
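For reference, a minimal sketch of the fail-fast behaviour under discussion, using plain java.nio.file (the pure-Java, multi-platform machinery a utility like {{FileUtil#symLink}} can build on). The class and method names below are illustrative, not from the attached patch:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Illustrative sketch (not the actual ContainerLaunch code): create the parent
// directory and the symlink up front, and report failure so the caller can
// abort the launch instead of starting the container and crashing afterwards.
public class LocalizedResourceLinker {

  /** Returns true if the link was created, false if linking failed. */
  public static boolean tryLink(Path target, Path link) {
    try {
      // Directory creation pulled out of the launch script, as suggested above.
      Files.createDirectories(link.getParent());
      Files.createSymbolicLink(link, target);
      return true;
    } catch (IOException | UnsupportedOperationException e) {
      return false; // caller should skip starting the container
    }
  }

  public static void main(String[] args) throws IOException {
    Path dir = Files.createTempDirectory("container");
    Path target = Files.createFile(dir.resolve("resource.jar"));
    Path link = dir.resolve("usercache").resolve("resource.jar");
    System.out.println(tryLink(target, link));
  }
}
```

A caller in the launch path could then refuse to start the container whenever {{tryLink}} returns false, which would address the second bullet point.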

> Deprecate ContainerLaunch#link by using FileUtil#SymLink
> 
>
> Key: HADOOP-15528
> URL: https://issues.apache.org/jira/browse/HADOOP-15528
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: HADOOP-15528-HADOOP-15461.v1.patch, 
> HADOOP-15528-HADOOP-15461.v2.patch, HADOOP-15528-HADOOP-15461.v3.patch
>
>
> {{ContainerLaunch}} currently uses its own utility to create links (including 
> winutils).
> This should be deprecated and rely on {{FileUtil#SymLink}} which is already 
> multi-platform and pure Java.






[jira] [Commented] (HADOOP-14624) Add GenericTestUtils.DelayAnswer that accept slf4j logger API

2018-07-03 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16531765#comment-16531765
 ] 

genericqa commented on HADOOP-14624:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HADOOP-14624 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-14624 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12930163/HADOOP-14624.014.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14853/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add GenericTestUtils.DelayAnswer that accept slf4j logger API
> -
>
> Key: HADOOP-14624
> URL: https://issues.apache.org/jira/browse/HADOOP-14624
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Wenxin He
>Assignee: Wenxin He
>Priority: Major
> Attachments: HADOOP-14624.001.patch, HADOOP-14624.002.patch, 
> HADOOP-14624.003.patch, HADOOP-14624.004.patch, HADOOP-14624.005.patch, 
> HADOOP-14624.006.patch, HADOOP-14624.007.patch, HADOOP-14624.008.patch, 
> HADOOP-14624.009.patch, HADOOP-14624.010.patch, HADOOP-14624.011.patch, 
> HADOOP-14624.012.patch, HADOOP-14624.013.patch, HADOOP-14624.014.patch
>
>
> Split from HADOOP-14539.
> Now GenericTestUtils.DelayAnswer only accepts the commons-logging logger API. As 
> we are migrating the APIs to slf4j, the slf4j Logger API should be accepted as 
> well.






[jira] [Updated] (HADOOP-15560) ABFS: removed dependency injection and unnecessary dependencies

2018-07-03 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15560:

   Resolution: Fixed
Fix Version/s: HADOOP-15047
   Status: Resolved  (was: Patch Available)

> ABFS: removed dependency injection and unnecessary dependencies
> ---
>
> Key: HADOOP-15560
> URL: https://issues.apache.org/jira/browse/HADOOP-15560
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: HADOOP-15407
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Fix For: HADOOP-15047
>
> Attachments: HADOOP-15407-HADOOP-15407-009.patch
>
>
> # Removed dependency injection and unnecessary dependencies.
>  # Added tool to clean up test containers.






[jira] [Updated] (HADOOP-15560) ABFS: removed dependency injection and unnecessary dependencies

2018-07-03 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15560:

Affects Version/s: HADOOP-15407

> ABFS: removed dependency injection and unnecessary dependencies
> ---
>
> Key: HADOOP-15560
> URL: https://issues.apache.org/jira/browse/HADOOP-15560
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: HADOOP-15407
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Fix For: HADOOP-15047
>
> Attachments: HADOOP-15407-HADOOP-15407-009.patch
>
>
> # Removed dependency injection and unnecessary dependencies.
>  # Added tool to clean up test containers.






[jira] [Updated] (HADOOP-15560) ABFS: removed dependency injection and unnecessary dependencies

2018-07-03 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15560:

Component/s: fs/azure

> ABFS: removed dependency injection and unnecessary dependencies
> ---
>
> Key: HADOOP-15560
> URL: https://issues.apache.org/jira/browse/HADOOP-15560
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: HADOOP-15407
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Fix For: HADOOP-15047
>
> Attachments: HADOOP-15407-HADOOP-15407-009.patch
>
>
> # Removed dependency injection and unnecessary dependencies.
>  # Added tool to clean up test containers.






[jira] [Updated] (HADOOP-14624) Add GenericTestUtils.DelayAnswer that accept slf4j logger API

2018-07-03 Thread Ian Pickering (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ian Pickering updated HADOOP-14624:
---
Attachment: HADOOP-14624.014.patch

> Add GenericTestUtils.DelayAnswer that accept slf4j logger API
> -
>
> Key: HADOOP-14624
> URL: https://issues.apache.org/jira/browse/HADOOP-14624
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Wenxin He
>Assignee: Wenxin He
>Priority: Major
> Attachments: HADOOP-14624.001.patch, HADOOP-14624.002.patch, 
> HADOOP-14624.003.patch, HADOOP-14624.004.patch, HADOOP-14624.005.patch, 
> HADOOP-14624.006.patch, HADOOP-14624.007.patch, HADOOP-14624.008.patch, 
> HADOOP-14624.009.patch, HADOOP-14624.010.patch, HADOOP-14624.011.patch, 
> HADOOP-14624.012.patch, HADOOP-14624.013.patch, HADOOP-14624.014.patch
>
>
> Split from HADOOP-14539.
> Now GenericTestUtils.DelayAnswer only accepts the commons-logging logger API. As 
> we are migrating the APIs to slf4j, the slf4j Logger API should be accepted as 
> well.






[jira] [Commented] (HADOOP-15359) IPC client hang in kerberized cluster due to JDK deadlock

2018-07-03 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16531719#comment-16531719
 ] 

Wei-Chiu Chuang commented on HADOOP-15359:
--

Hi [~kihwal] thanks for chiming in.
Impalad is a C++ application that invokes Hadoop Java libraries, so I don't see 
a Java main entry point.

We saw similar symptoms multiple times on a particular cluster, but the specific 
symptoms change from time to time (e.g. sometimes jstack detected deadlocks, 
sometimes it didn't).

> IPC client hang in kerberized cluster due to JDK deadlock
> -
>
> Key: HADOOP-15359
> URL: https://issues.apache.org/jira/browse/HADOOP-15359
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.6.0, 2.8.0, 3.0.0
>Reporter: Xiao Chen
>Priority: Major
> Attachments: 1.jstack, 2.jstack
>
>
> In a recent internal testing, we have found a DFS client hang. Further 
> inspecting jstack shows the following:
> {noformat}
> "IPC Client (552936351) connection toHOSTNAME:8020 from PRINCIPAL" #7468 
> daemon prio=5 os_prio=0 tid=0x7f6bb306c000 nid=0x1c76e waiting for 
> monitor entry [0x7f6bc2bd6000]
>java.lang.Thread.State: BLOCKED (on object monitor)
> at java.security.Provider.getService(Provider.java:1035)
> - waiting to lock <0x80277040> (a sun.security.provider.Sun)
> at 
> sun.security.jca.ProviderList$ServiceList.tryGet(ProviderList.java:444)
> at 
> sun.security.jca.ProviderList$ServiceList.access$200(ProviderList.java:376)
> at 
> sun.security.jca.ProviderList$ServiceList$1.hasNext(ProviderList.java:486)
> at javax.crypto.Cipher.getInstance(Cipher.java:513)
> at 
> sun.security.krb5.internal.crypto.dk.Des3DkCrypto.getCipher(Des3DkCrypto.java:202)
> at sun.security.krb5.internal.crypto.dk.DkCrypto.dr(DkCrypto.java:484)
> at sun.security.krb5.internal.crypto.dk.DkCrypto.dk(DkCrypto.java:447)
> at 
> sun.security.krb5.internal.crypto.dk.DkCrypto.calculateChecksum(DkCrypto.java:413)
> at 
> sun.security.krb5.internal.crypto.Des3.calculateChecksum(Des3.java:59)
> at 
> sun.security.jgss.krb5.CipherHelper.calculateChecksum(CipherHelper.java:231)
> at 
> sun.security.jgss.krb5.MessageToken.getChecksum(MessageToken.java:466)
> at 
> sun.security.jgss.krb5.MessageToken.verifySignAndSeqNumber(MessageToken.java:374)
> at 
> sun.security.jgss.krb5.WrapToken.getDataFromBuffer(WrapToken.java:284)
> at sun.security.jgss.krb5.WrapToken.getData(WrapToken.java:209)
> at sun.security.jgss.krb5.WrapToken.getData(WrapToken.java:182)
> at sun.security.jgss.krb5.Krb5Context.unwrap(Krb5Context.java:1053)
> at sun.security.jgss.GSSContextImpl.unwrap(GSSContextImpl.java:403)
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Base.unwrap(GssKrb5Base.java:77)
> at 
> org.apache.hadoop.security.SaslRpcClient$WrappedInputStream.readNextRpcPacket(SaslRpcClient.java:617)
> at 
> org.apache.hadoop.security.SaslRpcClient$WrappedInputStream.read(SaslRpcClient.java:583)
> - locked <0x83444878> (a java.nio.HeapByteBuffer)
> at java.io.FilterInputStream.read(FilterInputStream.java:133)
> at 
> org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:553)
> at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
> at java.io.BufferedInputStream.read(BufferedInputStream.java:265)
> - locked <0x834448c0> (a java.io.BufferedInputStream)
> at java.io.DataInputStream.readInt(DataInputStream.java:387)
> at 
> org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1113)
> at org.apache.hadoop.ipc.Client$Connection.run(Client.java:1006)
> {noformat}
> and at the end of jstack:
> {noformat}
> Found one Java-level deadlock:
> =
> "IPC Parameter Sending Thread #29":
>   waiting to lock monitor 0x17ff49f8 (object 0x80277040, a 
> sun.security.provider.Sun),
>   which is held by UNKNOWN_owner_addr=0x50607000
> Java stack information for the threads listed above:
> ===
> "IPC Parameter Sending Thread #29":
> at java.security.Provider.getService(Provider.java:1035)
> - waiting to lock <0x80277040> (a sun.security.provider.Sun)
> at 
> sun.security.jca.ProviderList$ServiceList.tryGet(ProviderList.java:437)
> at 
> sun.security.jca.ProviderList$ServiceList.access$200(ProviderList.java:376)
> at 
> sun.security.jca.ProviderList$ServiceList$1.hasNext(ProviderList.java:486)
> at javax.crypto.SecretKeyFactory.nextSpi(SecretKeyFactory.java:293)

[jira] [Commented] (HADOOP-15578) GridmixTestUtils uses the wrong staging directory in windows

2018-07-03 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HADOOP-15578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16531621#comment-16531621
 ] 

Íñigo Goiri commented on HADOOP-15578:
--

The main issue is that this indirectly uses a local path (hadoop.tmp.dir) as an 
HDFS path for staging.
We may want to change the path for the staging area instead of just tweaking 
the Windows path.
Another option is to sanitize the Windows path to make it usable for HDFS.
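As a rough illustration of the "sanitize" option, a pure-Java sketch (the method name and rules are hypothetical, not from the attached patch) that turns a local Windows hadoop.tmp.dir into a string HDFS will accept:

```java
// Hypothetical sketch of the "sanitize the Windows path" option: strip the
// drive letter and flip backslashes so a local hadoop.tmp.dir such as
// C:\Users\test\tmp becomes a path usable as an HDFS staging directory.
public class StagingPathUtil {

  public static String sanitizeForHdfs(String localPath) {
    String p = localPath.replace('\\', '/');
    // Drop a leading drive spec like "C:", which is illegal in an HDFS path.
    if (p.length() >= 2 && Character.isLetter(p.charAt(0)) && p.charAt(1) == ':') {
      p = p.substring(2);
    }
    return p.startsWith("/") ? p : "/" + p;
  }

  public static void main(String[] args) {
    // prints /Users/test/tmp/hadoop
    System.out.println(sanitizeForHdfs("C:\\Users\\test\\tmp\\hadoop"));
  }
}
```

Whether dropping the drive letter is acceptable depends on whether two drives could map to the same staging root, which is part of why changing the staging path outright may be the safer fix.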

> GridmixTestUtils uses the wrong staging directory in windows
> 
>
> Key: HADOOP-15578
> URL: https://issues.apache.org/jira/browse/HADOOP-15578
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HADOOP-15578.000.patch
>
>
> {{GridmixTestUtils#createHomeAndStagingDirectory}} gets the staging area from 
> the configuration key {{mapreduce.jobtracker.staging.root.dir}}. This 
> variable depends on {{hadoop.tmp.dir}} which in Windows is set to a local 
> Windows folder. When the test tries to create the path in HDFS it gets an 
> error because the path is not compliant.






[jira] [Commented] (HADOOP-15560) ABFS: removed dependency injection and unnecessary dependencies

2018-07-03 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16531599#comment-16531599
 ] 

Steve Loughran commented on HADOOP-15560:
-

I'm going to +1 this despite my existing issues with having yet to run *any* of 
the IT tests successfully. I am now finding that one of the unit tests is 
failing too, having changed a config option (HADOOP-15579).



> ABFS: removed dependency injection and unnecessary dependencies
> ---
>
> Key: HADOOP-15560
> URL: https://issues.apache.org/jira/browse/HADOOP-15560
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15407-HADOOP-15407-009.patch
>
>
> # Removed dependency injection and unnecessary dependencies.
>  # Added tool to clean up test containers.






[jira] [Commented] (HADOOP-15560) ABFS: removed dependency injection and unnecessary dependencies

2018-07-03 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16531597#comment-16531597
 ] 

Steve Loughran commented on HADOOP-15560:
-

As usual: ignore the shaded client warnings for now.

With the POM changes, the dependencies are now much more minimal:

{code}
 org.apache.hadoop:hadoop-azure:jar:3.2.0-SNAPSHOT
 +- org.apache.hadoop:hadoop-common:jar:3.2.0-SNAPSHOT:provided
   ***
 +- com.fasterxml.jackson.core:jackson-core:jar:2.9.5:compile
 +- com.fasterxml.jackson.core:jackson-databind:jar:2.9.5:compile
 |  \- com.fasterxml.jackson.core:jackson-annotations:jar:2.9.5:compile
 +- org.apache.httpcomponents:httpclient:jar:4.5.2:compile
 |  \- org.apache.httpcomponents:httpcore:jar:4.4.4:compile
 +- com.microsoft.azure:azure-storage:jar:7.0.0:compile
 |  \- com.microsoft.azure:azure-keyvault-core:jar:1.0.0:compile
 +- com.google.inject:guice:jar:4.0:compile
 |  +- javax.inject:javax.inject:jar:1:compile
 |  \- aopalliance:aopalliance:jar:1.0:compile
 +- com.google.guava:guava:jar:11.0.2:compile
 +- joda-time:joda-time:jar:2.9.9:compile
 +- org.eclipse.jetty:jetty-util-ajax:jar:9.3.19.v20170502:compile
 {code}

Test Artifacts:
 {code} 
 +- junit:junit:jar:4.11:test
 |  \- org.hamcrest:hamcrest-core:jar:1.3:test
 +- org.apache.hadoop:hadoop-common:test-jar:tests:3.2.0-SNAPSHOT:test
 +- org.apache.hadoop:hadoop-mapreduce-client-jobclient:jar:3.2.0-SNAPSHOT:test
 |  +- org.apache.hadoop:hadoop-mapreduce-client-common:jar:3.2.0-SNAPSHOT:test
 |  |  +- org.apache.hadoop:hadoop-yarn-common:jar:3.2.0-SNAPSHOT:test
 |  |  |  +- org.apache.hadoop:hadoop-hdfs-client:jar:3.2.0-SNAPSHOT:test
 |  |  |  |  \- com.squareup.okhttp:okhttp:jar:2.7.5:test
 |  |  |  | \- com.squareup.okio:okio:jar:1.6.0:test
 |  |  |  +- org.apache.hadoop:hadoop-yarn-api:jar:3.2.0-SNAPSHOT:test
 |  |  |  +- com.sun.jersey:jersey-client:jar:1.19:test
 |  |  |  +- com.sun.jersey.contribs:jersey-guice:jar:1.19:test
 |  |  |  +- 
com.fasterxml.jackson.module:jackson-module-jaxb-annotations:jar:2.9.5:test
 |  |  |  \- 
com.fasterxml.jackson.jaxrs:jackson-jaxrs-json-provider:jar:2.9.5:test
 |  |  | \- com.fasterxml.jackson.jaxrs:jackson-jaxrs-base:jar:2.9.5:test
 |  |  +- org.apache.hadoop:hadoop-yarn-client:jar:3.2.0-SNAPSHOT:test
 |  |  \- org.apache.hadoop:hadoop-mapreduce-client-core:jar:3.2.0-SNAPSHOT:test
 |  +- com.google.inject.extensions:guice-servlet:jar:4.0:test
 |  \- io.netty:netty:jar:3.10.5.Final:provided
 +- org.apache.hadoop:hadoop-distcp:jar:3.2.0-SNAPSHOT:test
 +- org.apache.hadoop:hadoop-distcp:test-jar:tests:3.2.0-SNAPSHOT:test
 \- org.mockito:mockito-all:jar:1.8.5:test

{code}

There's still a Google Guice dependency, even though I can't find where it is 
being used (there are no uses of Guice or javax.inject in the code that my IDE 
can find).

> ABFS: removed dependency injection and unnecessary dependencies
> ---
>
> Key: HADOOP-15560
> URL: https://issues.apache.org/jira/browse/HADOOP-15560
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15407-HADOOP-15407-009.patch
>
>
> # Removed dependency injection and unnecessary dependencies.
>  # Added tool to clean up test containers.






[jira] [Commented] (HADOOP-15560) ABFS: removed dependency injection and unnecessary dependencies

2018-07-03 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16531570#comment-16531570
 ] 

genericqa commented on HADOOP-15560:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 16 new or modified test 
files. {color} |
|| || || || {color:brown} HADOOP-15407 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 31m 
47s{color} | {color:green} HADOOP-15407 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} HADOOP-15407 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} HADOOP-15407 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} HADOOP-15407 passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  3m  
2s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} HADOOP-15407 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} HADOOP-15407 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m 
20s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
15s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 44m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HADOOP-15560 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12929159/HADOOP-15407-HADOOP-15407-009.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  findbugs  checkstyle  |
| uname | Linux 76989cd550cb 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HADOOP-15407 / 49ece30 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14852/testReport/ |
| Max. process+thread count | 257 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14852/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

[jira] [Commented] (HADOOP-15579) ABFS: TestAbfsConfigurationFieldsValidation breaks if FS is configured in core-site

2018-07-03 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16531569#comment-16531569
 ] 

Steve Loughran commented on HADOOP-15579:
-

If your (indirectly imported) site config has the following setting:

{code}
<property>
  <name>fs.azure.io.retry.min.backoff.interval</name>
  <value>100</value>
</property>
{code}

you get a stack trace:

{code}
[ERROR] 
testConfigServiceImplAnnotatedFieldsInitialized(org.apache.hadoop.fs.azurebfs.services.TestAbfsConfigurationFieldsValidation)
  Time elapsed: 0.005 s  <<< FAILURE!
java.lang.AssertionError: expected:<3000> but was:<100>
  at org.junit.Assert.fail(Assert.java:88)
  at org.junit.Assert.failNotEquals(Assert.java:743)
  at org.junit.Assert.assertEquals(Assert.java:118)
  at org.junit.Assert.assertEquals(Assert.java:555)
  at org.junit.Assert.assertEquals(Assert.java:542)
  at 
org.apache.hadoop.fs.azurebfs.services.TestAbfsConfigurationFieldsValidation.testConfigServiceImplAnnotatedFieldsInitialized(TestAbfsConfigurationFieldsValidation.java:131)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
  at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
  at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
  at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
  at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
  at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
  at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
  at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
  at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
  at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
  at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
  at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
  at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
  at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
  at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
  at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
  at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379)
  at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340)
  at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125)
  at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413)
{code}

which can be traced back to the value of the min backoff interval being 
overridden:

{code}
assertEquals(DEFAULT_MIN_BACKOFF_INTERVAL, 
abfsConfiguration.getMinBackoffIntervalMilliseconds());
{code}

Proposed:

# testConfigServiceImplAnnotatedFieldsInitialized: make each assertEquals 
include the field name, so that assertion failures can be debugged.
# Make the test a proper unit test by creating Configuration objects without 
loading the site defaults, that is, using {{new Configuration(false)}}.
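The first proposal can be sketched without any Hadoop or JUnit dependency; in the real test, JUnit's {{assertEquals(String message, expected, actual)}} overload plays the role of the labelled helper below (names and values here are illustrative only):

```java
// Sketch of proposal 1: include the configuration key in each assertion so a
// failure reports *which* field was overridden. The tiny helper stands in for
// JUnit's assertEquals(String message, expected, actual) so the example runs
// standalone.
public class LabeledAsserts {

  static void assertEqualsLabeled(String key, long expected, long actual) {
    if (expected != actual) {
      throw new AssertionError(key + ": expected <" + expected + "> but was <" + actual + ">");
    }
  }

  public static void main(String[] args) {
    long DEFAULT_MIN_BACKOFF_INTERVAL = 3000;
    long valueFromSite = 100; // what a core-site.xml override would produce
    try {
      assertEqualsLabeled("fs.azure.io.retry.min.backoff.interval",
          DEFAULT_MIN_BACKOFF_INTERVAL, valueFromSite);
    } catch (AssertionError e) {
      // prints: fs.azure.io.retry.min.backoff.interval: expected <3000> but was <100>
      System.out.println(e.getMessage());
    }
  }
}
```

With the key in the message, a failure immediately identifies the overridden property instead of the bare "expected:<3000> but was:<100>" shown above.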

> ABFS: TestAbfsConfigurationFieldsValidation breaks if FS is configured in 
> core-site
> ---
>
> Key: HADOOP-15579
> URL: https://issues.apache.org/jira/browse/HADOOP-15579
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: HADOOP-15407
>Reporter: Steve Loughran
>Priority: Major
>
> {{TestAbfsConfigurationFieldsValidation.testConfigServiceImplAnnotatedFieldsInitialized}}
> Will fail if you have configured any of
> the properties in your abfs defaults/core-defaults imports. It is therefore
> (a) not a unit test and (b) brittle






[jira] [Created] (HADOOP-15579) ABFS: TestAbfsConfigurationFieldsValidation breaks if FS is configured in core-site

2018-07-03 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-15579:
---

 Summary: ABFS: TestAbfsConfigurationFieldsValidation breaks if FS 
is configured in core-site
 Key: HADOOP-15579
 URL: https://issues.apache.org/jira/browse/HADOOP-15579
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/azure
Affects Versions: HADOOP-15407
Reporter: Steve Loughran


{{TestAbfsConfigurationFieldsValidation.testConfigServiceImplAnnotatedFieldsInitialized}}

Will fail if you have configured any of
the properties in your abfs defaults/core-defaults imports. It is therefore
(a) not a unit test and (b) brittle






[jira] [Commented] (HADOOP-15531) Use commons-text instead of commons-lang in some classes to fix deprecation warnings

2018-07-03 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16531507#comment-16531507
 ] 

genericqa commented on HADOOP-15531:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 26m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  7m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 55s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  9m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 25m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 25m 
26s{color} | {color:green} root generated 0 new + 1468 unchanged - 116 fixed = 
1468 total (was 1584) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  7m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 18s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 10m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
23s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
49s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 70m 22s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
24s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
25s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 65m 
58s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |

[jira] [Updated] (HADOOP-15560) ABFS: removed dependency injection and unnecessary dependencies

2018-07-03 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15560:

Status: Patch Available  (was: Open)

> ABFS: removed dependency injection and unnecessary dependencies
> ---
>
> Key: HADOOP-15560
> URL: https://issues.apache.org/jira/browse/HADOOP-15560
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15407-HADOOP-15407-009.patch
>
>
> # Removed dependency injection and unnecessary dependencies.
>  # Added tool to clean up test containers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15281) Distcp to add no-rename copy option

2018-07-03 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15281:

Description: 
Currently Distcp uploads a file by two strategies

# append parts
# copy to temp then rename


option 2 executes the following sequence in {{promoteTmpToTarget}}
{code}
if ((fs.exists(target) && !fs.delete(target, false))
|| (!fs.exists(target.getParent()) && !fs.mkdirs(target.getParent()))
|| !fs.rename(tmpTarget, target)) {
  throw new IOException("Failed to promote tmp-file:" + tmpTarget
  + " to: " + target);
}
{code}

For any object store, that's a lot of HTTP requests; for S3A you are looking at 
12+ requests and an O(data) copy call. 

This is not a good upload strategy for any store which manifests its output 
atomically at the end of the write().

Proposed: add a switch to write directly to the destination path, either as a 
conf option or a CLI option.
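As a back-of-the-envelope illustration of why the rename path hurts on object stores, here is a toy model in plain Java. The class name and the per-call breakdown are assumptions for illustration, not an exact S3A request trace (a real rename also involves LIST calls, for instance); the point is the extra round trips plus the O(data) server-side COPY that rename costs.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Toy model of the two distcp commit strategies. Illustrative only:
 * the per-call breakdown approximates what an object store sees, it
 * is not a verbatim S3A request log.
 */
public class CommitCostSketch {

    /** Copy-to-temp followed by a promoteTmpToTarget-style rename. */
    static List<String> renameCommit() {
        List<String> ops = new ArrayList<>();
        ops.add("PUT tmp");          // upload to the temporary path
        ops.add("HEAD target");      // fs.exists(target)
        ops.add("HEAD parent");      // fs.exists(target.getParent())
        ops.add("COPY tmp->target"); // rename == O(data) server-side copy
        ops.add("DELETE tmp");       // rename's second half
        return ops;
    }

    /** Proposed direct write: object manifests on close(), no rename. */
    static List<String> directCommit() {
        List<String> ops = new ArrayList<>();
        ops.add("PUT target");
        return ops;
    }

    public static void main(String[] args) {
        System.out.println("rename commit: " + renameCommit());
        System.out.println("direct commit: " + directCommit());
    }
}
```

Even in this simplified model the rename strategy issues several requests per file where the direct write issues one, and the COPY grows with file size.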






  was:
Currently Distcp uploads a file by two strategies

# append parts
# copy to temp then rename


option 2 executes the following sequence in {{promoteTmpToTarget}}
{code}
if ((fs.exists(target) && !fs.delete(target, false))
|| (!fs.exists(target.getParent()) && !fs.mkdirs(target.getParent()))
|| !fs.rename(tmpTarget, target)) {
  throw new IOException("Failed to promote tmp-file:" + tmpTarget
  + " to: " + target);
}
{code}

For any object store, that's a lot of HTTP requests; for S3A you are looking at 
12+ requests and an O(data) copy call. 

This is not a good upload strategy for any store which manifests its output 
atomically at the end of the write().

Proposed: add a switch to write directly to the destination path, either as a 
conf option or a CLI option.







> Distcp to add no-rename copy option
> ---
>
> Key: HADOOP-15281
> URL: https://issues.apache.org/jira/browse/HADOOP-15281
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Priority: Major
>
> Currently Distcp uploads a file by two strategies
> # append parts
> # copy to temp then rename
> option 2 executes the following sequence in {{promoteTmpToTarget}}
> {code}
> if ((fs.exists(target) && !fs.delete(target, false))
> || (!fs.exists(target.getParent()) && !fs.mkdirs(target.getParent()))
> || !fs.rename(tmpTarget, target)) {
>   throw new IOException("Failed to promote tmp-file:" + tmpTarget
>   + " to: " + target);
> }
> {code}
> For any object store, that's a lot of HTTP requests; for S3A you are looking 
> at 12+ requests and an O(data) copy call. 
> This is not a good upload strategy for any store which manifests its output 
> atomically at the end of the write().
> Proposed: add a switch to write directly to the destination path, either as 
> a conf option or a CLI option.






[jira] [Resolved] (HADOOP-15577) Update distcp to use zero-rename s3 committers

2018-07-03 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-15577.
-
Resolution: Duplicate

We don't need the zero-rename committers, because distcp, except in --atomic 
mode, isn't trying to do atomic operations.

What we do need is for distcp to stop uploading to a temp file and renaming 
each one into place: remove that, and for non-atomic uploads you eliminate the 
O(data) delay after each upload.

Closing this as a duplicate of that JIRA. *As that JIRA has no code/tests, I 
would support anyone who sat down to implement the feature.*

There's also a lot of work going on in HDFS to add an explicit multipart 
upload mechanism for filesystems, which could be used for a block-by-block 
upload to S3. This would improve distcp upload performance for HDFS files 
larger than one block, as the blocks could be uploaded in parallel with 
locality. Keep an eye on that.

> Update distcp to use zero-rename s3 committers
> --
>
> Key: HADOOP-15577
> URL: https://issues.apache.org/jira/browse/HADOOP-15577
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3, tools/distcp
>Affects Versions: 3.1.0
>Reporter: Tim Sammut
>Priority: Major
>
> Hello!
> distcp through 3.1.0 appears to copy files and then rename them into their 
> final/destination filename. 
> https://issues.apache.org/jira/browse/HADOOP-13786 added support for more 
> efficient S3 committers that do not use renames. 
> Please update distcp to use these efficient committers and no renames. 
> Thanks!






[jira] [Updated] (HADOOP-15577) Update distcp to use zero-rename s3 committers

2018-07-03 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15577:

Component/s: tools/distcp

> Update distcp to use zero-rename s3 committers
> --
>
> Key: HADOOP-15577
> URL: https://issues.apache.org/jira/browse/HADOOP-15577
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3, tools/distcp
>Affects Versions: 3.1.0
>Reporter: Tim Sammut
>Priority: Major
>
> Hello!
> distcp through 3.1.0 appears to copy files and then rename them into their 
> final/destination filename. 
> https://issues.apache.org/jira/browse/HADOOP-13786 added support for more 
> efficient S3 committers that do not use renames. 
> Please update distcp to use these efficient committers and no renames. 
> Thanks!






[jira] [Commented] (HADOOP-15359) IPC client hang in kerberized cluster due to JDK deadlock

2018-07-03 Thread Kihwal Lee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16531458#comment-16531458
 ] 

Kihwal Lee commented on HADOOP-15359:
-

Just curious: where was the main thread? Was it tearing down by any chance?

> IPC client hang in kerberized cluster due to JDK deadlock
> -
>
> Key: HADOOP-15359
> URL: https://issues.apache.org/jira/browse/HADOOP-15359
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.6.0, 2.8.0, 3.0.0
>Reporter: Xiao Chen
>Priority: Major
> Attachments: 1.jstack, 2.jstack
>
>
> In recent internal testing, we found a DFS client hang. Inspecting the 
> jstack output shows the following:
> {noformat}
> "IPC Client (552936351) connection toHOSTNAME:8020 from PRINCIPAL" #7468 
> daemon prio=5 os_prio=0 tid=0x7f6bb306c000 nid=0x1c76e waiting for 
> monitor entry [0x7f6bc2bd6000]
>java.lang.Thread.State: BLOCKED (on object monitor)
> at java.security.Provider.getService(Provider.java:1035)
> - waiting to lock <0x80277040> (a sun.security.provider.Sun)
> at 
> sun.security.jca.ProviderList$ServiceList.tryGet(ProviderList.java:444)
> at 
> sun.security.jca.ProviderList$ServiceList.access$200(ProviderList.java:376)
> at 
> sun.security.jca.ProviderList$ServiceList$1.hasNext(ProviderList.java:486)
> at javax.crypto.Cipher.getInstance(Cipher.java:513)
> at 
> sun.security.krb5.internal.crypto.dk.Des3DkCrypto.getCipher(Des3DkCrypto.java:202)
> at sun.security.krb5.internal.crypto.dk.DkCrypto.dr(DkCrypto.java:484)
> at sun.security.krb5.internal.crypto.dk.DkCrypto.dk(DkCrypto.java:447)
> at 
> sun.security.krb5.internal.crypto.dk.DkCrypto.calculateChecksum(DkCrypto.java:413)
> at 
> sun.security.krb5.internal.crypto.Des3.calculateChecksum(Des3.java:59)
> at 
> sun.security.jgss.krb5.CipherHelper.calculateChecksum(CipherHelper.java:231)
> at 
> sun.security.jgss.krb5.MessageToken.getChecksum(MessageToken.java:466)
> at 
> sun.security.jgss.krb5.MessageToken.verifySignAndSeqNumber(MessageToken.java:374)
> at 
> sun.security.jgss.krb5.WrapToken.getDataFromBuffer(WrapToken.java:284)
> at sun.security.jgss.krb5.WrapToken.getData(WrapToken.java:209)
> at sun.security.jgss.krb5.WrapToken.getData(WrapToken.java:182)
> at sun.security.jgss.krb5.Krb5Context.unwrap(Krb5Context.java:1053)
> at sun.security.jgss.GSSContextImpl.unwrap(GSSContextImpl.java:403)
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Base.unwrap(GssKrb5Base.java:77)
> at 
> org.apache.hadoop.security.SaslRpcClient$WrappedInputStream.readNextRpcPacket(SaslRpcClient.java:617)
> at 
> org.apache.hadoop.security.SaslRpcClient$WrappedInputStream.read(SaslRpcClient.java:583)
> - locked <0x83444878> (a java.nio.HeapByteBuffer)
> at java.io.FilterInputStream.read(FilterInputStream.java:133)
> at 
> org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:553)
> at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
> at java.io.BufferedInputStream.read(BufferedInputStream.java:265)
> - locked <0x834448c0> (a java.io.BufferedInputStream)
> at java.io.DataInputStream.readInt(DataInputStream.java:387)
> at 
> org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1113)
> at org.apache.hadoop.ipc.Client$Connection.run(Client.java:1006)
> {noformat}
> and at the end of jstack:
> {noformat}
> Found one Java-level deadlock:
> =
> "IPC Parameter Sending Thread #29":
>   waiting to lock monitor 0x17ff49f8 (object 0x80277040, a 
> sun.security.provider.Sun),
>   which is held by UNKNOWN_owner_addr=0x50607000
> Java stack information for the threads listed above:
> ===
> "IPC Parameter Sending Thread #29":
> at java.security.Provider.getService(Provider.java:1035)
> - waiting to lock <0x80277040> (a sun.security.provider.Sun)
> at 
> sun.security.jca.ProviderList$ServiceList.tryGet(ProviderList.java:437)
> at 
> sun.security.jca.ProviderList$ServiceList.access$200(ProviderList.java:376)
> at 
> sun.security.jca.ProviderList$ServiceList$1.hasNext(ProviderList.java:486)
> at javax.crypto.SecretKeyFactory.nextSpi(SecretKeyFactory.java:293)
> - locked <0x834386b8> (a java.lang.Object)
> at javax.crypto.SecretKeyFactory.(SecretKeyFactory.java:121)
> at 
> javax.crypto.SecretKeyFactory.getInstance(SecretKeyFactory.java:160)
> at 
> 

[jira] [Commented] (HADOOP-15558) Implementation of Clay Codes plugin (Coupled Layer MSR codes)

2018-07-03 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16531309#comment-16531309
 ] 

genericqa commented on HADOOP-15558:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 27m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 41s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
44s{color} | {color:red} hadoop-common-project/hadoop-common generated 1 new + 
0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m  4s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}124m 11s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-common-project/hadoop-common |
|  |  Useless object stored in variable inputPositions of method 
org.apache.hadoop.io.erasurecode.coder.ClayCodeErasureDecodingStep.doDecodeSingle(ByteBuffer[][],
 ByteBuffer[][], int, int, boolean)  At 
ClayCodeErasureDecodingStep.java:inputPositions of method 
org.apache.hadoop.io.erasurecode.coder.ClayCodeErasureDecodingStep.doDecodeSingle(ByteBuffer[][],
 ByteBuffer[][], int, int, boolean)  At ClayCodeErasureDecodingStep.java:[line 
147] |
| Failed junit tests | hadoop.io.erasurecode.codec.TestClayCodeErasureCodec |
|   | hadoop.io.erasurecode.TestECSchema |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HADOOP-15558 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12930099/HADOOP-15558.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 0a4091e2a087 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | 

[jira] [Updated] (HADOOP-15558) Implementation of Clay Codes plugin (Coupled Layer MSR codes)

2018-07-03 Thread Chaitanya Mukka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chaitanya Mukka updated HADOOP-15558:
-
Attachment: (was: HADOOP-15558.001.patch)

> Implementation of Clay Codes plugin (Coupled Layer MSR codes) 
> --
>
> Key: HADOOP-15558
> URL: https://issues.apache.org/jira/browse/HADOOP-15558
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Chaitanya Mukka
>Assignee: Chaitanya Mukka
>Priority: Major
> Attachments: ClayCodeCodecDesign-20180630.pdf, HADOOP-15558.001.patch
>
>
> [Clay Codes|https://www.usenix.org/conference/fast18/presentation/vajha] are 
> new erasure codes developed as a research project at the Codes and Signal 
> Design Lab, IISc Bangalore. A particular Clay code, with a storage overhead 
> of 1.25x, has been shown to reduce repair network traffic, disk reads and 
> repair time by factors of 2.9, 3.4 and 3 respectively, compared to RS codes 
> with the same parameters. 
> This Jira aims to introduce Clay Codes to HDFS-EC as a pluggable erasure 
> codec.






[jira] [Updated] (HADOOP-15558) Implementation of Clay Codes plugin (Coupled Layer MSR codes)

2018-07-03 Thread Chaitanya Mukka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chaitanya Mukka updated HADOOP-15558:
-
Attachment: HADOOP-15558.001.patch
Status: Patch Available  (was: In Progress)

> Implementation of Clay Codes plugin (Coupled Layer MSR codes) 
> --
>
> Key: HADOOP-15558
> URL: https://issues.apache.org/jira/browse/HADOOP-15558
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Chaitanya Mukka
>Assignee: Chaitanya Mukka
>Priority: Major
> Attachments: ClayCodeCodecDesign-20180630.pdf, HADOOP-15558.001.patch
>
>
> [Clay Codes|https://www.usenix.org/conference/fast18/presentation/vajha] are 
> new erasure codes developed as a research project at the Codes and Signal 
> Design Lab, IISc Bangalore. A particular Clay code, with a storage overhead 
> of 1.25x, has been shown to reduce repair network traffic, disk reads and 
> repair time by factors of 2.9, 3.4 and 3 respectively, compared to RS codes 
> with the same parameters. 
> This Jira aims to introduce Clay Codes to HDFS-EC as a pluggable erasure 
> codec.






[jira] [Updated] (HADOOP-15531) Use commons-text instead of commons-lang in some classes to fix deprecation warnings

2018-07-03 Thread Takanobu Asanuma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HADOOP-15531:
--
Attachment: HADOOP-15531.1.patch

> Use commons-text instead of commons-lang in some classes to fix deprecation 
> warnings
> 
>
> Key: HADOOP-15531
> URL: https://issues.apache.org/jira/browse/HADOOP-15531
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HADOOP-15531.1.patch
>
>
> After upgrading commons-lang from 2.6 to 3.7, some classes such as 
> {{StringEscapeUtils}} and {{WordUtils}} became deprecated and moved to 
> commons-text.
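The migration described above is essentially an import swap (plus adding the commons-text artifact to the POM). A minimal, illustrative fragment; the surrounding class is invented for this sketch, but {{escapeHtml4}} is a real method available under both the old and the new package:

```java
// Before: deprecated since commons-lang3 3.6
//   import org.apache.commons.lang3.StringEscapeUtils;
// After: the same utility now lives in commons-text
import org.apache.commons.text.StringEscapeUtils;

class EscapeExample {
    // The call site is unchanged; only the import (and the
    // org.apache.commons:commons-text dependency) differs.
    String render(String userInput) {
        return StringEscapeUtils.escapeHtml4(userInput);
    }
}
```

{{WordUtils}} moves the same way, to {{org.apache.commons.text.WordUtils}}.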






[jira] [Commented] (HADOOP-15531) Use commons-text instead of commons-lang in some classes to fix deprecation warnings

2018-07-03 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16531048#comment-16531048
 ] 

Takanobu Asanuma commented on HADOOP-15531:
---

Uploaded the 1st patch.

> Use commons-text instead of commons-lang in some classes to fix deprecation 
> warnings
> 
>
> Key: HADOOP-15531
> URL: https://issues.apache.org/jira/browse/HADOOP-15531
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HADOOP-15531.1.patch
>
>
> After upgrading commons-lang from 2.6 to 3.7, some classes such as 
> {{StringEscapeUtils}} and {{WordUtils}} became deprecated and moved to 
> commons-text.






[jira] [Updated] (HADOOP-15531) Use commons-text instead of commons-lang in some classes to fix deprecation warnings

2018-07-03 Thread Takanobu Asanuma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HADOOP-15531:
--
Status: Patch Available  (was: Open)

> Use commons-text instead of commons-lang in some classes to fix deprecation 
> warnings
> 
>
> Key: HADOOP-15531
> URL: https://issues.apache.org/jira/browse/HADOOP-15531
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HADOOP-15531.1.patch
>
>
> After upgrading commons-lang from 2.6 to 3.7, some classes such as 
> {{StringEscapeUtils}} and {{WordUtils}} became deprecated and moved to 
> commons-text.


