[jira] [Commented] (HADOOP-15465) Deprecate WinUtils#Symlinks by using native java code

2018-05-14 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16475252#comment-16475252
 ] 

genericqa commented on HADOOP-15465:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
41s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 22s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 27m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 27m 
39s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 13s{color} | {color:orange} root: The patch generated 3 new + 126 unchanged 
- 5 fixed = 129 total (was 131) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 23s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 49s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
6s{color} | {color:green} hadoop-yarn-server-tests in the patch passed. {color} 
|
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}138m  0s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.TestSymlinkLocalFSFileContext |
|   | hadoop.fs.TestSymlinkLocalFSFileSystem |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce 

[jira] [Created] (HADOOP-15468) The setDeprecatedProperties method for the Configuration in Hadoop3.0.2.

2018-05-14 Thread Wenming He (JIRA)
Wenming He created HADOOP-15468:
---

 Summary: The setDeprecatedProperties method for the Configuration 
in Hadoop3.0.2.
 Key: HADOOP-15468
 URL: https://issues.apache.org/jira/browse/HADOOP-15468
 Project: Hadoop Common
  Issue Type: Bug
  Components: common
Affects Versions: 3.0.2
Reporter: Wenming He
 Fix For: 3.0.2


When checking whether the overlay variable contains a deprecated key, why does the code directly check the values in overlay instead of checking whether the same key exists in overlay? What is the logic here?
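For readers unfamiliar with the distinction the reporter is pointing at: {{java.util.Properties}} extends {{java.util.Hashtable}}, where {{contains(Object)}} searches values while {{containsKey(Object)}} searches keys. A minimal sketch of that difference, shown independently of the Hadoop code in question:

{code:java}
import java.util.Properties;

public class ContainsVsContainsKey {
  public static void main(String[] args) {
    Properties overlay = new Properties();
    overlay.setProperty("new.key", "some-value");

    // Hashtable#contains(Object) matches VALUES, not keys:
    System.out.println(overlay.contains("new.key"));     // false
    System.out.println(overlay.contains("some-value"));  // true

    // Hashtable#containsKey(Object) is the key lookup:
    System.out.println(overlay.containsKey("new.key"));  // true
  }
}
{code}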




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15467) TestDoAsEffectiveUser#testRealUserSetup,TestDoAsEffectiveUser#testRealUserAuthorizationSuccess time out on Windows

2018-05-14 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16475216#comment-16475216
 ] 

genericqa commented on HADOOP-15467:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 21s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 28m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 23s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
23s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}124m 40s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HADOOP-15467 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12923367/HDFS-13549.000.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 4ccad47020bb 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2d00a0c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14626/testReport/ |
| Max. process+thread count | 1467 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14626/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestDoAsEffectiveUser#testRealUserSetup,TestDoAsEffectiveUser#testRealUserAuthorizationSuccess
>  time out on Windows
> 

[jira] [Commented] (HADOOP-15467) TestDoAsEffectiveUser#testRealUserSetup,TestDoAsEffectiveUser#testRealUserAuthorizationSuccess time out on Windows

2018-05-14 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16475197#comment-16475197
 ] 

genericqa commented on HADOOP-15467:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 30m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 30m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 55s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 52s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}130m 30s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ha.TestZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13549 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12923367/HDFS-13549.000.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 2623af9922bc 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2d00a0c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24195/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24195/testReport/ |
| Max. process+thread count | 1326 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24195/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


[jira] [Commented] (HADOOP-15465) Deprecate WinUtils#Symlinks by using native java code

2018-05-14 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16475193#comment-16475193
 ] 

genericqa commented on HADOOP-15465:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 30m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 20s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 27m 
20s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 49s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 2 new + 58 unchanged - 2 fixed = 60 total (was 60) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 18s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m 17s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}125m 40s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ha.TestZKFailoverController |
|   | hadoop.fs.TestSymlinkLocalFSFileContext |
|   | hadoop.fs.TestSymlinkLocalFSFileSystem |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HADOOP-15465 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12923359/HADOOP-15465.v0.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 073a6ead6e99 3.13.0-137-generic #186-Ubuntu SMP Mon Dec 4 
19:09:19 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2d00a0c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14625/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| unit | 

[jira] [Commented] (HADOOP-15466) Correct units in adl.http.timeout

2018-05-14 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16475168#comment-16475168
 ] 

Sean Mackrory commented on HADOOP-15466:


Also, root failures are unrelated. No tests because it's a docs-only patch.

> Correct units in adl.http.timeout
> -
>
> Key: HADOOP-15466
> URL: https://issues.apache.org/jira/browse/HADOOP-15466
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Major
> Attachments: HADOOP-15466.001.patch
>
>
> Comment in core-default.xml says seconds, but according to the SDK docs it's 
> getting interpreted as milliseconds 
> ([https://github.com/Azure/azure-data-lake-store-java/blob/master/src/main/java/com/microsoft/azure/datalake/store/ADLStoreOptions.java#L139-L144]).
>  Pinging [~ASikaria] to double check I'm not missing anything.
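For illustration, a hedged sketch of the mismatch described above; {{ADLStoreOptions.setDefaultTimeout()}} is the method at the linked lines, and the property plumbing here is simplified rather than copied from the ADL connector:

{code:java}
import org.apache.hadoop.conf.Configuration;
import com.microsoft.azure.datalake.store.ADLStoreOptions;

public class AdlTimeoutSketch {
  public static void main(String[] args) {
    // A user following the current core-default.xml comment sets what
    // they believe is a 60-second timeout...
    Configuration conf = new Configuration();
    conf.setInt("adl.http.timeout", 60);

    // ...but the SDK setter takes milliseconds, so the effective
    // HTTP timeout becomes 60 ms.
    new ADLStoreOptions().setDefaultTimeout(conf.getInt("adl.http.timeout", -1));
  }
}
{code}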



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15460) S3A FS to add "s3a:no-existence-checks" to the builder file creation option set

2018-05-14 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16475164#comment-16475164
 ] 

Aaron Fabbri commented on HADOOP-15460:
---

Interesting.  Any thoughts on mkdirs()?

> S3A FS to add  "s3a:no-existence-checks" to the builder file creation option 
> set
> 
>
> Key: HADOOP-15460
> URL: https://issues.apache.org/jira/browse/HADOOP-15460
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Priority: Major
>
> As promised to [~StephanEwen]: add an s3a-specific option to the file-creation 
> builder API so that all existence checks are skipped.
> This
> # eliminates a few hundred milliseconds
> # avoids any caching of negative HEAD/GET responses in the S3 load balancers.
> Callers will be expected to know what they are doing.
> FWIW, we are doing some PUT calls in the committer which bypass this stuff, 
> for the same reason. If you've just created a directory, you know there's 
> nothing underneath, so no need to check.
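For context, a hedged sketch of how such an option could be requested through the existing {{createFile()}} builder; the key {{s3a:no-existence-checks}} is the proposed name from the title, not a shipped flag:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class NoExistenceCheckSketch {
  public static void main(String[] args) throws Exception {
    Path path = new Path("s3a://bucket/output/part-0000");
    FileSystem fs = path.getFileSystem(new Configuration());
    // opt() passes a hint the filesystem may ignore; must() would
    // instead fail on filesystems that do not understand the option.
    try (FSDataOutputStream out = fs.createFile(path)
        .opt("s3a:no-existence-checks", true)
        .build()) {
      out.write("example".getBytes("UTF-8"));
    }
  }
}
{code}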



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15461) Improvements over the Hadoop support with Windows

2018-05-14 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HADOOP-15461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri reassigned HADOOP-15461:


Assignee: Giovanni Matteo Fumarola

> Improvements over the Hadoop support with Windows
> -
>
> Key: HADOOP-15461
> URL: https://issues.apache.org/jira/browse/HADOOP-15461
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
>
> This Jira tracks the effort to improve the interaction between Hadoop and 
> Windows Server.
>  * Move away from an external process (winutils.exe) for native code:
>  ** Replace with native Java APIs (e.g., symlinks);
>  ** Replace with something like JNI;
>  * Fix the build system to fully leverage cmake instead of msbuild;
>  * Possible other improvements;
>  * Fix memory and handle leaks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15465) Deprecate WinUtils#Symlinks by using native java code

2018-05-14 Thread Giovanni Matteo Fumarola (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated HADOOP-15465:
--
Summary: Deprecate WinUtils#Symlinks by using native java code  (was: Use 
native java code for symlinks)

> Deprecate WinUtils#Symlinks by using native java code
> -
>
> Key: HADOOP-15465
> URL: https://issues.apache.org/jira/browse/HADOOP-15465
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: HADOOP-15465.v0.patch, HADOOP-15465.v0.proto.patch
>
>
> Hadoop uses the shell to create symbolic links. Now that Hadoop relies on 
> Java 7+, we can deprecate all the shell code and rely on the Java APIs.
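For reference, a minimal sketch of the Java 7+ NIO call that can replace the shell/winutils path (standard {{java.nio.file}} API; error handling simplified):

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class SymlinkSketch {
  public static void main(String[] args) throws IOException {
    Path link = Paths.get("link-name");
    Path target = Paths.get("target-file");
    // Creates link-name -> target-file with no external process
    // (no "ln -s", no winutils.exe). On Windows this requires the
    // symlink-creation privilege and otherwise throws an IOException.
    Files.createSymbolicLink(link, target);
  }
}
{code}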



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15466) Correct units in adl.http.timeout

2018-05-14 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16475147#comment-16475147
 ] 

genericqa commented on HADOOP-15466:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
47s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 31m 
41s{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
67m 53s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
50s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 28m 50s{color} 
| {color:red} root generated 188 new + 1277 unchanged - 0 fixed = 1465 total 
(was 1277) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 32s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
14s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}122m 23s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HADOOP-15466 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12923360/HADOOP-15466.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  |
| uname | Linux 31e8c3c95e51 3.13.0-137-generic #186-Ubuntu SMP Mon Dec 4 
19:09:19 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2d00a0c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| compile | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14624/artifact/out/branch-compile-root.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14624/artifact/out/diff-compile-javac-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14624/testReport/ |
| Max. process+thread count | 1390 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14624/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Correct units in adl.http.timeout
> -
>
> Key: HADOOP-15466
> URL: 

[jira] [Commented] (HADOOP-15465) Use native java code for symlinks

2018-05-14 Thread Giovanni Matteo Fumarola (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16475136#comment-16475136
 ] 

Giovanni Matteo Fumarola commented on HADOOP-15465:
---

[^HADOOP-15465.v0.proto.patch] tackles how to deprecate the winutils part.

> Use native java code for symlinks
> -
>
> Key: HADOOP-15465
> URL: https://issues.apache.org/jira/browse/HADOOP-15465
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: HADOOP-15465.v0.patch, HADOOP-15465.v0.proto.patch
>
>
> Hadoop uses the shell to create symbolic links. Now that Hadoop relies on 
> Java 7+, we can deprecate all the shell code and rely on the Java APIs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15465) Use native java code for symlinks

2018-05-14 Thread Giovanni Matteo Fumarola (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated HADOOP-15465:
--
Attachment: HADOOP-15465.v0.proto.patch

> Use native java code for symlinks
> -
>
> Key: HADOOP-15465
> URL: https://issues.apache.org/jira/browse/HADOOP-15465
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: HADOOP-15465.v0.patch, HADOOP-15465.v0.proto.patch
>
>
> Hadoop uses the shell to create symbolic links. Now that Hadoop relies on 
> Java 7+, we can deprecate all the shell code and rely on the Java APIs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10783) apache-commons-lang.jar 2.6 does not support FreeBSD -upgrade to 3.x needed

2018-05-14 Thread Takanobu Asanuma (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16475134#comment-16475134
 ] 

Takanobu Asanuma commented on HADOOP-10783:
---

This is the build result with the latest patch. I don't know why it isn't 
reported here.
[https://builds.apache.org/job/PreCommit-HADOOP-Build/14623/]

Some of the failed tests seem to be related to commons-lang3. I will 
investigate it further.

> apache-commons-lang.jar 2.6 does not support FreeBSD -upgrade to 3.x needed
> ---
>
> Key: HADOOP-10783
> URL: https://issues.apache.org/jira/browse/HADOOP-10783
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Dmitry Sivachenko
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HADOOP-10783.2.patch, HADOOP-10783.3.patch, 
> HADOOP-10783.4.patch, commons-lang3_1.patch
>
>
> Hadoop-2.4.1 ships with apache-commons.jar version 2.6.
> It does not support FreeBSD (IS_OS_UNIX returns False).
> This is fixed in recent versions of apache-commons.jar.
> Please update apache-commons.jar to a recent version so that it correctly 
> recognizes FreeBSD as a UNIX-like system.
> Right now I get in datanode's log:
> 2014-07-04 11:58:10,459 DEBUG 
> org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry: Disabling 
> ShortCircuitRegistry
> java.io.IOException: The OS is not UNIX.
> at 
> org.apache.hadoop.io.nativeio.SharedFileDescriptorFactory.create(SharedFileDescriptorFactory.java:77)
> at 
> org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry.<init>(ShortCircuitRegistry.java:169)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initDataXceiver(DataNode.java:583)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:771)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:289)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1931)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1818)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1865)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2041)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2065)
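For reference, a minimal sketch of the check that fails here; per this report, {{SystemUtils.IS_OS_UNIX}} in commons-lang 2.6 returns false on FreeBSD, while the commons-lang3 version recognizes the BSDs as UNIX-like:

{code:java}
import org.apache.commons.lang3.SystemUtils;

public class OsCheck {
  public static void main(String[] args) {
    // With commons-lang3 this prints true on FreeBSD; the 2.6-era
    // org.apache.commons.lang.SystemUtils returned false there, which
    // is what makes SharedFileDescriptorFactory refuse to start.
    System.out.println("IS_OS_UNIX = " + SystemUtils.IS_OS_UNIX);
  }
}
{code}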



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15465) Use native java code for symlinks

2018-05-14 Thread Giovanni Matteo Fumarola (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated HADOOP-15465:
--
Attachment: (was: HADOOP-15465.v0.proto.patch)

> Use native java code for symlinks
> -
>
> Key: HADOOP-15465
> URL: https://issues.apache.org/jira/browse/HADOOP-15465
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: HADOOP-15465.v0.patch
>
>
> Hadoop uses the shell to create symbolic links. Now that Hadoop relies on 
> Java 7+, we can deprecate all the shell code and rely on the Java APIs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15465) Use native java code for symlinks

2018-05-14 Thread Giovanni Matteo Fumarola (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated HADOOP-15465:
--
Attachment: HADOOP-15465.v0.proto.patch

> Use native java code for symlinks
> -
>
> Key: HADOOP-15465
> URL: https://issues.apache.org/jira/browse/HADOOP-15465
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: HADOOP-15465.v0.patch, HADOOP-15465.v0.proto.patch
>
>
> Hadoop uses the shell to create symbolic links. Now that Hadoop relies on 
> Java 7+, we can deprecate all the shell code and rely on the Java APIs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15467) TestDoAsEffectiveUser#testRealUserSetup,TestDoAsEffectiveUser#testRealUserAuthorizationSuccess time out on Windows

2018-05-14 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HADOOP-15467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri reassigned HADOOP-15467:


Assignee: Anbang Hu

> TestDoAsEffectiveUser#testRealUserSetup,TestDoAsEffectiveUser#testRealUserAuthorizationSuccess
>  time out on Windows
> --
>
> Key: HADOOP-15467
> URL: https://issues.apache.org/jira/browse/HADOOP-15467
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
> Attachments: HDFS-13549.000.patch
>
>
> {color:#d04437}[INFO] Running 
> org.apache.hadoop.security.TestDoAsEffectiveUser{color}
> {color:#d04437}[ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time 
> elapsed: 8.307 s <<< FAILURE! - in 
> org.apache.hadoop.security.TestDoAsEffectiveUser{color}
> {color:#d04437}[ERROR] 
> testRealUserSetup(org.apache.hadoop.security.TestDoAsEffectiveUser) Time 
> elapsed: 4.107 s <<< ERROR!{color}
> {color:#d04437}java.lang.Exception: test timed out after 4000 
> milliseconds{color}
> {color:#d04437} at java.net.Inet4AddressImpl.getHostByAddr(Native 
> Method){color}
> {color:#d04437} at 
> java.net.InetAddress$2.getHostByAddr(InetAddress.java:932){color}
> {color:#d04437} at 
> java.net.InetAddress.getHostFromNameService(InetAddress.java:617){color}
> {color:#d04437} at 
> java.net.InetAddress.getCanonicalHostName(InetAddress.java:588){color}
> {color:#d04437} at 
> org.apache.hadoop.security.TestDoAsEffectiveUser.configureSuperUserIPAddresses(TestDoAsEffectiveUser.java:103){color}
> {color:#d04437} at 
> org.apache.hadoop.security.TestDoAsEffectiveUser.testRealUserSetup(TestDoAsEffectiveUser.java:188){color}
> {color:#d04437} at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method){color}
> {color:#d04437} at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color}
> {color:#d04437} at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color}
> {color:#d04437} at java.lang.reflect.Method.invoke(Method.java:498){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color}
> {color:#d04437} at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74){color}
> {color:#d04437}[ERROR] 
> testRealUserAuthorizationSuccess(org.apache.hadoop.security.TestDoAsEffectiveUser)
>  Time elapsed: 4.002 s <<< ERROR!{color}
> {color:#d04437}java.lang.Exception: test timed out after 4000 
> milliseconds{color}
> {color:#d04437} at java.net.Inet4AddressImpl.getHostByAddr(Native 
> Method){color}
> {color:#d04437} at 
> java.net.InetAddress$2.getHostByAddr(InetAddress.java:932){color}
> {color:#d04437} at 
> java.net.InetAddress.getHostFromNameService(InetAddress.java:617){color}
> {color:#d04437} at 
> java.net.InetAddress.getCanonicalHostName(InetAddress.java:588){color}
> {color:#d04437} at 
> org.apache.hadoop.security.TestDoAsEffectiveUser.configureSuperUserIPAddresses(TestDoAsEffectiveUser.java:103){color}
> {color:#d04437} at 
> org.apache.hadoop.security.TestDoAsEffectiveUser.testRealUserAuthorizationSuccess(TestDoAsEffectiveUser.java:218){color}
> {color:#d04437} at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method){color}
> {color:#d04437} at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color}
> {color:#d04437} at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color}
> {color:#d04437} at java.lang.reflect.Method.invoke(Method.java:498){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color}
> {color:#d04437} at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74){color}
> {color:#d04437}[INFO]{color}
> {color:#d04437}[INFO] Results:{color}
> {color:#d04437}[INFO]{color}
> {color:#d04437}[ERROR] Errors:{color}
> {color:#d04437}[ERROR] 
> 
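For context, a minimal sketch of the call that is blocking in the traces above (class and method names taken from the stack trace); on a Windows host with no reverse-DNS record, this lookup alone can exceed the 4000 ms JUnit timeout:

{code:java}
import java.net.InetAddress;
import java.net.UnknownHostException;

public class ReverseDnsSketch {
  public static void main(String[] args) throws UnknownHostException {
    InetAddress addr = InetAddress.getLocalHost();
    // getCanonicalHostName() performs a blocking reverse lookup via
    // Inet4AddressImpl.getHostByAddr(), the frame in which both tests
    // time out above.
    System.out.println(addr.getCanonicalHostName());
  }
}
{code}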

[jira] [Assigned] (HADOOP-15467) TestDoAsEffectiveUser#testRealUserSetup,TestDoAsEffectiveUser#testRealUserAuthorizationSuccess time out on Windows

2018-05-14 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HADOOP-15467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri reassigned HADOOP-15467:


Assignee: (was: Anbang Hu)
 Key: HADOOP-15467  (was: HDFS-13549)
 Project: Hadoop Common  (was: Hadoop HDFS)

> TestDoAsEffectiveUser#testRealUserSetup,TestDoAsEffectiveUser#testRealUserAuthorizationSuccess
>  time out on Windows
> --
>
> Key: HADOOP-15467
> URL: https://issues.apache.org/jira/browse/HADOOP-15467
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Anbang Hu
>Priority: Minor
> Attachments: HDFS-13549.000.patch
>
>
> {color:#d04437}[INFO] Running 
> org.apache.hadoop.security.TestDoAsEffectiveUser{color}
> {color:#d04437}[ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time 
> elapsed: 8.307 s <<< FAILURE! - in 
> org.apache.hadoop.security.TestDoAsEffectiveUser{color}
> {color:#d04437}[ERROR] 
> testRealUserSetup(org.apache.hadoop.security.TestDoAsEffectiveUser) Time 
> elapsed: 4.107 s <<< ERROR!{color}
> {color:#d04437}java.lang.Exception: test timed out after 4000 
> milliseconds{color}
> {color:#d04437} at java.net.Inet4AddressImpl.getHostByAddr(Native 
> Method){color}
> {color:#d04437} at 
> java.net.InetAddress$2.getHostByAddr(InetAddress.java:932){color}
> {color:#d04437} at 
> java.net.InetAddress.getHostFromNameService(InetAddress.java:617){color}
> {color:#d04437} at 
> java.net.InetAddress.getCanonicalHostName(InetAddress.java:588){color}
> {color:#d04437} at 
> org.apache.hadoop.security.TestDoAsEffectiveUser.configureSuperUserIPAddresses(TestDoAsEffectiveUser.java:103){color}
> {color:#d04437} at 
> org.apache.hadoop.security.TestDoAsEffectiveUser.testRealUserSetup(TestDoAsEffectiveUser.java:188){color}
> {color:#d04437} at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method){color}
> {color:#d04437} at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color}
> {color:#d04437} at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color}
> {color:#d04437} at java.lang.reflect.Method.invoke(Method.java:498){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color}
> {color:#d04437} at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74){color}
> {color:#d04437}[ERROR] 
> testRealUserAuthorizationSuccess(org.apache.hadoop.security.TestDoAsEffectiveUser)
>  Time elapsed: 4.002 s <<< ERROR!{color}
> {color:#d04437}java.lang.Exception: test timed out after 4000 
> milliseconds{color}
> {color:#d04437} at java.net.Inet4AddressImpl.getHostByAddr(Native 
> Method){color}
> {color:#d04437} at 
> java.net.InetAddress$2.getHostByAddr(InetAddress.java:932){color}
> {color:#d04437} at 
> java.net.InetAddress.getHostFromNameService(InetAddress.java:617){color}
> {color:#d04437} at 
> java.net.InetAddress.getCanonicalHostName(InetAddress.java:588){color}
> {color:#d04437} at 
> org.apache.hadoop.security.TestDoAsEffectiveUser.configureSuperUserIPAddresses(TestDoAsEffectiveUser.java:103){color}
> {color:#d04437} at 
> org.apache.hadoop.security.TestDoAsEffectiveUser.testRealUserAuthorizationSuccess(TestDoAsEffectiveUser.java:218){color}
> {color:#d04437} at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method){color}
> {color:#d04437} at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color}
> {color:#d04437} at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color}
> {color:#d04437} at java.lang.reflect.Method.invoke(Method.java:498){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color}
> {color:#d04437} at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74){color}
> {color:#d04437}[INFO]{color}
> {color:#d04437}[INFO] Results:{color}
> {color:#d04437}[INFO]{color}
> {color:#d04437}[ERROR] Errors:{color}
> 

[jira] [Commented] (HADOOP-15465) Use native java code for symlinks

2018-05-14 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HADOOP-15465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16475087#comment-16475087
 ] 

Íñigo Goiri commented on HADOOP-15465:
--

I think [^HADOOP-15465.v0.patch] is the right way to implement the symlink approach.
My concerns are:
* How to deprecate the winutils.exe part.
* How to add a unit test for this (on both Windows and Linux).

> Use native java code for symlinks
> -
>
> Key: HADOOP-15465
> URL: https://issues.apache.org/jira/browse/HADOOP-15465
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: HADOOP-15465.v0.patch
>
>
> Hadoop uses the shell to create symbolic links. Now that Hadoop relies on 
> Java 7+, we can deprecate all the shell code and rely on the Java APIs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15465) Use native java code for symlinks

2018-05-14 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HADOOP-15465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HADOOP-15465:
-
Status: Patch Available  (was: Open)

> Use native java code for symlinks
> -
>
> Key: HADOOP-15465
> URL: https://issues.apache.org/jira/browse/HADOOP-15465
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: HADOOP-15465.v0.patch
>
>
> Hadoop uses the shell to create symbolic links. Now that Hadoop relies on 
> Java 7+, we can deprecate all the shell code and rely on the Java APIs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15445) TestCryptoAdminCLI test failure when upgrading to JDK8 patch 171.

2018-05-14 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16475037#comment-16475037
 ] 

Wei-Chiu Chuang edited comment on HADOOP-15445 at 5/14/18 11:30 PM:


Hi [~ehiggs], looks like [~gabor.bota]'s got a patch ready for review for the 
same issue. 


was (Author: jojochuang):
Hi [~lmc...@apache.org], looks like [~gabor.bota]'s got a patch ready for 
review for the same issue. 

> TestCryptoAdminCLI test failure when upgrading to JDK8 patch 171.
> -
>
> Key: HADOOP-15445
> URL: https://issues.apache.org/jira/browse/HADOOP-15445
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ewan Higgs
>Priority: Major
>
> JDK8 patch 171 introduces a new feature:
> {quote}
> h3. New Features
> security-libs/javax.crypto: *[Enhanced KeyStore 
> Mechanisms|http://www.oracle.com/technetwork/java/javase/8u171-relnotes-430.html#JDK-8189997]*
> A new security property named {{jceks.key.serialFilter}} has been introduced. 
> If this filter is configured, the JCEKS KeyStore uses it during the 
> deserialization of the encrypted Key object stored inside a SecretKeyEntry. 
> If it is not configured or if the filter result is UNDECIDED (for example, 
> none of the patterns match), then the filter configured by 
> {{jdk.serialFilter}} is consulted.
> If the system property {{jceks.key.serialFilter}} is also supplied, it 
> supersedes the security property value defined here.
> The filter pattern uses the same format as {{jdk.serialFilter}}. The default 
> pattern allows {{java.lang.Enum}}, {{java.security.KeyRep}}, 
> {{java.security.KeyRep$Type}}, and {{javax.crypto.spec.SecretKeySpec}} but 
> rejects all the others.
> Customers storing a SecretKey that does not serialize to the above types must 
> modify the filter to make the key extractable.
> {quote}
> We believe this causes some test failures:
>  
> {quote}java.io.IOException: Can't recover key for myKey from keystore 
> file:/home/jenkins/workspace/hadoopFullBuild/hadoop-hdfs-project/hadoop-hdfs/target/test/data/53406117-0132-401e-a67d-6672f1b6a14a/test.jks
>  at org.apache.hadoop.crypto.key.JavaKeyStoreProvider.getMetadata(JavaKeyStoreProvider.java:424)
>  at org.apache.hadoop.crypto.key.KeyProviderExtension.getMetadata(KeyProviderExtension.java:100)
>  at org.apache.hadoop.hdfs.server.namenode.FSDirEncryptionZoneOp.ensureKeyIsInitialized(FSDirEncryptionZoneOp.java:124)
>  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.createEncryptionZone(FSNamesystem.java:7227)
>  at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.createEncryptionZone(NameNodeRpcServer.java:2082)
>  at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.createEncryptionZone(ClientNamenodeProtocolServerSideTranslatorPB.java:1524)
>  at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>  at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
>  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1965)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
> Caused by: java.security.UnrecoverableKeyException: Rejected by the 
> jceks.key.serialFilter or jdk.serialFilter property
>  at com.sun.crypto.provider.KeyProtector.unseal(KeyProtector.java:352)
>  at com.sun.crypto.provider.JceKeyStore.engineGetKey(JceKeyStore.java:136)
>  at java.security.KeyStore.getKey(KeyStore.java:1023)
> {quote}
>  
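For illustration, one hedged workaround sketch: widen the serialization filter so the Hadoop key-metadata class is accepted before the keystore is read. The pattern format follows {{jdk.serialFilter}}; the exact class list below (in particular {{JavaKeyStoreProvider$KeyMetadata}}) is an assumption, not the project's agreed fix:

{code:java}
public class SerialFilterWorkaround {
  public static void widenFilter() {
    // Must run before the JCEKS keystore is first read; the system
    // property supersedes the security property of the same name.
    System.setProperty("jceks.key.serialFilter",
        "java.lang.Enum;java.security.KeyRep;java.security.KeyRep$Type;"
            + "javax.crypto.spec.SecretKeySpec;"
            + "org.apache.hadoop.crypto.key.JavaKeyStoreProvider$KeyMetadata;"
            + "!*");
  }
}
{code}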



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15445) TestCryptoAdminCLI test failure when upgrading to JDK8 patch 171.

2018-05-14 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16475037#comment-16475037
 ] 

Wei-Chiu Chuang commented on HADOOP-15445:
--

Hi [~lmc...@apache.org], looks like [~gabor.bota]'s got a patch ready for 
review for the same issue. 

> TestCryptoAdminCLI test failure when upgrading to JDK8 patch 171.
> -
>
> Key: HADOOP-15445
> URL: https://issues.apache.org/jira/browse/HADOOP-15445
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ewan Higgs
>Priority: Major
>
> JDK8 patch 171 introduces a new feature:
> {quote}
> h3. New Features
> security-libs/javax.crypto: *[Enhanced KeyStore 
> Mechanisms|http://www.oracle.com/technetwork/java/javase/8u171-relnotes-430.html#JDK-8189997]*
> A new security property named {{jceks.key.serialFilter}} has been introduced. 
> If this filter is configured, the JCEKS KeyStore uses it during the 
> deserialization of the encrypted Key object stored inside a SecretKeyEntry. 
> If it is not configured or if the filter result is UNDECIDED (for example, 
> none of the patterns match), then the filter configured by 
> {{jdk.serialFilter}} is consulted.
> If the system property {{jceks.key.serialFilter}} is also supplied, it 
> supersedes the security property value defined here.
> The filter pattern uses the same format as {{jdk.serialFilter}}. The default 
> pattern allows {{java.lang.Enum}}, {{java.security.KeyRep}}, 
> {{java.security.KeyRep$Type}}, and {{javax.crypto.spec.SecretKeySpec}} but 
> rejects all the others.
> Customers storing a SecretKey that does not serialize to the above types must 
> modify the filter to make the key extractable.
> {quote}
> We believe this causes some test failures:
>  
> {quote}{{{color:#33}java.io.IOException: Can't recover key for myKey from 
> keystore 
> file:/{color}{color:#33}home/{color}{color:#33}jenkins/{color}{color:#33}workspace/{color}{color:#33}hadoopFullBuild/{color}{color:#33}hadoop-hdfs-project/{color}{color:#33}hadoop-hdfs/{color}{color:#33}target/{color}{color:#33}test/{color}{color:#33}data/{color}{color:#33}53406117-0132-401e-a67d-6672f1b6a14a/{color}{color:#33}test.jks
>  at 
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider.getMetadata(JavaKeyStoreProvider.java:424)
>  at 
> org.apache.hadoop.crypto.key.KeyProviderExtension.getMetadata(KeyProviderExtension.java:100)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSDirEncryptionZoneOp.ensureKeyIsInitialized(FSDirEncryptionZoneOp.java:124)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.createEncryptionZone(FSNamesystem.java:7227)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.createEncryptionZone(NameNodeRpcServer.java:2082)
>  at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.createEncryptionZone(ClientNamenodeProtocolServerSideTranslatorPB.java:1524)
>  at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991) at 
> org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869) at 
> org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:422) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1965)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675) Caused by: 
> java.security.UnrecoverableKeyException: Rejected by the 
> jceks.key.serialFilter or jdk.serialFilter property at 
> com.sun.crypto.provider.KeyProtector.unseal(KeyProtector.java:352) at 
> com.sun.crypto.provider.JceKeyStore.engineGetKey(JceKeyStore.java:136) at 
> java.security.KeyStore.getKey(KeyStore.java:1023)}}
> {quote}
>  
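For anyone reproducing this locally, a minimal workaround sketch. The 
Hadoop-specific entry in the filter pattern is an educated guess based on the 
stack trace above, not a confirmed fix:

{code:java}
// Widen the JDK 8u171 deserialization filter before the JCEKS keystore is
// first read. The JavaKeyStoreProvider$KeyMetadata entry is an assumption
// derived from the stack trace above; the rest is the documented default.
System.setProperty("jceks.key.serialFilter",
    "java.lang.Enum;java.security.KeyRep;java.security.KeyRep$Type;"
    + "javax.crypto.spec.SecretKeySpec;"
    + "org.apache.hadoop.crypto.key.JavaKeyStoreProvider$KeyMetadata;!*");
{code}

The same pattern can also be passed on the JVM command line with 
{{-Djceks.key.serialFilter=...}}, which (per the release note above) supersedes 
the security property.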



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15456) create base image for running secure ozone cluster

2018-05-14 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16475005#comment-16475005
 ] 

Ajay Kumar edited comment on HADOOP-15456 at 5/14/18 11:19 PM:
---

[~elek] thanks for reviewing this. My initial thought in having a separate image 
for ozone security was to remove any dependency on the hadoop-runner image. It 
would allow us to modify the ozone image more freely if required, but I am open 
to merging this with the hadoop-runner branch for the time being and forking it 
later if required.
{quote}As I see it, the only non-compatible change between the existing 
apache/hadoop-runner and your base image is that you removed the 'USER hadoop'. 
Is there any reason for that?
{quote}
The datanode needs to be started as the root user. Since it is for testing 
purposes only, I think it is OK to run with the default user without doing sudo.

{quote}There is some commented-out code in starter.sh (e.g. the keystore 
download). If we don't need the wire encryption yet, we can simply remove those 
lines. There are also other disabled lines (sleep, volume permission fix). I am 
just wondering if they are intentional.{quote}
Will remove them. 
{quote}You have a loop to wait for the KDC server. I really like it, as it makes 
it safer to start the kerberized containers. Just two notes: IMHO the loop 
should be executed only if KERBEROS SERVER is set, and you can add the word 
'KDC' to the printout in the else case to make it easier to understand that we 
are waiting for the KDC...
{quote}
Done.
{quote}If it will be a shared runner image for hadoop/hdds/hdfs/yarn, the 
readme should be adjusted a little.
{quote}
I think it is better to have separate images for hadoop and hdds, but if we 
choose to have one, I can update the readme.


was (Author: ajayydv):
[~elek] thanks for reviewing this. My initial thoughts to have separate image 
of ozone security was to remove any dependency on hadoop-runner image. It will 
allow us to modify ozone image if required more freely but i am open to merging 
this with hadoop-runner branch for time being and fork it later if required.

{quote}As I see the only non compatible change between the existing 
apache/hadoop-runner and your base image is that you removed the 'USER hadoop'. 
Is there any reason for that?{quote}
{quote}Datanode needs to be started with root user. since it is for testing 
purpose only i think its ok to run with default user without doing sudo.
There are some commented out code in the starter.sh. (eg. keystore download). 
If we don't need the wire encryptiom yet, we can simply just remove those 
lines. Also there are other disabled lines (sleep, volume permission fix). I am 
just wondering if they ara intentional{quote}
Will remove them. 
{quote}You have a loop to wait for the KDC server. I really like it as it makes 
it more safe to start the kerberized containers. Just two note: The loop should 
be executed IMHO only if KERBEROS SERVER is set. + You can add the 'KDC' word 
to the print out in the else case to make it easier to understand that we are 
waiting for the KDC...{quote}
done
{quote}If it will be a shared runner image for both hadoop/hdds/hdfs/yarn, the 
readme should be adjusted a little.{quote}
I think its better to have separate image for hadoop and hdds but if we choose 
to have one i can update readme.

> create base image for running secure ozone cluster
> --
>
> Key: HADOOP-15456
> URL: https://issues.apache.org/jira/browse/HADOOP-15456
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HADOOP-15456-docker-hadoop-runner.001.patch, 
> secure-ozone.tar
>
>
> Create docker image to run secure ozone cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15465) Use native java code for symlinks

2018-05-14 Thread Giovanni Matteo Fumarola (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16475032#comment-16475032
 ] 

Giovanni Matteo Fumarola commented on HADOOP-15465:
---

Attached [^HADOOP-15465.v0.patch].

A few comments about this patch:
 * The current unit tests analyze {{FileUtil#symLink}} as a black box. They run 
successfully on both Linux and Windows.
 * The code runs without any problems on Windows and seems to work on Linux as 
well.
 * {{Files.createSymbolicLink}} can throw {{UnsupportedOperationException}} and 
{{FileAlreadyExistsException}}. The previous code did not handle those 
scenarios.
 * One of the deprecated constructors in {{MiniYarnCluster}} uses 
{{Shell#getSymlinkCommand}}. We should remove it.

 

Thoughts?
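For reference, a minimal sketch (not the attached patch) of what the java.nio 
path looks like with those two failure modes handled:

{code:java}
import java.io.IOException;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Paths;

// Minimal sketch, not the attached patch: create the link with java.nio and
// surface the failure modes the shell-based implementation never reported.
public static int symLink(String target, String linkname) throws IOException {
  try {
    Files.createSymbolicLink(Paths.get(linkname), Paths.get(target));
    return 0;
  } catch (FileAlreadyExistsException e) {
    return 1; // mirror the non-zero exit code of the old shell command
  } catch (UnsupportedOperationException e) {
    throw new IOException("Symlinks are not supported on this platform", e);
  }
}
{code}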

> Use native java code for symlinks
> -
>
> Key: HADOOP-15465
> URL: https://issues.apache.org/jira/browse/HADOOP-15465
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: HADOOP-15465.v0.patch
>
>
> Hadoop uses the shell to create symbolic links. Now that Hadoop relies on 
> Java 7+, we can deprecate all the shell code and rely on the Java APIs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15466) Correct units in adl.http.timeout

2018-05-14 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-15466:
---
Status: Patch Available  (was: Open)

> Correct units in adl.http.timeout
> -
>
> Key: HADOOP-15466
> URL: https://issues.apache.org/jira/browse/HADOOP-15466
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Major
> Attachments: HADOOP-15466.001.patch
>
>
> Comment in core-default.xml says seconds, but according to the SDK docs it's 
> getting interpreted as milliseconds 
> ([https://github.com/Azure/azure-data-lake-store-java/blob/master/src/main/java/com/microsoft/azure/datalake/store/ADLStoreOptions.java#L139-L144]).
>  Pinging [~ASikaria] for any additional comment.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15466) Correct units in adl.http.timeout

2018-05-14 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-15466:
---
Description: Comment in core-default.xml says seconds, but according to the 
SDK docs it's getting interpreted as milliseconds 
([https://github.com/Azure/azure-data-lake-store-java/blob/master/src/main/java/com/microsoft/azure/datalake/store/ADLStoreOptions.java#L139-L144]).
 Pinging [~ASikaria] to double check I'm not missing anything.  (was: Comment 
in core-default.xml says seconds, but according to the SDK docs it's getting 
interpreted as milliseconds 
([https://github.com/Azure/azure-data-lake-store-java/blob/master/src/main/java/com/microsoft/azure/datalake/store/ADLStoreOptions.java#L139-L144]).
 Pinging [~ASikaria] for any additional comment.)

> Correct units in adl.http.timeout
> -
>
> Key: HADOOP-15466
> URL: https://issues.apache.org/jira/browse/HADOOP-15466
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Major
> Attachments: HADOOP-15466.001.patch
>
>
> Comment in core-default.xml says seconds, but according to the SDK docs it's 
> getting interpreted as milliseconds 
> ([https://github.com/Azure/azure-data-lake-store-java/blob/master/src/main/java/com/microsoft/azure/datalake/store/ADLStoreOptions.java#L139-L144]).
>  Pinging [~ASikaria] to double check I'm not missing anything.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15466) Correct units in adl.http.timeout

2018-05-14 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-15466:
---
Attachment: HADOOP-15466.001.patch

> Correct units in adl.http.timeout
> -
>
> Key: HADOOP-15466
> URL: https://issues.apache.org/jira/browse/HADOOP-15466
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Major
> Attachments: HADOOP-15466.001.patch
>
>
> Comment in core-default.xml says seconds, but according to the SDK docs it's 
> getting interpreted as milliseconds 
> ([https://github.com/Azure/azure-data-lake-store-java/blob/master/src/main/java/com/microsoft/azure/datalake/store/ADLStoreOptions.java#L139-L144]).
>  Pinging [~ASikaria] for any additional comment.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15466) Correct units in adl.http.timeout

2018-05-14 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16475018#comment-16475018
 ] 

Sean Mackrory commented on HADOOP-15466:


Another thing that just occurred to me is that the key starts directly with 
{{adl.}} and not {{fs.adl.}}. That could make sense, since we're really just 
piping it through to the ADL SDK and it's not affecting the filesystem 
implementation per se, but I just thought I'd check whether this is something 
we should fix now rather than live with forever. [~ste...@apache.org] may know 
the historical precedent there better?
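On the units themselves, one possible direction (an assumption, not the 
attached patch) is to read the property with the stock 
{{Configuration.getTimeDuration}} API so the units are explicit:

{code:java}
// Plain numbers are taken as milliseconds, and suffixed values such as "60s"
// also work. The -1 default (meaning "leave the SDK default alone") is made
// up for illustration; conf is an org.apache.hadoop.conf.Configuration.
long timeoutMs = conf.getTimeDuration("adl.http.timeout", -1L,
    TimeUnit.MILLISECONDS);
{code}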

> Correct units in adl.http.timeout
> -
>
> Key: HADOOP-15466
> URL: https://issues.apache.org/jira/browse/HADOOP-15466
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Major
>
> Comment in core-default.xml says seconds, but according to the SDK docs it's 
> getting interpreted as milliseconds 
> ([https://github.com/Azure/azure-data-lake-store-java/blob/master/src/main/java/com/microsoft/azure/datalake/store/ADLStoreOptions.java#L139-L144]).
>  Pinging [~ASikaria] for any additional comment.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15466) Correct units in adl.http.timeout

2018-05-14 Thread Sean Mackrory (JIRA)
Sean Mackrory created HADOOP-15466:
--

 Summary: Correct units in adl.http.timeout
 Key: HADOOP-15466
 URL: https://issues.apache.org/jira/browse/HADOOP-15466
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/adl
Reporter: Sean Mackrory
Assignee: Sean Mackrory


Comment in core-default.xml says seconds, but according to the SDK docs it's 
getting interpreted as milliseconds 
([https://github.com/Azure/azure-data-lake-store-java/blob/master/src/main/java/com/microsoft/azure/datalake/store/ADLStoreOptions.java#L139-L144]).
 Pinging [~ASikaria] for any additional comment.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15465) Use native java code for symlinks

2018-05-14 Thread Giovanni Matteo Fumarola (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated HADOOP-15465:
--
Attachment: HADOOP-15465.v0.patch

> Use native java code for symlinks
> -
>
> Key: HADOOP-15465
> URL: https://issues.apache.org/jira/browse/HADOOP-15465
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: HADOOP-15465.v0.patch
>
>
> Hadoop uses the shell to create symbolic links. Now that Hadoop relies on 
> Java 7+, we can deprecate all the shell code and rely on the Java APIs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15456) create base image for running secure ozone cluster

2018-05-14 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16475005#comment-16475005
 ] 

Ajay Kumar commented on HADOOP-15456:
-

[~elek] thanks for reviewing this. My initial thought in having a separate image 
for ozone security was to remove any dependency on the hadoop-runner image. It 
would allow us to modify the ozone image more freely if required, but I am open 
to merging this with the hadoop-runner branch for the time being and forking it 
later if required.

{quote}As I see it, the only non-compatible change between the existing 
apache/hadoop-runner and your base image is that you removed the 'USER hadoop'. 
Is there any reason for that?{quote}
{quote}The datanode needs to be started as the root user. Since it is for 
testing purposes only, I think it is OK to run with the default user without 
doing sudo.
There is some commented-out code in starter.sh (e.g. the keystore download). 
If we don't need the wire encryption yet, we can simply remove those lines. 
There are also other disabled lines (sleep, volume permission fix). I am just 
wondering if they are intentional.{quote}
Will remove them. 
{quote}You have a loop to wait for the KDC server. I really like it, as it makes 
it safer to start the kerberized containers. Just two notes: IMHO the loop 
should be executed only if KERBEROS SERVER is set, and you can add the word 
'KDC' to the printout in the else case to make it easier to understand that we 
are waiting for the KDC...{quote}
Done.
{quote}If it will be a shared runner image for hadoop/hdds/hdfs/yarn, the 
readme should be adjusted a little.{quote}
I think it is better to have separate images for hadoop and hdds, but if we 
choose to have one, I can update the readme.

> create base image for running secure ozone cluster
> --
>
> Key: HADOOP-15456
> URL: https://issues.apache.org/jira/browse/HADOOP-15456
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HADOOP-15456-docker-hadoop-runner.001.patch, 
> secure-ozone.tar
>
>
> Create docker image to run secure ozone cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15465) Use native java code for symlinks

2018-05-14 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HADOOP-15465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16475003#comment-16475003
 ] 

Íñigo Goiri commented on HADOOP-15465:
--

[~aw] brought up this in HADOOP-15462.
As he mentioned, this can simplify the code substantially.
I'm not sure how to deprecate the old way but we can go through that in this 
JIRA.
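For the deprecation itself, the conventional route would be to keep the shell 
helper for a release but mark it deprecated so callers migrate; a sketch (the 
method body shown is illustrative, not the exact current code):

{code:java}
/**
 * @deprecated symlinks are now created directly with
 * {@link java.nio.file.Files#createSymbolicLink}; this shell fallback will be
 * removed in a later release.
 */
@Deprecated
public static String[] getSymlinkCommand(String target, String link) {
  // illustrative body: the real method builds a similar shell command
  return WINDOWS ? new String[] { WINUTILS, "symlink", link, target }
      : new String[] { "ln", "-s", target, link };
}
{code}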

> Use native java code for symlinks
> -
>
> Key: HADOOP-15465
> URL: https://issues.apache.org/jira/browse/HADOOP-15465
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
>
> Hadoop uses the shell to create symbolic links. Now that Hadoop relies on 
> Java 7+, we can deprecate all the shell code and rely on the Java APIs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15465) Use native java code for symlinks

2018-05-14 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HADOOP-15465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16475003#comment-16475003
 ] 

Íñigo Goiri edited comment on HADOOP-15465 at 5/14/18 10:58 PM:


[~aw] brought this up in HADOOP-15462.
As he mentioned, this can simplify the code substantially.
I'm not sure how to deprecate the old way but we can go through that in this 
JIRA.


was (Author: elgoiri):
[~aw] brought up this in HADOOP-15462.
As he mentioned, this can simplify the code substantially.
I'm not sure how to deprecate the old way but we can go through that in this 
JIRA.

> Use native java code for symlinks
> -
>
> Key: HADOOP-15465
> URL: https://issues.apache.org/jira/browse/HADOOP-15465
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
>
> Hadoop uses the shell to create symbolic links. Now that Hadoop relies on 
> Java 7+, we can deprecate all the shell code and rely on the Java APIs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15465) Use native java code for symlinks

2018-05-14 Thread Giovanni Matteo Fumarola (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola reassigned HADOOP-15465:
-

Assignee: Giovanni Matteo Fumarola

> Use native java code for symlinks
> -
>
> Key: HADOOP-15465
> URL: https://issues.apache.org/jira/browse/HADOOP-15465
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
>
> Hadoop uses the shell to create symbolic links. Now that Hadoop relies on 
> Java 7+, we can deprecate all the shell code and rely on the Java APIs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15465) Use native java code for symlinks

2018-05-14 Thread JIRA
Íñigo Goiri created HADOOP-15465:


 Summary: Use native java code for symlinks
 Key: HADOOP-15465
 URL: https://issues.apache.org/jira/browse/HADOOP-15465
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Íñigo Goiri


Hadoop uses the shell to create symbolic links. Now that Hadoop relies on Java 
7+, we can deprecate all the shell code and rely on the Java APIs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15461) Improvements over the Hadoop support with Windows

2018-05-14 Thread Giovanni Matteo Fumarola (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated HADOOP-15461:
--
Description: 
This Jira tracks the effort to improve the interaction between Hadoop and 
Windows Server.
 * Move away from an external process (winutils.exe) for native code:
 ** Replace by native Java APIs (e.g., symlinks);
 ** Replace by something like JNI or so;
 * Fix the build system to fully leverage cmake instead of msbuild;
 * Possible other improvements;
 * Memory and handle leaks.

  was:
This Jira tracks the effort for
 * Move away from an external process (winutils.exe) for native code:
 ** Replace by native Java APIs (e.g., symlinks);
 ** Replace by something like JNI or so;
 * Fix the build system to fully leverage cmake instead of msbuild;
 * Possible other improvements;
 * Memory and handle leaks.


> Improvements over the Hadoop support with Windows
> -
>
> Key: HADOOP-15461
> URL: https://issues.apache.org/jira/browse/HADOOP-15461
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Giovanni Matteo Fumarola
>Priority: Major
>
> This Jira tracks the effort to improve the interaction between Hadoop and 
> Windows Server.
>  * Move away from an external process (winutils.exe) for native code:
>  ** Replace by native Java APIs (e.g., symlinks);
>  ** Replace by something like JNI or so;
>  * Fix the build system to fully leverage cmake instead of msbuild;
>  * Possible other improvements;
>  * Memory and handle leaks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15461) Improvements over the Hadoop support with Windows

2018-05-14 Thread Giovanni Matteo Fumarola (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated HADOOP-15461:
--
Description: 
This Jira tracks the effort for
 * Move away from an external process (winutils.exe) for native code:
 ** Replace by native Java APIs (e.g., symlinks);
 ** Replace by something like JNI or so;
 * Fix the build system to fully leverage cmake instead of msbuild;
 * Possible other improvements;
 * Memory and handle leaks.

  was:
This Jira tracks the effort for
 * Move away from an external processes (winutils.exe) for native code:
 ** Replace by native Java APIs (e.g., symlinks)
 ** Replace by something like JNI or so
 * Fix the build system to fully leverage cmake instead of msbuild


> Improvements over the Hadoop support with Windows
> -
>
> Key: HADOOP-15461
> URL: https://issues.apache.org/jira/browse/HADOOP-15461
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Giovanni Matteo Fumarola
>Priority: Major
>
> This Jira tracks the effort for
>  * Move away from an external process (winutils.exe) for native code:
>  ** Replace by native Java APIs (e.g., symlinks);
>  ** Replace by something like JNI or so;
>  * Fix the build system to fully leverage cmake instead of msbuild;
>  * Possible other improvements;
>  * Memory and handle leaks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15461) Improvements over the Hadoop support with Windows

2018-05-14 Thread Giovanni Matteo Fumarola (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated HADOOP-15461:
--
Description: 
This Jira tracks the effort for
 * Move away from an external processese (winutils.exe) for native code:
 ** Replace by native Java APIs (e.g., symlinks)
 ** Replace by something like JNI or so
 * Fix the build system to fully leverage cmake instead of msbuild

  was:
[Description from ]
I did a quick investigation of the performance of WinUtils in YARN. In average 
NM calls 4.76 times per second and 65.51 per container.

 
| |Requests|Requests/sec|Requests/min|Requests/container|
|*Sum [WinUtils]*|*135354*|*4.761*|*286.160*|*65.51*|
|[WinUtils] Execute -help|4148|0.145|8.769|2.007|
|[WinUtils] Execute -ls|2842|0.0999|6.008|1.37|
|[WinUtils] Execute -systeminfo|9153|0.321|19.35|4.43|
|[WinUtils] Execute -symlink|115096|4.048|243.33|57.37|
|[WinUtils] Execute -task isAlive|4115|0.144|8.699|2.05|

 Interval: 7 hours, 53 minutes and 48 seconds

Each execution of WinUtils does around *140 IO ops*, of which 130 are DDL ops.

This means *666.58* IO ops/second due to WinUtils.

We should start considering to remove WinUtils from Hadoop and creating a JNI 
interface.


> Improvements over the Hadoop support with Windows
> -
>
> Key: HADOOP-15461
> URL: https://issues.apache.org/jira/browse/HADOOP-15461
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Giovanni Matteo Fumarola
>Priority: Major
>
> This Jira tracks the effort for
>  * Move away from an external processese (winutils.exe) for native code:
>  ** Replace by native Java APIs (e.g., symlinks)
>  ** Replace by something like JNI or so
>  * Fix the build system to fully leverage cmake instead of msbuild



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15461) Improvements over the Hadoop support with Windows

2018-05-14 Thread Giovanni Matteo Fumarola (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated HADOOP-15461:
--
Description: 
This Jira tracks the effort for
 * Move away from an external processes (winutils.exe) for native code:
 ** Replace by native Java APIs (e.g., symlinks)
 ** Replace by something like JNI or so
 * Fix the build system to fully leverage cmake instead of msbuild

  was:
This Jira tracks the effort for
 * Move away from an external processese (winutils.exe) for native code:
 ** Replace by native Java APIs (e.g., symlinks)
 ** Replace by something like JNI or so
 * Fix the build system to fully leverage cmake instead of msbuild


> Improvements over the Hadoop support with Windows
> -
>
> Key: HADOOP-15461
> URL: https://issues.apache.org/jira/browse/HADOOP-15461
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Giovanni Matteo Fumarola
>Priority: Major
>
> This Jira tracks the effort for
>  * Move away from an external processes (winutils.exe) for native code:
>  ** Replace by native Java APIs (e.g., symlinks)
>  ** Replace by something like JNI or so
>  * Fix the build system to fully leverage cmake instead of msbuild



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15461) Improvements over the Hadoop support with Windows

2018-05-14 Thread Giovanni Matteo Fumarola (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated HADOOP-15461:
--
Description: 
[Description from ]
I did a quick investigation of the performance of WinUtils in YARN. In average 
NM calls 4.76 times per second and 65.51 per container.

 
| |Requests|Requests/sec|Requests/min|Requests/container|
|*Sum [WinUtils]*|*135354*|*4.761*|*286.160*|*65.51*|
|[WinUtils] Execute -help|4148|0.145|8.769|2.007|
|[WinUtils] Execute -ls|2842|0.0999|6.008|1.37|
|[WinUtils] Execute -systeminfo|9153|0.321|19.35|4.43|
|[WinUtils] Execute -symlink|115096|4.048|243.33|57.37|
|[WinUtils] Execute -task isAlive|4115|0.144|8.699|2.05|

 Interval: 7 hours, 53 minutes and 48 seconds

Each execution of WinUtils does around *140 IO ops*, of which 130 are DDL ops.

This means *666.58* IO ops/second due to WinUtils.

We should start considering to remove WinUtils from Hadoop and creating a JNI 
interface.

> Improvements over the Hadoop support with Windows
> -
>
> Key: HADOOP-15461
> URL: https://issues.apache.org/jira/browse/HADOOP-15461
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Giovanni Matteo Fumarola
>Priority: Major
>
> [Description from ]
> I did a quick investigation of the performance of WinUtils in YARN. In 
> average NM calls 4.76 times per second and 65.51 per container.
>  
> | |Requests|Requests/sec|Requests/min|Requests/container|
> |*Sum [WinUtils]*|*135354*|*4.761*|*286.160*|*65.51*|
> |[WinUtils] Execute -help|4148|0.145|8.769|2.007|
> |[WinUtils] Execute -ls|2842|0.0999|6.008|1.37|
> |[WinUtils] Execute -systeminfo|9153|0.321|19.35|4.43|
> |[WinUtils] Execute -symlink|115096|4.048|243.33|57.37|
> |[WinUtils] Execute -task isAlive|4115|0.144|8.699|2.05|
>  Interval: 7 hours, 53 minutes and 48 seconds
> Each execution of WinUtils does around *140 IO ops*, of which 130 are DDL ops.
> This means *666.58* IO ops/second due to WinUtils.
> We should start considering to remove WinUtils from Hadoop and creating a JNI 
> interface.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15462) Create a JNI interface to interact with Windows

2018-05-14 Thread Giovanni Matteo Fumarola (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16474620#comment-16474620
 ] 

Giovanni Matteo Fumarola commented on HADOOP-15462:
---

Thanks [~aw] and [~elgoiri] for the feedback. 
I created a new umbrella under the Hadoop project and moved the existing ones 
under it.

We will create new subtasks to:
 * replace winutils with native Java APIs (e.g., symlinks);
 * fix the build system to fully leverage cmake instead of msbuild;
 * make other possible improvements.

I will start working on replacing the winutils calls with Java API calls and 
validating the performance improvements.

 

> Create a JNI interface to interact with Windows
> ---
>
> Key: HADOOP-15462
> URL: https://issues.apache.org/jira/browse/HADOOP-15462
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: WinUtils-Functions.pdf, WinUtils.CSV
>
>
> I did a quick investigation of the performance of WinUtils in YARN. On 
> average the NM calls it 4.76 times per second and 65.51 times per container.
>  
> | |Requests|Requests/sec|Requests/min|Requests/container|
> |*Sum [WinUtils]*|*135354*|*4.761*|*286.160*|*65.51*|
> |[WinUtils] Execute -help|4148|0.145|8.769|2.007|
> |[WinUtils] Execute -ls|2842|0.0999|6.008|1.37|
> |[WinUtils] Execute -systeminfo|9153|0.321|19.35|4.43|
> |[WinUtils] Execute -symlink|115096|4.048|243.33|57.37|
> |[WinUtils] Execute -task isAlive|4115|0.144|8.699|2.05|
>  Interval: 7 hours, 53 minutes and 48 seconds
> Each execution of WinUtils does around *140 IO ops*, of which 130 are DDL ops.
> This means *666.58* IO ops/second due to WinUtils.
> We should start considering removing WinUtils from Hadoop and creating a JNI 
> interface.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15461) Improvements over the Hadoop support with Windows

2018-05-14 Thread Giovanni Matteo Fumarola (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated HADOOP-15461:
--
Issue Type: New Feature  (was: Bug)

> Improvements over the Hadoop support with Windows
> -
>
> Key: HADOOP-15461
> URL: https://issues.apache.org/jira/browse/HADOOP-15461
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Giovanni Matteo Fumarola
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15463) [Java] Create a JNI interface to interact with Windows

2018-05-14 Thread Giovanni Matteo Fumarola (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated HADOOP-15463:
--
Parent: HADOOP-15461  (was: HADOOP-15462)

> [Java] Create a JNI interface to interact with Windows
> --
>
> Key: HADOOP-15463
> URL: https://issues.apache.org/jira/browse/HADOOP-15463
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
>
> This JIRA tracks the design/implementation of the Java layer for the JNI 
> interface to interact with Windows.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15464) [C] Create a JNI interface to interact with Windows

2018-05-14 Thread Giovanni Matteo Fumarola (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated HADOOP-15464:
--
Parent: HADOOP-15461  (was: HADOOP-15462)

> [C] Create a JNI interface to interact with Windows
> ---
>
> Key: HADOOP-15464
> URL: https://issues.apache.org/jira/browse/HADOOP-15464
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15462) Create a JNI interface to interact with Windows

2018-05-14 Thread Giovanni Matteo Fumarola (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated HADOOP-15462:
--
Parent: HADOOP-15461
Issue Type: Sub-task  (was: New Feature)

> Create a JNI interface to interact with Windows
> ---
>
> Key: HADOOP-15462
> URL: https://issues.apache.org/jira/browse/HADOOP-15462
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: WinUtils-Functions.pdf, WinUtils.CSV
>
>
> I did a quick investigation of the performance of WinUtils in YARN. On 
> average the NM calls it 4.76 times per second and 65.51 times per container.
>  
> | |Requests|Requests/sec|Requests/min|Requests/container|
> |*Sum [WinUtils]*|*135354*|*4.761*|*286.160*|*65.51*|
> |[WinUtils] Execute -help|4148|0.145|8.769|2.007|
> |[WinUtils] Execute -ls|2842|0.0999|6.008|1.37|
> |[WinUtils] Execute -systeminfo|9153|0.321|19.35|4.43|
> |[WinUtils] Execute -symlink|115096|4.048|243.33|57.37|
> |[WinUtils] Execute -task isAlive|4115|0.144|8.699|2.05|
>  Interval: 7 hours, 53 minutes and 48 seconds
> Each execution of WinUtils does around *140 IO ops*, of which 130 are DDL ops.
> This means *666.58* IO ops/second due to WinUtils.
> We should start considering removing WinUtils from Hadoop and creating a JNI 
> interface.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Moved] (HADOOP-15462) Create a JNI interface to interact with Windows

2018-05-14 Thread Giovanni Matteo Fumarola (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola moved YARN-8275 to HADOOP-15462:
-

Component/s: (was: nodemanager)
Key: HADOOP-15462  (was: YARN-8275)
Project: Hadoop Common  (was: Hadoop YARN)

> Create a JNI interface to interact with Windows
> ---
>
> Key: HADOOP-15462
> URL: https://issues.apache.org/jira/browse/HADOOP-15462
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: WinUtils-Functions.pdf, WinUtils.CSV
>
>
> I did a quick investigation of the performance of WinUtils in YARN. On 
> average the NM calls it 4.76 times per second and 65.51 times per container.
>  
> | |Requests|Requests/sec|Requests/min|Requests/container|
> |*Sum [WinUtils]*|*135354*|*4.761*|*286.160*|*65.51*|
> |[WinUtils] Execute -help|4148|0.145|8.769|2.007|
> |[WinUtils] Execute -ls|2842|0.0999|6.008|1.37|
> |[WinUtils] Execute -systeminfo|9153|0.321|19.35|4.43|
> |[WinUtils] Execute -symlink|115096|4.048|243.33|57.37|
> |[WinUtils] Execute -task isAlive|4115|0.144|8.699|2.05|
>  Interval: 7 hours, 53 minutes and 48 seconds
> Each execution of WinUtils does around *140 IO ops*, of which 130 are DDL ops.
> This means *666.58* IO ops/second due to WinUtils.
> We should start considering removing WinUtils from Hadoop and creating a JNI 
> interface.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Moved] (HADOOP-15463) [Java] Create a JNI interface to interact with Windows

2018-05-14 Thread Giovanni Matteo Fumarola (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola moved YARN-8281 to HADOOP-15463:
-

Key: HADOOP-15463  (was: YARN-8281)
Project: Hadoop Common  (was: Hadoop YARN)

> [Java] Create a JNI interface to interact with Windows
> --
>
> Key: HADOOP-15463
> URL: https://issues.apache.org/jira/browse/HADOOP-15463
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
>
> This JIRA tracks the design/implementation of the Java layer for the JNI 
> interface to interact with Windows.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Moved] (HADOOP-15464) [C] Create a JNI interface to interact with Windows

2018-05-14 Thread Giovanni Matteo Fumarola (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola moved YARN-8282 to HADOOP-15464:
-

Key: HADOOP-15464  (was: YARN-8282)
Project: Hadoop Common  (was: Hadoop YARN)

> [C] Create a JNI interface to interact with Windows
> ---
>
> Key: HADOOP-15464
> URL: https://issues.apache.org/jira/browse/HADOOP-15464
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15461) Improvements over the Hadoop support with Windows

2018-05-14 Thread Giovanni Matteo Fumarola (JIRA)
Giovanni Matteo Fumarola created HADOOP-15461:
-

 Summary: Improvements over the Hadoop support with Windows
 Key: HADOOP-15461
 URL: https://issues.apache.org/jira/browse/HADOOP-15461
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Giovanni Matteo Fumarola






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10768) Optimize Hadoop RPC encryption performance

2018-05-14 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16474423#comment-16474423
 ] 

Daryn Sharp commented on HADOOP-10768:
--

High-performance SSL-protected RPC is becoming an important requirement for us. 
I'm tinkering with swapping in Netty, since it supposedly has a highly 
performant SSL implementation. Java's SSL is terrible, and crypto streams 
aren't performing as well as expected.

> Optimize Hadoop RPC encryption performance
> --
>
> Key: HADOOP-10768
> URL: https://issues.apache.org/jira/browse/HADOOP-10768
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: performance, security
>Affects Versions: 3.0.0-alpha1
>Reporter: Yi Liu
>Assignee: Dapeng Sun
>Priority: Major
> Attachments: HADOOP-10768.001.patch, HADOOP-10768.002.patch, 
> HADOOP-10768.003.patch, HADOOP-10768.004.patch, HADOOP-10768.005.patch, 
> HADOOP-10768.006.patch, HADOOP-10768.007.patch, HADOOP-10768.008.patch, 
> HADOOP-10768.009.patch, Optimize Hadoop RPC encryption performance.pdf
>
>
> Hadoop RPC encryption is enabled by setting {{hadoop.rpc.protection}} to 
> "privacy". It utilizes the SASL {{GSSAPI}} and {{DIGEST-MD5}} mechanisms for 
> secure authentication and data protection. {{GSSAPI}} supports using AES, 
> but without AES-NI support by default, so the encryption is slow and 
> becomes a bottleneck.
> After discussing with [~atm], [~tucu00] and [~umamaheswararao], we can do the 
> same optimization as in HDFS-6606: use AES-NI for more than a *20x* speedup.
> On the other hand, RPC messages are small but frequent, and there may be 
> lots of RPC calls in one connection, so we need to set up a benchmark to see 
> the real improvement and then make a trade-off. 
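For context, enabling it today is pure configuration; a minimal sketch, with 
the property name taken from the description above:

{code:java}
// Enable RPC privacy (authentication, integrity and encryption). In practice
// this is normally set in core-site.xml on both clients and servers.
Configuration conf = new Configuration();
conf.set("hadoop.rpc.protection", "privacy");
{code}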



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15459) KMSACLs will fail for other optype if acls is defined for one optype.

2018-05-14 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16473728#comment-16473728
 ] 

Rushabh S Shah edited comment on HADOOP-15459 at 5/14/18 3:42 PM:
--

Below is the relevant piece of code, which IMO is buggy.
{code:title=Bar.java|borderStyle=solid}
  private boolean checkKeyAccess(String keyName, UserGroupInformation ugi,
      KeyOpType opType) {
    Map<KeyOpType, AccessControlList> keyAcl = keyAcls.get(keyName);
    if (keyAcl == null) { // should be: if (keyAcl == null || keyAcl.get(opType) == null)
      // If no key acl is defined for this key, check to see if
      // there are key defaults configured for this operation
      LOG.debug("Key: {} has no ACLs defined, using defaults.", keyName);
      keyAcl = defaultKeyAcls;
    }
    boolean access = checkKeyAccess(keyAcl, ugi, opType);
    ...
{code}
Instead of keying just on the key name, it should consider the opType as well.
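Applied to the snippet above, the suggested fix would look like this (a 
sketch, not a patch):

{code:java}
Map<KeyOpType, AccessControlList> keyAcl = keyAcls.get(keyName);
if (keyAcl == null || keyAcl.get(opType) == null) {
  // fall back to the defaults when there is no per-key ACL for this opType
  LOG.debug("Key: {} has no ACL for {}, using defaults.", keyName, opType);
  keyAcl = defaultKeyAcls;
}
boolean access = checkKeyAccess(keyAcl, ugi, opType);
{code}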


was (Author: shahrs87):
Below is the relevant piece of code which IMO is buggy.
{code:title=Bar.java|borderStyle=solid}
  private boolean checkKeyAccess(String keyName, UserGroupInformation ugi,
  KeyOpType opType) {
Map<KeyOpType, AccessControlList> keyAcl = keyAcls.get(keyName);
if (keyAcl == null) {// This should be  if(keyAcl == null || 
keyAcl.get(opType) == null)
  // If No key acl defined for this key, check to see if
  // there are key defaults configured for this operation
  LOG.debug("Key: {} has no ACLs defined, using defaults.", keyName);
  keyAcl = defaultKeyAcls;
}
boolean access = checkKeyAccess(keyAcl, ugi, opType);
...
{code}
Instead of key'ing just on keyname, it should also consider opType also.

> KMSACLs will fail for other optype if acls is defined for one optype.
> -
>
> Key: HADOOP-15459
> URL: https://issues.apache.org/jira/browse/HADOOP-15459
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.3
>Reporter: Rushabh S Shah
>Assignee: Ranith Sardar
>Priority: Critical
>
> Assume subset of kms-acls xml file.
> {noformat}
> <property>
>   <name>default.key.acl.DECRYPT_EEK</name>
>   <value></value>
>   <description>
>     default ACL for DECRYPT_EEK operations for all key acls that are not
>     explicitly defined.
>   </description>
> </property>
> 
> <property>
>   <name>key.acl.key1.DECRYPT_EEK</name>
>   <value>user1</value>
> </property>
> 
> <property>
>   <name>default.key.acl.READ</name>
>   <value>*</value>
>   <description>
>     default ACL for READ operations for all key acls that are not
>     explicitly defined.
>   </description>
> </property>
> 
> <property>
>   <name>whitelist.key.acl.READ</name>
>   <value>hdfs</value>
>   <description>
>     Whitelist ACL for READ operations for all keys.
>   </description>
> </property>
> {noformat}
> For key {{key1}}, we restricted the {{DECRYPT_EEK}} operation to {{user1}} only.
>  For other {{READ}} operations (like getMetadata), by default I still want 
> everyone to access all keys via {{default.key.acl.READ}}.
>  But it doesn't allow anyone to access {{key1}} for any other READ operation.
>  As a result, if the admin restricts access for one opType, then (s)he has to 
> define access for all other opTypes as well, which is not desirable.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10768) Optimize Hadoop RPC encryption performance

2018-05-14 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16474362#comment-16474362
 ] 

Wei-Chiu Chuang commented on HADOOP-10768:
--

Rev08 passed our internal smoke tests after HBASE-20572 was applied. I'll 
spend some time today reviewing it. Thanks.

> Optimize Hadoop RPC encryption performance
> --
>
> Key: HADOOP-10768
> URL: https://issues.apache.org/jira/browse/HADOOP-10768
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: performance, security
>Affects Versions: 3.0.0-alpha1
>Reporter: Yi Liu
>Assignee: Dapeng Sun
>Priority: Major
> Attachments: HADOOP-10768.001.patch, HADOOP-10768.002.patch, 
> HADOOP-10768.003.patch, HADOOP-10768.004.patch, HADOOP-10768.005.patch, 
> HADOOP-10768.006.patch, HADOOP-10768.007.patch, HADOOP-10768.008.patch, 
> HADOOP-10768.009.patch, Optimize Hadoop RPC encryption performance.pdf
>
>
> Hadoop RPC encryption is enabled by setting {{hadoop.rpc.protection}} to 
> "privacy". It utilizes the SASL {{GSSAPI}} and {{DIGEST-MD5}} mechanisms for 
> secure authentication and data protection. {{GSSAPI}} supports using AES, 
> but without AES-NI support by default, so the encryption is slow and 
> becomes a bottleneck.
> After discussing with [~atm], [~tucu00] and [~umamaheswararao], we can do the 
> same optimization as in HDFS-6606: use AES-NI for more than a *20x* speedup.
> On the other hand, RPC messages are small but frequent, and there may be 
> lots of RPC calls in one connection, so we need to set up a benchmark to see 
> the real improvement and then make a trade-off. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15460) S3A FS to add "s3a:no-existence-checks" to the builder file creation option set

2018-05-14 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-15460:
---

 Summary: S3A FS to add  "s3a:no-existence-checks" to the builder 
file creation option set
 Key: HADOOP-15460
 URL: https://issues.apache.org/jira/browse/HADOOP-15460
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.1.0
Reporter: Steve Loughran


As promised to [~StephanEwen]: add an s3a-specific option to the builder API 
for creating files so that all existence checks are skipped.

This
# eliminates a few hundred milliseconds
# avoids any caching of negative HEAD/GET responses in the S3 load balancers.

Callers will be expected to know what they are doing.

FWIW, we are doing some PUT calls in the committer which bypass this stuff, for 
the same reason. If you've just created a directory, you know there's nothing 
underneath, so no need to check.
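A hypothetical usage sketch; the option key comes from this issue's summary and 
is not in any released API:

{code:java}
// Skip the existence probes when creating the file. opt() rather than must()
// so the request degrades gracefully on filesystems that do not understand
// the key.
FSDataOutputStream out = fs.createFile(path)
    .opt("s3a:no-existence-checks", true)
    .build();
{code}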



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15392) S3A Metrics in S3AInstrumentation Cause Memory Leaks in HBase Export

2018-05-14 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16474063#comment-16474063
 ] 

Steve Loughran commented on HADOOP-15392:
-

I'm thinking about whether we can reproduce this enough to say "block for 
3.1.1", or indeed, how to address it. WASB also sets up metrics in FS create, 
lots of people have been using that for a long time, and this hasn't surfaced.

I'm going to change this to a major, but we need more reproductions to say this 
is anything more than a special case. Still curious as to why it's surfacing at 
all. If it's only in CDH, maybe the HBase code there isn't closing the fs 
instances?
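To make the suspected pattern concrete, a hand-wavy sketch (the names are 
assumed from the description below, not taken from the actual source):

{code:java}
// A static, JVM-lifetime metrics system ends up holding one registered source
// per filesystem instance. If instances are created repeatedly and never
// closed (so nothing ever unregisters), the singleton keeps them all alive.
private static MetricsSystem metricsSystem = null;

S3AInstrumentation(URI fsUri) {
  metricsSystem.register("S3AMetrics-" + fsUri, "S3A metrics", this);
}
{code}

If that is what's happening, an FS that is closed properly would unregister its 
source, which would explain why long-running, non-closing users are the ones 
who see the growth.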

> S3A Metrics in S3AInstrumentation Cause Memory Leaks in HBase Export
> 
>
> Key: HADOOP-15392
> URL: https://issues.apache.org/jira/browse/HADOOP-15392
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Voyta
>Priority: Blocker
>
> While using the HBase S3A Export Snapshot utility we started to experience 
> memory leaks in the process after a version upgrade.
> By running code analysis we traced the cause to revision 
> 6555af81a26b0b72ec3bee7034e01f5bd84b1564, which added the following static 
> reference (singleton):
> private static MetricsSystem metricsSystem = null;
> When an application uses an S3AFileSystem instance that is not closed 
> immediately, metrics are accumulated in this instance and memory grows 
> without any limit.
>  
> Expectation:
>  * It would be nice to have an option to disable metrics completely, as this 
> is not needed for the Export Snapshot utility.
>  * Usage of S3AFileSystem should not contain any static object that can grow 
> indefinitely.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15392) S3A Metrics in S3AInstrumentation Cause Memory Leaks in HBase Export

2018-05-14 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15392:

Priority: Major  (was: Blocker)

> S3A Metrics in S3AInstrumentation Cause Memory Leaks in HBase Export
> 
>
> Key: HADOOP-15392
> URL: https://issues.apache.org/jira/browse/HADOOP-15392
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Voyta
>Priority: Major
>
> While using the HBase S3A Export Snapshot utility we started to experience 
> memory leaks in the process after a version upgrade.
> By running code analysis we traced the cause to revision 
> 6555af81a26b0b72ec3bee7034e01f5bd84b1564, which added the following static 
> reference (singleton):
> private static MetricsSystem metricsSystem = null;
> When an application uses an S3AFileSystem instance that is not closed 
> immediately, metrics are accumulated in this instance and memory grows 
> without any limit.
>  
> Expectation:
>  * It would be nice to have an option to disable metrics completely, as this 
> is not needed for the Export Snapshot utility.
>  * Usage of S3AFileSystem should not contain any static object that can grow 
> indefinitely.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15459) KMSACLs will fail for other optype if acls is defined for one optype.

2018-05-14 Thread Ranith Sardar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar reassigned HADOOP-15459:
--

Assignee: Ranith Sardar

> KMSACLs will fail for other optype if acls is defined for one optype.
> -
>
> Key: HADOOP-15459
> URL: https://issues.apache.org/jira/browse/HADOOP-15459
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.3
>Reporter: Rushabh S Shah
>Assignee: Ranith Sardar
>Priority: Critical
>
> Assume subset of kms-acls xml file.
> {noformat}
> <property>
>   <name>default.key.acl.DECRYPT_EEK</name>
>   <value></value>
>   <description>
>     default ACL for DECRYPT_EEK operations for all key acls that are not
>     explicitly defined.
>   </description>
> </property>
> 
> <property>
>   <name>key.acl.key1.DECRYPT_EEK</name>
>   <value>user1</value>
> </property>
> 
> <property>
>   <name>default.key.acl.READ</name>
>   <value>*</value>
>   <description>
>     default ACL for READ operations for all key acls that are not
>     explicitly defined.
>   </description>
> </property>
> 
> <property>
>   <name>whitelist.key.acl.READ</name>
>   <value>hdfs</value>
>   <description>
>     Whitelist ACL for READ operations for all keys.
>   </description>
> </property>
> {noformat}
> For key {{key1}}, we restricted the {{DECRYPT_EEK}} operation to {{user1}} only.
>  For other {{READ}} operations (like getMetadata), by default I still want 
> everyone to access all keys via {{default.key.acl.READ}}.
>  But it doesn't allow anyone to access {{key1}} for any other READ operation.
>  As a result, if the admin restricts access for one opType, then (s)he has to 
> define access for all other opTypes as well, which is not desirable.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15457) Add Security-Related HTTP Response Header in Yarn WEBUIs.

2018-05-14 Thread Szilard Nemeth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16473887#comment-16473887
 ] 

Szilard Nemeth commented on HADOOP-15457:
-

Hi [~kanwaljeets]! 

Thanks for the updated patch.

LGTM (+1 non-binding)

> Add Security-Related HTTP Response Header in Yarn WEBUIs.
> -
>
> Key: HADOOP-15457
> URL: https://issues.apache.org/jira/browse/HADOOP-15457
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kanwaljeet Sachdev
>Assignee: Kanwaljeet Sachdev
>Priority: Major
>  Labels: security
> Attachments: HADOOP-15457.001.patch, HADOOP-15457.002.patch, 
> YARN-8198.001.patch, YARN-8198.002.patch, YARN-8198.003.patch, 
> YARN-8198.004.patch, YARN-8198.005.patch
>
>
> As of today, the YARN web UI lacks certain security-related HTTP response 
> headers. We are planning to add a few default ones and also add support for 
> additional headers to be added via XML config. We plan to make the two below 
> defaults:
>  * X-XSS-Protection: 1; mode=block
>  * X-Content-Type-Options: nosniff
>  
> Support for headers via config properties in core-site.xml will be along the 
> lines below:
> {code:java}
> <property>
>   <name>hadoop.http.header.Strict_Transport_Security</name>
>   <value>valHSTSFromXML</value>
> </property>
> {code}
>  
> A regex matcher will lift these properties and add them to the response 
> headers when Jetty prepares the response.
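A sketch of that matcher (an assumption about the patch, not a quote from it); 
the underscore-to-dash mapping follows the {{Strict_Transport_Security}} 
example above:

{code:java}
// Copy every hadoop.http.header.* property onto the response, turning '_' in
// the key suffix into '-' in the header name. conf iterates as
// Map.Entry<String, String>; response is the outgoing HttpServletResponse.
Pattern p = Pattern.compile("^hadoop\\.http\\.header\\.(.+)$");
for (Map.Entry<String, String> entry : conf) {
  Matcher m = p.matcher(entry.getKey());
  if (m.matches()) {
    response.setHeader(m.group(1).replace('_', '-'), entry.getValue());
  }
}
{code}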



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15457) Add Security-Related HTTP Response Header in Yarn WEBUIs.

2018-05-14 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16473851#comment-16473851
 ] 

genericqa commented on HADOOP-15457:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 39s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 28m 
24s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 50s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 41 new + 91 unchanged - 3 fixed = 132 total (was 94) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  8s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
31s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}124m 41s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HADOOP-15457 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12923235/HADOOP-15457.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux cde4a6335453 3.13.0-137-generic #186-Ubuntu SMP Mon Dec 4 
19:09:19 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 32cbd0c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14621/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14621/testReport/ |
| Max. process+thread count | 1355 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14621/console |
| Powered by | Apache Yetus |



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org

[jira] [Commented] (HADOOP-10783) apache-commons-lang.jar 2.6 does not support FreeBSD -upgrade to 3.x needed

2018-05-14 Thread Takanobu Asanuma (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16473810#comment-16473810
 ] 

Takanobu Asanuma commented on HADOOP-10783:
---

There was a new import of commons-lang 2.6 that I didn't remove in the 3rd 
patch. I uploaded the 4th patch to address it.

> apache-commons-lang.jar 2.6 does not support FreeBSD -upgrade to 3.x needed
> ---
>
> Key: HADOOP-10783
> URL: https://issues.apache.org/jira/browse/HADOOP-10783
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Dmitry Sivachenko
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HADOOP-10783.2.patch, HADOOP-10783.3.patch, 
> HADOOP-10783.4.patch, commons-lang3_1.patch
>
>
> Hadoop-2.4.1 ships with apache-commons.jar version 2.6.
> It does not support FreeBSD (IS_OS_UNIX returns False).
> This is fixed in recent versions of apache-commons.jar.
> Please update apache-commons.jar to a recent version so it correctly 
> recognizes FreeBSD as a UNIX-like system.
> Right now I get this in the datanode's log:
> 2014-07-04 11:58:10,459 DEBUG 
> org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry: Disabling 
> ShortCircuitRegistry
> java.io.IOException: The OS is not UNIX.
> at 
> org.apache.hadoop.io.nativeio.SharedFileDescriptorFactory.create(SharedFileDescriptorFactory.java:77)
> at 
> org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry.<init>(ShortCircuitRegistry.java:169)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initDataXceiver(DataNode.java:583)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:771)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:289)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1931)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1818)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1865)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2041)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2065)
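> For illustration, the failing check boils down to something like the sketch 
> below (a simplification, not the actual SharedFileDescriptorFactory code): 
> commons-lang 2.6's IS_OS_UNIX does not enumerate FreeBSD, while the 
> commons-lang3 version does.
> {code:java}
> import java.io.IOException;
> import org.apache.commons.lang3.SystemUtils;
>
> class OsCheckSketch {
>   static void assertUnix() throws IOException {
>     // With org.apache.commons.lang.SystemUtils (2.6) IS_OS_UNIX is false
>     // on FreeBSD; with org.apache.commons.lang3.SystemUtils it is true.
>     if (!SystemUtils.IS_OS_UNIX) {
>       throw new IOException("The OS is not UNIX.");
>     }
>   }
> }
> {code}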



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-10783) apache-commons-lang.jar 2.6 does not support FreeBSD -upgrade to 3.x needed

2018-05-14 Thread Takanobu Asanuma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HADOOP-10783:
--
Attachment: HADOOP-10783.4.patch

> apache-commons-lang.jar 2.6 does not support FreeBSD -upgrade to 3.x needed
> ---
>
> Key: HADOOP-10783
> URL: https://issues.apache.org/jira/browse/HADOOP-10783
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Dmitry Sivachenko
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HADOOP-10783.2.patch, HADOOP-10783.3.patch, 
> HADOOP-10783.4.patch, commons-lang3_1.patch
>
>
> Hadoop-2.4.1 ships with apache-commons.jar version 2.6.
> It does not support FreeBSD (IS_OS_UNIX returns False).
> This is fixed in recent versions of apache-commons.jar.
> Please update apache-commons.jar to a recent version so it correctly 
> recognizes FreeBSD as a UNIX-like system.
> Right now I get this in the datanode's log:
> 2014-07-04 11:58:10,459 DEBUG 
> org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry: Disabling 
> ShortCircuitRegistry
> java.io.IOException: The OS is not UNIX.
> at 
> org.apache.hadoop.io.nativeio.SharedFileDescriptorFactory.create(SharedFileDescriptorFactory.java:77)
> at 
> org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry.<init>(ShortCircuitRegistry.java:169)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initDataXceiver(DataNode.java:583)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:771)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:289)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1931)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1818)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1865)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2041)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2065)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10783) apache-commons-lang.jar 2.6 does not support FreeBSD -upgrade to 3.x needed

2018-05-14 Thread Takanobu Asanuma (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16473794#comment-16473794
 ] 

Takanobu Asanuma commented on HADOOP-10783:
---

Somehow Jenkins didn't run properly last time.

I rebased the patch on trunk.

> apache-commons-lang.jar 2.6 does not support FreeBSD -upgrade to 3.x needed
> ---
>
> Key: HADOOP-10783
> URL: https://issues.apache.org/jira/browse/HADOOP-10783
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Dmitry Sivachenko
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HADOOP-10783.2.patch, HADOOP-10783.3.patch, 
> commons-lang3_1.patch
>
>
> Hadoop-2.4.1 ships with apache-commons.jar version 2.6.
> It does not support FreeBSD (IS_OS_UNIX returns False).
> This is fixed in recent versions of apache-commons.jar.
> Please update apache-commons.jar to a recent version so it correctly 
> recognizes FreeBSD as a UNIX-like system.
> Right now I get this in the datanode's log:
> 2014-07-04 11:58:10,459 DEBUG 
> org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry: Disabling 
> ShortCircuitRegistry
> java.io.IOException: The OS is not UNIX.
> at 
> org.apache.hadoop.io.nativeio.SharedFileDescriptorFactory.create(SharedFileDescriptorFactory.java:77)
> at 
> org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry.<init>(ShortCircuitRegistry.java:169)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initDataXceiver(DataNode.java:583)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:771)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:289)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1931)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1818)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1865)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2041)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2065)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-10783) apache-commons-lang.jar 2.6 does not support FreeBSD -upgrade to 3.x needed

2018-05-14 Thread Takanobu Asanuma (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16473794#comment-16473794
 ] 

Takanobu Asanuma edited comment on HADOOP-10783 at 5/14/18 6:06 AM:


Somehow Jenkins didn't run properly last time.

I rebased the patch on trunk.


was (Author: tasanuma0829):
Somehow Jenkins didn't works fine the last time.

I rebased a patch for trunk.

> apache-commons-lang.jar 2.6 does not support FreeBSD -upgrade to 3.x needed
> ---
>
> Key: HADOOP-10783
> URL: https://issues.apache.org/jira/browse/HADOOP-10783
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Dmitry Sivachenko
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HADOOP-10783.2.patch, HADOOP-10783.3.patch, 
> commons-lang3_1.patch
>
>
> Hadoop-2.4.1 ships with apache-commons.jar version 2.6.
> It does not support FreeBSD (IS_OS_UNIX returns False).
> This is fixed in recent versions of apache-commons.jar.
> Please update apache-commons.jar to a recent version so it correctly 
> recognizes FreeBSD as a UNIX-like system.
> Right now I get this in the datanode's log:
> 2014-07-04 11:58:10,459 DEBUG 
> org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry: Disabling 
> ShortCircuitRegistry
> java.io.IOException: The OS is not UNIX.
> at 
> org.apache.hadoop.io.nativeio.SharedFileDescriptorFactory.create(SharedFileDescriptorFactory.java:77)
> at 
> org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry.<init>(ShortCircuitRegistry.java:169)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initDataXceiver(DataNode.java:583)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:771)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:289)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1931)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1818)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1865)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2041)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2065)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-10783) apache-commons-lang.jar 2.6 does not support FreeBSD -upgrade to 3.x needed

2018-05-14 Thread Takanobu Asanuma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HADOOP-10783:
--
Attachment: HADOOP-10783.3.patch

> apache-commons-lang.jar 2.6 does not support FreeBSD -upgrade to 3.x needed
> ---
>
> Key: HADOOP-10783
> URL: https://issues.apache.org/jira/browse/HADOOP-10783
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Dmitry Sivachenko
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HADOOP-10783.2.patch, HADOOP-10783.3.patch, 
> commons-lang3_1.patch
>
>
> Hadoop-2.4.1 ships with apache-commons.jar version 2.6.
> It does not support FreeBSD (IS_OS_UNIX returns False).
> This is fixed in recent versions of apache-commons.jar.
> Please update apache-commons.jar to a recent version so it correctly 
> recognizes FreeBSD as a UNIX-like system.
> Right now I get this in the datanode's log:
> 2014-07-04 11:58:10,459 DEBUG 
> org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry: Disabling 
> ShortCircuitRegistry
> java.io.IOException: The OS is not UNIX.
> at 
> org.apache.hadoop.io.nativeio.SharedFileDescriptorFactory.create(SharedFileDescriptorFactory.java:77)
> at 
> org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry.<init>(ShortCircuitRegistry.java:169)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initDataXceiver(DataNode.java:583)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:771)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:289)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1931)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1818)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1865)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2041)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2065)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org