[jira] [Commented] (HADOOP-15471) Hdfs recursive listing operation is very slow

2018-05-16 Thread Mukul Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478429#comment-16478429
 ] 

Mukul Kumar Singh commented on HADOOP-15471:


Thanks for the [^HDFS-13398.002.patch], [~ajaysachdev]. Some major comments on 
the patch:

1) The current recursion in Command#recursePath uses the depth variable to 
recurse down the tree. I feel we should synchronize and localize the 
modification of this variable (see the sketch below).
2) Apache Hadoop uses 2 spaces for indentation. Please follow the same coding 
guidelines in the patch.
3) Let's also add a unit test for this patch. We can add a unit test where a 
multi-level directory structure is traversed through both the current method 
and the new method in the patch, and compare the results to verify the 
validity of the patch.
4) Also, I feel that instead of a config variable "fs.threads", the number of 
threads should be made a command line argument, so that the user can control 
the number of threads for each invocation of the command.
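
For illustration, a minimal sketch (not the attached patch) of a 
ForkJoinPool-based recursive listing with a per-invocation thread count; the 
class and method names here are hypothetical:
{code:java}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical sketch, not the patch: each directory becomes a forked
// subtask, and the caller chooses the parallelism per invocation.
class ParallelLister {

  static List<FileStatus> listRecursive(FileSystem fs, Path root, int threads) {
    ForkJoinPool pool = new ForkJoinPool(threads);
    try {
      return pool.invoke(new ListTask(fs, root));
    } finally {
      pool.shutdown();
    }
  }

  private static class ListTask extends RecursiveTask<List<FileStatus>> {
    private final FileSystem fs;
    private final Path path;

    ListTask(FileSystem fs, Path path) {
      this.fs = fs;
      this.path = path;
    }

    @Override
    protected List<FileStatus> compute() {
      List<FileStatus> results = new ArrayList<>();
      List<ListTask> subTasks = new ArrayList<>();
      try {
        for (FileStatus status : fs.listStatus(path)) {
          results.add(status);
          if (status.isDirectory()) {
            ListTask task = new ListTask(fs, status.getPath());
            task.fork();  // descend into the subdirectory in parallel
            subTasks.add(task);
          }
        }
      } catch (IOException e) {
        throw new RuntimeException(e);
      }
      for (ListTask task : subTasks) {
        results.addAll(task.join());
      }
      return results;
    }
  }
}
{code}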

> Hdfs recursive listing operation is very slow
> -
>
> Key: HADOOP-15471
> URL: https://issues.apache.org/jira/browse/HADOOP-15471
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.7.1
> Environment: HCFS file system where HDP 2.6.1 is connected to ECS 
> (Object Store).
>Reporter: Ajay Sachdev
>Assignee: Ajay Sachdev
>Priority: Major
> Fix For: 2.7.1
>
> Attachments: HDFS-13398.001.patch, HDFS-13398.002.patch, 
> parallelfsPatch
>
>
> The hdfs dfs -ls -R command is sequential in nature and is very slow for an 
> HCFS system. We have seen around 6 minutes for a 40K directory/file 
> structure.
> The proposal is to use a multithreaded approach to speed up the recursive 
> list, du and count operations.
> We have tried a ForkJoinPool implementation to improve performance for the 
> recursive listing operation.
> [https://github.com/jasoncwik/hadoop-release/tree/parallel-fs-cli]
> commit id : 
> 82387c8cd76c2e2761bd7f651122f83d45ae8876
> Another implementation is to use the Java Executor Service to run the 
> listing operation in multiple threads in parallel. This has significantly 
> reduced the time from 6 minutes to 40 seconds.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15430) hadoop fs -mkdir -p path-ending-with-slash/ fails with s3guard

2018-05-16 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478404#comment-16478404
 ] 

Aaron Fabbri commented on HADOOP-15430:
---

Great work here. Test code is very nice.
{quote}+ // here the problem: the URI constructor doesn't strip trailing "/" 
chars
{quote}
Sigh. Unfortunate. 

 
{noformat}
 
+  @Override
+  public Path makeQualified(final Path path) {
+Path q = super.makeQualified(path);
+if (!q.isRoot()) {
+  String urlString = q.toUri().toString();
+  if (urlString.endsWith(Path.SEPARATOR)) {
+// this is a path which needs root stripping off to avoid
+// confusion, See HADOOP-15430
+LOG.debug("Stripping trailing '/' from {}", q);
+// deal with an empty "/" at the end by mapping to the parent and
+// creating a new path from it
{noformat}
Super minor nit: you could omit that last comment, I think; it seems like you 
changed your approach?
{noformat}
+q = new Path(urlString.substring(0, urlString.length() - 1));
+  }
+}
+if (!q.isRoot() && q.getName().isEmpty()) {
+  q = q.getParent();
+}
{noformat}
This part, I'm assuming, is for the trailing double-slash case, right?

Anyways +1, LGTM pending paying the checkstyle tax.
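
For reference, a JDK-only sketch of the behavior being worked around (nothing 
Hadoop-specific assumed): java.net.URI keeps a trailing "/", so the string has 
to be stripped manually, much like the patch does:
{code:java}
import java.net.URI;

public class TrailingSlashDemo {
  public static void main(String[] args) {
    // The URI constructor does not strip trailing "/" chars.
    URI u = URI.create("s3a://bucket/dir/");
    String s = u.toString();
    if (s.endsWith("/") && u.getPath().length() > 1) {
      // Strip the trailing separator, mirroring the patch's approach.
      s = s.substring(0, s.length() - 1);
    }
    System.out.println(s);  // s3a://bucket/dir
  }
}
{code}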

> hadoop fs -mkdir -p path-ending-with-slash/ fails with s3guard
> --
>
> Key: HADOOP-15430
> URL: https://issues.apache.org/jira/browse/HADOOP-15430
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-15430-001.patch, HADOOP-15430-002.patch, 
> HADOOP-15430-003.patch
>
>
> if you call {{hadoop fs -mkdir -p path/}} on the command line with a path 
> ending in "/", you get a DDB error "An AttributeValue may not contain an 
> empty string"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15465) Deprecate WinUtils#Symlinks by using native java code

2018-05-16 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478339#comment-16478339
 ] 

genericqa commented on HADOOP-15465:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 53s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
29s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
46s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 26m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 26m 
33s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 11s{color} | {color:orange} root: The patch generated 1 new + 154 unchanged 
- 4 fixed = 155 total (was 158) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 23s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
10s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
7s{color} | {color:green} hadoop-yarn-server-tests in the patch passed. {color} 
|
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}137m  2s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HADOOP-15465 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12923799/HADOOP-15465.v2.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  

[jira] [Commented] (HADOOP-15465) Deprecate WinUtils#Symlinks by using native java code

2018-05-16 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478335#comment-16478335
 ] 

genericqa commented on HADOOP-15465:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 30m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 49s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
32s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
46s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 28m 
31s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 16s{color} | {color:orange} root: The patch generated 1 new + 127 unchanged 
- 4 fixed = 128 total (was 131) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 41s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
11s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
13s{color} | {color:green} hadoop-yarn-server-tests in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}145m 46s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HADOOP-15465 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12923798/HADOOP-15465.v1.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  

[jira] [Updated] (HADOOP-15467) TestDoAsEffectiveUser#testRealUserSetup,TestDoAsEffectiveUser#testRealUserAuthorizationSuccess time out on Windows

2018-05-16 Thread Anbang Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anbang Hu updated HADOOP-15467:
---
Resolution: Won't Fix
Status: Resolved  (was: Patch Available)

This is a Windows machine setup issue. See 
[this|https://issues.apache.org/jira/browse/HDFS-13569?focusedCommentId=16478265=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16478265]
 for details. Marking it as Won't Fix.

> TestDoAsEffectiveUser#testRealUserSetup,TestDoAsEffectiveUser#testRealUserAuthorizationSuccess
>  time out on Windows
> --
>
> Key: HADOOP-15467
> URL: https://issues.apache.org/jira/browse/HADOOP-15467
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13549.000.patch
>
>
> {color:#d04437}[INFO] Running 
> org.apache.hadoop.security.TestDoAsEffectiveUser{color}
> {color:#d04437}[ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time 
> elapsed: 8.307 s <<< FAILURE! - in 
> org.apache.hadoop.security.TestDoAsEffectiveUser{color}
> {color:#d04437}[ERROR] 
> testRealUserSetup(org.apache.hadoop.security.TestDoAsEffectiveUser) Time 
> elapsed: 4.107 s <<< ERROR!{color}
> {color:#d04437}java.lang.Exception: test timed out after 4000 
> milliseconds{color}
> {color:#d04437} at java.net.Inet4AddressImpl.getHostByAddr(Native 
> Method){color}
> {color:#d04437} at 
> java.net.InetAddress$2.getHostByAddr(InetAddress.java:932){color}
> {color:#d04437} at 
> java.net.InetAddress.getHostFromNameService(InetAddress.java:617){color}
> {color:#d04437} at 
> java.net.InetAddress.getCanonicalHostName(InetAddress.java:588){color}
> {color:#d04437} at 
> org.apache.hadoop.security.TestDoAsEffectiveUser.configureSuperUserIPAddresses(TestDoAsEffectiveUser.java:103){color}
> {color:#d04437} at 
> org.apache.hadoop.security.TestDoAsEffectiveUser.testRealUserSetup(TestDoAsEffectiveUser.java:188){color}
> {color:#d04437} at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method){color}
> {color:#d04437} at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color}
> {color:#d04437} at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color}
> {color:#d04437} at java.lang.reflect.Method.invoke(Method.java:498){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color}
> {color:#d04437} at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74){color}
> {color:#d04437}[ERROR] 
> testRealUserAuthorizationSuccess(org.apache.hadoop.security.TestDoAsEffectiveUser)
>  Time elapsed: 4.002 s <<< ERROR!{color}
> {color:#d04437}java.lang.Exception: test timed out after 4000 
> milliseconds{color}
> {color:#d04437} at java.net.Inet4AddressImpl.getHostByAddr(Native 
> Method){color}
> {color:#d04437} at 
> java.net.InetAddress$2.getHostByAddr(InetAddress.java:932){color}
> {color:#d04437} at 
> java.net.InetAddress.getHostFromNameService(InetAddress.java:617){color}
> {color:#d04437} at 
> java.net.InetAddress.getCanonicalHostName(InetAddress.java:588){color}
> {color:#d04437} at 
> org.apache.hadoop.security.TestDoAsEffectiveUser.configureSuperUserIPAddresses(TestDoAsEffectiveUser.java:103){color}
> {color:#d04437} at 
> org.apache.hadoop.security.TestDoAsEffectiveUser.testRealUserAuthorizationSuccess(TestDoAsEffectiveUser.java:218){color}
> {color:#d04437} at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method){color}
> {color:#d04437} at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color}
> {color:#d04437} at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color}
> {color:#d04437} at java.lang.reflect.Method.invoke(Method.java:498){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color}
> {color:#d04437} at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17){color}
> {color:#d04437} at 
> 

[jira] [Commented] (HADOOP-15467) TestDoAsEffectiveUser#testRealUserSetup,TestDoAsEffectiveUser#testRealUserAuthorizationSuccess time out on Windows

2018-05-16 Thread Anbang Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478273#comment-16478273
 ] 

Anbang Hu commented on HADOOP-15467:


The root cause of these failures and the workaround on Windows are described 
[here|https://issues.apache.org/jira/browse/HDFS-13569?focusedCommentId=16478265=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16478265].

> TestDoAsEffectiveUser#testRealUserSetup,TestDoAsEffectiveUser#testRealUserAuthorizationSuccess
>  time out on Windows
> --
>
> Key: HADOOP-15467
> URL: https://issues.apache.org/jira/browse/HADOOP-15467
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13549.000.patch
>
>
> {color:#d04437}[INFO] Running 
> org.apache.hadoop.security.TestDoAsEffectiveUser{color}
> {color:#d04437}[ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time 
> elapsed: 8.307 s <<< FAILURE! - in 
> org.apache.hadoop.security.TestDoAsEffectiveUser{color}
> {color:#d04437}[ERROR] 
> testRealUserSetup(org.apache.hadoop.security.TestDoAsEffectiveUser) Time 
> elapsed: 4.107 s <<< ERROR!{color}
> {color:#d04437}java.lang.Exception: test timed out after 4000 
> milliseconds{color}
> {color:#d04437} at java.net.Inet4AddressImpl.getHostByAddr(Native 
> Method){color}
> {color:#d04437} at 
> java.net.InetAddress$2.getHostByAddr(InetAddress.java:932){color}
> {color:#d04437} at 
> java.net.InetAddress.getHostFromNameService(InetAddress.java:617){color}
> {color:#d04437} at 
> java.net.InetAddress.getCanonicalHostName(InetAddress.java:588){color}
> {color:#d04437} at 
> org.apache.hadoop.security.TestDoAsEffectiveUser.configureSuperUserIPAddresses(TestDoAsEffectiveUser.java:103){color}
> {color:#d04437} at 
> org.apache.hadoop.security.TestDoAsEffectiveUser.testRealUserSetup(TestDoAsEffectiveUser.java:188){color}
> {color:#d04437} at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method){color}
> {color:#d04437} at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color}
> {color:#d04437} at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color}
> {color:#d04437} at java.lang.reflect.Method.invoke(Method.java:498){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color}
> {color:#d04437} at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74){color}
> {color:#d04437}[ERROR] 
> testRealUserAuthorizationSuccess(org.apache.hadoop.security.TestDoAsEffectiveUser)
>  Time elapsed: 4.002 s <<< ERROR!{color}
> {color:#d04437}java.lang.Exception: test timed out after 4000 
> milliseconds{color}
> {color:#d04437} at java.net.Inet4AddressImpl.getHostByAddr(Native 
> Method){color}
> {color:#d04437} at 
> java.net.InetAddress$2.getHostByAddr(InetAddress.java:932){color}
> {color:#d04437} at 
> java.net.InetAddress.getHostFromNameService(InetAddress.java:617){color}
> {color:#d04437} at 
> java.net.InetAddress.getCanonicalHostName(InetAddress.java:588){color}
> {color:#d04437} at 
> org.apache.hadoop.security.TestDoAsEffectiveUser.configureSuperUserIPAddresses(TestDoAsEffectiveUser.java:103){color}
> {color:#d04437} at 
> org.apache.hadoop.security.TestDoAsEffectiveUser.testRealUserAuthorizationSuccess(TestDoAsEffectiveUser.java:218){color}
> {color:#d04437} at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method){color}
> {color:#d04437} at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color}
> {color:#d04437} at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color}
> {color:#d04437} at java.lang.reflect.Method.invoke(Method.java:498){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color}
> {color:#d04437} at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17){color}
> {color:#d04437} at 
> 

[jira] [Commented] (HADOOP-15459) KMSACLs will fail for other optype if acls is defined for one optype.

2018-05-16 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478260#comment-16478260
 ] 

genericqa commented on HADOOP-15459:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  7s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 27m 
34s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 25s{color} | {color:orange} hadoop-common-project/hadoop-kms: The patch 
generated 1 new + 105 unchanged - 0 fixed = 106 total (was 105) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 33s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m  
0s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}116m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HADOOP-15459 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12923794/HADOOP-15459.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 00dcb65da333 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / be53969 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14644/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-kms.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14644/testReport/ |
| Max. process+thread count | 342 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-kms U: 
hadoop-common-project/hadoop-kms |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14644/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   

[jira] [Commented] (HADOOP-15250) Split-DNS MultiHomed Server Network Cluster Network IPC Client Bind Addr Wrong

2018-05-16 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478248#comment-16478248
 ] 

genericqa commented on HADOOP-15250:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-3.1 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
20s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
43s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 21s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
49s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} branch-3.1 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 56s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 26s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 98m  3s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.security.token.delegation.TestZKDelegationTokenSecretManager |
|   | hadoop.crypto.key.TestKeyProviderFactory |
|   | hadoop.crypto.key.TestKeyShell |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HADOOP-15250 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12922782/HADOOP-15250-branch-3.1.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 7a0facb4dd1b 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-3.1 / 45dd200 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14643/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 

[jira] [Commented] (HADOOP-15465) Deprecate WinUtils#Symlinks by using native java code

2018-05-16 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478241#comment-16478241
 ] 

Íñigo Goiri commented on HADOOP-15465:
--

I tested [^HADOOP-15465.v2.patch] on Windows and there was a unit test 
breaking:
{{TestSymlinkLocalFSFileContext#testDanglingLinkFilePartQual}}
The "good" news is that this one breaks even without the patch.
The daily build seems to run it successfully, though: 
[report|https://builds.apache.org/job/hadoop-trunk-win/467/testReport/org.apache.hadoop.fs/TestSymlinkLocalFSFileSystem/testDanglingLinkFilePartQual/].
I'm not sure how to verify this.

> Deprecate WinUtils#Symlinks by using native java code
> -
>
> Key: HADOOP-15465
> URL: https://issues.apache.org/jira/browse/HADOOP-15465
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: HADOOP-15465.v0.patch, HADOOP-15465.v0.proto.patch, 
> HADOOP-15465.v1.patch, HADOOP-15465.v2.patch
>
>
> Hadoop uses the shell to create symbolic links. Now that Hadoop relies on 
> Java 7+, we can deprecate all the shell code and rely on the Java APIs.
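
For reference, a minimal sketch of the Java 7+ API the issue proposes relying 
on; the file names here are illustrative:
{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class SymlinkDemo {
  public static void main(String[] args) throws IOException {
    Path target = Paths.get("target.txt");
    Path link = Paths.get("link.txt");
    // Creates link.txt -> target.txt without shelling out to winutils.
    // On Windows this may require administrator rights or Developer Mode.
    Files.createSymbolicLink(link, target);
    System.out.println(Files.isSymbolicLink(link));  // true
  }
}
{code}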



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15457) Add Security-Related HTTP Response Header in WEBUIs.

2018-05-16 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478237#comment-16478237
 ] 

Robert Kanter commented on HADOOP-15457:


A few more comments, and then I think we're good to go:
# Now that {{xFrameParams}} is seeded by {{setHeaders}}, we can do a little 
more cleanup by having {{setHeaders}} simply return the {{Map}} instead of 
having the caller create the {{Map}} and {{setHeaders}} return it by 
reference.
# {{X_FRAME_ENABLED}} is no longer used and can be deleted.
# {{QuotingInputFilter#doFilter}} will be called on every request, so 
{{addHeaders}}, which iterates through a bunch of configs and does regex 
pattern matching, will also be called on every request. I think we should 
move that work to {{QuotingInputFilter#init}} and populate a map; we can then 
simply add the headers in {{QuotingInputFilter#doFilter}}, which will be 
faster (see the sketch after this list).
# In your tests, the first argument to {{assertEquals}} (and similar) is the 
expected value and the second argument is the actual value. The ones where 
you're using {{getHeaderField}} are therefore reversed. For example, it 
should be:
{code:java}
assertEquals(HttpServer2.X_XSS_PROTECTION.split(":")[1],
   conn.getHeaderField(HttpServer2.X_XSS_PROTECTION.split(":")[0]));
{code}
# 
{{System.out.println("printing:"+conn.getHeaderField(HttpServer2.X_XSS_PROTECTION));}}
 is unnecessary. The {{assert}} statements will print out the value if they 
fail.
# The {{assertNotEquals}} is redundant. The subsequent {{assertEquals}} 
handles it (and is stricter).
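
A minimal sketch of the init-time approach suggested in item 3, using only 
the standard servlet API; the class name and header values are illustrative, 
not the patch's code:
{code:java}
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

public class StaticHeadersFilter implements Filter {
  private final Map<String, String> headers = new HashMap<>();

  @Override
  public void init(FilterConfig conf) {
    // The expensive work (config iteration, regex matching) runs once here.
    headers.put("X-XSS-Protection", "1; mode=block");
    headers.put("X-Content-Type-Options", "nosniff");
  }

  @Override
  public void doFilter(ServletRequest req, ServletResponse res,
      FilterChain chain) throws IOException, ServletException {
    // Per-request work is just a cheap map traversal.
    headers.forEach(((HttpServletResponse) res)::setHeader);
    chain.doFilter(req, res);
  }

  @Override
  public void destroy() {
  }
}
{code}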

> Add Security-Related HTTP Response Header in WEBUIs.
> 
>
> Key: HADOOP-15457
> URL: https://issues.apache.org/jira/browse/HADOOP-15457
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kanwaljeet Sachdev
>Assignee: Kanwaljeet Sachdev
>Priority: Major
>  Labels: security
> Attachments: HADOOP-15457.001.patch, HADOOP-15457.002.patch, 
> YARN-8198.001.patch, YARN-8198.002.patch, YARN-8198.003.patch, 
> YARN-8198.004.patch, YARN-8198.005.patch
>
>
> As of today, the YARN web UI lacks certain security-related HTTP response 
> headers. We are planning to add a few default ones and also add support for 
> headers to be added via XML config. We plan to make the two below the 
> defaults:
>  * X-XSS-Protection: 1; mode=block
>  * X-Content-Type-Options: nosniff
>  
> Support for headers via config properties in core-site.xml will be along 
> the lines below:
> {code:java}
> <property>
>   <name>hadoop.http.header.Strict_Transport_Security</name>
>   <value>valHSTSFromXML</value>
> </property>
> {code}
>  
> A regex matcher will lift these properties and add them to the response 
> headers when Jetty prepares the response.
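
A small sketch of how such properties might be lifted from configuration. 
{{Configuration#getPropsWithPrefix}} exists in recent Hadoop versions, but the 
prefix and the underscore-to-dash mapping below are assumptions for 
illustration, not necessarily the patch's actual logic:
{code:java}
import java.util.Map;
import org.apache.hadoop.conf.Configuration;

public class HeaderConfigDemo {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.set("hadoop.http.header.Strict_Transport_Security", "valHSTSFromXML");
    // Returns the matching properties with the prefix stripped off.
    Map<String, String> headers =
        conf.getPropsWithPrefix("hadoop.http.header.");
    headers.forEach((name, value) ->
        // e.g. Strict_Transport_Security -> Strict-Transport-Security
        System.out.println(name.replace('_', '-') + ": " + value));
  }
}
{code}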



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15465) Deprecate WinUtils#Symlinks by using native java code

2018-05-16 Thread Giovanni Matteo Fumarola (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478215#comment-16478215
 ] 

Giovanni Matteo Fumarola edited comment on HADOOP-15465 at 5/16/18 10:57 PM:
-

Thanks [~ste...@apache.org] for the feedback. I found 
[here|https://github.com/search?q=org%3Aapache+getSymlinkCommand=Code] 
that only 
[MiniTezTestServiceCluster.java|https://github.com/apache/tez/blob/247719d7314232f680f028f4e1a19370ffb7b1bb/tez-ext-service-tests/src/test/java/org/apache/tez/service/MiniTezTestServiceCluster.java]
 in the Tez project uses {{_getSymlinkCommand_}}.
 Thanks [~elgoiri] for helping me find the way out of those failed tests.

[^HADOOP-15465.v1.patch] fixes the issue with the failed unit tests.

[^HADOOP-15465.v2.patch] removes an obsolete TODO. 


was (Author: giovanni.fumarola):
Thanks [~ste...@apache.org] for the feedback, I found 
[here|https://github.com/search?q=org%3Aapache+getSymlinkCommand=Code] 
that only 
[MiniTezTestServiceCluster.java|https://github.com/apache/tez/blob/247719d7314232f680f028f4e1a19370ffb7b1bb/tez-ext-service-tests/src/test/java/org/apache/tez/service/MiniTezTestServiceCluster.java]
 in Tez project uses {{_getSymlinkCommand_}}.
 Thanks [~elgoiri] for helping me finding the way out of that failed tests.

[^HADOOP-15465.v1.patch] fixes the issue with the failed unit tests.

[^HADOOP-15465.v2.patch] remove an obsolete TODO. 

> Deprecate WinUtils#Symlinks by using native java code
> -
>
> Key: HADOOP-15465
> URL: https://issues.apache.org/jira/browse/HADOOP-15465
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: HADOOP-15465.v0.patch, HADOOP-15465.v0.proto.patch, 
> HADOOP-15465.v1.patch, HADOOP-15465.v2.patch
>
>
> Hadoop uses the shell to create symbolic links. Now that Hadoop relies on 
> Java 7+, we can deprecate all the shell code and rely on the Java APIs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15457) Add Security-Related HTTP Response Header in WEBUIs.

2018-05-16 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated HADOOP-15457:
---
Summary: Add Security-Related HTTP Response Header in WEBUIs.  (was: Add 
Security-Related HTTP Response Header in Yarn WEBUIs.)

> Add Security-Related HTTP Response Header in WEBUIs.
> 
>
> Key: HADOOP-15457
> URL: https://issues.apache.org/jira/browse/HADOOP-15457
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kanwaljeet Sachdev
>Assignee: Kanwaljeet Sachdev
>Priority: Major
>  Labels: security
> Attachments: HADOOP-15457.001.patch, HADOOP-15457.002.patch, 
> YARN-8198.001.patch, YARN-8198.002.patch, YARN-8198.003.patch, 
> YARN-8198.004.patch, YARN-8198.005.patch
>
>
> As of today, the YARN web UI lacks certain security-related HTTP response 
> headers. We are planning to add a few default ones and also add support for 
> headers to be added via XML config. We plan to make the two below the 
> defaults:
>  * X-XSS-Protection: 1; mode=block
>  * X-Content-Type-Options: nosniff
>  
> Support for headers via config properties in core-site.xml will be along 
> the lines below:
> {code:java}
> <property>
>   <name>hadoop.http.header.Strict_Transport_Security</name>
>   <value>valHSTSFromXML</value>
> </property>
> {code}
>  
> A regex matcher will lift these properties and add them to the response 
> headers when Jetty prepares the response.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15465) Deprecate WinUtils#Symlinks by using native java code

2018-05-16 Thread Giovanni Matteo Fumarola (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478215#comment-16478215
 ] 

Giovanni Matteo Fumarola edited comment on HADOOP-15465 at 5/16/18 10:49 PM:
-

Thanks [~ste...@apache.org] for the feedback. I found 
[here|https://github.com/search?q=org%3Aapache+getSymlinkCommand=Code] 
that only 
[MiniTezTestServiceCluster.java|https://github.com/apache/tez/blob/247719d7314232f680f028f4e1a19370ffb7b1bb/tez-ext-service-tests/src/test/java/org/apache/tez/service/MiniTezTestServiceCluster.java]
 in the Tez project uses {{_getSymlinkCommand_}}.
 Thanks [~elgoiri] for helping me find the way out of those failed tests.

[^HADOOP-15465.v1.patch] fixes the issue with the failed unit tests.

[^HADOOP-15465.v2.patch] removes an obsolete TODO. 


was (Author: giovanni.fumarola):
Thanks [~ste...@apache.org] for the feedback, I found 
[here|https://github.com/search?q=org%3Aapache+getSymlinkCommand=Code] 
that only 
[MiniTezTestServiceCluster.java|https://github.com/apache/tez/blob/247719d7314232f680f028f4e1a19370ffb7b1bb/tez-ext-service-tests/src/test/java/org/apache/tez/service/MiniTezTestServiceCluster.java]
 in Tez project uses {{_getSymlinkCommand_}}.
Thanks [~elgoiri] for helping me finding the way out of that failed tests.

[^HADOOP-15465.v1.patch] fixes the issue with the failed unit tests.

 

> Deprecate WinUtils#Symlinks by using native java code
> -
>
> Key: HADOOP-15465
> URL: https://issues.apache.org/jira/browse/HADOOP-15465
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: HADOOP-15465.v0.patch, HADOOP-15465.v0.proto.patch, 
> HADOOP-15465.v1.patch, HADOOP-15465.v2.patch
>
>
> Hadoop uses the shell to create symbolic links. Now that Hadoop relies on 
> Java 7+, we can deprecate all the shell code and rely on the Java APIs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15465) Deprecate WinUtils#Symlinks by using native java code

2018-05-16 Thread Giovanni Matteo Fumarola (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated HADOOP-15465:
--
Attachment: HADOOP-15465.v2.patch

> Deprecate WinUtils#Symlinks by using native java code
> -
>
> Key: HADOOP-15465
> URL: https://issues.apache.org/jira/browse/HADOOP-15465
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: HADOOP-15465.v0.patch, HADOOP-15465.v0.proto.patch, 
> HADOOP-15465.v1.patch, HADOOP-15465.v2.patch
>
>
> Hadoop uses the shell to create symbolic links. Now that Hadoop relies on 
> Java 7+, we can deprecate all the shell code and rely on the Java APIs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15465) Deprecate WinUtils#Symlinks by using native java code

2018-05-16 Thread Giovanni Matteo Fumarola (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478215#comment-16478215
 ] 

Giovanni Matteo Fumarola commented on HADOOP-15465:
---

Thanks [~ste...@apache.org] for the feedback. I found 
[here|https://github.com/search?q=org%3Aapache+getSymlinkCommand=Code] 
that only 
[MiniTezTestServiceCluster.java|https://github.com/apache/tez/blob/247719d7314232f680f028f4e1a19370ffb7b1bb/tez-ext-service-tests/src/test/java/org/apache/tez/service/MiniTezTestServiceCluster.java]
 in the Tez project uses {{_getSymlinkCommand_}}.
Thanks [~elgoiri] for helping me find the way out of those failed tests.

[^HADOOP-15465.v1.patch] fixes the issue with the failed unit tests.

 

> Deprecate WinUtils#Symlinks by using native java code
> -
>
> Key: HADOOP-15465
> URL: https://issues.apache.org/jira/browse/HADOOP-15465
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: HADOOP-15465.v0.patch, HADOOP-15465.v0.proto.patch, 
> HADOOP-15465.v1.patch
>
>
> Hadoop uses the shell to create symbolic links. Now that Hadoop relies on 
> Java 7+, we can deprecate all the shell code and rely on the Java APIs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15430) hadoop fs -mkdir -p path-ending-with-slash/ fails with s3guard

2018-05-16 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478210#comment-16478210
 ] 

genericqa commented on HADOOP-15430:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
50s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 27m  
0s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 11s{color} | {color:orange} root: The patch generated 7 new + 47 unchanged - 
0 fixed = 54 total (was 47) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
59s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
45s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}144m 43s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HADOOP-15430 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12923764/HADOOP-15430-003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 323851ae9a45 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e3b7d7a |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 

[jira] [Updated] (HADOOP-15465) Deprecate WinUtils#Symlinks by using native java code

2018-05-16 Thread Giovanni Matteo Fumarola (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated HADOOP-15465:
--
Attachment: HADOOP-15465.v1.patch

> Deprecate WinUtils#Symlinks by using native java code
> -
>
> Key: HADOOP-15465
> URL: https://issues.apache.org/jira/browse/HADOOP-15465
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: HADOOP-15465.v0.patch, HADOOP-15465.v0.proto.patch, 
> HADOOP-15465.v1.patch
>
>
> Hadoop uses the shell to create symbolic links. Now that Hadoop relies on 
> Java 7+, we can deprecate all the shell code and rely on the Java APIs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15465) Deprecate WinUtils#Symlinks by using native java code

2018-05-16 Thread Giovanni Matteo Fumarola (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated HADOOP-15465:
--
Attachment: (was: HADOOP-15465.v1.patch)

> Deprecate WinUtils#Symlinks by using native java code
> -
>
> Key: HADOOP-15465
> URL: https://issues.apache.org/jira/browse/HADOOP-15465
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: HADOOP-15465.v0.patch, HADOOP-15465.v0.proto.patch, 
> HADOOP-15465.v1.patch
>
>
> Hadoop uses the shell to create symbolic links. Now that Hadoop relies on 
> Java 7+, we can deprecate all the shell code and rely on the Java APIs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15465) Deprecate WinUtils#Symlinks by using native java code

2018-05-16 Thread Giovanni Matteo Fumarola (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated HADOOP-15465:
--
Attachment: HADOOP-15465.v1.patch

> Deprecate WinUtils#Symlinks by using native java code
> -
>
> Key: HADOOP-15465
> URL: https://issues.apache.org/jira/browse/HADOOP-15465
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: HADOOP-15465.v0.patch, HADOOP-15465.v0.proto.patch, 
> HADOOP-15465.v1.patch
>
>
> Hadoop uses the shell to create symbolic links. Now that Hadoop relies on 
> Java 7+, we can deprecate all the shell code and rely on the Java APIs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15250) Split-DNS MultiHomed Server Network Cluster Network IPC Client Bind Addr Wrong

2018-05-16 Thread Billie Rinaldi (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478191#comment-16478191
 ] 

Billie Rinaldi commented on HADOOP-15250:
-

I launched a new precommit build.

> Split-DNS MultiHomed Server Network Cluster Network IPC Client Bind Addr Wrong
> --
>
> Key: HADOOP-15250
> URL: https://issues.apache.org/jira/browse/HADOOP-15250
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc, net
>Affects Versions: 2.7.3, 2.9.0, 3.0.0
> Environment: Multihome cluster with split DNS and rDNS lookup of 
> localhost returning non-routable IPAddr
>Reporter: Greg Senia
>Assignee: Ajay Kumar
>Priority: Critical
> Fix For: 3.2.0
>
> Attachments: HADOOP-15250-branch-3.1.patch, HADOOP-15250.00.patch, 
> HADOOP-15250.01.patch, HADOOP-15250.02.patch, HADOOP-15250.patch
>
>
> We run our Hadoop clusters with two networks attached to each node. These 
> networks are as follows: a server network that is firewalled with firewalld, 
> allowing inbound traffic only for SSH and things like Knox, Hiveserver2 and 
> the HTTP YARN RM/ATS and MR History Server. The second network is the cluster 
> network on the second network interface; it uses Jumbo frames, is open with no 
> restrictions, and allows all cluster traffic to flow between nodes. 
>  
> To resolve DNS within the Hadoop Cluster we use DNS Views via BIND, so if the 
> traffic originates from nodes with cluster networks we return the 
> internal DNS record for the nodes. This all works fine with all the 
> multi-homing features added to Hadoop 2.x.
>  Some logic around views:
> a. The internal view is used by cluster machines when performing lookups. So 
> hosts on the cluster network should get answers from the internal view in DNS
> b. The external view is used by non-local-cluster machines when performing 
> lookups. So hosts not on the cluster network should get answers from the 
> external view in DNS
>  
> So this brings me to our problem. We created some firewall rules to allow 
> inbound traffic from each cluster's server network to allow distcp to occur. 
> But we noticed almost immediately that when YARN attempted to talk 
> to the Remote Cluster, it was binding outgoing traffic to the cluster network 
> interface, which IS NOT routable. So after researching the code we noticed the 
> following in NetUtils.java and Client.java. 
> Basically, Client.java looks as if it takes whatever the hostname is and 
> attempts to bind to whatever that hostname resolves to. This is not valid 
> in a multi-homed network with one routable interface and one non-routable 
> interface. After reading through the java.net.Socket documentation, it is 
> valid to perform socket.bind(null), which allows the OS routing table and 
> DNS to send the traffic out the correct interface. I will also attach the 
> network traces and a test patch for the 2.7.x and 3.x code base. I have this test 
> fix below in my Hadoop Test Cluster.
> Client.java:
>       
> {code:java}
> /*
>  * Bind the socket to the host specified in the principal name of the
>  * client, to ensure Server matching address of the client connection
>  * to host name in principal passed.
>  */
> InetSocketAddress bindAddr = null;
> if (ticket != null && ticket.hasKerberosCredentials()) {
>   KerberosInfo krbInfo =
>       remoteId.getProtocol().getAnnotation(KerberosInfo.class);
>   if (krbInfo != null) {
>     String principal = ticket.getUserName();
>     String host = SecurityUtil.getHostFromPrincipal(principal);
>     // If host name is a valid local address then bind socket to it
>     InetAddress localAddr = NetUtils.getLocalInetAddress(host);
>     if (localAddr != null) {
>       this.socket.setReuseAddress(true);
>       if (LOG.isDebugEnabled()) {
>         LOG.debug("Binding " + principal + " to " + localAddr);
>       }
>       bindAddr = new InetSocketAddress(localAddr, 0);
>     }
>   }
> }
> {code}
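To make the described behaviour concrete, here is a tiny standalone sketch (not Hadoop code; host and port are placeholders) of {{bind(null)}}, where the OS picks the local address via its routing table instead of binding to a resolved hostname:

{code:java}
import java.net.InetSocketAddress;
import java.net.Socket;

public class BindNullDemo {
  public static void main(String[] args) throws Exception {
    try (Socket s = new Socket()) {
      // bind(null) lets the OS choose an ephemeral port and a valid local
      // address, so the routing table decides which interface is used.
      s.bind(null);
      s.connect(new InetSocketAddress("example.org", 80), 5000);
      System.out.println("OS-chosen local address: " + s.getLocalSocketAddress());
    }
  }
}
{code}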
>  
> So in my Hadoop 2.7.x Cluster I made the following changes and traffic flows 
> correctly out the correct interfaces:
>  
> diff --git 
> a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
>  
> b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
> index e1be271..c5b4a42 100644
> --- 
> a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
> +++ 
> b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
> @@ -305,6 +305,9 @@
>    public static final String  

[jira] [Updated] (HADOOP-15459) KMSACLs will fail for other optype if acls is defined for one optype.

2018-05-16 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HADOOP-15459:

Attachment: HADOOP-15459.002.patch

> KMSACLs will fail for other optype if acls is defined for one optype.
> -
>
> Key: HADOOP-15459
> URL: https://issues.apache.org/jira/browse/HADOOP-15459
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.3
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
>Priority: Critical
> Attachments: HADOOP-15459.001.patch, HADOOP-15459.002.patch
>
>
> Assume this subset of a kms-acls.xml file:
> {noformat}
> <property>
>   <name>default.key.acl.DECRYPT_EEK</name>
>   <value></value>
>   <description>
>     default ACL for DECRYPT_EEK operations for all key acls that are not
>     explicitly defined.
>   </description>
> </property>
> 
> <property>
>   <name>key.acl.key1.DECRYPT_EEK</name>
>   <value>user1</value>
> </property>
> 
> <property>
>   <name>default.key.acl.READ</name>
>   <value>*</value>
>   <description>
>     default ACL for READ operations for all key acls that are not
>     explicitly defined.
>   </description>
> </property>
> 
> <property>
>   <name>whitelist.key.acl.READ</name>
>   <value>hdfs</value>
>   <description>
>     Whitelist ACL for READ operations for all keys.
>   </description>
> </property>
> {noformat}
> For key {{key1}}, we restricted the {{DECRYPT_EEK}} operation to only {{user1}}.
>  For other {{READ}} operations (like getMetadata), by default I still want 
> everyone to access all keys via {{default.key.acl.READ}}.
>  But it doesn't allow anyone to access {{key1}} for any other READ operations.
>  As a result, if the admin restricted access for one opType, then 
> (s)he has to define access for all other opTypes also, which is not desirable.
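A sketch of the fallback semantics the reporter expects (the class and data model are hypothetical, not the actual KMSACLs implementation): a per-key ACL for one opType should not suppress the defaults for other opTypes.

{code:java}
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class AclFallbackSketch {
  // key -> (opType -> allowed users); hypothetical in-memory model
  static Map<String, Map<String, Set<String>>> keyAcls = new HashMap<>();
  static Map<String, Set<String>> defaultAcls = new HashMap<>();
  static Map<String, Set<String>> whitelistAcls = new HashMap<>();

  static boolean isAllowed(String user, String key, String opType) {
    // Whitelist always wins for its opType.
    Set<String> wl = whitelistAcls.get(opType);
    if (wl != null && (wl.contains(user) || wl.contains("*"))) {
      return true;
    }
    // Fall back per opType: an explicit key ACL for DECRYPT_EEK must not
    // hide default.key.acl.READ for READ operations on the same key.
    Map<String, Set<String>> perKey =
        keyAcls.getOrDefault(key, Collections.emptyMap());
    Set<String> acl = perKey.getOrDefault(opType, defaultAcls.get(opType));
    return acl != null && (acl.contains(user) || acl.contains("*"));
  }

  public static void main(String[] args) {
    Map<String, Set<String>> key1 = new HashMap<>();
    key1.put("DECRYPT_EEK", new HashSet<>(Arrays.asList("user1")));
    keyAcls.put("key1", key1);
    defaultAcls.put("READ", new HashSet<>(Arrays.asList("*")));
    System.out.println(isAllowed("user2", "key1", "READ"));        // true: falls back to default
    System.out.println(isAllowed("user2", "key1", "DECRYPT_EEK")); // false: explicit key ACL wins
  }
}
{code}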



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15250) Split-DNS MultiHomed Server Network Cluster Network IPC Client Bind Addr Wrong

2018-05-16 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478181#comment-16478181
 ] 

Ajay Kumar commented on HADOOP-15250:
-

Test failures are not related. Both of them passed locally.

> Split-DNS MultiHomed Server Network Cluster Network IPC Client Bind Addr Wrong
> --
>
> Key: HADOOP-15250
> URL: https://issues.apache.org/jira/browse/HADOOP-15250
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc, net
>Affects Versions: 2.7.3, 2.9.0, 3.0.0
> Environment: Multihome cluster with split DNS and rDNS lookup of 
> localhost returning non-routable IPAddr
>Reporter: Greg Senia
>Assignee: Ajay Kumar
>Priority: Critical
> Fix For: 3.2.0
>
> Attachments: HADOOP-15250-branch-3.1.patch, HADOOP-15250.00.patch, 
> HADOOP-15250.01.patch, HADOOP-15250.02.patch, HADOOP-15250.patch
>
>
> We run our Hadoop clusters with two networks attached to each node. These 
> networks are as follows: a server network that is firewalled with firewalld, 
> allowing inbound traffic only for SSH and things like Knox, Hiveserver2 and 
> the HTTP YARN RM/ATS and MR History Server. The second network is the cluster 
> network on the second network interface; it uses Jumbo frames, is open with no 
> restrictions, and allows all cluster traffic to flow between nodes. 
>  
> To resolve DNS within the Hadoop Cluster we use DNS Views via BIND, so if the 
> traffic originates from nodes with cluster networks we return the 
> internal DNS record for the nodes. This all works fine with all the 
> multi-homing features added to Hadoop 2.x.
>  Some logic around views:
> a. The internal view is used by cluster machines when performing lookups. So 
> hosts on the cluster network should get answers from the internal view in DNS
> b. The external view is used by non-local-cluster machines when performing 
> lookups. So hosts not on the cluster network should get answers from the 
> external view in DNS
>  
> So this brings me to our problem. We created some firewall rules to allow 
> inbound traffic from each cluster's server network to allow distcp to occur. 
> But we noticed almost immediately that when YARN attempted to talk 
> to the Remote Cluster, it was binding outgoing traffic to the cluster network 
> interface, which IS NOT routable. So after researching the code we noticed the 
> following in NetUtils.java and Client.java. 
> Basically, Client.java looks as if it takes whatever the hostname is and 
> attempts to bind to whatever that hostname resolves to. This is not valid 
> in a multi-homed network with one routable interface and one non-routable 
> interface. After reading through the java.net.Socket documentation, it is 
> valid to perform socket.bind(null), which allows the OS routing table and 
> DNS to send the traffic out the correct interface. I will also attach the 
> network traces and a test patch for the 2.7.x and 3.x code base. I have this test 
> fix below in my Hadoop Test Cluster.
> Client.java:
>       
> {code:java}
> /*
>  * Bind the socket to the host specified in the principal name of the
>  * client, to ensure Server matching address of the client connection
>  * to host name in principal passed.
>  */
> InetSocketAddress bindAddr = null;
> if (ticket != null && ticket.hasKerberosCredentials()) {
>   KerberosInfo krbInfo =
>       remoteId.getProtocol().getAnnotation(KerberosInfo.class);
>   if (krbInfo != null) {
>     String principal = ticket.getUserName();
>     String host = SecurityUtil.getHostFromPrincipal(principal);
>     // If host name is a valid local address then bind socket to it
>     InetAddress localAddr = NetUtils.getLocalInetAddress(host);
>     if (localAddr != null) {
>       this.socket.setReuseAddress(true);
>       if (LOG.isDebugEnabled()) {
>         LOG.debug("Binding " + principal + " to " + localAddr);
>       }
>       bindAddr = new InetSocketAddress(localAddr, 0);
>     }
>   }
> }
> {code}
>  
> So in my Hadoop 2.7.x Cluster I made the following changes and traffic flows 
> correctly out the correct interfaces:
>  
> diff --git 
> a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
>  
> b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
> index e1be271..c5b4a42 100644
> --- 
> a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
> +++ 
> b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
> @@ -305,6 +305,9 @@
>    public static final 

[jira] [Commented] (HADOOP-15467) TestDoAsEffectiveUser#testRealUserSetup,TestDoAsEffectiveUser#testRealUserAuthorizationSuccess time out on Windows

2018-05-16 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HADOOP-15467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478109#comment-16478109
 ] 

Íñigo Goiri commented on HADOOP-15467:
--

Can we avoid using {{getCanonicalHostName()}}?

> TestDoAsEffectiveUser#testRealUserSetup,TestDoAsEffectiveUser#testRealUserAuthorizationSuccess
>  time out on Windows
> --
>
> Key: HADOOP-15467
> URL: https://issues.apache.org/jira/browse/HADOOP-15467
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13549.000.patch
>
>
> {noformat}
> [INFO] Running org.apache.hadoop.security.TestDoAsEffectiveUser
> [ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 8.307 s <<< FAILURE! - in org.apache.hadoop.security.TestDoAsEffectiveUser
> [ERROR] testRealUserSetup(org.apache.hadoop.security.TestDoAsEffectiveUser)  Time elapsed: 4.107 s <<< ERROR!
> java.lang.Exception: test timed out after 4000 milliseconds
> 	at java.net.Inet4AddressImpl.getHostByAddr(Native Method)
> 	at java.net.InetAddress$2.getHostByAddr(InetAddress.java:932)
> 	at java.net.InetAddress.getHostFromNameService(InetAddress.java:617)
> 	at java.net.InetAddress.getCanonicalHostName(InetAddress.java:588)
> 	at org.apache.hadoop.security.TestDoAsEffectiveUser.configureSuperUserIPAddresses(TestDoAsEffectiveUser.java:103)
> 	at org.apache.hadoop.security.TestDoAsEffectiveUser.testRealUserSetup(TestDoAsEffectiveUser.java:188)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:498)
> 	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> 	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> 	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 	at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> [ERROR] testRealUserAuthorizationSuccess(org.apache.hadoop.security.TestDoAsEffectiveUser)  Time elapsed: 4.002 s <<< ERROR!
> java.lang.Exception: test timed out after 4000 milliseconds
> 	at java.net.Inet4AddressImpl.getHostByAddr(Native Method)
> 	at java.net.InetAddress$2.getHostByAddr(InetAddress.java:932)
> 	at java.net.InetAddress.getHostFromNameService(InetAddress.java:617)
> 	at java.net.InetAddress.getCanonicalHostName(InetAddress.java:588)
> 	at org.apache.hadoop.security.TestDoAsEffectiveUser.configureSuperUserIPAddresses(TestDoAsEffectiveUser.java:103)
> 	at org.apache.hadoop.security.TestDoAsEffectiveUser.testRealUserAuthorizationSuccess(TestDoAsEffectiveUser.java:218)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:498)
> 	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> 	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> 	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 	at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> [INFO]
> [INFO] Results:
> [INFO]
> [ERROR]
> {noformat}

[jira] [Commented] (HADOOP-15467) TestDoAsEffectiveUser#testRealUserSetup,TestDoAsEffectiveUser#testRealUserAuthorizationSuccess time out on Windows

2018-05-16 Thread Anbang Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478088#comment-16478088
 ] 

Anbang Hu commented on HADOOP-15467:


As the stack trace indicates, both tests get stuck in getCanonicalHostName() in 
InetAddress.java.
{code:java}
/**
 * Gets the fully qualified domain name for this IP address.
 * Best effort method, meaning we may not be able to return
 * the FQDN depending on the underlying system configuration.
 *
 * If there is a security manager, this method first
 * calls its {@code checkConnect} method
 * with the hostname and {@code -1}
 * as its arguments to see if the calling code is allowed to know
 * the hostname for this IP address, i.e., to connect to the host.
 * If the operation is not allowed, it will return
 * the textual representation of the IP address.
 *
 * @return the fully qualified domain name for this IP address,
 * or if the operation is not allowed by the security check,
 * the textual representation of the IP address.
 *
 * @see SecurityManager#checkConnect
 *
 * @since 1.4
 */
public String getCanonicalHostName() {
{code}
After the native getHostByAddr call, it takes a long time to return the 
domain name. I am guessing the daily Windows builder does not belong to a domain, 
so canonical name resolution takes a long time.
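A quick way to observe the cost in isolation (a hypothetical probe, not part of the patch):

{code:java}
import java.net.InetAddress;

public class HostNameProbe {
  public static void main(String[] args) throws Exception {
    InetAddress addr = InetAddress.getLocalHost();
    long t0 = System.nanoTime();
    String ip = addr.getHostAddress();         // no name-service round trip
    long t1 = System.nanoTime();
    String fqdn = addr.getCanonicalHostName(); // may trigger slow reverse DNS
    long t2 = System.nanoTime();
    System.out.printf("ip=%s (%d us), fqdn=%s (%d us)%n",
        ip, (t1 - t0) / 1000, fqdn, (t2 - t1) / 1000);
  }
}
{code}

If the reverse lookup dominates, the tests could use the numeric address (or a longer timeout) instead of the canonical name.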

 

> TestDoAsEffectiveUser#testRealUserSetup,TestDoAsEffectiveUser#testRealUserAuthorizationSuccess
>  time out on Windows
> --
>
> Key: HADOOP-15467
> URL: https://issues.apache.org/jira/browse/HADOOP-15467
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13549.000.patch
>
>
> {noformat}
> [INFO] Running org.apache.hadoop.security.TestDoAsEffectiveUser
> [ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 8.307 s <<< FAILURE! - in org.apache.hadoop.security.TestDoAsEffectiveUser
> [ERROR] testRealUserSetup(org.apache.hadoop.security.TestDoAsEffectiveUser)  Time elapsed: 4.107 s <<< ERROR!
> java.lang.Exception: test timed out after 4000 milliseconds
> 	at java.net.Inet4AddressImpl.getHostByAddr(Native Method)
> 	at java.net.InetAddress$2.getHostByAddr(InetAddress.java:932)
> 	at java.net.InetAddress.getHostFromNameService(InetAddress.java:617)
> 	at java.net.InetAddress.getCanonicalHostName(InetAddress.java:588)
> 	at org.apache.hadoop.security.TestDoAsEffectiveUser.configureSuperUserIPAddresses(TestDoAsEffectiveUser.java:103)
> 	at org.apache.hadoop.security.TestDoAsEffectiveUser.testRealUserSetup(TestDoAsEffectiveUser.java:188)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:498)
> 	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> 	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> 	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 	at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> [ERROR] testRealUserAuthorizationSuccess(org.apache.hadoop.security.TestDoAsEffectiveUser)  Time elapsed: 4.002 s <<< ERROR!
> java.lang.Exception: test timed out after 4000 milliseconds
> 	at java.net.Inet4AddressImpl.getHostByAddr(Native Method)
> 	at java.net.InetAddress$2.getHostByAddr(InetAddress.java:932)
> 	at java.net.InetAddress.getHostFromNameService(InetAddress.java:617)
> 	at java.net.InetAddress.getCanonicalHostName(InetAddress.java:588)
> 	at org.apache.hadoop.security.TestDoAsEffectiveUser.configureSuperUserIPAddresses(TestDoAsEffectiveUser.java:103)
> 	at org.apache.hadoop.security.TestDoAsEffectiveUser.testRealUserAuthorizationSuccess(TestDoAsEffectiveUser.java:218)
> {noformat}

[jira] [Commented] (HADOOP-14946) S3Guard testPruneCommandCLI can fail

2018-05-16 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478062#comment-16478062
 ] 

Aaron Fabbri commented on HADOOP-14946:
---

[~gabor.bota], do you mind looking at this one? If not, reassign it to me. Thank 
you.

> S3Guard testPruneCommandCLI can fail
> 
>
> Key: HADOOP-14946
> URL: https://issues.apache.org/jira/browse/HADOOP-14946
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Priority: Major
>
> The test of the S3Guard CLI prune can sometimes fail on parallel test runs. 
> Assumption: it is the parallelism which is causing the problem
> {code}
> org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB
> testPruneCommandCLI(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB)
>   Time elapsed: 10.765 sec  <<< FAILURE!
> java.lang.AssertionError: Pruned children count [] expected:<1> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-14946) S3Guard testPruneCommandCLI can fail

2018-05-16 Thread Aaron Fabbri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Fabbri reassigned HADOOP-14946:
-

Assignee: Gabor Bota

> S3Guard testPruneCommandCLI can fail
> 
>
> Key: HADOOP-14946
> URL: https://issues.apache.org/jira/browse/HADOOP-14946
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Major
>
> The test of the S3Guard CLI prune can sometimes fail on parallel test runs. 
> Assumption: it is the parallelism which is causing the problem
> {code}
> org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB
> testPruneCommandCLI(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB)
>   Time elapsed: 10.765 sec  <<< FAILURE!
> java.lang.AssertionError: Pruned children count [] expected:<1> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15469) S3A directory committer commit job fails if _temporary directory created under dest

2018-05-16 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478057#comment-16478057
 ] 

Aaron Fabbri commented on HADOOP-15469:
---

Interesting. The argument for this seems to be (1) this is a case that works 
with FileOutputCommitter and (2) this does not harm any important uses of job 
commit conflict resolution.  The current docs seem to be congruent with this:

{quote}
The Directory Committer uses the entire directory tree for conflict resolution.
If any file exists at the destination it will fail in job setup; if the 
resolution
mechanism is "replace" then all existing files will be deleted.
{quote}

I didn't notice any docs that really need updating here.

Any risks of this change?  I'm not thinking of any.


> S3A directory committer commit job fails if _temporary directory created 
> under dest
> ---
>
> Key: HADOOP-15469
> URL: https://issues.apache.org/jira/browse/HADOOP-15469
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
> Environment: spark test runs
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15469-001.patch
>
>
> The directory staging committer fails in commit job if any temporary 
> files/dirs have been created. Spark work can create such a dir for placement 
> of absolute files.
> This is because commitJob() checks whether the dest dir exists at all, rather 
> than whether it contains any non-hidden files.
> As the comment says, "its kind of superfluous". More specifically, it means 
> jobs which would commit with the classic committer & overwrite=false will fail.
> Proposed fix: remove the check.
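For contrast, a sketch of what a content-based check could look like if one were ever wanted in place of the existence check (hypothetical helper; the proposal here is simply to drop the check):

{code:java}
import java.io.IOException;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public final class DestDirCheck {
  /** True iff dest contains an entry whose name does not start with "." or "_". */
  static boolean hasVisibleChildren(FileSystem fs, Path dest) throws IOException {
    if (!fs.exists(dest)) {
      return false;
    }
    for (FileStatus st : fs.listStatus(dest)) {
      String name = st.getPath().getName();
      if (!name.startsWith(".") && !name.startsWith("_")) {
        return true;  // a real (non-hidden) file or dir is present
      }
    }
    return false;     // only hidden entries such as _temporary
  }
}
{code}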



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15430) hadoop fs -mkdir -p path-ending-with-slash/ fails with s3guard

2018-05-16 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478007#comment-16478007
 ] 

Steve Loughran commented on HADOOP-15430:
-

Patch 003:
* move qualify logic to S3AFileSystem.makeQualified()
* make S3AFileSystem.qualify() a private redirect to that, moving other uses 
where needed. We could cut it altogether.
* add direct tests of path qualification
* uncomment bits of (new) ITestS3GuardFsShell turned off to debug things.

Tested: s3a Ireland w/ DDB. Some of the ITestS3GuardTool testPrune tests are 
failing (HADOOP-14946)...running the same tests on trunk to see if it's a 
regression or the test is just playing up...it's been very unreliable for me 
recently.

> hadoop fs -mkdir -p path-ending-with-slash/ fails with s3guard
> --
>
> Key: HADOOP-15430
> URL: https://issues.apache.org/jira/browse/HADOOP-15430
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-15430-001.patch, HADOOP-15430-002.patch, 
> HADOOP-15430-003.patch
>
>
> if you call {{hadoop fs -mkdir -p path/}} on the command line with a path 
> ending in "/", you get a DDB error "An AttributeValue may not contain an 
> empty string".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15430) hadoop fs -mkdir -p path-ending-with-slash/ fails with s3guard

2018-05-16 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15430:

Attachment: HADOOP-15430-003.patch

> hadoop fs -mkdir -p path-ending-with-slash/ fails with s3guard
> --
>
> Key: HADOOP-15430
> URL: https://issues.apache.org/jira/browse/HADOOP-15430
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-15430-001.patch, HADOOP-15430-002.patch, 
> HADOOP-15430-003.patch
>
>
> if you call {{hadoop fs -mkdir -p path/}} on the command line with a path 
> ending in "/", you get a DDB error "An AttributeValue may not contain an 
> empty string".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14946) S3Guard testPruneCommandCLI can fail

2018-05-16 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16477970#comment-16477970
 ] 

Steve Loughran commented on HADOOP-14946:
-

Seeing this on trunk, even in standalone tests with my HADOOP-15430 patch in (it's 
the thing I'm working on, see).
This is the problem with failing tests: you can't see if new patches are 
regressions or not.

{code}
[INFO] 
[ERROR] Failures: 
[ERROR]   
ITestS3GuardToolLocal>AbstractS3GuardToolTestBase.testPruneCommandCLI:221->AbstractS3GuardToolTestBase.testPruneCommand:201->AbstractS3GuardToolTestBase.assertMetastoreListingCount:214->Assert.assertEquals:555->Assert.assertEquals:118->Assert.failNotEquals:743->Assert.fail:88
 Pruned children count 
[PathMetadata{fileStatus=S3AFileStatus{path=s3a://hwdev-steve-ireland-new/test/testPruneCommandCLI/fresh;
 isDirectory=false; length=100; replication=1; blocksize=512; 
modification_time=1526499069374; access_time=0; owner=hdfs; group=hdfs; 
permission=rw-rw-rw-; isSymlink=false; hasAcl=false; isEncrypted=false; 
isErasureCoded=false} isEmptyDirectory=FALSE; isEmptyDirectory=UNKNOWN; 
isDeleted=false}, 
PathMetadata{fileStatus=S3AFileStatus{path=s3a://hwdev-steve-ireland-new/test/testPruneCommandCLI/stale;
 isDirectory=false; length=100; replication=1; blocksize=512; 
modification_time=1526499066615; access_time=0; owner=hdfs; group=hdfs; 
permission=rw-rw-rw-; isSymlink=false; hasAcl=false; isEncrypted=false; 
isErasureCoded=false} isEmptyDirectory=FALSE; isEmptyDirectory=UNKNOWN; 
isDeleted=false}] expected:<1> but was:<2>
[ERROR]   
ITestS3GuardToolLocal>AbstractS3GuardToolTestBase.testPruneCommandConf:230->AbstractS3GuardToolTestBase.testPruneCommand:201->AbstractS3GuardToolTestBase.assertMetastoreListingCount:214->Assert.assertEquals:555->Assert.assertEquals:118->Assert.failNotEquals:743->Assert.fail:88
 Pruned children count 
[PathMetadata{fileStatus=S3AFileStatus{path=s3a://hwdev-steve-ireland-new/test/testPruneCommandConf/fresh;
 isDirectory=false; length=100; replication=1; blocksize=512; 
modification_time=1526499076808; access_time=0; owner=hdfs; group=hdfs; 
permission=rw-rw-rw-; isSymlink=false; hasAcl=false; isEncrypted=false; 
isErasureCoded=false} isEmptyDirectory=FALSE; isEmptyDirectory=UNKNOWN; 
isDeleted=false}, 
PathMetadata{fileStatus=S3AFileStatus{path=s3a://hwdev-steve-ireland-new/test/testPruneCommandConf/stale;
 isDirectory=false; length=100; replication=1; blocksize=512; 
modification_time=1526499074053; access_time=0; owner=hdfs; group=hdfs; 
permission=rw-rw-rw-; isSymlink=false; hasAcl=false; isEncrypted=false; 
isErasureCoded=false} isEmptyDirectory=FALSE; isEmptyDirectory=UNKNOWN; 
isDeleted=false}] expected:<1> but was:<2>
[INFO] 
[ERROR] Tests run: 21, Failures: 2, Errors: 0, Skipped: 0
{code}


> S3Guard testPruneCommandCLI can fail
> 
>
> Key: HADOOP-14946
> URL: https://issues.apache.org/jira/browse/HADOOP-14946
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Priority: Major
>
> The test of the S3Guard CLI prune can sometimes fail on parallel test runs. 
> Assumption: it is the parallelism which is causing the problem
> {code}
> org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB
> testPruneCommandCLI(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB)
>   Time elapsed: 10.765 sec  <<< FAILURE!
> java.lang.AssertionError: Pruned children count [] expected:<1> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14946) S3Guard testPruneCommandCLI can fail

2018-05-16 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14946:

Summary: S3Guard testPruneCommandCLI can fail  (was: S3Guard 
testPruneCommandCLI can fail in parallel runs)

> S3Guard testPruneCommandCLI can fail
> 
>
> Key: HADOOP-14946
> URL: https://issues.apache.org/jira/browse/HADOOP-14946
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Priority: Major
>
> The test of the S3Guard CLI prune can sometimes fail on parallel test runs. 
> Assumption: it is the parallelism which is causing the problem
> {code}
> org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB
> testPruneCommandCLI(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB)
>   Time elapsed: 10.765 sec  <<< FAILURE!
> java.lang.AssertionError: Pruned children count [] expected:<1> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15473) Configure serialFilter to avoid UnrecoverableKeyException caused by JDK-8189997

2018-05-16 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16477896#comment-16477896
 ] 

genericqa commented on HADOOP-15473:


| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 49s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 31m 4s | trunk passed |
| +1 | compile | 33m 28s | trunk passed |
| +1 | checkstyle | 0m 52s | trunk passed |
| +1 | mvnsite | 1m 12s | trunk passed |
| +1 | shadedclient | 12m 38s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 32s | trunk passed |
| +1 | javadoc | 0m 58s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 47s | the patch passed |
| +1 | compile | 28m 12s | the patch passed |
| +1 | javac | 28m 12s | the patch passed |
| +1 | checkstyle | 0m 57s | the patch passed |
| +1 | mvnsite | 1m 8s | the patch passed |
| -1 | whitespace | 0m 0s | The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply |
| +1 | xml | 0m 1s | The patch has no ill-formed XML file. |
| +1 | shadedclient | 10m 30s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 40s | the patch passed |
| +1 | javadoc | 0m 58s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 8m 40s | hadoop-common in the patch passed. |
| +1 | asflicense | 0m 40s | The patch does not generate ASF License warnings. |
| | | 135m 59s | |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HADOOP-15473 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12923267/HDFS-13494.003.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml |
| uname | Linux 653a79569d85 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d47c09d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| whitespace | https://builds.apache.org/job/PreCommit-HADOOP-Build/14640/artifact/out/whitespace-eol.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/14640/testReport/ |
| Max. process+thread count | 1365 (vs. ulimit 

[jira] [Comment Edited] (HADOOP-10768) Optimize Hadoop RPC encryption performance

2018-05-16 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16477866#comment-16477866
 ] 

Wei-Chiu Chuang edited comment on HADOOP-10768 at 5/16/18 6:27 PM:
---

 !cpu_profile_rpc_encryption_optimize_calculateHMAC.png! 


was (Author: jojochuang):
 !cpu_profile_rpc_encryption_optimize_calculateHMAC.png|thumbnail! 

> Optimize Hadoop RPC encryption performance
> --
>
> Key: HADOOP-10768
> URL: https://issues.apache.org/jira/browse/HADOOP-10768
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: performance, security
>Affects Versions: 3.0.0-alpha1
>Reporter: Yi Liu
>Assignee: Dapeng Sun
>Priority: Major
> Attachments: HADOOP-10768.001.patch, HADOOP-10768.002.patch, 
> HADOOP-10768.003.patch, HADOOP-10768.004.patch, HADOOP-10768.005.patch, 
> HADOOP-10768.006.patch, HADOOP-10768.007.patch, HADOOP-10768.008.patch, 
> HADOOP-10768.009.patch, Optimize Hadoop RPC encryption performance.pdf, 
> cpu_profile_RPC_encryption_AES.png, 
> cpu_profile_rpc_encryption_optimize_calculateHMAC.png
>
>
> Hadoop RPC encryption is enabled by setting {{hadoop.rpc.protection}} to 
> "privacy". It utilizes SASL {{GSSAPI}} and {{DIGEST-MD5}} mechanisms for 
> secure authentication and data protection. Even though {{GSSAPI}} supports 
> AES, it does so without AES-NI support by default, so the encryption is slow and 
> will become a bottleneck.
> After discussing with [~atm], [~tucu00] and [~umamaheswararao], we can do the 
> same optimization as in HDFS-6606: use AES-NI with more than *20x* speedup.
> On the other hand, an RPC message is small, but RPC is frequent and there may be 
> lots of RPC calls in one connection, so we need to set up a benchmark to see the real 
> improvement and then make a trade-off. 
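For orientation, a minimal sketch of turning this on from client code (the config key is the one named in the description; the demo class is purely illustrative):

{code:java}
import org.apache.hadoop.conf.Configuration;

public class RpcPrivacyDemo {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // "privacy" selects SASL auth-conf: authentication, integrity checks
    // and encryption on every RPC connection.
    conf.set("hadoop.rpc.protection", "privacy");
    System.out.println("hadoop.rpc.protection = "
        + conf.get("hadoop.rpc.protection"));
  }
}
{code}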



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10768) Optimize Hadoop RPC encryption performance

2018-05-16 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16477866#comment-16477866
 ] 

Wei-Chiu Chuang commented on HADOOP-10768:
--

 !cpu_profile_rpc_encryption_optimize_calculateHMAC.png|thumbnail! 

> Optimize Hadoop RPC encryption performance
> --
>
> Key: HADOOP-10768
> URL: https://issues.apache.org/jira/browse/HADOOP-10768
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: performance, security
>Affects Versions: 3.0.0-alpha1
>Reporter: Yi Liu
>Assignee: Dapeng Sun
>Priority: Major
> Attachments: HADOOP-10768.001.patch, HADOOP-10768.002.patch, 
> HADOOP-10768.003.patch, HADOOP-10768.004.patch, HADOOP-10768.005.patch, 
> HADOOP-10768.006.patch, HADOOP-10768.007.patch, HADOOP-10768.008.patch, 
> HADOOP-10768.009.patch, Optimize Hadoop RPC encryption performance.pdf, 
> cpu_profile_RPC_encryption_AES.png, 
> cpu_profile_rpc_encryption_optimize_calculateHMAC.png
>
>
> Hadoop RPC encryption is enabled by setting {{hadoop.rpc.protection}} to 
> "privacy". It utilizes SASL {{GSSAPI}} and {{DIGEST-MD5}} mechanisms for 
> secure authentication and data protection. Even though {{GSSAPI}} supports 
> AES, it does so without AES-NI support by default, so the encryption is slow and 
> will become a bottleneck.
> After discussing with [~atm], [~tucu00] and [~umamaheswararao], we can do the 
> same optimization as in HDFS-6606: use AES-NI with more than *20x* speedup.
> On the other hand, an RPC message is small, but RPC is frequent and there may be 
> lots of RPC calls in one connection, so we need to set up a benchmark to see the real 
> improvement and then make a trade-off. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-10768) Optimize Hadoop RPC encryption performance

2018-05-16 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-10768:
-
Attachment: cpu_profile_rpc_encryption_optimize_calculateHMAC.png

> Optimize Hadoop RPC encryption performance
> --
>
> Key: HADOOP-10768
> URL: https://issues.apache.org/jira/browse/HADOOP-10768
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: performance, security
>Affects Versions: 3.0.0-alpha1
>Reporter: Yi Liu
>Assignee: Dapeng Sun
>Priority: Major
> Attachments: HADOOP-10768.001.patch, HADOOP-10768.002.patch, 
> HADOOP-10768.003.patch, HADOOP-10768.004.patch, HADOOP-10768.005.patch, 
> HADOOP-10768.006.patch, HADOOP-10768.007.patch, HADOOP-10768.008.patch, 
> HADOOP-10768.009.patch, Optimize Hadoop RPC encryption performance.pdf, 
> cpu_profile_RPC_encryption_AES.png, 
> cpu_profile_rpc_encryption_optimize_calculateHMAC.png
>
>
> Hadoop RPC encryption is enabled by setting {{hadoop.rpc.protection}} to 
> "privacy". It utilizes SASL {{GSSAPI}} and {{DIGEST-MD5}} mechanisms for 
> secure authentication and data protection. Even though {{GSSAPI}} supports 
> AES, it does so without AES-NI support by default, so the encryption is slow and 
> will become a bottleneck.
> After discussing with [~atm], [~tucu00] and [~umamaheswararao], we can do the 
> same optimization as in HDFS-6606: use AES-NI with more than *20x* speedup.
> On the other hand, an RPC message is small, but RPC is frequent and there may be 
> lots of RPC calls in one connection, so we need to set up a benchmark to see the real 
> improvement and then make a trade-off. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10768) Optimize Hadoop RPC encryption performance

2018-05-16 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16477850#comment-16477850
 ] 

Wei-Chiu Chuang commented on HADOOP-10768:
--

It looks like the majority of the overhead in 
SaslCryptoCodec$Integrity.calculateHMAC() is getting a Mac instance. If I make 
the Mac instance a thread-local instance instead of getting one per call (the 
same pattern is used in SecretManager), the CPU% used by this method drops from 
13% to less than 1%, and RPC encryption performance improves by 23%.
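A minimal sketch of that pattern (assuming HmacSHA1; helper class is hypothetical, not the actual SaslCryptoCodec code):

{code:java}
import java.security.NoSuchAlgorithmException;

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public final class ThreadLocalHmac {
  // Mac.getInstance() is expensive and Mac objects are not thread-safe,
  // so cache one instance per thread and reuse it across calls.
  private static final ThreadLocal<Mac> MAC = new ThreadLocal<Mac>() {
    @Override protected Mac initialValue() {
      try {
        return Mac.getInstance("HmacSHA1");
      } catch (NoSuchAlgorithmException e) {
        throw new IllegalStateException(e);
      }
    }
  };

  public static byte[] hmac(byte[] key, byte[] data) throws Exception {
    Mac mac = MAC.get();
    mac.init(new SecretKeySpec(key, "HmacSHA1"));
    return mac.doFinal(data);
  }
}
{code}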

> Optimize Hadoop RPC encryption performance
> --
>
> Key: HADOOP-10768
> URL: https://issues.apache.org/jira/browse/HADOOP-10768
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: performance, security
>Affects Versions: 3.0.0-alpha1
>Reporter: Yi Liu
>Assignee: Dapeng Sun
>Priority: Major
> Attachments: HADOOP-10768.001.patch, HADOOP-10768.002.patch, 
> HADOOP-10768.003.patch, HADOOP-10768.004.patch, HADOOP-10768.005.patch, 
> HADOOP-10768.006.patch, HADOOP-10768.007.patch, HADOOP-10768.008.patch, 
> HADOOP-10768.009.patch, Optimize Hadoop RPC encryption performance.pdf, 
> cpu_profile_RPC_encryption_AES.png
>
>
> Hadoop RPC encryption is enabled by setting {{hadoop.rpc.protection}} to 
> "privacy". It utilizes SASL {{GSSAPI}} and {{DIGEST-MD5}} mechanisms for 
> secure authentication and data protection. Even though {{GSSAPI}} supports 
> AES, it does so without AES-NI support by default, so the encryption is slow and 
> will become a bottleneck.
> After discussing with [~atm], [~tucu00] and [~umamaheswararao], we can do the 
> same optimization as in HDFS-6606: use AES-NI with more than *20x* speedup.
> On the other hand, an RPC message is small, but RPC is frequent and there may be 
> lots of RPC calls in one connection, so we need to set up a benchmark to see the real 
> improvement and then make a trade-off. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15459) KMSACLs will fail for other optype if acls is defined for one optype.

2018-05-16 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16477807#comment-16477807
 ] 

genericqa commented on HADOOP-15459:


| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 21s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 29m 29s | trunk passed |
| +1 | compile | 29m 42s | trunk passed |
| +1 | checkstyle | 0m 23s | trunk passed |
| +1 | mvnsite | 0m 33s | trunk passed |
| +1 | shadedclient | 11m 15s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 0m 36s | trunk passed |
| +1 | javadoc | 0m 25s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 21s | the patch passed |
| +1 | compile | 27m 56s | the patch passed |
| +1 | javac | 27m 56s | the patch passed |
| +1 | checkstyle | 0m 22s | the patch passed |
| +1 | mvnsite | 0m 33s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 10m 24s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 0m 44s | the patch passed |
| +1 | javadoc | 0m 23s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 3m 57s | hadoop-kms in the patch failed. |
| +1 | asflicense | 0m 39s | The patch does not generate ASF License warnings. |
| | | 118m 50s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.crypto.key.kms.server.TestKMS |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HADOOP-15459 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12923707/HADOOP-15459.001.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux f27bc033cf35 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d47c09d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/14639/artifact/out/patch-unit-hadoop-common-project_hadoop-kms.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/14639/testReport/ |
| Max. process+thread count | 341 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-kms U: hadoop-common-project/hadoop-kms |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/14639/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


[jira] [Commented] (HADOOP-15473) Configure serialFilter to avoid UnrecoverableKeyException caused by JDK-8189997

2018-05-16 Thread Philip Zeyliger (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16477789#comment-16477789
 ] 

Philip Zeyliger commented on HADOOP-15473:
--

In dealing with the fallout of this in Impala, I was able to edit the KMS 
"init" script to add the relevant {{-D}} option. The change is at 
[https://gerrit.cloudera.org/#/c/10418/1/testdata/cluster/node_templates/cdh6/etc/init.d/kms]
 if you're interested.

Do we need to actually expose this to our (Hadoop's) users? i.e., the only 
possible use of this API within a KMS process is 
{{org.apache.hadoop.crypto.key.JavaKeyStoreProvider}}, so could we just set 
this explicitly at start-up (either via the shell scripts or programmatically), 
and avoid exposing it to the users?

(Or does a client use this API, or perhaps we have enough plugins that we need 
to be more careful?)
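For reference, a sketch of setting the property programmatically before the keystore is first touched (the property name comes from the JDK-8189997 error above; the filter value is illustrative and would need to match whatever classes are actually stored):

{code:java}
public class KmsSerialFilterBootstrap {
  public static void main(String[] args) {
    // Must run before the JCEKS keystore is first read; otherwise the JDK's
    // default jceks.key.serialFilter rejects deserialization of stored keys.
    if (System.getProperty("jceks.key.serialFilter") == null) {
      System.setProperty("jceks.key.serialFilter",
          "java.lang.Enum;java.security.KeyRep;java.security.KeyRep$Type;"
              + "javax.crypto.spec.SecretKeySpec;!*");
    }
    // ... continue with normal KMS startup ...
  }
}
{code}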

> Configure serialFilter to avoid UnrecoverableKeyException caused by 
> JDK-8189997
> ---
>
> Key: HADOOP-15473
> URL: https://issues.apache.org/jira/browse/HADOOP-15473
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.7.6, 3.0.2
> Environment: JDK 8u171
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Critical
> Attachments: HDFS-13494.001.patch, HDFS-13494.002.patch, 
> HDFS-13494.003.patch, org.apache.hadoop.crypto.key.TestKeyProviderFactory.txt
>
>
> There is a new feature in JDK 8u171 called Enhanced KeyStore Mechanisms 
> (http://www.oracle.com/technetwork/java/javase/8u171-relnotes-430.html#JDK-8189997).
> This is the cause of the following errors in the TestKeyProviderFactory:
> {noformat}
> Caused by: java.security.UnrecoverableKeyException: Rejected by the 
> jceks.key.serialFilter or jdk.serialFilter property
>   at com.sun.crypto.provider.KeyProtector.unseal(KeyProtector.java:352)
>   at 
> com.sun.crypto.provider.JceKeyStore.engineGetKey(JceKeyStore.java:136)
>   at java.security.KeyStore.getKey(KeyStore.java:1023)
>   at 
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider.getMetadata(JavaKeyStoreProvider.java:410)
>   ... 28 more
> {noformat}
> This issue causes errors and failures in hbase tests right now (using hdfs) 
> and could affect other products running on this new Java version.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15217) org.apache.hadoop.fs.FsUrlConnection does not handle paths with spaces

2018-05-16 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1642#comment-1642
 ] 

genericqa commented on HADOOP-15217:


| (/) *+1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 13s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || trunk Compile Tests ||
| 0 | mvndep | 1m 48s | Maven dependency ordering for branch |
| +1 | mvninstall | 24m 20s | trunk passed |
| +1 | compile | 28m 33s | trunk passed |
| +1 | checkstyle | 2m 40s | trunk passed |
| +1 | mvnsite | 1m 49s | trunk passed |
| +1 | shadedclient | 14m 2s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 3m 14s | trunk passed |
| +1 | javadoc | 1m 19s | trunk passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 20s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 24s | the patch passed |
| +1 | compile | 26m 52s | the patch passed |
| +1 | javac | 26m 52s | the patch passed |
| +1 | checkstyle | 2m 42s | the patch passed |
| +1 | mvnsite | 1m 41s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 9m 9s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 3m 14s | the patch passed |
| +1 | javadoc | 1m 34s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 7m 53s | hadoop-common in the patch passed. |
| +1 | unit | 1m 34s | hadoop-hdfs-client in the patch passed. |
| +1 | asflicense | 0m 30s | The patch does not generate ASF License warnings. |
| | | 132m 51s | |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HADOOP-15217 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12923703/HADOOP-15217.01.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 8522363bdc0c 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d47c09d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/14638/testReport/ |
| Max. process+thread count | 1376 (vs. ulimit of 1) |
| modules | C: 

[jira] [Moved] (HADOOP-15473) Configure serialFilter to avoid UnrecoverableKeyException caused by JDK-8189997

2018-05-16 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka moved HDFS-13494 to HADOOP-15473:
---

Affects Version/s: (was: 3.0.2)
   (was: 2.7.6)
   2.7.6
   3.0.2
  Component/s: (was: kms)
   kms
  Key: HADOOP-15473  (was: HDFS-13494)
  Project: Hadoop Common  (was: Hadoop HDFS)

> Configure serialFilter to avoid UnrecoverableKeyException caused by 
> JDK-8189997
> ---
>
> Key: HADOOP-15473
> URL: https://issues.apache.org/jira/browse/HADOOP-15473
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 3.0.2, 2.7.6
> Environment: JDK 8u171
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Critical
> Attachments: HDFS-13494.001.patch, HDFS-13494.002.patch, 
> HDFS-13494.003.patch, org.apache.hadoop.crypto.key.TestKeyProviderFactory.txt
>
>
> There is a new feature in JDK 8u171 called Enhanced KeyStore Mechanisms 
> (http://www.oracle.com/technetwork/java/javase/8u171-relnotes-430.html#JDK-8189997).
> This is the cause of the following errors in the TestKeyProviderFactory:
> {noformat}
> Caused by: java.security.UnrecoverableKeyException: Rejected by the 
> jceks.key.serialFilter or jdk.serialFilter property
>   at com.sun.crypto.provider.KeyProtector.unseal(KeyProtector.java:352)
>   at 
> com.sun.crypto.provider.JceKeyStore.engineGetKey(JceKeyStore.java:136)
>   at java.security.KeyStore.getKey(KeyStore.java:1023)
>   at 
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider.getMetadata(JavaKeyStoreProvider.java:410)
>   ... 28 more
> {noformat}
> This issue causes errors and failures in hbase tests right now (using hdfs) 
> and could affect other products running on this new Java version.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15459) KMSACLs will fail for other optype if acls is defined for one optype.

2018-05-16 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HADOOP-15459:

Attachment: HADOOP-15459.001.patch

> KMSACLs will fail for other optype if acls is defined for one optype.
> -
>
> Key: HADOOP-15459
> URL: https://issues.apache.org/jira/browse/HADOOP-15459
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.3
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
>Priority: Critical
> Attachments: HADOOP-15459.001.patch
>
>
> Assume this subset of a kms-acls xml file.
> {noformat}
>   <property>
>     <name>default.key.acl.DECRYPT_EEK</name>
>     <value></value>
>     <description>
>       default ACL for DECRYPT_EEK operations for all key acls that are not
>       explicitly defined.
>     </description>
>   </property>
>   <property>
>     <name>key.acl.key1.DECRYPT_EEK</name>
>     <value>user1</value>
>   </property>
>   <property>
>     <name>default.key.acl.READ</name>
>     <value>*</value>
>     <description>
>       default ACL for READ operations for all key acls that are not
>       explicitly defined.
>     </description>
>   </property>
>   <property>
>     <name>whitelist.key.acl.READ</name>
>     <value>hdfs</value>
>     <description>
>       Whitelist ACL for READ operations for all keys.
>     </description>
>   </property>
> {noformat}
> For key {{key1}}, we restricted the {{DECRYPT_EEK}} operation to only {{user1}}.
> For other {{READ}} operations (like getMetadata), by default I still want 
> everyone to access all keys via {{default.key.acl.READ}}.
> But it doesn't allow anyone to access {{key1}} for any other READ operations.
> As a result, if the admin restricts access for one opType, then (s)he has to 
> define access for all other opTypes as well, which is not desirable.
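
A sketch of the fallback behaviour being asked for here (hypothetical code over a plain map of ACL entries, not the actual {{KMSACLs}} implementation): a key-specific entry should only shadow the default ACL for its own opType.
{noformat}
import java.util.Map;

// Hypothetical sketch: resolve whitelist, then per-key, then default ACLs,
// independently for each opType.
final class AclResolverSketch {
  private final Map<String, String> acls; // name -> value, as in kms-acls.xml

  AclResolverSketch(Map<String, String> acls) {
    this.acls = acls;
  }

  boolean hasAccess(String key, String opType, String user) {
    // The whitelist always grants, regardless of key-specific restrictions.
    if (matches(acls.get("whitelist.key.acl." + opType), user)) {
      return true;
    }
    String specific = acls.get("key.acl." + key + "." + opType);
    if (specific != null) {
      // Only THIS opType was restricted; others fall through below.
      return matches(specific, user);
    }
    // No key-specific entry for this opType: use the default ACL.
    return matches(acls.get("default.key.acl." + opType), user);
  }

  private static boolean matches(String acl, String user) {
    return acl != null && ("*".equals(acl) || acl.contains(user));
  }
}
{noformat}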



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15459) KMSACLs will fail for other optype if acls is defined for one optype.

2018-05-16 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HADOOP-15459:

Status: Patch Available  (was: Open)

> KMSACLs will fail for other optype if acls is defined for one optype.
> -
>
> Key: HADOOP-15459
> URL: https://issues.apache.org/jira/browse/HADOOP-15459
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.3
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
>Priority: Critical
> Attachments: HADOOP-15459.001.patch
>
>
> Assume this subset of a kms-acls xml file.
> {noformat}
>   <property>
>     <name>default.key.acl.DECRYPT_EEK</name>
>     <value></value>
>     <description>
>       default ACL for DECRYPT_EEK operations for all key acls that are not
>       explicitly defined.
>     </description>
>   </property>
>   <property>
>     <name>key.acl.key1.DECRYPT_EEK</name>
>     <value>user1</value>
>   </property>
>   <property>
>     <name>default.key.acl.READ</name>
>     <value>*</value>
>     <description>
>       default ACL for READ operations for all key acls that are not
>       explicitly defined.
>     </description>
>   </property>
>   <property>
>     <name>whitelist.key.acl.READ</name>
>     <value>hdfs</value>
>     <description>
>       Whitelist ACL for READ operations for all keys.
>     </description>
>   </property>
> {noformat}
> For key {{key1}}, we restricted the {{DECRYPT_EEK}} operation to only {{user1}}.
> For other {{READ}} operations (like getMetadata), by default I still want 
> everyone to access all keys via {{default.key.acl.READ}}.
> But it doesn't allow anyone to access {{key1}} for any other READ operations.
> As a result, if the admin restricts access for one opType, then (s)he has to 
> define access for all other opTypes as well, which is not desirable.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15154) Abstract new method assertCapability for StreamCapabilities testing

2018-05-16 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16477570#comment-16477570
 ] 

genericqa commented on HADOOP-15154:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 24s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 28m 
14s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 53s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 4 new + 17 unchanged - 0 fixed = 21 total (was 17) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 51s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
17s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}125m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HADOOP-15154 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12923686/HADOOP-15154.02.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 9b11766f0596 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0a22860 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14636/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14636/testReport/ |
| Max. process+thread count | 1504 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14636/console |
| Powered by | Apache Yetus 

[jira] [Updated] (HADOOP-15217) org.apache.hadoop.fs.FsUrlConnection does not handle paths with spaces

2018-05-16 Thread Zsolt Venczel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zsolt Venczel updated HADOOP-15217:
---
Attachment: HADOOP-15217.01.patch

> org.apache.hadoop.fs.FsUrlConnection does not handle paths with spaces
> --
>
> Key: HADOOP-15217
> URL: https://issues.apache.org/jira/browse/HADOOP-15217
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.0
>Reporter: Joseph Fourny
>Assignee: Zsolt Venczel
>Priority: Major
> Attachments: HADOOP-15217.01.patch, TestCase.java
>
>
> When _FsUrlStreamHandlerFactory_ is registered with _java.net.URL_ (ex: when 
> Spark is initialized), it breaks URLs with spaces (even though they are 
> properly URI-encoded). I traced the problem down to 
> _FSUrlConnection.connect()_ method. It naively gets the path from the URL, 
> which contains encoded spaces, and passes it to the 
> _org.apache.hadoop.fs.Path(String)_ constructor. This is not correct, because 
> the docs clearly say that the string must NOT be encoded. Doing so causes 
> double encoding within the Path class (i.e. %20 becomes %2520). 
> See attached JUnit test. 
> This test case mimics an issue I ran into when trying to use Commons 
> Configuration 1.9 AFTER initializing Spark. Commons Configuration uses URL 
> class to load configuration files, but Spark installs 
> _FsUrlStreamHandlerFactory_, which hits this issue. For now, we are using an 
> AspectJ aspect to "patch" the bytecode at load time to work around the issue. 
> The real fix is quite simple. All you need to do is replace this line in 
> _org.apache.hadoop.fs.FsUrlConnection.connect()_:
>         is = fs.open(new Path(url.getPath()));
> with this line:
>      is = fs.open(new Path(url.*toUri()*.getPath()));
> URI.getPath() will correctly decode the path, which is what is expected by 
> _org.apache.hadoop.fs.Path(String)_ constructor.
>  
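
The double encoding is easy to reproduce in isolation; a small self-contained sketch (file name and path are made up for the demo):
{noformat}
import java.net.URL;
import org.apache.hadoop.fs.Path;

public class SpaceEncodingDemo {
  public static void main(String[] args) throws Exception {
    URL url = new URL("file:///tmp/dir%20with%20space/file.txt");

    // Buggy: URL.getPath() keeps the %20, and Path encodes the '%' again,
    // so the internal URI ends up with %2520.
    Path doubleEncoded = new Path(url.getPath());

    // Fixed: URI.getPath() decodes %20 into a real space first, which is
    // what the Path(String) contract expects.
    Path correct = new Path(url.toURI().getPath());

    System.out.println(doubleEncoded.toUri()); // .../dir%2520with%2520space/...
    System.out.println(correct.toUri());       // .../dir%20with%20space/...
  }
}
{noformat}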



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15217) org.apache.hadoop.fs.FsUrlConnection does not handle paths with spaces

2018-05-16 Thread Zsolt Venczel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zsolt Venczel updated HADOOP-15217:
---
Status: Patch Available  (was: Open)

> org.apache.hadoop.fs.FsUrlConnection does not handle paths with spaces
> --
>
> Key: HADOOP-15217
> URL: https://issues.apache.org/jira/browse/HADOOP-15217
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.0
>Reporter: Joseph Fourny
>Assignee: Zsolt Venczel
>Priority: Major
> Attachments: HADOOP-15217.01.patch, TestCase.java
>
>
> When _FsUrlStreamHandlerFactory_ is registered with _java.net.URL_ (ex: when 
> Spark is initialized), it breaks URLs with spaces (even though they are 
> properly URI-encoded). I traced the problem down to 
> _FSUrlConnection.connect()_ method. It naively gets the path from the URL, 
> which contains encoded spaces, and passes it to the 
> _org.apache.hadoop.fs.Path(String)_ constructor. This is not correct, because 
> the docs clearly say that the string must NOT be encoded. Doing so causes 
> double encoding within the Path class (i.e. %20 becomes %2520). 
> See attached JUnit test. 
> This test case mimics an issue I ran into when trying to use Commons 
> Configuration 1.9 AFTER initializing Spark. Commons Configuration uses URL 
> class to load configuration files, but Spark installs 
> _FsUrlStreamHandlerFactory_, which hits this issue. For now, we are using an 
> AspectJ aspect to "patch" the bytecode at load time to work around the issue. 
> The real fix is quite simple. All you need to do is replace this line in 
> _org.apache.hadoop.fs.FsUrlConnection.connect()_:
>         is = fs.open(new Path(url.getPath()));
> with this line:
>      is = fs.open(new Path(url.*toUri()*.getPath()));
> URI.getPath() will correctly decode the path, which is what is expected by 
> _org.apache.hadoop.fs.Path(String)_ constructor.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15217) org.apache.hadoop.fs.FsUrlConnection does not handle paths with spaces

2018-05-16 Thread Zsolt Venczel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zsolt Venczel reassigned HADOOP-15217:
--

Assignee: Zsolt Venczel

> org.apache.hadoop.fs.FsUrlConnection does not handle paths with spaces
> --
>
> Key: HADOOP-15217
> URL: https://issues.apache.org/jira/browse/HADOOP-15217
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.0
>Reporter: Joseph Fourny
>Assignee: Zsolt Venczel
>Priority: Major
> Attachments: HADOOP-15217.01.patch, TestCase.java
>
>
> When _FsUrlStreamHandlerFactory_ is registered with _java.net.URL_ (ex: when 
> Spark is initialized), it breaks URLs with spaces (even though they are 
> properly URI-encoded). I traced the problem down to 
> _FSUrlConnection.connect()_ method. It naively gets the path from the URL, 
> which contains encoded spaces, and passes it to the 
> _org.apache.hadoop.fs.Path(String)_ constructor. This is not correct, because 
> the docs clearly say that the string must NOT be encoded. Doing so causes 
> double encoding within the Path class (i.e. %20 becomes %2520). 
> See attached JUnit test. 
> This test case mimics an issue I ran into when trying to use Commons 
> Configuration 1.9 AFTER initializing Spark. Commons Configuration uses URL 
> class to load configuration files, but Spark installs 
> _FsUrlStreamHandlerFactory_, which hits this issue. For now, we are using an 
> AspectJ aspect to "patch" the bytecode at load time to work around the issue. 
> The real fix is quite simple. All you need to do is replace this line in 
> _org.apache.hadoop.fs.FsUrlConnection.connect()_:
>         is = fs.open(new Path(url.getPath()));
> with this line:
>      is = fs.open(new Path(url.*toUri()*.getPath()));
> URI.getPath() will correctly decode the path, which is what is expected by 
> _org.apache.hadoop.fs.Path(String)_ constructor.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-10768) Optimize Hadoop RPC encryption performance

2018-05-16 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16477489#comment-16477489
 ] 

Wei-Chiu Chuang edited comment on HADOOP-10768 at 5/16/18 2:26 PM:
---

!cpu_profile_RPC_encryption_AES.png|thumbnail!
OpenSSLCipher.update() uses 22.9% CPU
SaslCryptoCodec$Integrity.calculateHMAC() uses 13% CPU

The microbenchmarks I ran show that the performance of RPC encryption improved 
by almost 3.7x, though it is still around 35% slower than RPC authentication/no 
SASL. (BTW: RPC integrity is a very rarely used configuration; RPC 
encryption/RPC authentication is more common.)

||Configuration||RPC calls per second||
|No SASL|44,188|
|AUTHENTICATION|44,473|
|INTEGRITY|29,877|
|PRIVACY+AES/CTR/NoPadding|28,650|
|PRIVACY w/o AES|7,770|

The microbenchmark runs on an Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz in AWS, 
with 8 cores.
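
(Cross-checking the table above: 28,650 / 7,770 ≈ 3.7, which is where the 
"almost 3.7x" figure comes from, and 28,650 / 44,473 ≈ 0.64, i.e. roughly 35% 
slower than AUTHENTICATION/no SASL.)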

hadoop jar 
/opt/cloudera/parcels/CDH/jars/hadoop-common-3.0.0-cdh6.0.0-beta1-tests.jar 
org.apache.hadoop.ipc.RPCCallBenchmark -r 4 -c 30 -s 30 -w 60 -t 60 -m 1024
hadoop jar 
/opt/cloudera/parcels/CDH/jars/hadoop-common-3.0.0-cdh6.0.0-beta1-tests.jar 
org.apache.hadoop.ipc.RPCCallBenchmark -r 4 -c 30 -s 30 -w 60 -t 60 -m 1024 -a 
-q AUTHENTICATION
hadoop jar 
/opt/cloudera/parcels/CDH/jars/hadoop-common-3.0.0-cdh6.0.0-beta1-tests.jar 
org.apache.hadoop.ipc.RPCCallBenchmark -r 4 -c 30 -s 30 -w 60 -t 60 -m 1024 -a 
-q INTEGRITY
hadoop jar 
/opt/cloudera/parcels/CDH/jars/hadoop-common-3.0.0-cdh6.0.0-beta1-tests.jar 
org.apache.hadoop.ipc.RPCCallBenchmark -r 4 -c 1 -s 1 -w 60 -t 60 -m 1024 -a -q 
PRIVACY -f AES/CTR/NoPadding
hadoop jar 
/opt/cloudera/parcels/CDH/jars/hadoop-common-3.0.0-cdh6.0.0-beta1-tests.jar 
org.apache.hadoop.ipc.RPCCallBenchmark -r 4 -c 30 -s 30 -w 60 -t 60 -m 1024 -a 
-q PRIVACY

[~mmokhtar] is also helping to evaluate the performance using Impala 
benchmarks. However, so far we don't see much performance improvement at 
Impala's level.


was (Author: jojochuang):
!cpu_profile_RPC_encryption_AES.png|thumbnail!
OpenSSLCipher.update() uses 22.9% CPU
SaslCryptoCodec$Integrity.calculateHMAC() uses 13% CPU

The microbenchmarks I ran show that the performance of RPC encryption has 
improved by almost 3.7x, though it is still around 35% slower than RPC 
authentication/no SASL. (BTW: RPC integrity is a very rarely used 
configuration; RPC encryption/RPC authentication is more common.)
No SASL: 44,188 calls/s
AUTHENTICATION: 44,473 calls/s
INTEGRITY: 29,877 calls/s
PRIVACY+AES/CTR/NoPadding: 28,650 calls/s
PRIVACY (no AES-NI): 7,770 calls/s

hadoop jar 
/opt/cloudera/parcels/CDH/jars/hadoop-common-3.0.0-cdh6.0.0-beta1-tests.jar 
org.apache.hadoop.ipc.RPCCallBenchmark -r 4 -c 30 -s 30 -w 60 -t 60 -m 1024
hadoop jar 
/opt/cloudera/parcels/CDH/jars/hadoop-common-3.0.0-cdh6.0.0-beta1-tests.jar 
org.apache.hadoop.ipc.RPCCallBenchmark -r 4 -c 30 -s 30 -w 60 -t 60 -m 1024 -a 
-q AUTHENTICATION
hadoop jar 
/opt/cloudera/parcels/CDH/jars/hadoop-common-3.0.0-cdh6.0.0-beta1-tests.jar 
org.apache.hadoop.ipc.RPCCallBenchmark -r 4 -c 30 -s 30 -w 60 -t 60 -m 1024 -a 
-q INTEGRITY
hadoop jar 
/opt/cloudera/parcels/CDH/jars/hadoop-common-3.0.0-cdh6.0.0-beta1-tests.jar 
org.apache.hadoop.ipc.RPCCallBenchmark -r 4 -c 1 -s 1 -w 60 -t 60 -m 1024 -a -q 
PRIVACY -f AES/CTR/NoPadding
hadoop jar 
/opt/cloudera/parcels/CDH/jars/hadoop-common-3.0.0-cdh6.0.0-beta1-tests.jar 
org.apache.hadoop.ipc.RPCCallBenchmark -r 4 -c 30 -s 30 -w 60 -t 60 -m 1024 -a 
-q PRIVACY

[~mmokhtar] is also helping to evaluate the performance using Impala 
benchmarks. However, so far we don't see much performance improvement at 
Impala's level.

> Optimize Hadoop RPC encryption performance
> --
>
> Key: HADOOP-10768
> URL: https://issues.apache.org/jira/browse/HADOOP-10768
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: performance, security
>Affects Versions: 3.0.0-alpha1
>Reporter: Yi Liu
>Assignee: Dapeng Sun
>Priority: Major
> Attachments: HADOOP-10768.001.patch, HADOOP-10768.002.patch, 
> HADOOP-10768.003.patch, HADOOP-10768.004.patch, HADOOP-10768.005.patch, 
> HADOOP-10768.006.patch, HADOOP-10768.007.patch, HADOOP-10768.008.patch, 
> HADOOP-10768.009.patch, Optimize Hadoop RPC encryption performance.pdf, 
> cpu_profile_RPC_encryption_AES.png
>
>
> Hadoop RPC encryption is enabled by setting {{hadoop.rpc.protection}} to 
> "privacy". It utilizes the SASL {{GSSAPI}} and {{DIGEST-MD5}} mechanisms for 
> secure authentication and data protection. Even though {{GSSAPI}} supports 
> using AES, it does so without AES-NI support by default, so the encryption is 
> slow and becomes a bottleneck.
> After discussing with [~atm], [~tucu00] and [~umamaheswararao], we can do the 
> same optimization as in HDFS-6606: use AES-NI with more than *20x* speedup.
> On the other hand, RPC 

[jira] [Comment Edited] (HADOOP-10768) Optimize Hadoop RPC encryption performance

2018-05-16 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16477489#comment-16477489
 ] 

Wei-Chiu Chuang edited comment on HADOOP-10768 at 5/16/18 2:18 PM:
---

!cpu_profile_RPC_encryption_AES.png|thumbnail!
OpenSSLCipher.update() uses 22.9% CPU
SaslCryptoCodec$Integrity.calculateHMAC() uses 13% CPU

The microbenchmarks I ran show that the performance of RPC encryption has 
improved by almost 3.7x, though it is still around 35% slower than RPC 
authentication/no SASL. (BTW: RPC integrity is a very rarely used 
configuration; RPC encryption/RPC authentication is more common.)
No SASL: 44,188 calls/s
AUTHENTICATION: 44,473 calls/s
INTEGRITY: 29,877 calls/s
PRIVACY+AES/CTR/NoPadding: 28,650 calls/s
PRIVACY (no AES-NI): 7,770 calls/s

hadoop jar 
/opt/cloudera/parcels/CDH/jars/hadoop-common-3.0.0-cdh6.0.0-beta1-tests.jar 
org.apache.hadoop.ipc.RPCCallBenchmark -r 4 -c 30 -s 30 -w 60 -t 60 -m 1024
hadoop jar 
/opt/cloudera/parcels/CDH/jars/hadoop-common-3.0.0-cdh6.0.0-beta1-tests.jar 
org.apache.hadoop.ipc.RPCCallBenchmark -r 4 -c 30 -s 30 -w 60 -t 60 -m 1024 -a 
-q AUTHENTICATION
hadoop jar 
/opt/cloudera/parcels/CDH/jars/hadoop-common-3.0.0-cdh6.0.0-beta1-tests.jar 
org.apache.hadoop.ipc.RPCCallBenchmark -r 4 -c 30 -s 30 -w 60 -t 60 -m 1024 -a 
-q INTEGRITY
hadoop jar 
/opt/cloudera/parcels/CDH/jars/hadoop-common-3.0.0-cdh6.0.0-beta1-tests.jar 
org.apache.hadoop.ipc.RPCCallBenchmark -r 4 -c 1 -s 1 -w 60 -t 60 -m 1024 -a -q 
PRIVACY -f AES/CTR/NoPadding
hadoop jar 
/opt/cloudera/parcels/CDH/jars/hadoop-common-3.0.0-cdh6.0.0-beta1-tests.jar 
org.apache.hadoop.ipc.RPCCallBenchmark -r 4 -c 30 -s 30 -w 60 -t 60 -m 1024 -a 
-q PRIVACY

[~mmokhtar] is also helping to evaluate the performance using Impala 
benchmarks. However, so far we don't see much performance improvement at 
Impala's level.


was (Author: jojochuang):
!cpu_profile_RPC_encryption_AES.png|thumbnail!
OpenSSLCipher.update() uses 22.9% CPU
SaslCryptoCodec$Integrity.calculateHMAC() uses 13% CPU

The microbenchmarks I ran show that the performance of RPC encryption has 
improved by almost 4x, though it is still around 50% slower than RPC 
authentication/no SASL. (BTW: RPC integrity is a very rarely used 
configuration; RPC encryption/RPC authentication is more common.)
No SASL: 44,188 calls/s
AUTHENTICATION: 44,473 calls/s
INTEGRITY: 29,877 calls/s
PRIVACY+AES/CTR/NoPadding: 28,650 calls/s
PRIVACY (no AES-NI): 7,770 calls/s

hadoop jar 
/opt/cloudera/parcels/CDH/jars/hadoop-common-3.0.0-cdh6.0.0-beta1-tests.jar 
org.apache.hadoop.ipc.RPCCallBenchmark -r 4 -c 30 -s 30 -w 60 -t 60 -m 1024
hadoop jar 
/opt/cloudera/parcels/CDH/jars/hadoop-common-3.0.0-cdh6.0.0-beta1-tests.jar 
org.apache.hadoop.ipc.RPCCallBenchmark -r 4 -c 30 -s 30 -w 60 -t 60 -m 1024 -a 
-q AUTHENTICATION
hadoop jar 
/opt/cloudera/parcels/CDH/jars/hadoop-common-3.0.0-cdh6.0.0-beta1-tests.jar 
org.apache.hadoop.ipc.RPCCallBenchmark -r 4 -c 30 -s 30 -w 60 -t 60 -m 1024 -a 
-q INTEGRITY
hadoop jar 
/opt/cloudera/parcels/CDH/jars/hadoop-common-3.0.0-cdh6.0.0-beta1-tests.jar 
org.apache.hadoop.ipc.RPCCallBenchmark -r 4 -c 1 -s 1 -w 60 -t 60 -m 1024 -a -q 
PRIVACY -f AES/CTR/NoPadding
hadoop jar 
/opt/cloudera/parcels/CDH/jars/hadoop-common-3.0.0-cdh6.0.0-beta1-tests.jar 
org.apache.hadoop.ipc.RPCCallBenchmark -r 4 -c 30 -s 30 -w 60 -t 60 -m 1024 -a 
-q PRIVACY

[~mmokhtar] is also helping to evaluate the performance using Impala 
benchmarks. However, so far we don't see much performance improvement at 
Impala's level.

> Optimize Hadoop RPC encryption performance
> --
>
> Key: HADOOP-10768
> URL: https://issues.apache.org/jira/browse/HADOOP-10768
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: performance, security
>Affects Versions: 3.0.0-alpha1
>Reporter: Yi Liu
>Assignee: Dapeng Sun
>Priority: Major
> Attachments: HADOOP-10768.001.patch, HADOOP-10768.002.patch, 
> HADOOP-10768.003.patch, HADOOP-10768.004.patch, HADOOP-10768.005.patch, 
> HADOOP-10768.006.patch, HADOOP-10768.007.patch, HADOOP-10768.008.patch, 
> HADOOP-10768.009.patch, Optimize Hadoop RPC encryption performance.pdf, 
> cpu_profile_RPC_encryption_AES.png
>
>
> Hadoop RPC encryption is enabled by setting {{hadoop.rpc.protection}} to 
> "privacy". It utilizes the SASL {{GSSAPI}} and {{DIGEST-MD5}} mechanisms for 
> secure authentication and data protection. Even though {{GSSAPI}} supports 
> using AES, it does so without AES-NI support by default, so the encryption is 
> slow and becomes a bottleneck.
> After discussing with [~atm], [~tucu00] and [~umamaheswararao], we can do the 
> same optimization as in HDFS-6606: use AES-NI with more than *20x* speedup.
> On the other hand, each RPC message is small, but RPC is frequent and there 
> may be lots of RPC calls in one 

[jira] [Commented] (HADOOP-10768) Optimize Hadoop RPC encryption performance

2018-05-16 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16477489#comment-16477489
 ] 

Wei-Chiu Chuang commented on HADOOP-10768:
--

!cpu_profile_RPC_encryption_AES.png|thumbnail!
OpenSSLCipher.update() uses 22.9% CPU
SaslCryptoCodec$Integrity.calculateHMAC() uses 13% CPU

The microbenchmarks I ran show that the performance of RPC encryption has 
improved by almost 4x, though it is still around 50% slower than RPC 
authentication/no SASL. (BTW: RPC integrity is a very rarely used 
configuration; RPC encryption/RPC authentication is more common.)
No SASL: 44,188 calls/s
AUTHENTICATION: 44,473 calls/s
INTEGRITY: 29,877 calls/s
PRIVACY+AES/CTR/NoPadding: 28,650 calls/s
PRIVACY (no AES-NI): 7,770 calls/s

hadoop jar 
/opt/cloudera/parcels/CDH/jars/hadoop-common-3.0.0-cdh6.0.0-beta1-tests.jar 
org.apache.hadoop.ipc.RPCCallBenchmark -r 4 -c 30 -s 30 -w 60 -t 60 -m 1024
hadoop jar 
/opt/cloudera/parcels/CDH/jars/hadoop-common-3.0.0-cdh6.0.0-beta1-tests.jar 
org.apache.hadoop.ipc.RPCCallBenchmark -r 4 -c 30 -s 30 -w 60 -t 60 -m 1024 -a 
-q AUTHENTICATION
hadoop jar 
/opt/cloudera/parcels/CDH/jars/hadoop-common-3.0.0-cdh6.0.0-beta1-tests.jar 
org.apache.hadoop.ipc.RPCCallBenchmark -r 4 -c 30 -s 30 -w 60 -t 60 -m 1024 -a 
-q INTEGRITY
hadoop jar 
/opt/cloudera/parcels/CDH/jars/hadoop-common-3.0.0-cdh6.0.0-beta1-tests.jar 
org.apache.hadoop.ipc.RPCCallBenchmark -r 4 -c 1 -s 1 -w 60 -t 60 -m 1024 -a -q 
PRIVACY -f AES/CTR/NoPadding
hadoop jar 
/opt/cloudera/parcels/CDH/jars/hadoop-common-3.0.0-cdh6.0.0-beta1-tests.jar 
org.apache.hadoop.ipc.RPCCallBenchmark -r 4 -c 30 -s 30 -w 60 -t 60 -m 1024 -a 
-q PRIVACY

[~mmokhtar] is also helping to evaluate the performance using Impala 
benchmarks. However, so far we don't see much performance improvement at 
Impala's level.

> Optimize Hadoop RPC encryption performance
> --
>
> Key: HADOOP-10768
> URL: https://issues.apache.org/jira/browse/HADOOP-10768
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: performance, security
>Affects Versions: 3.0.0-alpha1
>Reporter: Yi Liu
>Assignee: Dapeng Sun
>Priority: Major
> Attachments: HADOOP-10768.001.patch, HADOOP-10768.002.patch, 
> HADOOP-10768.003.patch, HADOOP-10768.004.patch, HADOOP-10768.005.patch, 
> HADOOP-10768.006.patch, HADOOP-10768.007.patch, HADOOP-10768.008.patch, 
> HADOOP-10768.009.patch, Optimize Hadoop RPC encryption performance.pdf, 
> cpu_profile_RPC_encryption_AES.png
>
>
> Hadoop RPC encryption is enabled by setting {{hadoop.rpc.protection}} to 
> "privacy". It utilizes the SASL {{GSSAPI}} and {{DIGEST-MD5}} mechanisms for 
> secure authentication and data protection. Even though {{GSSAPI}} supports 
> using AES, it does so without AES-NI support by default, so the encryption is 
> slow and becomes a bottleneck.
> After discussing with [~atm], [~tucu00] and [~umamaheswararao], we can do the 
> same optimization as in HDFS-6606: use AES-NI with more than *20x* speedup.
> On the other hand, each RPC message is small, but RPC is frequent and there 
> may be lots of RPC calls in one connection, so we need to set up a benchmark 
> to see the real improvement and then make a trade-off. 
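
As a rough illustration of the direction described above, a self-contained JCE sketch (not the patch itself: the real change negotiates the key and IV via the SASL handshake, while here they are plain parameters):
{noformat}
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

// Sketch only: protect an RPC payload with AES/CTR so that AES-NI-capable
// hardware accelerates the privacy layer instead of DIGEST-MD5's wrap().
// key must be 16/24/32 bytes, iv must be 16 bytes.
public final class AesCtrWrapSketch {
  public static byte[] wrap(byte[] key, byte[] iv, byte[] payload)
      throws Exception {
    Cipher cipher = Cipher.getInstance("AES/CTR/NoPadding");
    cipher.init(Cipher.ENCRYPT_MODE,
        new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
    return cipher.doFinal(payload);
  }

  public static byte[] unwrap(byte[] key, byte[] iv, byte[] wrapped)
      throws Exception {
    Cipher cipher = Cipher.getInstance("AES/CTR/NoPadding");
    cipher.init(Cipher.DECRYPT_MODE,
        new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
    return cipher.doFinal(wrapped);
  }
}
{noformat}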



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15459) KMSACLs will fail for other optype if acls is defined for one optype.

2018-05-16 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah reassigned HADOOP-15459:
---

Assignee: Rushabh S Shah

> KMSACLs will fail for other optype if acls is defined for one optype.
> -
>
> Key: HADOOP-15459
> URL: https://issues.apache.org/jira/browse/HADOOP-15459
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.3
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
>Priority: Critical
>
> Assume this subset of a kms-acls xml file.
> {noformat}
>   <property>
>     <name>default.key.acl.DECRYPT_EEK</name>
>     <value></value>
>     <description>
>       default ACL for DECRYPT_EEK operations for all key acls that are not
>       explicitly defined.
>     </description>
>   </property>
>   <property>
>     <name>key.acl.key1.DECRYPT_EEK</name>
>     <value>user1</value>
>   </property>
>   <property>
>     <name>default.key.acl.READ</name>
>     <value>*</value>
>     <description>
>       default ACL for READ operations for all key acls that are not
>       explicitly defined.
>     </description>
>   </property>
>   <property>
>     <name>whitelist.key.acl.READ</name>
>     <value>hdfs</value>
>     <description>
>       Whitelist ACL for READ operations for all keys.
>     </description>
>   </property>
> {noformat}
> For key {{key1}}, we restricted the {{DECRYPT_EEK}} operation to only {{user1}}.
> For other {{READ}} operations (like getMetadata), by default I still want 
> everyone to access all keys via {{default.key.acl.READ}}.
> But it doesn't allow anyone to access {{key1}} for any other READ operations.
> As a result, if the admin restricts access for one opType, then (s)he has to 
> define access for all other opTypes as well, which is not desirable.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15459) KMSACLs will fail for other optype if acls is defined for one optype.

2018-05-16 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16477483#comment-16477483
 ] 

Rushabh S Shah commented on HADOOP-15459:
-

bq. Rushabh S Shah No problem.
Thank you!

> KMSACLs will fail for other optype if acls is defined for one optype.
> -
>
> Key: HADOOP-15459
> URL: https://issues.apache.org/jira/browse/HADOOP-15459
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.3
>Reporter: Rushabh S Shah
>Priority: Critical
>
> Assume this subset of a kms-acls xml file.
> {noformat}
>   <property>
>     <name>default.key.acl.DECRYPT_EEK</name>
>     <value></value>
>     <description>
>       default ACL for DECRYPT_EEK operations for all key acls that are not
>       explicitly defined.
>     </description>
>   </property>
>   <property>
>     <name>key.acl.key1.DECRYPT_EEK</name>
>     <value>user1</value>
>   </property>
>   <property>
>     <name>default.key.acl.READ</name>
>     <value>*</value>
>     <description>
>       default ACL for READ operations for all key acls that are not
>       explicitly defined.
>     </description>
>   </property>
>   <property>
>     <name>whitelist.key.acl.READ</name>
>     <value>hdfs</value>
>     <description>
>       Whitelist ACL for READ operations for all keys.
>     </description>
>   </property>
> {noformat}
> For key {{key1}}, we restricted the {{DECRYPT_EEK}} operation to only {{user1}}.
> For other {{READ}} operations (like getMetadata), by default I still want 
> everyone to access all keys via {{default.key.acl.READ}}.
> But it doesn't allow anyone to access {{key1}} for any other READ operations.
> As a result, if the admin restricts access for one opType, then (s)he has to 
> define access for all other opTypes as well, which is not desirable.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15459) KMSACLs will fail for other optype if acls is defined for one optype.

2018-05-16 Thread Ranith Sardar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar reassigned HADOOP-15459:
--

Assignee: (was: Ranith Sardar)

> KMSACLs will fail for other optype if acls is defined for one optype.
> -
>
> Key: HADOOP-15459
> URL: https://issues.apache.org/jira/browse/HADOOP-15459
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.3
>Reporter: Rushabh S Shah
>Priority: Critical
>
> Assume this subset of a kms-acls xml file.
> {noformat}
>   <property>
>     <name>default.key.acl.DECRYPT_EEK</name>
>     <value></value>
>     <description>
>       default ACL for DECRYPT_EEK operations for all key acls that are not
>       explicitly defined.
>     </description>
>   </property>
>   <property>
>     <name>key.acl.key1.DECRYPT_EEK</name>
>     <value>user1</value>
>   </property>
>   <property>
>     <name>default.key.acl.READ</name>
>     <value>*</value>
>     <description>
>       default ACL for READ operations for all key acls that are not
>       explicitly defined.
>     </description>
>   </property>
>   <property>
>     <name>whitelist.key.acl.READ</name>
>     <value>hdfs</value>
>     <description>
>       Whitelist ACL for READ operations for all keys.
>     </description>
>   </property>
> {noformat}
> For key {{key1}}, we restricted the {{DECRYPT_EEK}} operation to only {{user1}}.
> For other {{READ}} operations (like getMetadata), by default I still want 
> everyone to access all keys via {{default.key.acl.READ}}.
> But it doesn't allow anyone to access {{key1}} for any other READ operations.
> As a result, if the admin restricts access for one opType, then (s)he has to 
> define access for all other opTypes as well, which is not desirable.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15459) KMSACLs will fail for other optype if acls is defined for one optype.

2018-05-16 Thread Ranith Sardar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16477477#comment-16477477
 ] 

Ranith Sardar commented on HADOOP-15459:


[~shahrs87] No problem.

> KMSACLs will fail for other optype if acls is defined for one optype.
> -
>
> Key: HADOOP-15459
> URL: https://issues.apache.org/jira/browse/HADOOP-15459
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.3
>Reporter: Rushabh S Shah
>Assignee: Ranith Sardar
>Priority: Critical
>
> Assume this subset of a kms-acls xml file.
> {noformat}
>   <property>
>     <name>default.key.acl.DECRYPT_EEK</name>
>     <value></value>
>     <description>
>       default ACL for DECRYPT_EEK operations for all key acls that are not
>       explicitly defined.
>     </description>
>   </property>
>   <property>
>     <name>key.acl.key1.DECRYPT_EEK</name>
>     <value>user1</value>
>   </property>
>   <property>
>     <name>default.key.acl.READ</name>
>     <value>*</value>
>     <description>
>       default ACL for READ operations for all key acls that are not
>       explicitly defined.
>     </description>
>   </property>
>   <property>
>     <name>whitelist.key.acl.READ</name>
>     <value>hdfs</value>
>     <description>
>       Whitelist ACL for READ operations for all keys.
>     </description>
>   </property>
> {noformat}
> For key {{key1}}, we restricted the {{DECRYPT_EEK}} operation to only {{user1}}.
> For other {{READ}} operations (like getMetadata), by default I still want 
> everyone to access all keys via {{default.key.acl.READ}}.
> But it doesn't allow anyone to access {{key1}} for any other READ operations.
> As a result, if the admin restricts access for one opType, then (s)he has to 
> define access for all other opTypes as well, which is not desirable.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15459) KMSACLs will fail for other optype if acls is defined for one optype.

2018-05-16 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16477469#comment-16477469
 ] 

Rushabh S Shah commented on HADOOP-15459:
-

Hi [~RANith], we have an internal release which is blocked on this jira.
Do you mind if I provide a patch to this jira?

> KMSACLs will fail for other optype if acls is defined for one optype.
> -
>
> Key: HADOOP-15459
> URL: https://issues.apache.org/jira/browse/HADOOP-15459
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.3
>Reporter: Rushabh S Shah
>Assignee: Ranith Sardar
>Priority: Critical
>
> Assume this subset of a kms-acls xml file.
> {noformat}
>   <property>
>     <name>default.key.acl.DECRYPT_EEK</name>
>     <value></value>
>     <description>
>       default ACL for DECRYPT_EEK operations for all key acls that are not
>       explicitly defined.
>     </description>
>   </property>
>   <property>
>     <name>key.acl.key1.DECRYPT_EEK</name>
>     <value>user1</value>
>   </property>
>   <property>
>     <name>default.key.acl.READ</name>
>     <value>*</value>
>     <description>
>       default ACL for READ operations for all key acls that are not
>       explicitly defined.
>     </description>
>   </property>
>   <property>
>     <name>whitelist.key.acl.READ</name>
>     <value>hdfs</value>
>     <description>
>       Whitelist ACL for READ operations for all keys.
>     </description>
>   </property>
> {noformat}
> For key {{key1}}, we restricted the {{DECRYPT_EEK}} operation to only {{user1}}.
> For other {{READ}} operations (like getMetadata), by default I still want 
> everyone to access all keys via {{default.key.acl.READ}}.
> But it doesn't allow anyone to access {{key1}} for any other READ operations.
> As a result, if the admin restricts access for one opType, then (s)he has to 
> define access for all other opTypes as well, which is not desirable.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-10768) Optimize Hadoop RPC encryption performance

2018-05-16 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-10768:
-
Attachment: cpu_profile_RPC_encryption_AES.png

> Optimize Hadoop RPC encryption performance
> --
>
> Key: HADOOP-10768
> URL: https://issues.apache.org/jira/browse/HADOOP-10768
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: performance, security
>Affects Versions: 3.0.0-alpha1
>Reporter: Yi Liu
>Assignee: Dapeng Sun
>Priority: Major
> Attachments: HADOOP-10768.001.patch, HADOOP-10768.002.patch, 
> HADOOP-10768.003.patch, HADOOP-10768.004.patch, HADOOP-10768.005.patch, 
> HADOOP-10768.006.patch, HADOOP-10768.007.patch, HADOOP-10768.008.patch, 
> HADOOP-10768.009.patch, Optimize Hadoop RPC encryption performance.pdf, 
> cpu_profile_RPC_encryption_AES.png
>
>
> Hadoop RPC encryption is enabled by setting {{hadoop.rpc.protection}} to 
> "privacy". It utilizes the SASL {{GSSAPI}} and {{DIGEST-MD5}} mechanisms for 
> secure authentication and data protection. Even though {{GSSAPI}} supports 
> using AES, it does so without AES-NI support by default, so the encryption is 
> slow and becomes a bottleneck.
> After discussing with [~atm], [~tucu00] and [~umamaheswararao], we can do the 
> same optimization as in HDFS-6606: use AES-NI with more than *20x* speedup.
> On the other hand, each RPC message is small, but RPC is frequent and there 
> may be lots of RPC calls in one connection, so we need to set up a benchmark 
> to see the real improvement and then make a trade-off. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15154) Abstract new method assertCapability for StreamCapabilities testing

2018-05-16 Thread Zsolt Venczel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zsolt Venczel updated HADOOP-15154:
---
Attachment: HADOOP-15154.02.patch

> Abstract new method assertCapability for StreamCapabilities testing
> ---
>
> Key: HADOOP-15154
> URL: https://issues.apache.org/jira/browse/HADOOP-15154
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Reporter: Xiao Chen
>Assignee: Zsolt Venczel
>Priority: Minor
> Attachments: HADOOP-15154.01.patch, HADOOP-15154.02.patch
>
>
> From Steve's 
> [comment|https://issues.apache.org/jira/browse/HADOOP-15149?focusedCommentId=16306806=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16306806]:
> bq.  it'd have been cleaner for the asserts to have been one in a 
> assertCapability(key, StreamCapabilities subject, bool outcome) and had it 
> throw meaningful exceptions on a failure
> We can consider abstract such a method to a test util class and use it for 
> all {{StreamCapabilities}} tests as needed.
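
A minimal sketch of the helper being proposed (signature taken from the quoted comment; class name and message text are illustrative):
{noformat}
import org.apache.hadoop.fs.StreamCapabilities;
import org.junit.Assert;

public final class StreamCapabilitiesTestUtils {
  private StreamCapabilitiesTestUtils() {
  }

  // Assert that the subject does (or does not) declare a capability,
  // failing with a meaningful message instead of a bare assertTrue.
  public static void assertCapability(String capability,
      StreamCapabilities subject, boolean outcome) {
    Assert.assertEquals(
        "Capability \"" + capability + "\" of " + subject,
        outcome, subject.hasCapability(capability));
  }
}
{noformat}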



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10783) apache-commons-lang.jar 2.6 does not support FreeBSD -upgrade to 3.x needed

2018-05-16 Thread Takanobu Asanuma (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16477056#comment-16477056
 ] 

Takanobu Asanuma commented on HADOOP-10783:
---

There is a bug in WordUtils in commons-lang3, and that is the cause of some of 
the failed tests.

> apache-commons-lang.jar 2.6 does not support FreeBSD -upgrade to 3.x needed
> ---
>
> Key: HADOOP-10783
> URL: https://issues.apache.org/jira/browse/HADOOP-10783
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Dmitry Sivachenko
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HADOOP-10783.2.patch, HADOOP-10783.3.patch, 
> HADOOP-10783.4.patch, commons-lang3_1.patch
>
>
> Hadoop-2.4.1 ships with apache-commons.jar version 2.6.
> It does not support FreeBSD (IS_OS_UNIX returns False).
> This is fixed in recent versions of apache-commons.jar.
> Please update apache-commons.jar to a recent version so it correctly 
> recognizes FreeBSD as a UNIX-like system.
> Right now I get this in the datanode's log:
> 2014-07-04 11:58:10,459 DEBUG 
> org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry: Disabling 
> ShortCircuitRegistry
> java.io.IOException: The OS is not UNIX.
> at 
> org.apache.hadoop.io.nativeio.SharedFileDescriptorFactory.create(SharedFileDescriptorFactory.java:77)
> at 
> org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry.<init>(ShortCircuitRegistry.java:169)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initDataXceiver(DataNode.java:583)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:771)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:289)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1931)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1818)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1865)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2041)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2065)
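
The failing guard is, roughly, the following (sketched from the stack trace above; commons-lang 2.6 simply does not list FreeBSD among its UNIX variants, so {{SystemUtils.IS_OS_UNIX}} is false there):
{noformat}
import java.io.IOException;
import org.apache.commons.lang.SystemUtils;

// Approximation of the check that produces "The OS is not UNIX." above.
final class OsCheckSketch {
  static void checkOsSupported() throws IOException {
    if (!SystemUtils.IS_OS_UNIX) {
      throw new IOException("The OS is not UNIX.");
    }
  }
}
{noformat}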



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15460) S3A FS to add "s3a:no-existence-checks" to the builder file creation option set

2018-05-16 Thread Stephan Ewen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476991#comment-16476991
 ] 

Stephan Ewen commented on HADOOP-15460:
---

Thanks Steve. Yes, another big part was the latency / cost added to PUT 
requests by the additional HEAD requests, which becomes an issue for frequent 
creation of smaller (often short-lived) files, as we see when writing 
checkpoints in Flink.

> S3A FS to add  "s3a:no-existence-checks" to the builder file creation option 
> set
> 
>
> Key: HADOOP-15460
> URL: https://issues.apache.org/jira/browse/HADOOP-15460
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Priority: Major
>
> As promised to [~StephanEwen]: add an s3a-specific option to the builder-API 
> for creating files so that all existence checks are skipped.
> This
> # eliminates a few hundred milliseconds
> # avoids any caching of negative HEAD/GET responses in the S3 load balancers.
> Callers will be expected to know what they are doing.
> FWIW, we are doing some PUT calls in the committer which bypass this stuff, 
> for the same reason. If you've just created a directory, you know there's 
> nothing underneath, so no need to check.
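
A hypothetical usage sketch, with the option name taken from this issue's title (the final API and option name may differ):
{noformat}
// `fs` is an S3A FileSystem and `path` the file to create;
// "s3a:no-existence-checks" is the option proposed here, not a shipped name.
FSDataOutputStream out = fs.createFile(path)
    .opt("s3a:no-existence-checks", true)   // skip the HEAD/GET probes
    .overwrite(true)
    .build();
try {
  out.write(checkpointBytes);               // e.g. a small, short-lived file
} finally {
  out.close();
}
{noformat}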



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15467) TestDoAsEffectiveUser#testRealUserSetup,TestDoAsEffectiveUser#testRealUserAuthorizationSuccess time out on Windows

2018-05-16 Thread Anbang Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anbang Hu updated HADOOP-15467:
---
Labels: Windows  (was: )

> TestDoAsEffectiveUser#testRealUserSetup,TestDoAsEffectiveUser#testRealUserAuthorizationSuccess
>  time out on Windows
> --
>
> Key: HADOOP-15467
> URL: https://issues.apache.org/jira/browse/HADOOP-15467
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
> Attachments: HDFS-13549.000.patch
>
>
> {color:#d04437}[INFO] Running 
> org.apache.hadoop.security.TestDoAsEffectiveUser{color}
> {color:#d04437}[ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time 
> elapsed: 8.307 s <<< FAILURE! - in 
> org.apache.hadoop.security.TestDoAsEffectiveUser{color}
> {color:#d04437}[ERROR] 
> testRealUserSetup(org.apache.hadoop.security.TestDoAsEffectiveUser) Time 
> elapsed: 4.107 s <<< ERROR!{color}
> {color:#d04437}java.lang.Exception: test timed out after 4000 
> milliseconds{color}
> {color:#d04437} at java.net.Inet4AddressImpl.getHostByAddr(Native 
> Method){color}
> {color:#d04437} at 
> java.net.InetAddress$2.getHostByAddr(InetAddress.java:932){color}
> {color:#d04437} at 
> java.net.InetAddress.getHostFromNameService(InetAddress.java:617){color}
> {color:#d04437} at 
> java.net.InetAddress.getCanonicalHostName(InetAddress.java:588){color}
> {color:#d04437} at 
> org.apache.hadoop.security.TestDoAsEffectiveUser.configureSuperUserIPAddresses(TestDoAsEffectiveUser.java:103){color}
> {color:#d04437} at 
> org.apache.hadoop.security.TestDoAsEffectiveUser.testRealUserSetup(TestDoAsEffectiveUser.java:188){color}
> {color:#d04437} at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method){color}
> {color:#d04437} at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color}
> {color:#d04437} at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color}
> {color:#d04437} at java.lang.reflect.Method.invoke(Method.java:498){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color}
> {color:#d04437} at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74){color}
> {color:#d04437}[ERROR] 
> testRealUserAuthorizationSuccess(org.apache.hadoop.security.TestDoAsEffectiveUser)
>  Time elapsed: 4.002 s <<< ERROR!{color}
> {color:#d04437}java.lang.Exception: test timed out after 4000 
> milliseconds{color}
> {color:#d04437} at java.net.Inet4AddressImpl.getHostByAddr(Native 
> Method){color}
> {color:#d04437} at 
> java.net.InetAddress$2.getHostByAddr(InetAddress.java:932){color}
> {color:#d04437} at 
> java.net.InetAddress.getHostFromNameService(InetAddress.java:617){color}
> {color:#d04437} at 
> java.net.InetAddress.getCanonicalHostName(InetAddress.java:588){color}
> {color:#d04437} at 
> org.apache.hadoop.security.TestDoAsEffectiveUser.configureSuperUserIPAddresses(TestDoAsEffectiveUser.java:103){color}
> {color:#d04437} at 
> org.apache.hadoop.security.TestDoAsEffectiveUser.testRealUserAuthorizationSuccess(TestDoAsEffectiveUser.java:218){color}
> {color:#d04437} at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method){color}
> {color:#d04437} at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color}
> {color:#d04437} at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color}
> {color:#d04437} at java.lang.reflect.Method.invoke(Method.java:498){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color}
> {color:#d04437} at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12){color}
> {color:#d04437} at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17){color}
> {color:#d04437} at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74){color}
> {color:#d04437}[INFO]{color}
> {color:#d04437}[INFO] Results:{color}
> {color:#d04437}[INFO]{color}
> {color:#d04437}[ERROR] Errors:{color}
> {color:#d04437}[ERROR] 
>