[jira] [Updated] (HADOOP-16972) Ignore AuthenticationFilterInitializer for KMSWebServer

2020-04-17 Thread Eric Yang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HADOOP-16972:
---
Priority: Blocker  (was: Major)

> Ignore AuthenticationFilterInitializer for KMSWebServer
> ---
>
> Key: HADOOP-16972
> URL: https://issues.apache.org/jira/browse/HADOOP-16972
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 3.3.0
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Blocker
>
> KMS does not work if hadoop.http.filter.initializers is set to 
> AuthenticationFilterInitializer since KMS uses its own authentication filter. 
> This is problematic when KMS is on the same node with other Hadoop services 
> and shares core-site.xml with them. The filter initializers configuration 
> should be tweaked as done for httpfs in HDFS-14845.






[jira] [Commented] (HADOOP-16972) Ignore AuthenticationFilterInitializer for KMSWebServer

2020-04-17 Thread Eric Yang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17086064#comment-17086064
 ] 

Eric Yang commented on HADOOP-16972:


[~iwasakims] Thank you for the pointer that kms-dt is a different token kind from 
hdfs-dt.  Your patch is the right way to address this problem in the short term.

It is not a good idea to make separate token issuers a common practice unless 
there are good reasons.  Session synchronization becomes a problem when token 
expirations drift apart because the APIs are called at different times.  HttpFS 
works without contacting the namenode, so it is reasonably safe to let HttpFS 
manage a separate token set for that specific use case.

In theory, KMS security does not benefit from having a separate token kind.  The 
separate kind exists mostly for performance, to reduce round trips to the 
namenode for user credential validation.  However, there are more disadvantages 
in doing so, such as unsynchronized sessions and the additional logic/payload 
needed to route the different token types to the right place.  The Hadoop 
community has already done some of the hard work to paper over these problems, 
and this patch is a good stop-gap solution.  Longer term, I would prefer to fix 
KMS to use the global AuthenticationFilter to avoid session problems and reduce 
configuration logistics, but those changes are beyond my participation in the 
KMS code and the scope of this issue.

+1 for fixing this in 3.3.0 to prevent regression.

> Ignore AuthenticationFilterInitializer for KMSWebServer
> ---
>
> Key: HADOOP-16972
> URL: https://issues.apache.org/jira/browse/HADOOP-16972
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 3.3.0
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
>
> KMS does not work if hadoop.http.filter.initializers is set to 
> AuthenticationFilterInitializer since KMS uses its own authentication filter. 
> This is problematic when KMS is on the same node with other Hadoop services 
> and shares core-site.xml with them. The filter initializers configuration 
> should be tweaked as done for httpfs in HDFS-14845.






[jira] [Commented] (HADOOP-16972) Ignore AuthenticationFilterInitializer for KMSWebServer

2020-04-16 Thread Eric Yang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17085316#comment-17085316
 ] 

Eric Yang commented on HADOOP-16972:


HDFS uses AuthFilter instead of the global AuthenticationFilter because WebHDFS 
issues delegation tokens, a capability the standard AuthenticationFilter does not 
have.  It would be better to use the global authentication filter to reduce 
security holes.  The KMS server can either be protected by the global 
authentication filter or customized as you are suggesting.  However, I do not 
think switching the filter initialization solves the root problem, where the 
Kerberos TGT is reused on two different endpoints when the servers are co-located 
on the same node.  I suspect the unit test is passing for the wrong reason: realm 
information is not available, so no lookup is triggered.  Could you check the KDC 
server log to make sure that the authentication lookup has in fact happened?
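
For readers following the thread, here is a minimal sketch of the kind of 
filter-initializers tweak being discussed.  It is illustrative only (not the 
actual patch) and assumes the standard Configuration API plus the 
AuthenticationFilterInitializer class name from core-site.xml:

{code:java}
import java.util.Arrays;
import java.util.stream.Collectors;

import org.apache.hadoop.conf.Configuration;

// Illustrative sketch only (not the actual patch): drop the global
// AuthenticationFilterInitializer from hadoop.http.filter.initializers before
// starting the KMS web server, since KMS registers its own authentication filter.
static void dropGlobalAuthFilterInitializer(Configuration conf) {
  String initializers = conf.get("hadoop.http.filter.initializers", "");
  String remaining = Arrays.stream(initializers.split(","))
      .map(String::trim)
      .filter(name -> !name.isEmpty())
      .filter(name -> !name.equals(
          "org.apache.hadoop.security.AuthenticationFilterInitializer"))
      .collect(Collectors.joining(","));
  conf.set("hadoop.http.filter.initializers", remaining);
}
{code}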

> Ignore AuthenticationFilterInitializer for KMSWebServer
> ---
>
> Key: HADOOP-16972
> URL: https://issues.apache.org/jira/browse/HADOOP-16972
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 3.3.0
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
>
> KMS does not work if hadoop.http.filter.initializers is set to 
> AuthenticationFilterInitializer since KMS uses its own authentication filter. 
> This is problematic when KMS is on the same node with other Hadoop services 
> and shares core-site.xml with them. The filter initializers configuration 
> should be tweaked as done for httpfs in HDFS-14845.






[jira] [Commented] (HADOOP-16361) TestSecureLogins#testValidKerberosName fails on branch-2

2020-04-16 Thread Eric Yang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17085061#comment-17085061
 ] 

Eric Yang commented on HADOOP-16361:


Patch committed to branch-2.10 and branch-2.  [~Jim_Brennan], thanks for the 
patch.

> TestSecureLogins#testValidKerberosName fails on branch-2
> 
>
> Key: HADOOP-16361
> URL: https://issues.apache.org/jira/browse/HADOOP-16361
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.10.0, 2.9.2, 2.8.5
>Reporter: Jim Brennan
>Assignee: Jim Brennan
>Priority: Major
> Attachments: HADOOP-16361-branch-2.10.001.patch, 
> HADOOP-16361-branch-2.10.002.patch
>
>
> This test is failing in branch-2.
> {noformat}
> [ERROR] Tests run: 11, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 26.917 s <<< FAILURE! - in org.apache.hadoop.registry.secure.TestSecureLogins
> [ERROR] 
> testValidKerberosName(org.apache.hadoop.registry.secure.TestSecureLogins)  
> Time elapsed: 0.007 s  <<< ERROR!
> org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: 
> No rules applied to zookeeper/localhost
>   at 
> org.apache.hadoop.security.authentication.util.KerberosName.getShortName(KerberosName.java:401)
>   at 
> org.apache.hadoop.registry.secure.TestSecureLogins.testValidKerberosName(TestSecureLogins.java:182)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}






[jira] [Commented] (HADOOP-16361) TestSecureLogins#testValidKerberosName fails on branch-2

2020-04-15 Thread Eric Yang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17084351#comment-17084351
 ] 

Eric Yang commented on HADOOP-16361:


+1 for patch 002.

> TestSecureLogins#testValidKerberosName fails on branch-2
> 
>
> Key: HADOOP-16361
> URL: https://issues.apache.org/jira/browse/HADOOP-16361
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.10.0, 2.9.2, 2.8.5
>Reporter: Jim Brennan
>Assignee: Jim Brennan
>Priority: Major
> Attachments: HADOOP-16361-branch-2.10.001.patch, 
> HADOOP-16361-branch-2.10.002.patch
>
>
> This test is failing in branch-2.
> {noformat}
> [ERROR] Tests run: 11, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 26.917 s <<< FAILURE! - in org.apache.hadoop.registry.secure.TestSecureLogins
> [ERROR] 
> testValidKerberosName(org.apache.hadoop.registry.secure.TestSecureLogins)  
> Time elapsed: 0.007 s  <<< ERROR!
> org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: 
> No rules applied to zookeeper/localhost
>   at 
> org.apache.hadoop.security.authentication.util.KerberosName.getShortName(KerberosName.java:401)
>   at 
> org.apache.hadoop.registry.secure.TestSecureLogins.testValidKerberosName(TestSecureLogins.java:182)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}






[jira] [Commented] (HADOOP-16361) TestSecureLogins#testValidKerberosName fails on branch-2

2020-04-15 Thread Eric Yang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17084301#comment-17084301
 ] 

Eric Yang commented on HADOOP-16361:


[~Jim_Brennan] Thank you for the patch.  This change helps validate the standard 
Kerberos principal format, but it discarded the negative test case, where 
zookeeper/localhost should not become a valid Hadoop Kerberos principal because 
the realm information is missing.  It would be nice to keep the negative test 
case and catch the expected exception, to make sure that zookeeper/localhost does 
not mistakenly pass through Hadoop Kerberos security as branch-2 evolves.
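
For illustration, a negative test along these lines (a sketch only, assuming 
JUnit 4 and the KerberosName API shown in the quoted stack trace, not the 
committed test) could look like:

{code:java}
import static org.junit.Assert.fail;

import java.io.IOException;
import org.apache.hadoop.security.authentication.util.KerberosName;
import org.junit.Test;

public class TestKerberosNameNegative {
  @Test
  public void testPrincipalWithoutRealmDoesNotResolve() throws Exception {
    try {
      // No realm and no matching auth_to_local rule: resolution should fail.
      new KerberosName("zookeeper/localhost").getShortName();
      fail("zookeeper/localhost should not map to a short name without a realm");
    } catch (IOException expected) {
      // Expected: KerberosName throws NoMatchingRule (an IOException) here.
    }
  }
}
{code}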

> TestSecureLogins#testValidKerberosName fails on branch-2
> 
>
> Key: HADOOP-16361
> URL: https://issues.apache.org/jira/browse/HADOOP-16361
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.10.0, 2.9.2, 2.8.5
>Reporter: Jim Brennan
>Assignee: Jim Brennan
>Priority: Major
> Attachments: HADOOP-16361-branch-2.10.001.patch
>
>
> This test is failing in branch-2.
> {noformat}
> [ERROR] Tests run: 11, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 26.917 s <<< FAILURE! - in org.apache.hadoop.registry.secure.TestSecureLogins
> [ERROR] 
> testValidKerberosName(org.apache.hadoop.registry.secure.TestSecureLogins)  
> Time elapsed: 0.007 s  <<< ERROR!
> org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: 
> No rules applied to zookeeper/localhost
>   at 
> org.apache.hadoop.security.authentication.util.KerberosName.getShortName(KerberosName.java:401)
>   at 
> org.apache.hadoop.registry.secure.TestSecureLogins.testValidKerberosName(TestSecureLogins.java:182)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}






[jira] [Commented] (HADOOP-15440) Support kerberos principal name pattern for KerberosAuthenticationHandler

2020-03-23 Thread Eric Yang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17065117#comment-17065117
 ] 

Eric Yang commented on HADOOP-15440:


[~hexiaoqiao] {quote}it could be checked in the following statement for this 
case IIUC.{quote}

The patch contains this regex pattern:
{code}String[] components = principalConfig.split("[/@]");{code}

This allows test/_HOST/test to work as a service principal, which is not allowed 
by the [RFC4120|https://www.ietf.org/rfc/rfc4120.txt] description.  The JDK will 
accept a [principal name without 
realm|https://github.com/frohoff/jdk8u-jdk/blob/master/src/share/classes/javax/security/auth/kerberos/KerberosPrincipal.java#L119]
 and add the default realm if the realm information is missing.  As a result, 
validation passes for the test/_HOST/test principal as a service principal, even 
though it does not strictly follow the KRB_NT_SRV_XHST specification.  Such a 
principal is a valid Kerberos principal, but it is not a valid service principal.

Is this reasoning more clear?
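
As an illustration of the distinction (hypothetical code, not the patch), a 
stricter parse that accepts only service/host with an optional realm would reject 
the extra component:

{code:java}
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical stricter check: accept service/host or service/host@REALM only,
// so inputs such as test/_HOST/test are rejected as malformed service principals.
static String[] parseServicePrincipal(String principalConfig) {
  Pattern servicePrincipal = Pattern.compile("^([^/@]+)/([^/@]+)(?:@([^/@]+))?$");
  Matcher m = servicePrincipal.matcher(principalConfig);
  if (!m.matches()) {
    throw new IllegalArgumentException(
        "Malformed service principal: " + principalConfig);
  }
  // group(1) = service, group(2) = host (possibly _HOST), group(3) = realm or null
  return new String[] { m.group(1), m.group(2), m.group(3) };
}
{code}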

{quote}It is true that using `hadoop.security.dns.interface` is more accurate. 
Actually this logic is implement completely in `SecurityUtil` but when I want 
to import `hadoop-common` to sub-module `hadoop-auth` it throws cyclic 
reference exception. So my question is if we need add same logic at sub-module 
`hadoop-auth` or some other solutions? Sorry I am not very familiar with this 
module. Thanks again.{quote}

I have encountered a similar inconvenience with the Hadoop project structure that 
prevents code sharing between hadoop-common and hadoop-auth.  Some of the 
involved code may need to be duplicated in the hadoop-auth module to prevent 
security bugs.  It is unfortunate that this code used to live in the same 
hadoop-common code base in Hadoop 0.20.x, and the later Maven project 
restructuring screwed things up; we live with the debris of over-refactored 
projects.  I think it is ok to bring some logic from hadoop-common into 
hadoop-auth for this issue.  Merging the hadoop-common and hadoop-auth modules 
should be treated as a separate issue.

> Support kerberos principal name pattern for KerberosAuthenticationHandler
> -
>
> Key: HADOOP-15440
> URL: https://issues.apache.org/jira/browse/HADOOP-15440
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Xiaoqiao He
>Assignee: Xiaoqiao He
>Priority: Major
> Attachments: HADOOP-15440-trunk.001.patch, HADOOP-15440.002.patch
>
>
> When setting up an HttpFS server or KMS server in secure mode, we have to 
> configure a Kerberos principal for the service, but these services do not 
> support converting a Kerberos principal name pattern into valid Kerberos 
> principal names, whereas the NameNode/DataNode and many other services can do 
> that, which is confusing for users.  I propose to replace the hostname pattern 
> with the hostname, which should be the fully-qualified domain name.






[jira] [Resolved] (HADOOP-16590) IBM Java has deprecated OS login module classes and OS principal classes.

2020-01-10 Thread Eric Yang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang resolved HADOOP-16590.

Fix Version/s: 3.3.0
   Resolution: Fixed

[~nmarion] Thanks for the patch.  I merged pull request 1484 to trunk.

> IBM Java has deprecated OS login module classes and OS principal classes.
> -
>
> Key: HADOOP-16590
> URL: https://issues.apache.org/jira/browse/HADOOP-16590
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Nicholas Marion
>Priority: Major
> Fix For: 3.3.0
>
>
> When building applications that rely on hadoop-commons and using IBM Java, 
> errors such as `{{Exception in thread "main" java.io.IOException: failure to 
> login}}` and `{{Unable to find JAAS 
> classes:com.ibm.security.auth.LinuxPrincipal}}` can be seen.
> IBM Java has deprecated the following OS Login Module classes:
> {code:java}
> com.ibm.security.auth.module.Win64LoginModule
> com.ibm.security.auth.module.NTLoginModule
> com.ibm.security.auth.module.AIX64LoginModule
> com.ibm.security.auth.module.AIXLoginModule
> com.ibm.security.auth.module.LinuxLoginModule
> {code}
> and replaced with
> {code:java}
> com.ibm.security.auth.module.JAASLoginModule{code}
> IBM Java has deprecated the following OS Principal classes:
>  
> {code:java}
> com.ibm.security.auth.UsernamePrincipal
> com.ibm.security.auth.NTUserPrincipal
> com.ibm.security.auth.AIXPrincipal
> com.ibm.security.auth.LinuxPrincipal
> {code}
> and replaced with
> {code:java}
> com.ibm.security.auth.UsernamePrincipal{code}
> Older issue HADOOP-15765 has same issue.
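
For context, the general approach here (a rough sketch; the helper name is made 
up, while the module class names are the ones listed above) is to select the 
consolidated IBM module when running on an IBM JDK:

{code:java}
// Rough sketch: choose the JAAS OS login module class by JVM vendor.
// The helper name is hypothetical; the class names come from the lists above.
static String osLoginModuleName() {
  boolean ibmJava = System.getProperty("java.vendor", "").contains("IBM");
  if (ibmJava) {
    // Consolidated module that replaces the deprecated per-OS IBM modules.
    return "com.ibm.security.auth.module.JAASLoginModule";
  }
  boolean windows = System.getProperty("os.name").startsWith("Windows");
  return windows
      ? "com.sun.security.auth.module.NTLoginModule"
      : "com.sun.security.auth.module.UnixLoginModule";
}
{code}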






[jira] [Commented] (HADOOP-16590) IBM Java has deprecated OS login module classes and OS principal classes.

2020-01-08 Thread Eric Yang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17011064#comment-17011064
 ] 

Eric Yang commented on HADOOP-16590:


Target this for 3.3.0 release.

> IBM Java has deprecated OS login module classes and OS principal classes.
> -
>
> Key: HADOOP-16590
> URL: https://issues.apache.org/jira/browse/HADOOP-16590
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Nicholas Marion
>Priority: Major
>
> When building applications that rely on hadoop-commons and using IBM Java, 
> errors such as `{{Exception in thread "main" java.io.IOException: failure to 
> login}}` and `{{Unable to find JAAS 
> classes:com.ibm.security.auth.LinuxPrincipal}}` can be seen.
> IBM Java has deprecated the following OS Login Module classes:
> {code:java}
> com.ibm.security.auth.module.Win64LoginModule
> com.ibm.security.auth.module.NTLoginModule
> com.ibm.security.auth.module.AIX64LoginModule
> com.ibm.security.auth.module.AIXLoginModule
> com.ibm.security.auth.module.LinuxLoginModule
> {code}
> and replaced with
> {code:java}
> com.ibm.security.auth.module.JAASLoginModule{code}
> IBM Java has deprecated the following OS Principal classes:
>  
> {code:java}
> com.ibm.security.auth.UsernamePrincipal
> com.ibm.security.auth.NTUserPrincipal
> com.ibm.security.auth.AIXPrincipal
> com.ibm.security.auth.LinuxPrincipal
> {code}
> and replaced with
> {code:java}
> com.ibm.security.auth.UsernamePrincipal{code}
> Older issue HADOOP-15765 has same issue.






[jira] [Commented] (HADOOP-16590) IBM Java has deprecated OS login module classes and OS principal classes.

2020-01-08 Thread Eric Yang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17011062#comment-17011062
 ] 

Eric Yang commented on HADOOP-16590:


[~nmarion] Thank you for the patch.  +1.  I think this change is a good solution 
for the IBM JDK; I doubt anyone is running a 32-bit IBM JDK with Hadoop.  The 
shaded-plugin failure seems to be caused by running this command in the 
pre-commit test:

{code}mvn --batch-mode verify -fae --batch-mode -am -pl 
hadoop-client-modules/hadoop-client-check-invariants -pl 
hadoop-client-modules/hadoop-client-check-test-invariants -pl 
hadoop-client-modules/hadoop-client-integration-tests -Dtest=NoUnitTests 
-Dmaven.javadoc.skip=true -Dcheckstyle.skip=true -Dfindbugs.skip=true{code}

The failure does not appear to be related to this patch.

{code}
[INFO] --- exec-maven-plugin:1.3.1:exec (check-jar-contents) @ 
hadoop-client-check-test-invariants ---
[ERROR] Found artifact with unexpected contents: 
'/home/eyang/test/hadoop/hadoop-client-modules/hadoop-client-minicluster/target/hadoop-client-minicluster-3.3.0-SNAPSHOT.jar'
Please check the following and either correct the build or update
the allowed list with reasoning.

hdfs-default.xml.orig
{code}

I will commit this patch if there are no objections.

> IBM Java has deprecated OS login module classes and OS principal classes.
> -
>
> Key: HADOOP-16590
> URL: https://issues.apache.org/jira/browse/HADOOP-16590
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Nicholas Marion
>Priority: Major
>
> When building applications that rely on hadoop-commons and using IBM Java, 
> errors such as `{{Exception in thread "main" java.io.IOException: failure to 
> login}}` and `{{Unable to find JAAS 
> classes:com.ibm.security.auth.LinuxPrincipal}}` can be seen.
> IBM Java has deprecated the following OS Login Module classes:
> {code:java}
> com.ibm.security.auth.module.Win64LoginModule
> com.ibm.security.auth.module.NTLoginModule
> com.ibm.security.auth.module.AIX64LoginModule
> com.ibm.security.auth.module.AIXLoginModule
> com.ibm.security.auth.module.LinuxLoginModule
> {code}
> and replaced with
> {code:java}
> com.ibm.security.auth.module.JAASLoginModule{code}
> IBM Java has deprecated the following OS Principal classes:
>  
> {code:java}
> com.ibm.security.auth.UsernamePrincipal
> com.ibm.security.auth.NTUserPrincipal
> com.ibm.security.auth.AIXPrincipal
> com.ibm.security.auth.LinuxPrincipal
> {code}
> and replaced with
> {code:java}
> com.ibm.security.auth.UsernamePrincipal{code}
> Older issue HADOOP-15765 has same issue.






[jira] [Updated] (HADOOP-16590) IBM Java has deprecated OS login module classes and OS principal classes.

2020-01-08 Thread Eric Yang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HADOOP-16590:
---
Target Version/s: 3.3.0

> IBM Java has deprecated OS login module classes and OS principal classes.
> -
>
> Key: HADOOP-16590
> URL: https://issues.apache.org/jira/browse/HADOOP-16590
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Nicholas Marion
>Priority: Major
>
> When building applications that rely on hadoop-commons and using IBM Java, 
> errors such as `{{Exception in thread "main" java.io.IOException: failure to 
> login}}` and `{{Unable to find JAAS 
> classes:com.ibm.security.auth.LinuxPrincipal}}` can be seen.
> IBM Java has deprecated the following OS Login Module classes:
> {code:java}
> com.ibm.security.auth.module.Win64LoginModule
> com.ibm.security.auth.module.NTLoginModule
> com.ibm.security.auth.module.AIX64LoginModule
> com.ibm.security.auth.module.AIXLoginModule
> com.ibm.security.auth.module.LinuxLoginModule
> {code}
> and replaced with
> {code:java}
> com.ibm.security.auth.module.JAASLoginModule{code}
> IBM Java has deprecated the following OS Principal classes:
>  
> {code:java}
> com.ibm.security.auth.UsernamePrincipal
> com.ibm.security.auth.NTUserPrincipal
> com.ibm.security.auth.AIXPrincipal
> com.ibm.security.auth.LinuxPrincipal
> {code}
> and replaced with
> {code:java}
> com.ibm.security.auth.UsernamePrincipal{code}
> Older issue HADOOP-15765 has same issue.






[jira] [Resolved] (HADOOP-16614) Missing leveldbjni package of aarch64 platform

2019-10-24 Thread Eric Yang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang resolved HADOOP-16614.

Fix Version/s: 3.3.0
   Resolution: Fixed

Thank you [~seanlau] for the patch.
+1, merged to trunk.


> Missing leveldbjni package of aarch64 platform
> --
>
> Key: HADOOP-16614
> URL: https://issues.apache.org/jira/browse/HADOOP-16614
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: liusheng
>Priority: Major
> Fix For: 3.3.0
>
>
> Currently, Hadoop depends on the *leveldbjni-all:1.8* package of the 
> *org.fusesource.leveldbjni* group, but it does not support the ARM platform.
> see: [https://search.maven.org/search?q=g:org.fusesource.leveldbjni]
> The leveldbjni community is inactive and the code 
> ([https://github.com/fusesource/leveldbjni]) has not been updated in a long 
> time.  I will build the leveldbjni package for the aarch64 platform and upload 
> it, together with the other platform packages of *org.fusesource.leveldbjni*, 
> to the new *org.openlabtesting.leveldbjni* Maven repo.  In the Hadoop code, I 
> will add a new aarch64 profile that automatically selects the 
> *org.openlabtesting.leveldbjni* artifact group and uses the aarch64 package of 
> leveldbjni when running on an ARM server; this approach has no effect on the 
> current code.






[jira] [Commented] (HADOOP-16614) Missing leveldbjni package of aarch64 platform

2019-10-23 Thread Eric Yang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16957953#comment-16957953
 ] 

Eric Yang commented on HADOOP-16614:


The patch looks good to me.  I will commit if no objections.

> Missing leveldbjni package of aarch64 platform
> --
>
> Key: HADOOP-16614
> URL: https://issues.apache.org/jira/browse/HADOOP-16614
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: liusheng
>Priority: Major
>
> Currently, Hadoop depends on the *leveldbjni-all:1.8* package of the 
> *org.fusesource.leveldbjni* group, but it does not support the ARM platform.
> see: [https://search.maven.org/search?q=g:org.fusesource.leveldbjni]
> The leveldbjni community is inactive and the code 
> ([https://github.com/fusesource/leveldbjni]) has not been updated in a long 
> time.  I will build the leveldbjni package for the aarch64 platform and upload 
> it, together with the other platform packages of *org.fusesource.leveldbjni*, 
> to the new *org.openlabtesting.leveldbjni* Maven repo.  In the Hadoop code, I 
> will add a new aarch64 profile that automatically selects the 
> *org.openlabtesting.leveldbjni* artifact group and uses the aarch64 package of 
> leveldbjni when running on an ARM server; this approach has no effect on the 
> current code.






[jira] [Updated] (HADOOP-15922) DelegationTokenAuthenticationFilter get wrong doAsUser since it does not decode URL

2019-09-08 Thread Eric Yang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HADOOP-15922:
---
Release Note: - Fix the DelegationTokenAuthentication filter, which incorrectly 
double-encoded the doAs user parameter.

> DelegationTokenAuthenticationFilter get wrong doAsUser since it does not 
> decode URL
> ---
>
> Key: HADOOP-15922
> URL: https://issues.apache.org/jira/browse/HADOOP-15922
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, kms
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HADOOP-15922.001.patch, HADOOP-15922.002.patch, 
> HADOOP-15922.003.patch, HADOOP-15922.004.patch, HADOOP-15922.005.patch, 
> HADOOP-15922.006.patch, HADOOP-15922.007.patch
>
>
> DelegationTokenAuthenticationFilter gets the wrong doAsUser when the proxy user 
> sent by the client is a complete Kerberos name (e.g., user/hostn...@realm.com, 
> which is actually acceptable), because DelegationTokenAuthenticationFilter does 
> not decode the DOAS parameter in the URL, which is encoded with {{URLEncoder}} 
> on the client side.
> Taking KMS as an example:
> a. KMSClientProvider creates a connection to the KMS server using 
> DelegationTokenAuthenticatedURL#openConnection.
> b. If KMSClientProvider acts on behalf of a doAs user, it puts {{doas}} with 
> the URL-encoded user as one parameter of the HTTP request. 
> {code:java}
> // proxyuser
> if (doAs != null) {
>   extraParams.put(DO_AS, URLEncoder.encode(doAs, "UTF-8"));
> }
> {code}
> c. When the KMS server receives the request, it does not decode the proxy user.
> As a result, the KMS server will get the wrong proxy user if the proxy user is 
> a complete Kerberos name or includes special characters.  Authentication and 
> authorization exceptions are then thrown downstream.
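
To make the encoding mismatch concrete, a small standalone illustration (not code 
from the patch):

{code:java}
import java.net.URLDecoder;
import java.net.URLEncoder;

public class DoAsEncodingDemo {
  public static void main(String[] args) throws Exception {
    // A principal-style proxy user is URL-encoded by the client, so the raw
    // query-string value differs from the original name until it is decoded.
    String doAs = "user/somehost@REALM.COM";             // illustrative value
    String encoded = URLEncoder.encode(doAs, "UTF-8");   // user%2Fsomehost%40REALM.COM
    String decoded = URLDecoder.decode(encoded, "UTF-8");
    System.out.println(encoded + " -> " + decoded);
  }
}
{code}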






[jira] [Updated] (HADOOP-16457) Hadoop does not work with Kerberos config in hdfs-site.xml for simple security

2019-08-06 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HADOOP-16457:
---
   Resolution: Fixed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

I just committed this to trunk.  Thank you [~Prabhu Joseph].

> Hadoop does not work with Kerberos config in hdfs-site.xml for simple security
> --
>
> Key: HADOOP-16457
> URL: https://issues.apache.org/jira/browse/HADOOP-16457
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Assignee: Prabhu Joseph
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HADOOP-16457-001.patch, HADOOP-16457-002.patch
>
>
> When http filter initializers is setup to use StaticUserWebFilter, AuthFilter 
> is still setup.  This prevents datanode to talk to namenode.
> Error message in namenode logs:
> {code}
> 2019-07-24 15:47:38,038 INFO org.apache.hadoop.hdfs.DFSUtil: Filter 
> initializers set : 
> org.apache.hadoop.http.lib.StaticUserWebFilter,org.apache.hadoop.hdfs.web.AuthFilterInitializer
> 2019-07-24 16:06:26,212 WARN 
> SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
>  Authorization failed for hdfs (auth:SIMPLE) for protocol=interface 
> org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol: this service is only 
> accessible by dn/eyang-5.openstacklo...@example.com
> {code}
> Errors in datanode log:
> {code}
> 2019-07-24 16:07:01,253 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Problem connecting to server: eyang-1.openstacklocal/172.26.111.17:9000
> {code}
> The logic in HADOOP-16354 always added AuthFilter regardless of whether 
> security is enabled.  This is incorrect.  When simple security is chosen along 
> with StaticUserWebFilter, the AuthFilter check should not be required for the 
> datanode to communicate with the namenode.






[jira] [Commented] (HADOOP-16457) Hadoop does not work with Kerberos config in hdfs-site.xml for simple security

2019-08-05 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16900214#comment-16900214
 ] 

Eric Yang commented on HADOOP-16457:


[~Prabhu Joseph] Thank you for the patch. 

+1 Patch 002 looks good to me.  Will commit if there are no objections.

> Hadoop does not work with Kerberos config in hdfs-site.xml for simple security
> --
>
> Key: HADOOP-16457
> URL: https://issues.apache.org/jira/browse/HADOOP-16457
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Assignee: Prabhu Joseph
>Priority: Minor
> Attachments: HADOOP-16457-001.patch, HADOOP-16457-002.patch
>
>
> When http filter initializers is setup to use StaticUserWebFilter, AuthFilter 
> is still setup.  This prevents datanode to talk to namenode.
> Error message in namenode logs:
> {code}
> 2019-07-24 15:47:38,038 INFO org.apache.hadoop.hdfs.DFSUtil: Filter 
> initializers set : 
> org.apache.hadoop.http.lib.StaticUserWebFilter,org.apache.hadoop.hdfs.web.AuthFilterInitializer
> 2019-07-24 16:06:26,212 WARN 
> SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
>  Authorization failed for hdfs (auth:SIMPLE) for protocol=interface 
> org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol: this service is only 
> accessible by dn/eyang-5.openstacklo...@example.com
> {code}
> Errors in datanode log:
> {code}
> 2019-07-24 16:07:01,253 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Problem connecting to server: eyang-1.openstacklocal/172.26.111.17:9000
> {code}
> The logic in HADOOP-16354 always added AuthFilter regardless of whether 
> security is enabled.  This is incorrect.  When simple security is chosen along 
> with StaticUserWebFilter, the AuthFilter check should not be required for the 
> datanode to communicate with the namenode.






[jira] [Commented] (HADOOP-15440) Support kerberos principal name pattern for KerberosAuthenticationHandler

2019-08-02 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16899125#comment-16899125
 ] 

Eric Yang commented on HADOOP-15440:


[~hexiaoqiao] {quote}for case `test/_HOST/test`, it will be replaced to 
`test/$hostname/test`.{quote}

It should probably throw an error if the format is not a proper Kerberos service 
principal.

{quote}it is true. it seems DNS.getHosts give one choice, any suggestions? 
Thanks again.{quote}

I think Hadoop uses hadoop.security.dns.interface to determine which hostname to 
bind to.  That may help with the hostname lookup.
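
A minimal sketch of that kind of lookup, assuming the hadoop-common DNS utility 
and configuration keys are available to the caller:

{code:java}
import java.net.UnknownHostException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.net.DNS;

// Sketch only: resolve the local hostname from the configured interface and
// nameserver instead of InetAddress.getLocalHost(), which can pick the wrong
// name on multi-homed hosts.
static String resolveFqdn(Configuration conf) throws UnknownHostException {
  String iface = conf.get("hadoop.security.dns.interface", "default");
  String nameserver = conf.get("hadoop.security.dns.nameserver", "default");
  return DNS.getDefaultHost(iface, nameserver);
}
{code}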

> Support kerberos principal name pattern for KerberosAuthenticationHandler
> -
>
> Key: HADOOP-15440
> URL: https://issues.apache.org/jira/browse/HADOOP-15440
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HADOOP-15440-trunk.001.patch, HADOOP-15440.002.patch
>
>
> When setting up an HttpFS server or KMS server in secure mode, we have to 
> configure a Kerberos principal for the service, but these services do not 
> support converting a Kerberos principal name pattern into valid Kerberos 
> principal names, whereas the NameNode/DataNode and many other services can do 
> that, which is confusing for users.  I propose to replace the hostname pattern 
> with the hostname, which should be the fully-qualified domain name.






[jira] [Assigned] (HADOOP-16214) Kerberos name implementation in Hadoop does not accept principals with more than two components

2019-08-02 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang reassigned HADOOP-16214:
--

Assignee: Eric Yang

> Kerberos name implementation in Hadoop does not accept principals with more 
> than two components
> ---
>
> Key: HADOOP-16214
> URL: https://issues.apache.org/jira/browse/HADOOP-16214
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auth
>Reporter: Issac Buenrostro
>Assignee: Eric Yang
>Priority: Major
> Attachments: Add-service-freeipa.png, HADOOP-16214.001.patch, 
> HADOOP-16214.002.patch, HADOOP-16214.003.patch, HADOOP-16214.004.patch, 
> HADOOP-16214.005.patch, HADOOP-16214.006.patch, HADOOP-16214.007.patch, 
> HADOOP-16214.008.patch, HADOOP-16214.009.patch, HADOOP-16214.010.patch, 
> HADOOP-16214.011.patch, HADOOP-16214.012.patch, HADOOP-16214.013.patch
>
>
> org.apache.hadoop.security.authentication.util.KerberosName is in charge of 
> converting a Kerberos principal to a user name in Hadoop for all of the 
> services requiring authentication.
> Although the Kerberos spec 
> ([https://web.mit.edu/kerberos/krb5-1.5/krb5-1.5.4/doc/krb5-user/What-is-a-Kerberos-Principal_003f.html])
>  allows for an arbitrary number of components in the principal, the Hadoop 
> implementation will throw a "Malformed Kerberos name:" error if the principal 
> has more than two components (because the regex can only read serviceName and 
> hostName).






[jira] [Commented] (HADOOP-15440) Support kerberos principal name pattern for KerberosAuthenticationHandler

2019-08-01 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898547#comment-16898547
 ] 

Eric Yang commented on HADOOP-15440:


Not sure about this code:
{code:java}
String[] components = principalConfig.split("[/@]");
{code}
What happens if the principalConfig string is "test/test/test"?
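
For illustration, the split happily returns three components for such an input:

{code:java}
// Illustration only: split("[/@]") does not reject extra components.
String[] components = "test/test/test".split("[/@]");
// components is ["test", "test", "test"]; no error is raised, so a malformed
// service principal slips through unless the component count is checked.
{code}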

Not sure about this code:
{code:java}
fqdn = InetAddress.getLocalHost().getCanonicalHostName();{code}
While this works fine for a server with a single network interface, it can create 
problems on a multi-homed network where getCanonicalHostName does not return the 
desired hostname.

Some context is required to make the conversion proper.  I am unsure whether this 
generalization is a good idea without that context.

> Support kerberos principal name pattern for KerberosAuthenticationHandler
> -
>
> Key: HADOOP-15440
> URL: https://issues.apache.org/jira/browse/HADOOP-15440
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HADOOP-15440-trunk.001.patch, HADOOP-15440.002.patch
>
>
> When setup HttpFS server or KMS server in security mode, we have to config 
> kerberos principal for these service, it doesn't support to convert Kerberos 
> principal name pattern to valid Kerberos principal names whereas 
> NameNode/DataNode and many other service can do that, so it makes confused 
> for users. so I propose to replace hostname pattern with hostname, which 
> should be fully-qualified domain name.






[jira] [Created] (HADOOP-16463) Migrate away from jsr305 jar

2019-07-25 Thread Eric Yang (JIRA)
Eric Yang created HADOOP-16463:
--

 Summary: Migrate away from jsr305 jar
 Key: HADOOP-16463
 URL: https://issues.apache.org/jira/browse/HADOOP-16463
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Eric Yang


JSR305 defines several annotations that are prefixed with javax packages.  
"javax.annotation.Nullable" is also used by findbugs to suppress code style 
warnings.  "javax" is a reserved package name according to the Oracle license 
agreement, so applications cannot use and ship these dependencies along with a 
JRE without violating that agreement.  On JDK 9 and newer, a 
[SecurityException|http://blog.anthavio.net/2013/11/how-many-javaxannotation-jars-is-out.html]
 is thrown when attempting to run signed code with JSR250 + JSR305.

Many developers have looked for a way to address the [JSR305 annotation 
issue|https://stackoverflow.com/questions/4963300/which-notnull-java-annotation-should-i-use],
 but there is no good solution at this time.  One possible approach is to use 
findbugsExcludeFile.xml to define the actual suppressions, which would allow 
Hadoop to ship without the jsr305 dependency.

See other references:
[Guava jsr305 issue|https://github.com/google/guava/issues/2960]
[HBase jsr305 issue|https://issues.apache.org/jira/browse/HBASE-16321]

This looks like an issue that needs to be addressed if we want to work in newer 
Java environments.






[jira] [Updated] (HADOOP-16457) Hadoop does not work without Kerberos for simple security

2019-07-24 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HADOOP-16457:
---
Priority: Minor  (was: Major)

> Hadoop does not work without Kerberos for simple security
> -
>
> Key: HADOOP-16457
> URL: https://issues.apache.org/jira/browse/HADOOP-16457
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Priority: Minor
>
> When http filter initializers is setup to use StaticUserWebFilter, AuthFilter 
> is still setup.  This prevents datanode to talk to namenode.
> Error message in namenode logs:
> {code}
> 2019-07-24 15:47:38,038 INFO org.apache.hadoop.hdfs.DFSUtil: Filter 
> initializers set : 
> org.apache.hadoop.http.lib.StaticUserWebFilter,org.apache.hadoop.hdfs.web.AuthFilterInitializer
> 2019-07-24 16:06:26,212 WARN 
> SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
>  Authorization failed for hdfs (auth:SIMPLE) for protocol=interface 
> org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol: this service is only 
> accessible by dn/eyang-5.openstacklo...@example.com
> {code}
> Errors in datanode log:
> {code}
> 2019-07-24 16:07:01,253 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Problem connecting to server: eyang-1.openstacklocal/172.26.111.17:9000
> {code}
> The logic in HADOOP-16354 always added AuthFilter regardless of whether 
> security is enabled.  This is incorrect.  When simple security is chosen along 
> with StaticUserWebFilter, the AuthFilter check should not be required for the 
> datanode to communicate with the namenode.






[jira] [Updated] (HADOOP-16457) Hadoop does not work with Kerberos config in hdfs-site.xml for simple security

2019-07-24 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HADOOP-16457:
---
Summary: Hadoop does not work with Kerberos config in hdfs-site.xml for 
simple security  (was: Hadoop does not work without Kerberos for simple 
security)

> Hadoop does not work with Kerberos config in hdfs-site.xml for simple security
> --
>
> Key: HADOOP-16457
> URL: https://issues.apache.org/jira/browse/HADOOP-16457
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Priority: Minor
>
> When http filter initializers is setup to use StaticUserWebFilter, AuthFilter 
> is still setup.  This prevents datanode to talk to namenode.
> Error message in namenode logs:
> {code}
> 2019-07-24 15:47:38,038 INFO org.apache.hadoop.hdfs.DFSUtil: Filter 
> initializers set : 
> org.apache.hadoop.http.lib.StaticUserWebFilter,org.apache.hadoop.hdfs.web.AuthFilterInitializer
> 2019-07-24 16:06:26,212 WARN 
> SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
>  Authorization failed for hdfs (auth:SIMPLE) for protocol=interface 
> org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol: this service is only 
> accessible by dn/eyang-5.openstacklo...@example.com
> {code}
> Errors in datanode log:
> {code}
> 2019-07-24 16:07:01,253 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Problem connecting to server: eyang-1.openstacklocal/172.26.111.17:9000
> {code}
> The logic in HADOOP-16354 always added AuthFilter regardless of whether 
> security is enabled.  This is incorrect.  When simple security is chosen along 
> with StaticUserWebFilter, the AuthFilter check should not be required for the 
> datanode to communicate with the namenode.






[jira] [Commented] (HADOOP-16457) Hadoop does not work without Kerberos for simple security

2019-07-24 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16892087#comment-16892087
 ] 

Eric Yang commented on HADOOP-16457:


This problem is not related to HADOOP-16354.  If 
dfs.datanode.kerberos.principal is set in the namenode's hdfs-site.xml, then 
ServiceAuthorizationManager expects the datanode username in Kerberos principal 
format without checking whether hadoop.security.authentication == simple.  The 
easy solution is to remove the dfs.datanode.kerberos.principal config from 
hdfs-site.xml.  There might be room for enhancement in this area to make the 
dfs.datanode.kerberos.principal config less disruptive to a simple security setup.

> Hadoop does not work without Kerberos for simple security
> -
>
> Key: HADOOP-16457
> URL: https://issues.apache.org/jira/browse/HADOOP-16457
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Assignee: Prabhu Joseph
>Priority: Major
>
> When http filter initializers is setup to use StaticUserWebFilter, AuthFilter 
> is still setup.  This prevents datanode to talk to namenode.
> Error message in namenode logs:
> {code}
> 2019-07-24 15:47:38,038 INFO org.apache.hadoop.hdfs.DFSUtil: Filter 
> initializers set : 
> org.apache.hadoop.http.lib.StaticUserWebFilter,org.apache.hadoop.hdfs.web.AuthFilterInitializer
> 2019-07-24 16:06:26,212 WARN 
> SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
>  Authorization failed for hdfs (auth:SIMPLE) for protocol=interface 
> org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol: this service is only 
> accessible by dn/eyang-5.openstacklo...@example.com
> {code}
> Errors in datanode log:
> {code}
> 2019-07-24 16:07:01,253 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Problem connecting to server: eyang-1.openstacklocal/172.26.111.17:9000
> {code}
> The logic in HADOOP-16354 always added AuthFilter regardless of whether 
> security is enabled.  This is incorrect.  When simple security is chosen along 
> with StaticUserWebFilter, the AuthFilter check should not be required for the 
> datanode to communicate with the namenode.






[jira] [Assigned] (HADOOP-16457) Hadoop does not work without Kerberos for simple security

2019-07-24 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang reassigned HADOOP-16457:
--

Assignee: (was: Prabhu Joseph)

> Hadoop does not work without Kerberos for simple security
> -
>
> Key: HADOOP-16457
> URL: https://issues.apache.org/jira/browse/HADOOP-16457
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Priority: Major
>
> When http filter initializers is setup to use StaticUserWebFilter, AuthFilter 
> is still setup.  This prevents datanode to talk to namenode.
> Error message in namenode logs:
> {code}
> 2019-07-24 15:47:38,038 INFO org.apache.hadoop.hdfs.DFSUtil: Filter 
> initializers set : 
> org.apache.hadoop.http.lib.StaticUserWebFilter,org.apache.hadoop.hdfs.web.AuthFilterInitializer
> 2019-07-24 16:06:26,212 WARN 
> SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
>  Authorization failed for hdfs (auth:SIMPLE) for protocol=interface 
> org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol: this service is only 
> accessible by dn/eyang-5.openstacklo...@example.com
> {code}
> Errors in datanode log:
> {code}
> 2019-07-24 16:07:01,253 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Problem connecting to server: eyang-1.openstacklocal/172.26.111.17:9000
> {code}
> The logic in HADOOP-16354 always added AuthFilter regardless of whether 
> security is enabled.  This is incorrect.  When simple security is chosen along 
> with StaticUserWebFilter, the AuthFilter check should not be required for the 
> datanode to communicate with the namenode.






[jira] [Comment Edited] (HADOOP-16457) Hadoop does not work without Kerberos for simple security

2019-07-24 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16891972#comment-16891972
 ] 

Eric Yang edited comment on HADOOP-16457 at 7/24/19 4:20 PM:
-

[~Prabhu Joseph] Sorry, I missed this logic during the review process.  DFSUtils 
loadSslConfiguration should check whether simple security and StaticUserWebFilter 
are in use; otherwise it will prevent users from setting up a simple security 
cluster.


was (Author: eyang):
[~Prabhu Joseph] Sorry, I missed the logic during the review process, DFSUtils 
loadSslConfiguration should check if simple security is in use.  Otherwise, it 
will prevent user from setting up a simple security cluster.

> Hadoop does not work without Kerberos for simple security
> -
>
> Key: HADOOP-16457
> URL: https://issues.apache.org/jira/browse/HADOOP-16457
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Assignee: Prabhu Joseph
>Priority: Major
>
> When http filter initializers is setup to use StaticUserWebFilter, AuthFilter 
> is still setup.  This prevents datanode to talk to namenode.
> Error message in namenode logs:
> {code}
> 2019-07-24 15:47:38,038 INFO org.apache.hadoop.hdfs.DFSUtil: Filter 
> initializers set : 
> org.apache.hadoop.http.lib.StaticUserWebFilter,org.apache.hadoop.hdfs.web.AuthFilterInitializer
> 2019-07-24 16:06:26,212 WARN 
> SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
>  Authorization failed for hdfs (auth:SIMPLE) for protocol=interface 
> org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol: this service is only 
> accessible by dn/eyang-5.openstacklo...@example.com
> {code}
> Errors in datanode log:
> {code}
> 2019-07-24 16:07:01,253 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Problem connecting to server: eyang-1.openstacklocal/172.26.111.17:9000
> {code}
> The logic in HADOOP-16354 always added AuthFilter regardless of whether 
> security is enabled.  This is incorrect.  When simple security is chosen along 
> with StaticUserWebFilter, the AuthFilter check should not be required for the 
> datanode to communicate with the namenode.






[jira] [Commented] (HADOOP-16457) Hadoop does not work without Kerberos for simple security

2019-07-24 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16891972#comment-16891972
 ] 

Eric Yang commented on HADOOP-16457:


[~Prabhu Joseph] Sorry, I missed this logic during the review process.  DFSUtils 
loadSslConfiguration should check whether simple security is in use; otherwise it 
will prevent users from setting up a simple security cluster.
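
A hypothetical sketch of such a check (illustrative only, not the eventual patch; 
the class names follow the ones already mentioned in this issue):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.web.AuthFilterInitializer;
import org.apache.hadoop.security.UserGroupInformation;

// Hypothetical guard: only append the Kerberos AuthFilter initializer when
// Hadoop security is actually enabled, so a simple-security cluster keeps
// whatever initializers (e.g. StaticUserWebFilter) the admin configured.
static void maybeAddAuthFilter(Configuration conf) {
  if (!UserGroupInformation.isSecurityEnabled()) {
    return;
  }
  String existing = conf.get("hadoop.http.filter.initializers", "");
  String auth = AuthFilterInitializer.class.getName();
  conf.set("hadoop.http.filter.initializers",
      existing.isEmpty() ? auth : existing + "," + auth);
}
{code}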

> Hadoop does not work without Kerberos for simple security
> -
>
> Key: HADOOP-16457
> URL: https://issues.apache.org/jira/browse/HADOOP-16457
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Assignee: Prabhu Joseph
>Priority: Major
>
> When http filter initializers is setup to use StaticUserWebFilter, AuthFilter 
> is still setup.  This prevents datanode to talk to namenode.
> Error message in namenode logs:
> {code}
> 2019-07-24 15:47:38,038 INFO org.apache.hadoop.hdfs.DFSUtil: Filter 
> initializers set : 
> org.apache.hadoop.http.lib.StaticUserWebFilter,org.apache.hadoop.hdfs.web.AuthFilterInitializer
> 2019-07-24 16:06:26,212 WARN 
> SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
>  Authorization failed for hdfs (auth:SIMPLE) for protocol=interface 
> org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol: this service is only 
> accessible by dn/eyang-5.openstacklo...@example.com
> {code}
> Errors in datanode log:
> {code}
> 2019-07-24 16:07:01,253 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Problem connecting to server: eyang-1.openstacklocal/172.26.111.17:9000
> {code}
> The logic in HADOOP-16354 always added AuthFilter regardless of whether 
> security is enabled.  This is incorrect.  When simple security is chosen along 
> with StaticUserWebFilter, the AuthFilter check should not be required for the 
> datanode to communicate with the namenode.






[jira] [Updated] (HADOOP-16457) Hadoop does not work without Kerberos for simple security

2019-07-24 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HADOOP-16457:
---
Description: 
When the HTTP filter initializers are set up to use StaticUserWebFilter, 
AuthFilter is still set up.  This prevents the datanode from talking to the 
namenode.

Error message in namenode logs:
{code}
2019-07-24 15:47:38,038 INFO org.apache.hadoop.hdfs.DFSUtil: Filter 
initializers set : 
org.apache.hadoop.http.lib.StaticUserWebFilter,org.apache.hadoop.hdfs.web.AuthFilterInitializer
2019-07-24 16:06:26,212 WARN 
SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
 Authorization failed for hdfs (auth:SIMPLE) for protocol=interface 
org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol: this service is only 
accessible by dn/eyang-5.openstacklo...@example.com
{code}

Errors in datanode log:
{code}
2019-07-24 16:07:01,253 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
Problem connecting to server: eyang-1.openstacklocal/172.26.111.17:9000
{code}

The logic in HADOOP-16354 always added AuthFilter regardless of whether security is 
enabled or not.  This is incorrect.  When simple security is chosen and 
StaticUserWebFilter is used, the AuthFilter check should not be required for the 
datanode to communicate with the namenode.

  was:
When http filter initializers is setup to use StaticUserWebFilter, AuthFilter 
is still setup.  This prevents datanode to talk to namenode.

Error message in namenode logs:
{code}
2019-07-24 15:47:38,038 INFO org.apache.hadoop.hdfs.DFSUtil: Filter 
initializers set : 
org.apache.hadoop.http.lib.StaticUserWebFilter,org.apache.hadoop.hdfs.web.AuthFilterInitializer
2019-07-24 16:06:26,212 WARN 
SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
 Authorization failed for hdfs (auth:SIMPLE) for protocol=interface 
org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol: this service is only 
accessible by dn/eyang-5.openstacklo...@example.com
{code}

Errors in datanode log:
{code}
2019-07-24 16:07:01,253 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
Problem connecting to server: eyang-1.openstacklocal/172.26.111.17:9000
{code}

The logic in HADOOP-16354 always added AuthFilter regardless which http filter 
initializer is chosen.  This is wrong.


> Hadoop does not work without Kerberos for simple security
> -
>
> Key: HADOOP-16457
> URL: https://issues.apache.org/jira/browse/HADOOP-16457
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Assignee: Prabhu Joseph
>Priority: Major
>
> When the http filter initializers are set up to use StaticUserWebFilter, AuthFilter 
> is still set up.  This prevents the datanode from talking to the namenode.
> Error message in namenode logs:
> {code}
> 2019-07-24 15:47:38,038 INFO org.apache.hadoop.hdfs.DFSUtil: Filter 
> initializers set : 
> org.apache.hadoop.http.lib.StaticUserWebFilter,org.apache.hadoop.hdfs.web.AuthFilterInitializer
> 2019-07-24 16:06:26,212 WARN 
> SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
>  Authorization failed for hdfs (auth:SIMPLE) for protocol=interface 
> org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol: this service is only 
> accessible by dn/eyang-5.openstacklo...@example.com
> {code}
> Errors in datanode log:
> {code}
> 2019-07-24 16:07:01,253 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Problem connecting to server: eyang-1.openstacklocal/172.26.111.17:9000
> {code}
> The logic in HADOOP-16354 always added AuthFilter regardless of whether security is 
> enabled or not.  This is incorrect.  When simple security is chosen and 
> StaticUserWebFilter is used, the AuthFilter check should not be required for the 
> datanode to communicate with the namenode.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16457) Hadoop does not work without Kerberos for simple security

2019-07-24 Thread Eric Yang (JIRA)
Eric Yang created HADOOP-16457:
--

 Summary: Hadoop does not work without Kerberos for simple security
 Key: HADOOP-16457
 URL: https://issues.apache.org/jira/browse/HADOOP-16457
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.3.0
Reporter: Eric Yang
Assignee: Prabhu Joseph


When the http filter initializers are set up to use StaticUserWebFilter, AuthFilter 
is still set up.  This prevents the datanode from talking to the namenode.

Error message in namenode logs:
{code}
2019-07-24 15:47:38,038 INFO org.apache.hadoop.hdfs.DFSUtil: Filter 
initializers set : 
org.apache.hadoop.http.lib.StaticUserWebFilter,org.apache.hadoop.hdfs.web.AuthFilterInitializer
2019-07-24 16:06:26,212 WARN 
SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager:
 Authorization failed for hdfs (auth:SIMPLE) for protocol=interface 
org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol: this service is only 
accessible by dn/eyang-5.openstacklo...@example.com
{code}

Errors in datanode log:
{code}
2019-07-24 16:07:01,253 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
Problem connecting to server: eyang-1.openstacklocal/172.26.111.17:9000
{code}

The logic in HADOOP-16354 always added AuthFilter regardless of which http filter 
initializer is chosen.  This is wrong.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16446) Rolling upgrade to Hadoop 3.2.0 breaks due to backward in-compatible change

2019-07-22 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16890529#comment-16890529
 ] 

Eric Yang commented on HADOOP-16446:


[~lingchao] The root cause is the same as HADOOP-16444.

> Rolling upgrade to Hadoop 3.2.0 breaks due to backward in-compatible change
> ---
>
> Key: HADOOP-16446
> URL: https://issues.apache.org/jira/browse/HADOOP-16446
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.2.0
>Reporter: xia0c
>Priority: Major
>
> Hi,
> When I try to update Hadoop-common to the latest version 3.2.0, it breaks 
> backward compatibility due to a compile dependency change in commons.lang. This 
> also breaks rolling upgrades for any client depending on it, like Apache 
> Crunch. 
> -The following code will fail to run with the error 
> "java.lang.NoClassDefFoundError: 
> org/apache/commons/lang/SerializationException":
>   
> {code:java}
> public void demo() {
>   PCollection<String> data = MemPipeline.typedCollectionOf(strings(), "a"); 
> }
> {code}
> Thanks a lot.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16444) Updating incompatible issue

2019-07-22 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16890527#comment-16890527
 ] 

Eric Yang commented on HADOOP-16444:


[~lingchao] Application developers should not depend on Hadoop's transitive 
dependencies to obtain HBase jar files.  This is fragile because Hadoop needs to 
evolve at its own pace and may move to a new version of HBase for its own 
internal services, such as the YARN Timeline Service.  The recommended approach is to 
define dependency exclusions in your application's pom.xml to prevent 
sourcing Hadoop's specific version of HBase, then define the HBase dependency 
separately.  This ensures your application gets the right version of the HBase 
jar files and is isolated from Hadoop internals.

{code}
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <version>3.2.0</version>
  <exclusions>
    <exclusion>
      <groupId>org.apache.hbase</groupId>
      <artifactId>hbase-common</artifactId>
    </exclusion>
  </exclusions>
</dependency>
{code}

> Updating incompatible issue
> ---
>
> Key: HADOOP-16444
> URL: https://issues.apache.org/jira/browse/HADOOP-16444
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: net
>Affects Versions: 3.1.2
>Reporter: xia0c
>Priority: Major
>  Labels: performance
>
> Hi,
> When I try to update hadoop-common to the latest version 3.2.0, I get an 
> incompatibility issue with HBase. It works on version 2.5.0-cdh5.3.10.
> {code:java}
> public void test() throws Exception {
>   HBaseTestingUtility htu1 = new HBaseTestingUtility();
>   htu1.startMiniCluster();
> }
> {code}
> Thanks a lot



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16095) Support impersonation for AuthenticationFilter

2019-06-17 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang resolved HADOOP-16095.

Resolution: Fixed

All related tasks have been closed; marking this as resolved. 

Thank you, [~Prabhu Joseph] for the patches.

Thank you, [~lmccay], [~sunilg], and [~jojochuang] for input and reviews.

> Support impersonation for AuthenticationFilter
> --
>
> Key: HADOOP-16095
> URL: https://issues.apache.org/jira/browse/HADOOP-16095
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16095.004.patch
>
>
> External services or YARN services may need to call into WebHDFS or the YARN REST 
> API on behalf of the user using web protocols. It would be good to support an 
> impersonation mechanism in AuthenticationFilter or similar extensions. The 
> general design is similar to UserGroupInformation.doAs in the RPC layer.
> The calling service credential is verified as a proxy user coming from a 
> trusted host by checking the Hadoop proxy user ACL on the server side. If the proxy 
> user ACL allows the proxy user to become the doAs user, the HttpRequest object will 
> report REMOTE_USER as the doAs user. This feature enables web application logic 
> to be written with minimal changes to call the Hadoop API with the 
> UserGroupInformation.doAs() wrapper.
> h2. HTTP Request
> A few possible options:
> 1. Using query parameter to pass doAs user:
> {code:java}
> POST /service?doAs=foobar
> Authorization: [proxy user Kerberos token]
> {code}
> 2. Use HTTP Header to pass doAs user:
> {code:java}
> POST /service
> Authorization: [proxy user Kerberos token]
> x-hadoop-doas: foobar
> {code}
> h2. HTTP Response
> 403 - Forbidden (Including impersonation is not allowed)
> h2. Proxy User ACL requirement
> The proxy user Kerberos token maps to a service principal, such as 
> yarn/host1.example.com. The host part of the credential and the HTTP request 
> origin are both validated with the *hadoop.proxyuser.yarn.hosts* ACL. The doAs user's 
> group membership or identity is checked with either 
> *hadoop.proxyuser.yarn.groups* or *hadoop.proxyuser.yarn.users*. This ensures 
> the caller is coming from an authorized host and belongs to an authorized group.
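
For illustration only, a minimal sketch (mine, not from the issue) of the 
UserGroupInformation.doAs() pattern the description refers to, assuming the 
authentication filter has already set REMOTE_USER to the doAs user; the helper 
name and the file-system call are hypothetical:

{code:java}
import java.security.PrivilegedExceptionAction;
import javax.servlet.http.HttpServletRequest;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class DoAsSketch {
  // Wrap a Hadoop call so it executes as the doAs user reported by the filter.
  static boolean pathExists(HttpServletRequest req, Configuration conf, String path)
      throws Exception {
    UserGroupInformation proxyUgi = UserGroupInformation.createProxyUser(
        req.getRemoteUser(),                   // doAs user set by the filter
        UserGroupInformation.getLoginUser());  // the service's own credentials
    return proxyUgi.doAs((PrivilegedExceptionAction<Boolean>) () ->
        FileSystem.get(conf).exists(new Path(path)));
  }
}
{code}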



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16366) Fix TimelineReaderServer ignores ProxyUserAuthenticationFilterInitializer

2019-06-14 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HADOOP-16366:
---
   Resolution: Fixed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

Thank you [~Prabhu Joseph] for the patch.  I just committed this to trunk.

> Fix TimelineReaderServer ignores ProxyUserAuthenticationFilterInitializer
> -
>
> Key: HADOOP-16366
> URL: https://issues.apache.org/jira/browse/HADOOP-16366
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16366-001.patch, HADOOP-16366-002.patch, 
> HADOOP-16366-003.patch
>
>
> YARNUIV2 fails with "Request is a replay attack" when the settings below are 
> configured.
> {code:java}
> hadoop.security.authentication = kerberos
> hadoop.http.authentication.type = kerberos
> hadoop.http.filter.initializers = 
> org.apache.hadoop.security.AuthenticationFilterInitializer
> yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled = false{code}
>  AuthenticationFilter is added twice by the Yarn UI2 Context causing the 
> issue.
> {code:java}
> 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
> (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
> Name:authentication, 
> className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
> 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
> (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
> Name:authentication, 
> className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
> {code}
>  
> Another issue with {{TimelineReaderServer}} which ignores 
> {{ProxyUserAuthenticationFilterInitializer}} when 
> {{hadoop.http.filter.initializers}} is configured.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16366) Fix TimelineReaderServer ignores ProxyUserAuthenticationFilterInitializer

2019-06-14 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16864261#comment-16864261
 ] 

Eric Yang commented on HADOOP-16366:


[~Prabhu Joseph] I see that defaultInitializers is a confusing name that threw 
me off.

+1 for patch 003.

> Fix TimelineReaderServer ignores ProxyUserAuthenticationFilterInitializer
> -
>
> Key: HADOOP-16366
> URL: https://issues.apache.org/jira/browse/HADOOP-16366
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16366-001.patch, HADOOP-16366-002.patch, 
> HADOOP-16366-003.patch
>
>
> YARNUIV2 fails with "Request is a replay attack" when the settings below are 
> configured.
> {code:java}
> hadoop.security.authentication = kerberos
> hadoop.http.authentication.type = kerberos
> hadoop.http.filter.initializers = 
> org.apache.hadoop.security.AuthenticationFilterInitializer
> yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled = false{code}
>  AuthenticationFilter is added twice by the Yarn UI2 Context causing the 
> issue.
> {code:java}
> 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
> (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
> Name:authentication, 
> className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
> 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
> (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
> Name:authentication, 
> className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
> {code}
>  
> Another issue with {{TimelineReaderServer}} which ignores 
> {{ProxyUserAuthenticationFilterInitializer}} when 
> {{hadoop.http.filter.initializers}} is configured.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16366) Fix TimelineReaderServer ignores ProxyUserAuthenticationFilterInitializer

2019-06-13 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16863555#comment-16863555
 ] 

Eric Yang commented on HADOOP-16366:


[~Prabhu Joseph] It seems like there is some redundancy in the logic; is this the same?

{code}
Set<String> defaultInitializers = new LinkedHashSet<String>();
if (!initializers.contains(
ProxyUserAuthenticationFilterInitializer.class.getName())) {
  defaultInitializers.add(
  ProxyUserAuthenticationFilterInitializer.class.getName());
}
defaultInitializers.add(
TimelineReaderWhitelistAuthorizationFilterInitializer.class.getName());
 {code}

> Fix TimelineReaderServer ignores ProxyUserAuthenticationFilterInitializer
> -
>
> Key: HADOOP-16366
> URL: https://issues.apache.org/jira/browse/HADOOP-16366
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16366-001.patch, HADOOP-16366-002.patch, 
> HADOOP-16366-003.patch
>
>
> YARNUIV2 fails with "Request is a replay attack" when the settings below are 
> configured.
> {code:java}
> hadoop.security.authentication = kerberos
> hadoop.http.authentication.type = kerberos
> hadoop.http.filter.initializers = 
> org.apache.hadoop.security.AuthenticationFilterInitializer
> yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled = false{code}
>  AuthenticationFilter is added twice by the Yarn UI2 Context causing the 
> issue.
> {code:java}
> 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
> (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
> Name:authentication, 
> className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
> 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
> (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
> Name:authentication, 
> className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
> {code}
>  
> Another issue with {{TimelineReaderServer}} which ignores 
> {{ProxyUserAuthenticationFilterInitializer}} when 
> {{hadoop.http.filter.initializers}} is configured.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16366) Fix TimelineReaderServer ignores ProxyUserAuthenticationFilterInitializer

2019-06-13 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16863238#comment-16863238
 ] 

Eric Yang commented on HADOOP-16366:


[~Prabhu Joseph] Thank you for the explanation from your point of view.  
The SpnegoFilter code path was a good effort to centralize AuthenticationFilter 
initialization for all web applications, except that other developers have 
added extensions to make the authentication filter independent of SpnegoFilter.  
Since both code paths are in use and both are meant to cover all paths 
globally, it may create more problems if we allow the FilterHolder for 
SpnegoFilter to report something that is not running.  SpnegoFilter and the 
authentication filter are attached to different web application contexts, 
therefore they do not overlap in general.  The only case where they would 
overlap is using the embedded web proxy with the resource manager.  Resource manager 
servlets are written as web filters and attach to the same web application 
context as the web proxy.  In this case, we are using the authentication filter because 
the web proxy keytab and principal were not specified in the config.  If we report 
SpnegoFilter with a null path to downstream logic, it would be incorrect because the 
resource manager has an authentication filter for the resource manager web application 
context.

This is the reason that I object to the one line change.  Do you see any 
problem if the one line fix is not in place?

> Fix TimelineReaderServer ignores ProxyUserAuthenticationFilterInitializer
> -
>
> Key: HADOOP-16366
> URL: https://issues.apache.org/jira/browse/HADOOP-16366
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16366-001.patch, HADOOP-16366-002.patch
>
>
> YARNUIV2 fails with "Request is a replay attack" when the settings below are 
> configured.
> {code:java}
> hadoop.security.authentication = kerberos
> hadoop.http.authentication.type = kerberos
> hadoop.http.filter.initializers = 
> org.apache.hadoop.security.AuthenticationFilterInitializer
> yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled = false{code}
>  AuthenticationFilter is added twice by the Yarn UI2 Context causing the 
> issue.
> {code:java}
> 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
> (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
> Name:authentication, 
> className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
> 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
> (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
> Name:authentication, 
> className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
> {code}
>  
> Another issue with {{TimelineReaderServer}} which ignores 
> {{ProxyUserAuthenticationFilterInitializer}} when 
> {{hadoop.http.filter.initializers}} is configured.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16366) Fix TimelineReaderServer ignores ProxyUserAuthenticationFilterInitializer

2019-06-12 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16862580#comment-16862580
 ] 

Eric Yang commented on HADOOP-16366:


[~Prabhu Joseph] I am not sure renaming SPNEGO_FILTER back is necessary.  
I purposely made SPNEGO_FILTER the same as the authentication filter to ensure 
there is no overlap between multiple filters that are assigned to validate the 
Kerberos TGT; hence, server-side redirection works properly.  This is 
because RM and the web proxy may try to use different filters.  By making them the 
same name, only one is initialized globally.  Can you explain the reason for 
renaming this back?

> Fix TimelineReaderServer ignores ProxyUserAuthenticationFilterInitializer
> -
>
> Key: HADOOP-16366
> URL: https://issues.apache.org/jira/browse/HADOOP-16366
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16366-001.patch, HADOOP-16366-002.patch
>
>
> YARNUIV2 fails with "Request is a replay attack" when the settings below are 
> configured.
> {code:java}
> hadoop.security.authentication = kerberos
> hadoop.http.authentication.type = kerberos
> hadoop.http.filter.initializers = 
> org.apache.hadoop.security.AuthenticationFilterInitializer
> yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled = false{code}
>  AuthenticationFilter is added twice by the Yarn UI2 Context causing the 
> issue.
> {code:java}
> 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
> (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
> Name:authentication, 
> className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
> 2019-06-12 11:59:43,900 INFO webapp.RMWebAppUtil 
> (RMWebAppUtil.java:addFiltersForUI2Context(483)) - UI2 context filter 
> Name:authentication, 
> className=org.apache.hadoop.security.authentication.server.AuthenticationFilter
> {code}
>  
> Another issue with {{TimelineReaderServer}} which ignores 
> {{ProxyUserAuthenticationFilterInitializer}} when 
> {{hadoop.http.filter.initializers}} is configured.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16367) ApplicationHistoryServer related testcases failing

2019-06-12 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16862496#comment-16862496
 ] 

Eric Yang commented on HADOOP-16367:


+1 will commit shortly.

> ApplicationHistoryServer related testcases failing
> --
>
> Key: HADOOP-16367
> URL: https://issues.apache.org/jira/browse/HADOOP-16367
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security, test
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: MAPREDUCE-7217-001.patch, YARN-9611-001.patch
>
>
> *TestMRTimelineEventHandling.testMRTimelineEventHandling fails.*
> {code:java}
> ERROR] 
> testMRTimelineEventHandling(org.apache.hadoop.mapred.TestMRTimelineEventHandling)
>   Time elapsed: 46.337 s  <<< FAILURE!
> org.junit.ComparisonFailure: expected:<[AM_STAR]TED> but was:<[JOB_SUBMIT]TED>
>   at org.junit.Assert.assertEquals(Assert.java:115)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.mapred.TestMRTimelineEventHandling.testMRTimelineEventHandling(TestMRTimelineEventHandling.java:147)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {code}
> *TestJobHistoryEventHandler.testTimelineEventHandling* 
> {code}
> [ERROR] 
> testTimelineEventHandling(org.apache.hadoop.mapreduce.jobhistory.TestJobHistoryEventHandler)
>   Time elapsed: 5.858 s  <<< FAILURE!
> java.lang.AssertionError: expected:<1> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at org.junit.Assert.assertEquals(Assert.java:631)
>   at 
> org.apache.hadoop.mapreduce.jobhistory.TestJobHistoryEventHandler.testTimelineEventHandling(TestJobHistoryEventHandler.java:597)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> 

[jira] [Updated] (HADOOP-16367) ApplicationHistoryServer related testcases failing

2019-06-12 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HADOOP-16367:
---
   Resolution: Fixed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

Thank you [~Prabhu Joseph] for the patch.  I just committed YARN-9611-001 to 
trunk.

> ApplicationHistoryServer related testcases failing
> --
>
> Key: HADOOP-16367
> URL: https://issues.apache.org/jira/browse/HADOOP-16367
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security, test
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: MAPREDUCE-7217-001.patch, YARN-9611-001.patch
>
>
> *TestMRTimelineEventHandling.testMRTimelineEventHandling fails.*
> {code:java}
> ERROR] 
> testMRTimelineEventHandling(org.apache.hadoop.mapred.TestMRTimelineEventHandling)
>   Time elapsed: 46.337 s  <<< FAILURE!
> org.junit.ComparisonFailure: expected:<[AM_STAR]TED> but was:<[JOB_SUBMIT]TED>
>   at org.junit.Assert.assertEquals(Assert.java:115)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.mapred.TestMRTimelineEventHandling.testMRTimelineEventHandling(TestMRTimelineEventHandling.java:147)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {code}
> *TestJobHistoryEventHandler.testTimelineEventHandling* 
> {code}
> [ERROR] 
> testTimelineEventHandling(org.apache.hadoop.mapreduce.jobhistory.TestJobHistoryEventHandler)
>   Time elapsed: 5.858 s  <<< FAILURE!
> java.lang.AssertionError: expected:<1> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at org.junit.Assert.assertEquals(Assert.java:631)
>   at 
> org.apache.hadoop.mapreduce.jobhistory.TestJobHistoryEventHandler.testTimelineEventHandling(TestJobHistoryEventHandler.java:597)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> 

[jira] [Commented] (HADOOP-16361) TestSecureLogins#testValidKerberosName fails on branch-2

2019-06-11 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16861617#comment-16861617
 ] 

Eric Yang commented on HADOOP-16361:


The root cause is an incorrect regex for parsing the Kerberos principal to trigger 
the auth_to_local mapping lookup.  In the test case, zookeeper/localhost is not a 
Kerberos principal, but the branch-2 logic attempts to apply the auth_to_local 
mapping, finds no match, and causes the test case to fail.  The test case exposes 
an implementation issue in Hadoop's approach to parsing Kerberos principals.

According to [~daryn]'s comment in 
[HADOOP-16214|https://issues.apache.org/jira/browse/HADOOP-16214?focusedCommentId=16813851=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16813851]
 stated:

{quote}That's incorrect. It supports interop between secure clients and 
insecure servers. Insecure servers treats principals as principals, else as the 
short name used by insecure clients.{quote}

If the above statement needs to remain true, we need to refine the KerberosName 
parsing strategy and formalize 
(zookeeper/localh...@example.com).getShortName() == 
(zookeeper/localhost).getShortName().

One such implementation is offered in HADOOP-16214 patch 013, but it needs some 
work to match the branch-2 implementation.  HADOOP-16214 is not committed; 
therefore, take my advice with caution.
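
To make the proposed invariant concrete, here is a minimal sketch of my own (not 
from any of the patches); the rule string and the EXAMPLE.COM realm are 
illustrative assumptions:

{code:java}
import org.apache.hadoop.security.authentication.util.KerberosName;

public class ShortNameSketch {
  // Print the short name, or the parsing error, for a principal string.
  private static void show(String principal) {
    try {
      System.out.println(principal + " -> " + new KerberosName(principal).getShortName());
    } catch (Exception e) {
      System.out.println(principal + " -> " + e);  // e.g. NoMatchingRule on branch-2
    }
  }

  public static void main(String[] args) {
    // Illustrative rule set only; real deployments configure hadoop.security.auth_to_local.
    KerberosName.setRules("DEFAULT");
    // The invariant argued for above: both forms should resolve to the same short name.
    show("zookeeper/localhost@EXAMPLE.COM");
    show("zookeeper/localhost");
  }
}
{code}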

> TestSecureLogins#testValidKerberosName fails on branch-2
> 
>
> Key: HADOOP-16361
> URL: https://issues.apache.org/jira/browse/HADOOP-16361
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.10.0, 2.9.2, 2.8.5
>Reporter: Jim Brennan
>Priority: Major
>
> This test is failing in branch-2.
> {noformat}
> [ERROR] Tests run: 11, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 26.917 s <<< FAILURE! - in org.apache.hadoop.registry.secure.TestSecureLogins
> [ERROR] 
> testValidKerberosName(org.apache.hadoop.registry.secure.TestSecureLogins)  
> Time elapsed: 0.007 s  <<< ERROR!
> org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: 
> No rules applied to zookeeper/localhost
>   at 
> org.apache.hadoop.security.authentication.util.KerberosName.getShortName(KerberosName.java:401)
>   at 
> org.apache.hadoop.registry.secure.TestSecureLogins.testValidKerberosName(TestSecureLogins.java:182)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16356) Distcp with webhdfs is not working with ProxyUserAuthenticationFilter or AuthenticationFilter

2019-06-11 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang resolved HADOOP-16356.

Resolution: Duplicate

> Distcp with webhdfs is not working with ProxyUserAuthenticationFilter or 
> AuthenticationFilter
> -
>
> Key: HADOOP-16356
> URL: https://issues.apache.org/jira/browse/HADOOP-16356
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Prabhu Joseph
>Priority: Major
>
> When distcp is running with webhdfs://, there is no delegation token issued 
> to the mapreduce task because the mapreduce task does not have a Kerberos TGT.
> This stack trace was thrown when mapreduce task contacts webhdfs:
> {code}
> Error: org.apache.hadoop.security.AccessControlException: Authentication 
> required
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:492)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$200(WebHdfsFileSystem.java:136)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.connect(WebHdfsFileSystem.java:760)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:835)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:663)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:701)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1891)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:697)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHdfsFileStatus(WebHdfsFileSystem.java:1095)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getFileStatus(WebHdfsFileSystem.java:1106)
>   at org.apache.hadoop.tools.mapred.CopyMapper.setup(CopyMapper.java:124)
>   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
>   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:178)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1891)
>   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:172)
> {code}
> There are two proposals:
> 1. Have an API to issue a delegation token to pass along to webhdfs to maintain 
> backward compatibility.
> 2. Have the mapreduce task log in to Kerberos, then perform the webhdfs fetch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16356) Distcp with webhdfs is not working with ProxyUserAuthenticationFilter or AuthenticationFilter

2019-06-11 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16861579#comment-16861579
 ] 

Eric Yang commented on HADOOP-16356:


[~jojochuang] We have opted in to the first proposal to maintain backward 
compatibility for obtaining delegation tokens and to support issuing delegation tokens 
through impersonation in HADOOP-16354.  Closing this as a duplicate.

> Distcp with webhdfs is not working with ProxyUserAuthenticationFilter or 
> AuthenticationFilter
> -
>
> Key: HADOOP-16356
> URL: https://issues.apache.org/jira/browse/HADOOP-16356
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Prabhu Joseph
>Priority: Major
>
> When distcp is running with webhdfs://, there is no delegation token issued 
> to the mapreduce task because the mapreduce task does not have a Kerberos TGT.
> This stack trace was thrown when mapreduce task contacts webhdfs:
> {code}
> Error: org.apache.hadoop.security.AccessControlException: Authentication 
> required
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:492)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$200(WebHdfsFileSystem.java:136)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.connect(WebHdfsFileSystem.java:760)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:835)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:663)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:701)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1891)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:697)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHdfsFileStatus(WebHdfsFileSystem.java:1095)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getFileStatus(WebHdfsFileSystem.java:1106)
>   at org.apache.hadoop.tools.mapred.CopyMapper.setup(CopyMapper.java:124)
>   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
>   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:178)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1891)
>   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:172)
> {code}
> There are two proposals:
> 1. Have an API to issue a delegation token to pass along to webhdfs to maintain 
> backward compatibility.
> 2. Have the mapreduce task log in to Kerberos, then perform the webhdfs fetch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16354) Enable AuthFilter as default for WebHdfs

2019-06-11 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HADOOP-16354:
---
   Resolution: Fixed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

Thank you [~Prabhu Joseph] for the patch.

I just committed this to trunk.

> Enable AuthFilter as default for WebHdfs
> 
>
> Key: HADOOP-16354
> URL: https://issues.apache.org/jira/browse/HADOOP-16354
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16354-001.patch, HADOOP-16354-002.patch, 
> HADOOP-16354-003.patch, HADOOP-16354-004.patch, HADOOP-16354-005.patch
>
>
> HADOOP-16314 provides a generic option to configure 
> ProxyUserAuthenticationFilterInitializer (Kerberos + doAs support) for all 
> the services. If this is not configured, AuthenticationFilter is used for the 
> NameNode UI and WebHdfs. This will enable AuthFilter as the default for WebHdfs so 
> that it is backward compatible.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16354) Enable AuthFilter as default for WebHdfs

2019-06-11 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16861569#comment-16861569
 ] 

Eric Yang commented on HADOOP-16354:


+1 for patch 005.

> Enable AuthFilter as default for WebHdfs
> 
>
> Key: HADOOP-16354
> URL: https://issues.apache.org/jira/browse/HADOOP-16354
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16354-001.patch, HADOOP-16354-002.patch, 
> HADOOP-16354-003.patch, HADOOP-16354-004.patch, HADOOP-16354-005.patch
>
>
> HADOOP-16314 provides a generic option to configure 
> ProxyUserAuthenticationFilterInitializer (Kerberos + doAs support) for all 
> the services. If this is not configured, AuthenticationFilter is used for the 
> NameNode UI and WebHdfs. This will enable AuthFilter as the default for WebHdfs so 
> that it is backward compatible.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16354) Enable AuthFilter as default for WebHdfs

2019-06-10 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16860359#comment-16860359
 ] 

Eric Yang commented on HADOOP-16354:


[~Prabhu Joseph] Thank you for patch 004; it is closer to what we need, but I 
can't get it to work with lowercase doas=, even though the patch seems to 
convert doAs to lowercase.

{code}
[hdfs@eyang-1 hadoop-3.3.0-SNAPSHOT]$ curl --negotiate -u : "http://`hostname 
-f`:50070/webhdfs/v1/?op=GETDELEGATIONTOKEN&renewer=hdfs&doas=eyang"

{"RemoteException":{"exception":"SecurityException","javaClassName":"java.lang.SecurityException","message":"Failed
 to obtain user group information: 
org.apache.hadoop.security.authorize.AuthorizationException: User: eyang is not 
allowed to impersonate eyang"}}{code}

When using doAs, then it works as expected:
{code}
[hdfs@eyang-1 hadoop-3.3.0-SNAPSHOT]$ curl --negotiate -u : "http://`hostname 
-f`:50070/webhdfs/v1/?op=GETDELEGATIONTOKEN&renewer=hdfs&doAs=eyang"
{"Token":{"urlString":"HQAFZXlhbmcEaGRmcwCKAWtDUn5oigFrZ18CaAECFJ6Dq3M5Slq_QhusB9mHwZcj8axREldFQkhERlMgZGVsZWdhdGlvbhIxNzIuMjYuMTExLjE3OjkwMDAA"}}

[eyang@eyang-1 root]$ curl -L "http://`hostname 
-f`:50070/webhdfs/v1/user/hdfs/README.txt?op=GETFILESTATUS&delegation=HQAFZXlhbmcEaGRmcwCKAWtDUn5oigFrZ18CaAECFJ6Dq3M5Slq_QhusB9mHwZcj8axREldFQkhERlMgZGVsZWdhdGlvbhIxNzIuMjYuMTExLjE3OjkwMDAA"
{"RemoteException":{"exception":"AccessControlException","javaClassName":"org.apache.hadoop.security.AccessControlException","message":"Permission
 denied: user=eyang, access=EXECUTE, 
inode=\"/user/hdfs\":hdfs:hdfs:drwx--"}}
{code}


> Enable AuthFilter as default for WebHdfs
> 
>
> Key: HADOOP-16354
> URL: https://issues.apache.org/jira/browse/HADOOP-16354
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16354-001.patch, HADOOP-16354-002.patch, 
> HADOOP-16354-003.patch, HADOOP-16354-004.patch
>
>
> HADOOP-16314 provides a generic option to configure 
> ProxyUserAuthenticationFilterInitializer (Kerberos + doAs support) for all 
> the services. If this is not configured, AuthenticationFilter is used for the 
> NameNode UI and WebHdfs. This will enable AuthFilter as the default for WebHdfs so 
> that it is backward compatible.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16031) TestSecureLogins#testValidKerberosName fails

2019-06-10 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16860275#comment-16860275
 ] 

Eric Yang commented on HADOOP-16031:


[~Jim_Brennan] This patch does not apply to branch-2 because:

1.  When TestSecureLogins was merged, HADOOP-12751 was in branch-2.
2.  auth_to_local acts as a firewall rule again after HADOOP-15959 reverted 
HADOOP-12751.
3.  auth_to_local pass-through is only allowed in Hadoop 3.1.2+ by HADOOP-15996 
(new feature).

The test case no longer has a way to work on the latest branch-2 because it lacks 
the ability to let a non-matching auth_to_local rule pass through.  It 
would be best to open a separate issue to address the gap, because the branch-2 
KerberosName#getShortName() lacks the ability to handle complex non-Kerberos 
names such as zookeeper/localhost.

> TestSecureLogins#testValidKerberosName fails
> 
>
> Key: HADOOP-16031
> URL: https://issues.apache.org/jira/browse/HADOOP-16031
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.2.0, 3.3.0, 3.1.3
>
> Attachments: HADOOP-16031.01.patch
>
>
> {noformat}
> [INFO] Running org.apache.hadoop.registry.secure.TestSecureLogins
> [ERROR] Tests run: 11, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 2.724 s <<< FAILURE! - in org.apache.hadoop.registry.secure.TestSecureLogins
> [ERROR] 
> testValidKerberosName(org.apache.hadoop.registry.secure.TestSecureLogins)  
> Time elapsed: 0.01 s  <<< ERROR!
> org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: 
> No rules applied to zookeeper/localhost
>   at 
> org.apache.hadoop.security.authentication.util.KerberosName.getShortName(KerberosName.java:429)
>   at 
> org.apache.hadoop.registry.secure.TestSecureLogins.testValidKerberosName(TestSecureLogins.java:203)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16354) Enable AuthFilter as default for WebHdfs

2019-06-10 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16860140#comment-16860140
 ] 

Eric Yang edited comment on HADOOP-16354 at 6/10/19 4:47 PM:
-

[~Prabhu Joseph] Test case 2 mixes normal distcp and distcp via the Knox 
gateway.  However, the doAs flag is missing when requesting the delegation 
token; hence, the token returned from webhdfs is owned by the Knox user instead of 
ambari-qa.

We can refine the test into two separate tests.
h2.  2.1 Knox obtain delegation token for end user for cross knox distcp

The test must be written as:
{code}
[knox@pjosephdocker-1 hadoop]$ curl --negotiate -u : 
"http://pjosephdocker-1.openstacklocal:50070/webhdfs/v1/?op=GETDELEGATIONTOKEN=hdfs=ambari-qa;
{"Token":{"urlString":"hash of delegation token for ambari-qa user"}}
{code}

{code}
[ambari-qa@pjosephdocker-1 ~]$ curl 
"http://pjosephdocker-1.openstacklocal:50070/webhdfs/v1/user/ambari-qa?op=GETFILESTATUS=hash
 of delegation token for ambari-qa user"
{code}

The key difference is that the GETDELEGATIONTOKEN operation and the doAs flag 
need to work together for Knox to obtain a valid token for the end user.  In 
the past, webhdfs allowed doas= but not doAs=; this must be a case-insensitive 
flag to prevent accidentally obtaining the delegation token of the Knox user.

h2. 2.2 Normal operation to get delegation token as end user for distcp

{code}
[ambari-qa@pjosephdocker-1 ~]$ curl --negotiate -u : 
"http://pjosephdocker-1.openstacklocal:50070/webhdfs/v1/?op=GETDELEGATIONTOKEN=hdfs;
{"Token":{"urlString":"hash of ambari-qa delegation token"}}
{code}

{code}
[ambari-qa@pjosephdocker-1 ~]$ curl 
"http://pjosephdocker-1.openstacklocal:50070/webhdfs/v1/user/ambari-qa?op=GETFILESTATUS=hash
 of ambari-qa delegation token"
{"FileStatus":{"accessTime":0,"blockSize":0,"childrenNum":0,"fileId":1394411,"group":"hadoop","length":0,"modificationTime":1559980208213,"owner":"knox","pathSuffix":"","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}
{code}

Test case 2.1 must work with AuthFilter regardless of whether 
ProxyUserAuthenticationFilter or AuthenticationFilter is configured, to maintain 
backward compatibility.


was (Author: eyang):
[~Prabhu Joseph] Test case 2 is mixed for normal distcp, and accessing distcp 
via knox gateway.  However, doAs flag is missing when requesting delegation 
token.  Hence, the token returned from webhdfs is owned by Knox user instead of 
ambari-qa.

We can refine the test into two separate tests.
h2.  2.1 Knox obtain delegation token for end user for cross knox distcp

The test must be written as:
{code}
[knox@pjosephdocker-1 hadoop]$ curl --negotiate -u : 
"http://pjosephdocker-1.openstacklocal:50070/webhdfs/v1/?op=GETDELEGATIONTOKEN=hdfs=ambari-qa;
{"Token":{"urlString":"hash of delegation token for ambari-qa user"}}
{code}

{code}
[ambari-qa@pjosephdocker-1 ~]$ curl 
"http://pjosephdocker-1.openstacklocal:50070/webhdfs/v1/user/ambari-qa?op=GETFILESTATUS=hash
 of delegation token for ambari-qa user"
{code}

The key difference is in obtaining GETDELEGATIONTOKEN operation and doAs flag 
needs to work together for knox to obtain a valid toke for the end user.  In 
the past, we allow doas= and also doAs=, this was a case insensitive flag.

h2. 2.2 Normal operation to get delegation token as end user for distcp

{code}
[ambari-qa@pjosephdocker-1 ~]$ curl --negotiate -u : 
"http://pjosephdocker-1.openstacklocal:50070/webhdfs/v1/?op=GETDELEGATIONTOKEN=hdfs;
{"Token":{"urlString":"hash of ambari-qa delegation token"}}
{code}

{code}
[ambari-qa@pjosephdocker-1 ~]$ curl 
"http://pjosephdocker-1.openstacklocal:50070/webhdfs/v1/user/ambari-qa?op=GETFILESTATUS=hash
 of ambari-qa delegation token"
{"FileStatus":{"accessTime":0,"blockSize":0,"childrenNum":0,"fileId":1394411,"group":"hadoop","length":0,"modificationTime":1559980208213,"owner":"knox","pathSuffix":"","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}
{code}

The test case 2.1 must work in AuthFilter regardless if 
ProxyUserAuthenticationFilter or AuthenticationFilter is configured to maintain 
backward compatibility.

> Enable AuthFilter as default for WebHdfs
> 
>
> Key: HADOOP-16354
> URL: https://issues.apache.org/jira/browse/HADOOP-16354
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16354-001.patch, HADOOP-16354-002.patch, 
> HADOOP-16354-003.patch
>
>
> HADOOP-16314 provides a generic option to configure 
> ProxyUserAuthenticationFilterInitializer (Kerberos + doAs support) for all 
> the services. If this is not configured, AuthenticationFilter is used for the 
> NameNode UI and WebHdfs. 

[jira] [Comment Edited] (HADOOP-16354) Enable AuthFilter as default for WebHdfs

2019-06-10 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16860140#comment-16860140
 ] 

Eric Yang edited comment on HADOOP-16354 at 6/10/19 4:41 PM:
-

[~Prabhu Joseph] Test case 2 mixes getting a delegation token and 
accessing it via the Knox gateway.  However, the doAs flag is missing when requesting 
the delegation token; hence, the token returned from webhdfs is owned by the Knox user 
instead of ambari-qa.

We can refine the test into two separate tests.
h2.  2.1 Knox obtain delegation token for end user for cross knox distcp

The test must be written as:
{code}
[knox@pjosephdocker-1 hadoop]$ curl --negotiate -u : 
"http://pjosephdocker-1.openstacklocal:50070/webhdfs/v1/?op=GETDELEGATIONTOKEN=hdfs=ambari-qa;
{"Token":{"urlString":"hash of delegation token for ambari-qa user"}}
{code}

{code}
[ambari-qa@pjosephdocker-1 ~]$ curl 
"http://pjosephdocker-1.openstacklocal:50070/webhdfs/v1/user/ambari-qa?op=GETFILESTATUS=hash
 of delegation token for ambari-qa user"
{code}

The key difference is in obtaining GETDELEGATIONTOKEN operation and doAs flag 
needs to work together for knox to obtain a valid toke for the end user.  In 
the past, we allow doas= and also doAs=, this was a case insensitive flag.

h2. 2.2 Normal operation to get delegation token as end user for distcp

{code}
[ambari-qa@pjosephdocker-1 ~]$ curl --negotiate -u : 
"http://pjosephdocker-1.openstacklocal:50070/webhdfs/v1/?op=GETDELEGATIONTOKEN=hdfs;
{"Token":{"urlString":"IAAEa25veARoZGZzAIoBazYZx6CKAWtaJkugjgG_jgGkFDQ2gUTATHjMfowub5bl-SqLAwxmEldFQkhERlMgZGVsZWdhdGlvbhIxNzIuMjYuNzMuMTkwOjgwMjA"}}
{code}

{code}
[ambari-qa@pjosephdocker-1 ~]$ curl 
"http://pjosephdocker-1.openstacklocal:50070/webhdfs/v1/user/knox?op=GETFILESTATUS=IAAEa25veARoZGZzAIoBazYZx6CKAWtaJkugjgG_jgGkFDQ2gUTATHjMfowub5bl-SqLAwxmEldFQkhERlMgZGVsZWdhdGlvbhIxNzIuMjYuNzMuMTkwOjgwMjA;
{"FileStatus":{"accessTime":0,"blockSize":0,"childrenNum":0,"fileId":1394411,"group":"hadoop","length":0,"modificationTime":1559980208213,"owner":"knox","pathSuffix":"","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}
{code}

The test case 2.1 must work in AuthFilter regardless if 
ProxyUserAuthenticationFilter or AuthenticationFilter is configured to maintain 
backward compatibility.


was (Author: eyang):
[~Prabhu Joseph] Test case 2 mixes getting the delegation token with accessing 
it via the Knox gateway.  However, the doAs flag is missing when requesting the 
delegation token.  Hence, the token returned from webhdfs is owned by the Knox 
user instead of ambari-qa.

We can refine the test into two separate tests.
h2. 2.1 Knox obtains a delegation token for the end user for cross-Knox distcp

The test must be written as:
{code}
[knox@pjosephdocker-1 hadoop]$ curl --negotiate -u : "http://pjosephdocker-1.openstacklocal:50070/webhdfs/v1/?op=GETDELEGATIONTOKEN&renewer=hdfs&doas=ambari-qa"
{"Token":{"urlString":"hash of delegation token for ambari-qa user"}}
{code}

{code}
[ambari-qa@pjosephdocker-1 ~]$ curl "http://pjosephdocker-1.openstacklocal:50070/webhdfs/v1/user/ambari-qa?op=GETFILESTATUS&delegation=hash of delegation token for ambari-qa user"
{code}

The key difference is that the GETDELEGATIONTOKEN operation and the doAs flag 
need to work together for Knox to obtain a valid token for the end user.  In 
the past, we allowed both doas= and doAs=; this was a case-insensitive flag.

h2. 2.2 Normal operation to get delegation token as end user for distcp

{code}
[ambari-qa@pjosephdocker-1 ~]$ curl --negotiate -u : "http://pjosephdocker-1.openstacklocal:50070/webhdfs/v1/?op=GETDELEGATIONTOKEN&renewer=hdfs"
{"Token":{"urlString":"IAAEa25veARoZGZzAIoBazYZx6CKAWtaJkugjgG_jgGkFDQ2gUTATHjMfowub5bl-SqLAwxmEldFQkhERlMgZGVsZWdhdGlvbhIxNzIuMjYuNzMuMTkwOjgwMjA"}}
{code}

{code}
[ambari-qa@pjosephdocker-1 ~]$ curl "http://pjosephdocker-1.openstacklocal:50070/webhdfs/v1/user/knox?op=GETFILESTATUS&delegation=IAAEa25veARoZGZzAIoBazYZx6CKAWtaJkugjgG_jgGkFDQ2gUTATHjMfowub5bl-SqLAwxmEldFQkhERlMgZGVsZWdhdGlvbhIxNzIuMjYuNzMuMTkwOjgwMjA"
{"FileStatus":{"accessTime":0,"blockSize":0,"childrenNum":0,"fileId":1394411,"group":"hadoop","length":0,"modificationTime":1559980208213,"owner":"knox","pathSuffix":"","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}
{code}

Test case 2.1 must work in AuthFilter regardless of whether 
ProxyUserAuthenticationFilter or AuthenticationFilter is configured, in order to 
maintain backward compatibility.

> Enable AuthFilter as default for WebHdfs
> 
>
> Key: HADOOP-16354
> URL: https://issues.apache.org/jira/browse/HADOOP-16354
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16354-001.patch, 

[jira] [Comment Edited] (HADOOP-16354) Enable AuthFilter as default for WebHdfs

2019-06-10 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16860140#comment-16860140
 ] 

Eric Yang edited comment on HADOOP-16354 at 6/10/19 4:43 PM:
-

[~Prabhu Joseph] Test case 2 mixes getting the delegation token with accessing 
it via the Knox gateway.  However, the doAs flag is missing when requesting the 
delegation token.  Hence, the token returned from webhdfs is owned by the Knox 
user instead of ambari-qa.

We can refine the test into two separate tests.
h2. 2.1 Knox obtains a delegation token for the end user for cross-Knox distcp

The test must be written as:
{code}
[knox@pjosephdocker-1 hadoop]$ curl --negotiate -u : "http://pjosephdocker-1.openstacklocal:50070/webhdfs/v1/?op=GETDELEGATIONTOKEN&renewer=hdfs&doas=ambari-qa"
{"Token":{"urlString":"hash of delegation token for ambari-qa user"}}
{code}

{code}
[ambari-qa@pjosephdocker-1 ~]$ curl "http://pjosephdocker-1.openstacklocal:50070/webhdfs/v1/user/ambari-qa?op=GETFILESTATUS&delegation=hash of delegation token for ambari-qa user"
{code}

The key difference is that the GETDELEGATIONTOKEN operation and the doAs flag 
need to work together for Knox to obtain a valid token for the end user.  In 
the past, we allowed both doas= and doAs=; this was a case-insensitive flag.

h2. 2.2 Normal operation to get delegation token as end user for distcp

{code}
[ambari-qa@pjosephdocker-1 ~]$ curl --negotiate -u : "http://pjosephdocker-1.openstacklocal:50070/webhdfs/v1/?op=GETDELEGATIONTOKEN&renewer=hdfs"
{"Token":{"urlString":"hash of ambari-qa delegation token"}}
{code}

{code}
[ambari-qa@pjosephdocker-1 ~]$ curl "http://pjosephdocker-1.openstacklocal:50070/webhdfs/v1/user/ambari-qa?op=GETFILESTATUS&delegation=hash of ambari-qa delegation token"
{"FileStatus":{"accessTime":0,"blockSize":0,"childrenNum":0,"fileId":1394411,"group":"hadoop","length":0,"modificationTime":1559980208213,"owner":"knox","pathSuffix":"","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}
{code}

Test case 2.1 must work in AuthFilter regardless of whether 
ProxyUserAuthenticationFilter or AuthenticationFilter is configured, in order to 
maintain backward compatibility.


was (Author: eyang):
[~Prabhu Joseph] Test case 2 mixes getting the delegation token with accessing 
it via the Knox gateway.  However, the doAs flag is missing when requesting the 
delegation token.  Hence, the token returned from webhdfs is owned by the Knox 
user instead of ambari-qa.

We can refine the test into two separate tests.
h2. 2.1 Knox obtains a delegation token for the end user for cross-Knox distcp

The test must be written as:
{code}
[knox@pjosephdocker-1 hadoop]$ curl --negotiate -u : "http://pjosephdocker-1.openstacklocal:50070/webhdfs/v1/?op=GETDELEGATIONTOKEN&renewer=hdfs&doas=ambari-qa"
{"Token":{"urlString":"hash of delegation token for ambari-qa user"}}
{code}

{code}
[ambari-qa@pjosephdocker-1 ~]$ curl "http://pjosephdocker-1.openstacklocal:50070/webhdfs/v1/user/ambari-qa?op=GETFILESTATUS&delegation=hash of delegation token for ambari-qa user"
{code}

The key difference is that the GETDELEGATIONTOKEN operation and the doAs flag 
need to work together for Knox to obtain a valid token for the end user.  In 
the past, we allowed both doas= and doAs=; this was a case-insensitive flag.

h2. 2.2 Normal operation to get delegation token as end user for distcp

{code}
[ambari-qa@pjosephdocker-1 ~]$ curl --negotiate -u : "http://pjosephdocker-1.openstacklocal:50070/webhdfs/v1/?op=GETDELEGATIONTOKEN&renewer=hdfs"
{"Token":{"urlString":"IAAEa25veARoZGZzAIoBazYZx6CKAWtaJkugjgG_jgGkFDQ2gUTATHjMfowub5bl-SqLAwxmEldFQkhERlMgZGVsZWdhdGlvbhIxNzIuMjYuNzMuMTkwOjgwMjA"}}
{code}

{code}
[ambari-qa@pjosephdocker-1 ~]$ curl "http://pjosephdocker-1.openstacklocal:50070/webhdfs/v1/user/knox?op=GETFILESTATUS&delegation=IAAEa25veARoZGZzAIoBazYZx6CKAWtaJkugjgG_jgGkFDQ2gUTATHjMfowub5bl-SqLAwxmEldFQkhERlMgZGVsZWdhdGlvbhIxNzIuMjYuNzMuMTkwOjgwMjA"
{"FileStatus":{"accessTime":0,"blockSize":0,"childrenNum":0,"fileId":1394411,"group":"hadoop","length":0,"modificationTime":1559980208213,"owner":"knox","pathSuffix":"","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}
{code}

Test case 2.1 must work in AuthFilter regardless of whether 
ProxyUserAuthenticationFilter or AuthenticationFilter is configured, in order to 
maintain backward compatibility.

> Enable AuthFilter as default for WebHdfs
> 
>
> Key: HADOOP-16354
> URL: https://issues.apache.org/jira/browse/HADOOP-16354
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16354-001.patch, HADOOP-16354-002.patch, 
> HADOOP-16354-003.patch
>
>
> HADOOP-16314 provides a generic option to configure 
> ProxyUserAuthenticationFilterInitializer (Kerberos + doAs support) for 

[jira] [Comment Edited] (HADOOP-16354) Enable AuthFilter as default for WebHdfs

2019-06-10 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16860140#comment-16860140
 ] 

Eric Yang edited comment on HADOOP-16354 at 6/10/19 4:44 PM:
-

[~Prabhu Joseph] Test case 2 mixes normal distcp with accessing distcp via the 
Knox gateway.  However, the doAs flag is missing when requesting the delegation 
token.  Hence, the token returned from webhdfs is owned by the Knox user instead 
of ambari-qa.

We can refine the test into two separate tests.
h2. 2.1 Knox obtains a delegation token for the end user for cross-Knox distcp

The test must be written as:
{code}
[knox@pjosephdocker-1 hadoop]$ curl --negotiate -u : "http://pjosephdocker-1.openstacklocal:50070/webhdfs/v1/?op=GETDELEGATIONTOKEN&renewer=hdfs&doas=ambari-qa"
{"Token":{"urlString":"hash of delegation token for ambari-qa user"}}
{code}

{code}
[ambari-qa@pjosephdocker-1 ~]$ curl "http://pjosephdocker-1.openstacklocal:50070/webhdfs/v1/user/ambari-qa?op=GETFILESTATUS&delegation=hash of delegation token for ambari-qa user"
{code}

The key difference is that the GETDELEGATIONTOKEN operation and the doAs flag 
need to work together for Knox to obtain a valid token for the end user.  In 
the past, we allowed both doas= and doAs=; this was a case-insensitive flag.

h2. 2.2 Normal operation to get delegation token as end user for distcp

{code}
[ambari-qa@pjosephdocker-1 ~]$ curl --negotiate -u : "http://pjosephdocker-1.openstacklocal:50070/webhdfs/v1/?op=GETDELEGATIONTOKEN&renewer=hdfs"
{"Token":{"urlString":"hash of ambari-qa delegation token"}}
{code}

{code}
[ambari-qa@pjosephdocker-1 ~]$ curl "http://pjosephdocker-1.openstacklocal:50070/webhdfs/v1/user/ambari-qa?op=GETFILESTATUS&delegation=hash of ambari-qa delegation token"
{"FileStatus":{"accessTime":0,"blockSize":0,"childrenNum":0,"fileId":1394411,"group":"hadoop","length":0,"modificationTime":1559980208213,"owner":"knox","pathSuffix":"","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}
{code}

Test case 2.1 must work in AuthFilter regardless of whether 
ProxyUserAuthenticationFilter or AuthenticationFilter is configured, in order to 
maintain backward compatibility.


was (Author: eyang):
[~Prabhu Joseph] Test case 2 mixes getting the delegation token with accessing 
it via the Knox gateway.  However, the doAs flag is missing when requesting the 
delegation token.  Hence, the token returned from webhdfs is owned by the Knox 
user instead of ambari-qa.

We can refine the test into two separate tests.
h2. 2.1 Knox obtains a delegation token for the end user for cross-Knox distcp

The test must be written as:
{code}
[knox@pjosephdocker-1 hadoop]$ curl --negotiate -u : "http://pjosephdocker-1.openstacklocal:50070/webhdfs/v1/?op=GETDELEGATIONTOKEN&renewer=hdfs&doas=ambari-qa"
{"Token":{"urlString":"hash of delegation token for ambari-qa user"}}
{code}

{code}
[ambari-qa@pjosephdocker-1 ~]$ curl "http://pjosephdocker-1.openstacklocal:50070/webhdfs/v1/user/ambari-qa?op=GETFILESTATUS&delegation=hash of delegation token for ambari-qa user"
{code}

The key difference is that the GETDELEGATIONTOKEN operation and the doAs flag 
need to work together for Knox to obtain a valid token for the end user.  In 
the past, we allowed both doas= and doAs=; this was a case-insensitive flag.

h2. 2.2 Normal operation to get delegation token as end user for distcp

{code}
[ambari-qa@pjosephdocker-1 ~]$ curl --negotiate -u : "http://pjosephdocker-1.openstacklocal:50070/webhdfs/v1/?op=GETDELEGATIONTOKEN&renewer=hdfs"
{"Token":{"urlString":"hash of ambari-qa delegation token"}}
{code}

{code}
[ambari-qa@pjosephdocker-1 ~]$ curl "http://pjosephdocker-1.openstacklocal:50070/webhdfs/v1/user/ambari-qa?op=GETFILESTATUS&delegation=hash of ambari-qa delegation token"
{"FileStatus":{"accessTime":0,"blockSize":0,"childrenNum":0,"fileId":1394411,"group":"hadoop","length":0,"modificationTime":1559980208213,"owner":"knox","pathSuffix":"","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}
{code}

Test case 2.1 must work in AuthFilter regardless of whether 
ProxyUserAuthenticationFilter or AuthenticationFilter is configured, in order to 
maintain backward compatibility.

> Enable AuthFilter as default for WebHdfs
> 
>
> Key: HADOOP-16354
> URL: https://issues.apache.org/jira/browse/HADOOP-16354
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16354-001.patch, HADOOP-16354-002.patch, 
> HADOOP-16354-003.patch
>
>
> HADOOP-16314 provides a generic option to configure 
> ProxyUserAuthenticationFilterInitializer (Kerberos + doAs support) for all 
> the services. If this is not configured, AuthenticationFilter is used for 
> NameNode UI and WebHdfs. Will enable AuthFilter as default for WebHdfs so 
> that it is 

[jira] [Commented] (HADOOP-16354) Enable AuthFilter as default for WebHdfs

2019-06-10 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16860140#comment-16860140
 ] 

Eric Yang commented on HADOOP-16354:


[~Prabhu Joseph] Test case 2 mixes getting the delegation token with accessing 
it via the Knox gateway.  However, the doAs flag is missing when requesting the 
delegation token.  Hence, the token returned from webhdfs is owned by the Knox 
user instead of ambari-qa.

We can refine the test into two separate tests.
h2. 2.1 Knox obtains a delegation token for the end user for cross-Knox distcp

The test must be written as:
{code}
[knox@pjosephdocker-1 hadoop]$ curl --negotiate -u : "http://pjosephdocker-1.openstacklocal:50070/webhdfs/v1/?op=GETDELEGATIONTOKEN&renewer=hdfs&doas=ambari-qa"
{"Token":{"urlString":"hash of delegation token for ambari-qa user"}}
{code}

{code}
[ambari-qa@pjosephdocker-1 ~]$ curl "http://pjosephdocker-1.openstacklocal:50070/webhdfs/v1/user/ambari-qa?op=GETFILESTATUS&delegation=hash of delegation token for ambari-qa user"
{code}

The key difference is that the GETDELEGATIONTOKEN operation and the doAs flag 
need to work together for Knox to obtain a valid token for the end user.  In 
the past, we allowed both doas= and doAs=; this was a case-insensitive flag.

h2. 2.2 Normal operation to get delegation token as end user for distcp

{code}
[ambari-qa@pjosephdocker-1 ~]$ curl --negotiate -u : "http://pjosephdocker-1.openstacklocal:50070/webhdfs/v1/?op=GETDELEGATIONTOKEN&renewer=hdfs"
{"Token":{"urlString":"IAAEa25veARoZGZzAIoBazYZx6CKAWtaJkugjgG_jgGkFDQ2gUTATHjMfowub5bl-SqLAwxmEldFQkhERlMgZGVsZWdhdGlvbhIxNzIuMjYuNzMuMTkwOjgwMjA"}}
{code}

{code}
[ambari-qa@pjosephdocker-1 ~]$ curl "http://pjosephdocker-1.openstacklocal:50070/webhdfs/v1/user/knox?op=GETFILESTATUS&delegation=IAAEa25veARoZGZzAIoBazYZx6CKAWtaJkugjgG_jgGkFDQ2gUTATHjMfowub5bl-SqLAwxmEldFQkhERlMgZGVsZWdhdGlvbhIxNzIuMjYuNzMuMTkwOjgwMjA"
{"FileStatus":{"accessTime":0,"blockSize":0,"childrenNum":0,"fileId":1394411,"group":"hadoop","length":0,"modificationTime":1559980208213,"owner":"knox","pathSuffix":"","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}
{code}

Test case 2.1 must work in AuthFilter regardless of whether 
ProxyUserAuthenticationFilter or AuthenticationFilter is configured, in order to 
maintain backward compatibility.

> Enable AuthFilter as default for WebHdfs
> 
>
> Key: HADOOP-16354
> URL: https://issues.apache.org/jira/browse/HADOOP-16354
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16354-001.patch, HADOOP-16354-002.patch, 
> HADOOP-16354-003.patch
>
>
> HADOOP-16314 provides a generic option to configure 
> ProxyUserAuthenticationFilterInitializer (Kerberos + doAs support) for all 
> the services. If this is not configured, AuthenticationFilter is used for 
> NameNode UI and WebHdfs. Will enable AuthFilter as default for WebHdfs so 
> that it is backward compatible.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16354) Enable AuthFilter as default for WebHdfs

2019-06-07 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16859120#comment-16859120
 ] 

Eric Yang commented on HADOOP-16354:


If AuthFilter extends AuthenticationFilter, then webhdfs does not honor the 
?doAs= flag.  This breaks compatibility: 
https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Proxy_Users
  When a user accesses webhdfs via Knox, the Knox credential would be used.

I think AuthFilter should extend ProxyUserAuthenticationFilter to ensure that 
the doAs flag is honored.  AuthFilter should only ignore the doAs flag when a 
delegation token is in use.
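
For illustration, a rough sketch of that idea (assuming the protected doFilter 
hook from hadoop-auth; the "delegation" parameter name is the standard webhdfs 
one, and this is not the actual patch):
{code:java}
import java.io.IOException;

import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.apache.hadoop.security.authentication.server.ProxyUserAuthenticationFilter;

// Extend ProxyUserAuthenticationFilter so ?doAs= is honored, but skip the
// proxy-user handling when a delegation token already identifies the end user.
public class AuthFilter extends ProxyUserAuthenticationFilter {
  private static final String DELEGATION_PARAM = "delegation";

  @Override
  protected void doFilter(FilterChain filterChain, HttpServletRequest request,
      HttpServletResponse response) throws IOException, ServletException {
    if (request.getParameter(DELEGATION_PARAM) != null) {
      // The delegation token carries the end-user identity; ignore doAs here.
      filterChain.doFilter(request, response);
    } else {
      // No token: fall back to proxy-user handling so the doAs flag is honored.
      super.doFilter(filterChain, request, response);
    }
  }
}
{code}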

> Enable AuthFilter as default for WebHdfs
> 
>
> Key: HADOOP-16354
> URL: https://issues.apache.org/jira/browse/HADOOP-16354
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16354-001.patch, HADOOP-16354-002.patch, 
> HADOOP-16354-003.patch
>
>
> HADOOP-16314 provides a generic option to configure 
> ProxyUserAuthenticationFilterInitializer (Kerberos + doAs support) for all 
> the services. If this is not configured, AuthenticationFilter is used for 
> NameNode UI and WebHdfs. Will enable AuthFilter as default for WebHdfs so 
> that it is backward compatible.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16354) Enable AuthFilter as default for WebHdfs

2019-06-07 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16859028#comment-16859028
 ] 

Eric Yang commented on HADOOP-16354:


{code}
+// if not set, enable anonymous for pseudo authentication
+if (filterConfig.get(PseudoAuthenticationHandler.ANONYMOUS_ALLOWED)
+== null) {
+  filterConfig.put(PseudoAuthenticationHandler.ANONYMOUS_ALLOWED, "true");
+}
{code}

Patch 002 defaults anonymous access to allowed for connecting to webhdfs even 
when the user has configured hadoop.http.authentication.type != simple.  This is 
a dangerous default that may leave webhdfs open to everyone if the system admin 
does not know to set hadoop.http.authentication.simple.anonymous.allowed = false.
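
For comparison, a rough sketch (the filterConfig map and the stripped "type" key 
are assumptions based on the fragment above) of a narrower default that only 
applies when simple authentication is actually configured:
{code:java}
import java.util.Map;

import org.apache.hadoop.security.authentication.server.PseudoAuthenticationHandler;

public final class AnonymousDefaults {
  // Default anonymous access to true only when simple auth is in use, so a
  // kerberos-configured webhdfs endpoint is never silently left open.
  static void applyAnonymousDefault(Map<String, String> filterConfig) {
    String authType = filterConfig.get("type");
    if (PseudoAuthenticationHandler.TYPE.equals(authType)
        && filterConfig.get(PseudoAuthenticationHandler.ANONYMOUS_ALLOWED) == null) {
      filterConfig.put(PseudoAuthenticationHandler.ANONYMOUS_ALLOWED, "true");
    }
  }
}
{code}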

In the code, AuthFilter is only configured when ProxyUserAuthenticationFilter is 
not configured.  I don't think I fully understand the reasoning for this 
exclusion.  Could you explain again?  Thanks

> Enable AuthFilter as default for WebHdfs
> 
>
> Key: HADOOP-16354
> URL: https://issues.apache.org/jira/browse/HADOOP-16354
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16354-001.patch, HADOOP-16354-002.patch
>
>
> HADOOP-16314 provides a generic option to configure 
> ProxyUserAuthenticationFilterInitializer (Kerberos + doAs support) for all 
> the services. If this is not configured, AuthenticationFilter is used for 
> NameNode UI and WebHdfs. Will enable AuthFilter as default for WebHdfs so 
> that it is backward compatible.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16354) Enable AuthFilter as default for WebHdfs

2019-06-07 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HADOOP-16354:
---
Issue Type: Sub-task  (was: Task)
Parent: HADOOP-16095

> Enable AuthFilter as default for WebHdfs
> 
>
> Key: HADOOP-16354
> URL: https://issues.apache.org/jira/browse/HADOOP-16354
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16354-001.patch, HADOOP-16354-002.patch
>
>
> HADOOP-16314 provides a generic option to configure 
> ProxyUserAuthenticationFilterInitializer (Kerberos + doAs support) for all 
> the services. If this is not configured, AuthenticationFilter is used for 
> NameNode UI and WebHdfs. Will enable AuthFilter as default for WebHdfs so 
> that it is backward compatible.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-16095) Support impersonation for AuthenticationFilter

2019-06-07 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang reopened HADOOP-16095:


Found an issue with distcp backward compatibility; opened HADOOP-16356 to track 
the required changes.

> Support impersonation for AuthenticationFilter
> --
>
> Key: HADOOP-16095
> URL: https://issues.apache.org/jira/browse/HADOOP-16095
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16095.004.patch
>
>
> External services or the YARN service may need to call into WebHDFS or the YARN 
> REST API on behalf of the user using web protocols. It would be good to support 
> an impersonation mechanism in AuthenticationFilter or similar extensions. The 
> general design is similar to UserGroupInformation.doAs in the RPC layer.
> The calling service credential is verified as a proxy user coming from a 
> trusted host by checking the Hadoop proxy user ACL on the server side. If the 
> proxy user ACL allows the proxy user to become the doAs user, the HttpRequest 
> object will report REMOTE_USER as the doAs user. This feature enables web 
> application logic to be written with minimal changes to call the Hadoop API 
> with a UserGroupInformation.doAs() wrapper.
> h2. HTTP Request
> A few possible options:
> 1. Using query parameter to pass doAs user:
> {code:java}
> POST /service?doAs=foobar
> Authorization: [proxy user Kerberos token]
> {code}
> 2. Use HTTP Header to pass doAs user:
> {code:java}
> POST /service
> Authorization: [proxy user Kerberos token]
> x-hadoop-doas: foobar
> {code}
> h2. HTTP Response
> 403 - Forbidden (Including impersonation is not allowed)
> h2. Proxy User ACL requirement
> Proxy user kerberos token maps to a service principal, such as 
> yarn/host1.example.com. The host part of the credential and HTTP request 
> origin are both validated with *hadoop.proxyuser.yarn.hosts* ACL. doAs user 
> group membership or identity is checked with either 
> *hadoop.proxyuser.yarn.groups* or *hadoop.proxyuser.yarn.users*. This ensures 
> the caller is coming from an authorized host and belongs to an authorized group.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-16356) Distcp with webhdfs is not working with ProxyUserAuthenticationFilter or AuthenticationFilter

2019-06-07 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang reassigned HADOOP-16356:
--

Assignee: Prabhu Joseph

> Distcp with webhdfs is not working with ProxyUserAuthenticationFilter or 
> AuthenticationFilter
> -
>
> Key: HADOOP-16356
> URL: https://issues.apache.org/jira/browse/HADOOP-16356
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Prabhu Joseph
>Priority: Major
>
> When distcp is running with webhdfs://, there is no delegation token issued 
> to the mapreduce task because the mapreduce task does not have a kerberos TGT.
> This stack trace was thrown when mapreduce task contacts webhdfs:
> {code}
> Error: org.apache.hadoop.security.AccessControlException: Authentication 
> required
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:492)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$200(WebHdfsFileSystem.java:136)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.connect(WebHdfsFileSystem.java:760)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:835)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:663)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:701)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1891)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:697)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHdfsFileStatus(WebHdfsFileSystem.java:1095)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getFileStatus(WebHdfsFileSystem.java:1106)
>   at org.apache.hadoop.tools.mapred.CopyMapper.setup(CopyMapper.java:124)
>   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
>   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:178)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1891)
>   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:172)
> {code}
> There are two proposals:
> 1. Have an API to issue a delegation token to pass along to webhdfs to maintain 
> backward compatibility.
> 2. Have the mapreduce task log in to kerberos and then perform the webhdfs fetch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16356) Distcp with webhdfs is not working with ProxyUserAuthenticationFilter or AuthenticationFilter

2019-06-07 Thread Eric Yang (JIRA)
Eric Yang created HADOOP-16356:
--

 Summary: Distcp with webhdfs is not working with 
ProxyUserAuthenticationFilter or AuthenticationFilter
 Key: HADOOP-16356
 URL: https://issues.apache.org/jira/browse/HADOOP-16356
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Eric Yang


When distcp is running with webhdfs://, there is no delegation token issued to 
the mapreduce task because the mapreduce task does not have a kerberos TGT.

This stack trace was thrown when mapreduce task contacts webhdfs:

{code}
Error: org.apache.hadoop.security.AccessControlException: Authentication 
required
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:492)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$200(WebHdfsFileSystem.java:136)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.connect(WebHdfsFileSystem.java:760)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:835)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:663)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:701)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1891)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:697)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHdfsFileStatus(WebHdfsFileSystem.java:1095)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getFileStatus(WebHdfsFileSystem.java:1106)
at org.apache.hadoop.tools.mapred.CopyMapper.setup(CopyMapper.java:124)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:178)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1891)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:172)
{code}

There are two proposals:

1. Have an API to issue a delegation token to pass along to webhdfs to maintain 
backward compatibility (a sketch of this approach is shown below).
2. Have the mapreduce task log in to kerberos and then perform the webhdfs fetch.
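
For proposal 1, a rough sketch of the client-side piece (the class and method 
names here are illustrative, not an existing API) that pre-fetches webhdfs 
delegation tokens and attaches them to the job credentials so map tasks do not 
need a TGT:
{code:java}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.security.Credentials;

public class WebHdfsTokenPrefetch {
  // Fetch delegation tokens for the webhdfs source before job submission and
  // place them in the job's credential store for the map tasks to use.
  public static void addWebHdfsTokens(Job job, String webhdfsUri, String renewer)
      throws Exception {
    Configuration conf = job.getConfiguration();
    Credentials creds = job.getCredentials();
    FileSystem fs = FileSystem.get(URI.create(webhdfsUri), conf);
    fs.addDelegationTokens(renewer, creds);
  }
}
{code}
For example, calling addWebHdfsTokens(job, "webhdfs://namenode-host:50070", "yarn") 
before submission should let CopyMapper authenticate with the pre-fetched token 
instead of requiring a kerberos login in the task.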





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16095) Support impersonation for AuthenticationFilter

2019-06-05 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang resolved HADOOP-16095.

   Resolution: Fixed
Fix Version/s: 3.3.0

The current implementation is based on option 1.  All sub-tasks have been 
closed.  Marking this issue as resolved.

> Support impersonation for AuthenticationFilter
> --
>
> Key: HADOOP-16095
> URL: https://issues.apache.org/jira/browse/HADOOP-16095
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16095.004.patch
>
>
> External services or the YARN service may need to call into WebHDFS or the YARN 
> REST API on behalf of the user using web protocols. It would be good to support 
> an impersonation mechanism in AuthenticationFilter or similar extensions. The 
> general design is similar to UserGroupInformation.doAs in the RPC layer.
> The calling service credential is verified as a proxy user coming from a 
> trusted host by checking the Hadoop proxy user ACL on the server side. If the 
> proxy user ACL allows the proxy user to become the doAs user, the HttpRequest 
> object will report REMOTE_USER as the doAs user. This feature enables web 
> application logic to be written with minimal changes to call the Hadoop API 
> with a UserGroupInformation.doAs() wrapper.
> h2. HTTP Request
> A few possible options:
> 1. Using query parameter to pass doAs user:
> {code:java}
> POST /service?doAs=foobar
> Authorization: [proxy user Kerberos token]
> {code}
> 2. Use HTTP Header to pass doAs user:
> {code:java}
> POST /service
> Authorization: [proxy user Kerberos token]
> x-hadoop-doas: foobar
> {code}
> h2. HTTP Response
> 403 - Forbidden (Including impersonation is not allowed)
> h2. Proxy User ACL requirement
> Proxy user kerberos token maps to a service principal, such as 
> yarn/host1.example.com. The host part of the credential and HTTP request 
> origin are both validated with *hadoop.proxyuser.yarn.hosts* ACL. doAs user 
> group membership or identity is checked with either 
> *hadoop.proxyuser.yarn.groups* or *hadoop.proxyuser.yarn.users*. This ensures 
> the caller is coming from an authorized host and belongs to an authorized group.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16314) Make sure all end point URL is covered by the same AuthenticationFilter

2019-06-05 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HADOOP-16314:
---
   Resolution: Fixed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

Thank you [~Prabhu Joseph] for the patch.
I just committed this to trunk.

> Make sure all end point URL is covered by the same AuthenticationFilter
> ---
>
> Key: HADOOP-16314
> URL: https://issues.apache.org/jira/browse/HADOOP-16314
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Reporter: Eric Yang
>Assignee: Prabhu Joseph
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16314-001.patch, HADOOP-16314-002.patch, 
> HADOOP-16314-003.patch, HADOOP-16314-004.patch, HADOOP-16314-005.patch, 
> HADOOP-16314-006.patch, HADOOP-16314-007.patch, Hadoop Web Security.xlsx, 
> scan.txt
>
>
> The enclosed spreadsheet shows the list of web applications deployed 
> by Hadoop, and the filters applied to each entry point.
> Hadoop web protocol impersonation has been inconsistent.  Most entry points 
> do not support the ?doAs parameter.  This creates problems for a secure gateway 
> like Knox that proxies the Hadoop web interface on behalf of the end user.  When 
> the receiving end does not check for the ?doAs flag, the web interface would be 
> accessed using the proxy user's credential.  This can lead to all kinds of 
> security holes using path traversal to exploit Hadoop. 
> In HADOOP-16287, ProxyUserAuthenticationFilter is proposed as a solution to 
> the web impersonation problem.  This task is to track the changes required 
> in the Hadoop code base to apply the authentication filter globally to each of 
> the web service ports.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16314) Make sure all end point URL is covered by the same AuthenticationFilter

2019-06-05 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16856861#comment-16856861
 ] 

Eric Yang commented on HADOOP-16314:


+1 looks good to me.  Will commit if no objections by end of day.

> Make sure all end point URL is covered by the same AuthenticationFilter
> ---
>
> Key: HADOOP-16314
> URL: https://issues.apache.org/jira/browse/HADOOP-16314
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Reporter: Eric Yang
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16314-001.patch, HADOOP-16314-002.patch, 
> HADOOP-16314-003.patch, HADOOP-16314-004.patch, HADOOP-16314-005.patch, 
> HADOOP-16314-006.patch, HADOOP-16314-007.patch, Hadoop Web Security.xlsx, 
> scan.txt
>
>
> The enclosed spreadsheet shows the list of web applications deployed 
> by Hadoop, and the filters applied to each entry point.
> Hadoop web protocol impersonation has been inconsistent.  Most entry points 
> do not support the ?doAs parameter.  This creates problems for a secure gateway 
> like Knox that proxies the Hadoop web interface on behalf of the end user.  When 
> the receiving end does not check for the ?doAs flag, the web interface would be 
> accessed using the proxy user's credential.  This can lead to all kinds of 
> security holes using path traversal to exploit Hadoop. 
> In HADOOP-16287, ProxyUserAuthenticationFilter is proposed as a solution to 
> the web impersonation problem.  This task is to track the changes required 
> in the Hadoop code base to apply the authentication filter globally to each of 
> the web service ports.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16314) Make sure all end point URL is covered by the same AuthenticationFilter

2019-06-04 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16856114#comment-16856114
 ] 

Eric Yang edited comment on HADOOP-16314 at 6/4/19 10:07 PM:
-

[~Prabhu Joseph] Can you upload patch 6 again as patch 7?  The latest test 
report is inaccurate because the tested patch is different from patch 6.

Thanks for the explanation; I briefly scanned the code in 
DelegationTokenAuthenticationFilter.  I think both filters can interoperate.  
UI2 works fine when retrieving logs from the timeline server.


was (Author: eyang):
[~Prabhu Joseph] Can you upload patch 6 again as patch 7?  The latest test 
report is inaccurate because the tested patch is different from patch 6.

> Make sure all end point URL is covered by the same AuthenticationFilter
> ---
>
> Key: HADOOP-16314
> URL: https://issues.apache.org/jira/browse/HADOOP-16314
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Reporter: Eric Yang
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16314-001.patch, HADOOP-16314-002.patch, 
> HADOOP-16314-003.patch, HADOOP-16314-004.patch, HADOOP-16314-005.patch, 
> HADOOP-16314-006.patch, Hadoop Web Security.xlsx, scan.txt
>
>
> The enclosed spreadsheet shows the list of web applications deployed 
> by Hadoop, and the filters applied to each entry point.
> Hadoop web protocol impersonation has been inconsistent.  Most entry points 
> do not support the ?doAs parameter.  This creates problems for a secure gateway 
> like Knox that proxies the Hadoop web interface on behalf of the end user.  When 
> the receiving end does not check for the ?doAs flag, the web interface would be 
> accessed using the proxy user's credential.  This can lead to all kinds of 
> security holes using path traversal to exploit Hadoop. 
> In HADOOP-16287, ProxyUserAuthenticationFilter is proposed as a solution to 
> the web impersonation problem.  This task is to track the changes required 
> in the Hadoop code base to apply the authentication filter globally to each of 
> the web service ports.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16314) Make sure all end point URL is covered by the same AuthenticationFilter

2019-06-04 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16856114#comment-16856114
 ] 

Eric Yang commented on HADOOP-16314:


[~Prabhu Joseph] Can you upload patch 6 again as patch 7?  The latest test 
report is inaccurate because the tested patch is different from patch 6.

> Make sure all end point URL is covered by the same AuthenticationFilter
> ---
>
> Key: HADOOP-16314
> URL: https://issues.apache.org/jira/browse/HADOOP-16314
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Reporter: Eric Yang
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16314-001.patch, HADOOP-16314-002.patch, 
> HADOOP-16314-003.patch, HADOOP-16314-004.patch, HADOOP-16314-005.patch, 
> HADOOP-16314-006.patch, Hadoop Web Security.xlsx, scan.txt
>
>
> The enclosed spreadsheet shows the list of web applications deployed 
> by Hadoop, and the filters applied to each entry point.
> Hadoop web protocol impersonation has been inconsistent.  Most entry points 
> do not support the ?doAs parameter.  This creates problems for a secure gateway 
> like Knox that proxies the Hadoop web interface on behalf of the end user.  When 
> the receiving end does not check for the ?doAs flag, the web interface would be 
> accessed using the proxy user's credential.  This can lead to all kinds of 
> security holes using path traversal to exploit Hadoop. 
> In HADOOP-16287, ProxyUserAuthenticationFilter is proposed as a solution to 
> the web impersonation problem.  This task is to track the changes required 
> in the Hadoop code base to apply the authentication filter globally to each of 
> the web service ports.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16092) Move the source of hadoop/ozone containers to the same repository

2019-06-04 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16855889#comment-16855889
 ] 

Eric Yang commented on HADOOP-16092:


[~elek] HADOOP-14898 did not have any +1, although [~chris.douglas] said lgtm 
for the process so far.  He was not sure about the release process and how we 
maintain the set of valid release targets from the ASF repo.  The rest of the 
work happened without following the Hadoop review-then-commit process.  For 
example, HADOOP-15083 does not have any +1 review and was committed by [~anu].

The rest is history, and the Hadoop community did not produce properly versioned 
Docker images for Hadoop or Ozone.  The source code for generating the Docker 
image was not voted on during the Hadoop 3.1.1, Hadoop 3.2.0, Ozone 0.4.0, and 
Ozone 0.3.0 releases.  After one year of exercise, it is clear that the current 
arrangement is error prone and sidesteps Apache release policy.  I think it is 
important to revisit how to version-control the docker build process to align 
with Apache release policy; this helps to prevent human errors in the release 
process.

> Move the source of hadoop/ozone containers to the same repository
> -
>
> Key: HADOOP-16092
> URL: https://issues.apache.org/jira/browse/HADOOP-16092
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Priority: Major
>
> This is proposed by [~eyang] in 
> [this|https://lists.apache.org/thread.html/33ac54bdeacb4beb023ebd452464603aaffa095bd104cb43c22f484e@%3Chdfs-dev.hadoop.apache.org%3E]
>  mailing thread.
> bq. Hadoop community can decide what is best for Hadoop.  My preference is to 
> remove ozone from source tree naming, if Ozone is intended to be subproject 
> of Hadoop for long period of time.  This enables Hadoop community to host 
> docker images for various subproject without having to check out several 
> source tree to trigger a grand build
> As of now the source of  hadoop docker images are stored in the hadoop git 
> repository (docker-* branches) for hadoop and in hadoop-docker-ozone git 
> repository for ozone (all branches).
> As it's discussed in HDDS-851 the biggest challenge to solve here is the 
> mapping between git branches and dockerhub tags. It's not possible to use the 
> captured part of a github branch.
> For example it's not possible to define a rule to build all the ozone-(.*) 
> branches and use a tag $1 for it. Without this support we need to create a 
> new mapping for all the releases manually (with the help of the INFRA).
> Note: HADOOP-16091 can solve this problem as it doesn't require branch 
> mapping any more.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16314) Make sure all end point URL is covered by the same AuthenticationFilter

2019-06-03 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16855152#comment-16855152
 ] 

Eric Yang commented on HADOOP-16314:


[~Prabhu Joseph] If I am reading the patch 5 code correctly, this will ignore 
both AuthenticationFilter and ProxyUserAuthenticationFilter.  Is there another 
code path that ensures ApplicationHistoryServer is protected?

{code}
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryServer.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryServer.java
index 4e3a1e6..11f1b07 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryServer.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryServer.java
@@ -28,8 +28,10 @@
 import org.apache.hadoop.http.HttpServer2;
 import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
 import org.apache.hadoop.metrics2.source.JvmMetrics;
+import org.apache.hadoop.security.AuthenticationFilterInitializer;
 import org.apache.hadoop.security.HttpCrossOriginFilterInitializer;
 import org.apache.hadoop.security.SecurityUtil;
+import 
org.apache.hadoop.security.authentication.server.ProxyUserAuthenticationFilterInitializer;
 import org.apache.hadoop.service.CompositeService;
 import org.apache.hadoop.service.Service;
 import org.apache.hadoop.util.ExitUtil;
@@ -261,8 +263,15 @@ private void startWebApp() {
 }
 TimelineServerUtils.addTimelineAuthFilter(
 initializers, defaultInitializers, secretManagerService);
+
+Set<String> ignoreInitializers = new LinkedHashSet<String>();
+ignoreInitializers.add(AuthenticationFilterInitializer.class.getName());
+ignoreInitializers.add(
+ProxyUserAuthenticationFilterInitializer.class.getName());
+
 TimelineServerUtils.setTimelineFilters(
-conf, initializers, defaultInitializers);
+conf, initializers, defaultInitializers, ignoreInitializers);
+
 String bindAddress = WebAppUtils.getWebAppBindURL(conf,
   YarnConfiguration.TIMELINE_SERVICE_BIND_HOST,
   WebAppUtils.getAHSWebAppURLWithoutScheme(conf));
{code}

Is there any way to make the initialization code more straightforward?

> Make sure all end point URL is covered by the same AuthenticationFilter
> ---
>
> Key: HADOOP-16314
> URL: https://issues.apache.org/jira/browse/HADOOP-16314
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Reporter: Eric Yang
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16314-001.patch, HADOOP-16314-002.patch, 
> HADOOP-16314-003.patch, HADOOP-16314-004.patch, HADOOP-16314-005.patch, 
> Hadoop Web Security.xlsx, scan.txt
>
>
> The enclosed spreadsheet shows the list of web applications deployed 
> by Hadoop, and the filters applied to each entry point.
> Hadoop web protocol impersonation has been inconsistent.  Most entry points 
> do not support the ?doAs parameter.  This creates problems for a secure gateway 
> like Knox that proxies the Hadoop web interface on behalf of the end user.  When 
> the receiving end does not check for the ?doAs flag, the web interface would be 
> accessed using the proxy user's credential.  This can lead to all kinds of 
> security holes using path traversal to exploit Hadoop. 
> In HADOOP-16287, ProxyUserAuthenticationFilter is proposed as a solution to 
> the web impersonation problem.  This task is to track the changes required 
> in the Hadoop code base to apply the authentication filter globally to each of 
> the web service ports.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16092) Move the source of hadoop/ozone containers to the same repository

2019-06-03 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16854806#comment-16854806
 ] 

Eric Yang commented on HADOOP-16092:


[~elek] It is sad that we cannot find consensus on establishing a docker image 
build model for Hadoop.  Please do not let Ozone-specific logic (hadoop-runner) 
hold back HDFS + YARN + Mapreduce container image development.  It would be 
quite irresponsible to develop an image that has not been officially voted on by 
the hdfs, yarn, and mapreduce communities and upload the image to Docker Hub as 
an Apache Hadoop image.

> Move the source of hadoop/ozone containers to the same repository
> -
>
> Key: HADOOP-16092
> URL: https://issues.apache.org/jira/browse/HADOOP-16092
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Priority: Major
>
> This is proposed by [~eyang] in 
> [this|https://lists.apache.org/thread.html/33ac54bdeacb4beb023ebd452464603aaffa095bd104cb43c22f484e@%3Chdfs-dev.hadoop.apache.org%3E]
>  mailing thread.
> bq. Hadoop community can decide what is best for Hadoop.  My preference is to 
> remove ozone from source tree naming, if Ozone is intended to be subproject 
> of Hadoop for long period of time.  This enables Hadoop community to host 
> docker images for various subproject without having to check out several 
> source tree to trigger a grand build
> As of now the source of  hadoop docker images are stored in the hadoop git 
> repository (docker-* branches) for hadoop and in hadoop-docker-ozone git 
> repository for ozone (all branches).
> As it's discussed in HDDS-851 the biggest challenge to solve here is the 
> mapping between git branches and dockerhub tags. It's not possible to use the 
> captured part of a github branch.
> For example it's not possible to define a rule to build all the ozone-(.*) 
> branches and use a tag $1 for it. Without this support we need to create a 
> new mapping for all the releases manually (with the help of the INFRA).
> Note: HADOOP-16091 can solve this problem as it doesn't require branch 
> mapping any more.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16092) Move the source of hadoop/ozone containers to the same repository

2019-05-31 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16853263#comment-16853263
 ] 

Eric Yang commented on HADOOP-16092:


[~elek] {quote}I am afraid that is not quite true. It's heavily used by Ozone 
which is a subproject of Hadoop.{quote}

This is branding fascination.  There is no code dependency in Hadoop common, 
HDFS, or YARN that uses the hadoop-runner project.

{quote}I am not sure If I understand this well. No uid has been changed in 
hadoop-runner recently. Only the directory of configs/logs are changed and I 
think it's still backward compatible with 0.3.0/0.4.0.{quote}

The hadoop-runner:latest image is backward compatible for now, but it has a 
privilege escalation security issue.  If any data directory is mounted to the 
host OS for a developer to debug what is going on in the docker container, it 
triggers the security bug.  Data would be written as uid:1000, which could be 
someone else on the host OS.  

The hadoop-runner:latest image will not be backward compatible when this 
security bug is fixed.  This means the user experience for the Ozone 0.4.0 
release is subject to change over time.  This does not fit the [Apache release 
policy|http://www.apache.org/legal/release-policy.html#compiled-packages], 
which states:

{quote}As a convenience to users that might not have the appropriate tools to 
build a compiled version of the source, binary/bytecode packages MAY be 
distributed alongside official Apache releases. In all such cases, the 
binary/bytecode package MUST have the same version number as the source release 
and MUST only add binary/bytecode files that are the result of compiling that 
version of the source code release and its dependencies.
{quote}

The convenience binary for hadoop-runner:latest does not have the same version 
number as Ozone 0.4.0, which is not compliant with Apache release policy.  
Developers should not have to go down rabbit holes to discover that the Ozone 
0.4.0 release is comprised of source code in 4 different branches of the Hadoop 
source tree, and that only one of those branches was voted on for release.  The 
rest were not voted on.  Please reconsider making the development and release 
process match Apache release policy.
 

> Move the source of hadoop/ozone containers to the same repository
> -
>
> Key: HADOOP-16092
> URL: https://issues.apache.org/jira/browse/HADOOP-16092
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Priority: Major
>
> This is proposed by [~eyang] in 
> [this|https://lists.apache.org/thread.html/33ac54bdeacb4beb023ebd452464603aaffa095bd104cb43c22f484e@%3Chdfs-dev.hadoop.apache.org%3E]
>  mailing thread.
> bq. Hadoop community can decide what is best for Hadoop.  My preference is to 
> remove ozone from source tree naming, if Ozone is intended to be subproject 
> of Hadoop for long period of time.  This enables Hadoop community to host 
> docker images for various subproject without having to check out several 
> source tree to trigger a grand build
> As of now the source of  hadoop docker images are stored in the hadoop git 
> repository (docker-* branches) for hadoop and in hadoop-docker-ozone git 
> repository for ozone (all branches).
> As it's discussed in HDDS-851 the biggest challenge to solve here is the 
> mapping between git branches and dockerhub tags. It's not possible to use the 
> captured part of a github branch.
> For example it's not possible to define a rule to build all the ozone-(.*) 
> branches and use a tag $1 for it. Without this support we need to create a 
> new mapping for all the releases manually (with the help of the INFRA).
> Note: HADOOP-16091 can solve this problem as it doesn't require branch 
> mapping any more.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16323) https everywhere in Maven settings

2019-05-28 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16850310#comment-16850310
 ] 

Eric Yang commented on HADOOP-16323:


This change is causing false positives in the rat plugin's detection of the 
Apache license because the URL in the license text has changed.  I just checked 
https://www.apache.org/licenses/ and the license text still uses http, not 
https.  I think this patch should be reverted for any modification of the ASF 
license text.

> https everywhere in Maven settings
> --
>
> Key: HADOOP-16323
> URL: https://issues.apache.org/jira/browse/HADOOP-16323
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Minor
> Fix For: 2.10.0, 3.3.0, 2.8.6, 3.2.1, 2.9.3, 3.1.3
>
> Attachments: HADOOP-16323-branch-2-002.patch, 
> HADOOP-16323-branch-2.8-002.patch, HADOOP-16323-branch-2.9-002.patch, 
> HADOOP-16323-branch-3.1-002.patch, HADOOP-16323-branch-3.2-002.patch, 
> HADOOP-16323.001.patch, HADOOP-16323.002.patch
>
>
> We should use https everywhere.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16314) Make sure all end point URL is covered by the same AuthenticationFilter

2019-05-28 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16850245#comment-16850245
 ] 

Eric Yang commented on HADOOP-16314:


Sorry [~Prabhu Joseph], I committed HDFS-14434, which breaks patch 003.  Could you rebase?  Thanks

> Make sure all end point URL is covered by the same AuthenticationFilter
> ---
>
> Key: HADOOP-16314
> URL: https://issues.apache.org/jira/browse/HADOOP-16314
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Reporter: Eric Yang
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16314-001.patch, HADOOP-16314-002.patch, 
> HADOOP-16314-003.patch, Hadoop Web Security.xlsx, scan.txt
>
>
> In the enclosed spreadsheet, it shows the list of web applications deployed 
> by Hadoop, and filters applied to each entry point.
> Hadoop web protocol impersonation has been inconsistent.  Most of entry point 
> do not support ?doAs parameter.  This creates problem for secure gateway like 
> Knox to proxy Hadoop web interface on behave of the end user.  When the 
> receiving end does not check for ?doAs flag, web interface would be accessed 
> using proxy user credential.  This can lead to all kind of security holes 
> using path traversal to exploit Hadoop. 
> In HADOOP-16287, ProxyUserAuthenticationFilter is proposed as solution to 
> solve the web impersonation problem.  This task is to track changes required 
> in Hadoop code base to apply authentication filter globally for each of the 
> web service port.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16312) Remove dumb-init from hadoop-runner image

2019-05-28 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16850072#comment-16850072
 ] 

Eric Yang commented on HADOOP-16312:


{quote}Can you please explain it in more details? I think dumb-init executes 
subprocesses in the foreground but I may be wrong.{quote}

This is different from what I observed in the image before, and I could be wrong about dumb-init pushing processes to the background.  Since it is running in the foreground, we can discard this point.

{quote}Are you sure? Do you have any method to prove it? According to my tests 
dumb-init signals all the child processes in the hierarchy.{quote}

Sometimes docker kill -s SIGINT [container-id] did not work, but it is hard to reproduce.  Do we really need the bash process between the Java process and dumb-init?  My impression is no, and removing it would let us reclaim resources sooner.
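A minimal sketch of the idea, assuming the image keeps a starter.sh-style wrapper: if the wrapper execs the Java process instead of forking it under bash, a signal sent with docker kill reaches the JVM directly and nothing is absorbed in between.

{code}
#!/bin/bash
# Hypothetical starter.sh sketch: exec replaces the shell with the Java
# process, so "docker kill -s SIGINT <container-id>" is delivered straight
# to the JVM instead of stopping at an intermediate bash.
set -e
exec /opt/hadoop/bin/ozone datanode "$@"
{code}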

> Remove dumb-init from hadoop-runner image
> -
>
> Key: HADOOP-16312
> URL: https://issues.apache.org/jira/browse/HADOOP-16312
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Priority: Major
>
> This issue is reported by [~eyang] in HDDS-1495.
> I think it's better to discuss under a separated issue as it's unrelated to 
> HDDS-1495.
> The original problem description from [~eyang]
> {quote}Dumb-init  is one way to always run contaized program in the 
> background and respawn the program when program fails. This is poor man’s 
> solution for keeping program alive.
> Cluster management software like Kubernetes or YARN have additional policy 
> and logic to start the same docker container on a different node. Therefore, 
> Dumb-init is not recommended for future Hadoop daemons instead allow cluster 
> management software to make decision where to start the container. Dumb-init 
> for demonize docker container will be removed, and change to use 
> entrypoint.sh Docker provides -d flag to demonize foreground process. Most of 
> the management system built on top of Docker, (ie. Kitematic, Apache YARN, 
> and Kubernetes) integrates with Docker container at foreground to  aggregate 
> stdout and stderr output of the containerized program.
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16092) Move the source of hadoop/ozone containers to the same repository

2019-05-28 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16849929#comment-16849929
 ] 

Eric Yang commented on HADOOP-16092:


{quote}Please don't mix the two things. Usability and popularity are two 
different things.{quote}

Ease of use and popularity are often related.  Hadoop already has a hadoop-build image for developers to work with locally.  The hadoop-runner image is a distraction that serves no purpose for Hadoop development.  The discussion is digressing from the goal of matching the docker image build source with the Hadoop branches.  Hadoop development already has the ability to build version-specific hadoop-build docker images from the trunk and branch-3.2 code bases.  The same ability exists in YARN as well.  Ozone is the only one that uses a hadoop-runner image that is not version controlled by the same source code repository.  This ticket should focus on aligning the Ozone hadoop-runner image with the Hadoop trunk source code.

The fictional use case of running Hadoop 2.7.3 with the latest hadoop-runner image cannot be supported in reality.  The latest hadoop-runner source code has no versioning, and it would be very difficult to maintain backward compatibility forever between the latest hadoop-runner commits and past versions of Hadoop.  We have already observed that the trunk version of hadoop-runner:latest is not compatible with Ozone 0.4.0: the trunk image tries to avoid hard coding the uid, which deviates from the hadoop-runner:latest that existed when Ozone 0.4.0 was released.  A new user may come along, say that he likes the hard-coded uid, and complain that the current hadoop-runner:latest breaks his backward compatibility.  Maintaining forever-backward-compatible code across Hadoop major versions is an unattainable goal and an expensive drain on the community.  Unlike the latest release of a piece of software, hadoop-runner:latest is a publicly exposed snapshot that claims compatibility forever.  That claim is too bold to uphold.

By bringing the hadoop-runner docker build process into the regular Hadoop maintenance branches, there is some version control to hint to developers roughly which version of the hadoop-runner image they need in order to rebuild hadoop-runner:latest for the Ozone 0.4.0 release.  It also releases the Hadoop community from the burden of carrying a forever-backward-compatible snapshot.

> Move the source of hadoop/ozone containers to the same repository
> -
>
> Key: HADOOP-16092
> URL: https://issues.apache.org/jira/browse/HADOOP-16092
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Gabor Bota
>Priority: Major
>
> This is proposed by [~eyang] in 
> [this|https://lists.apache.org/thread.html/33ac54bdeacb4beb023ebd452464603aaffa095bd104cb43c22f484e@%3Chdfs-dev.hadoop.apache.org%3E]
>  mailing thread.
> bq. Hadoop community can decide what is best for Hadoop.  My preference is to 
> remove ozone from source tree naming, if Ozone is intended to be subproject 
> of Hadoop for long period of time.  This enables Hadoop community to host 
> docker images for various subproject without having to check out several 
> source tree to trigger a grand build
> As of now the source of  hadoop docker images are stored in the hadoop git 
> repository (docker-* branches) for hadoop and in hadoop-docker-ozone git 
> repository for ozone (all branches).
> As it's discussed in HDDS-851 the biggest challenge to solve here is the 
> mapping between git branches and dockerhub tags. It's not possible to use the 
> captured part of a github branch.
> For example it's not possible to define a rule to build all the ozone-(.*) 
> branches and use a tag $1 for it. Without this support we need to create a 
> new mapping for all the releases manually (with the help of the INFRA).
> Note: HADOOP-16091 can solve this problem as it doesn't require branch 
> mapping any more.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16092) Move the source of hadoop/ozone containers to the same repository

2019-05-27 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16849276#comment-16849276
 ] 

Eric Yang commented on HADOOP-16092:


The current user experience has been very poor.  There are a lot of people using Hadoop images on Docker Hub; some images have over 5 million pulls, but very few people use the hadoop-runner image.  This should be enough to show that the current model is a non-starter for most people.  Binaries must be in the docker image for [production 
usage|https://docs.docker.com/compose/production/], as documented on the docker-compose website.  Why insist on the failed approach?
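A rough sketch of the contrast, with the run command taken as illustrative only:

{code}
# Production model documented by docker-compose: the binaries are baked into
# the image, so the exact same artifact can be pulled and run on any host.
docker run -d apache/ozone:0.4.0 ozone datanode

# Bind-mount model: the binaries live on the host and are mounted in at run
# time, so the image only works on hosts that already carry a matching build.
docker run -d -v /opt/hadoop:/opt/hadoop apache/ozone:0.4.0 ozone datanode
{code}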

> Move the source of hadoop/ozone containers to the same repository
> -
>
> Key: HADOOP-16092
> URL: https://issues.apache.org/jira/browse/HADOOP-16092
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Gabor Bota
>Priority: Major
>
> This is proposed by [~eyang] in 
> [this|https://lists.apache.org/thread.html/33ac54bdeacb4beb023ebd452464603aaffa095bd104cb43c22f484e@%3Chdfs-dev.hadoop.apache.org%3E]
>  mailing thread.
> bq. Hadoop community can decide what is best for Hadoop.  My preference is to 
> remove ozone from source tree naming, if Ozone is intended to be subproject 
> of Hadoop for long period of time.  This enables Hadoop community to host 
> docker images for various subproject without having to check out several 
> source tree to trigger a grand build
> As of now the source of  hadoop docker images are stored in the hadoop git 
> repository (docker-* branches) for hadoop and in hadoop-docker-ozone git 
> repository for ozone (all branches).
> As it's discussed in HDDS-851 the biggest challenge to solve here is the 
> mapping between git branches and dockerhub tags. It's not possible to use the 
> captured part of a github branch.
> For example it's not possible to define a rule to build all the ozone-(.*) 
> branches and use a tag $1 for it. Without this support we need to create a 
> new mapping for all the releases manually (with the help of the INFRA).
> Note: HADOOP-16091 can solve this problem as it doesn't require branch 
> mapping any more.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16314) Make sure all end point URL is covered by the same AuthenticationFilter

2019-05-24 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16847729#comment-16847729
 ] 

Eric Yang commented on HADOOP-16314:


[~Prabhu Joseph] Thank you for the patch.  I think this part of the code can be removed:
{code:java}
+  //if (this.securityEnabled) {
+  //   server.initSpnego(conf, hostName, usernameConfKey, keytabConfKey);
+  //}
 {code}
 The rest looks good to me.

> Make sure all end point URL is covered by the same AuthenticationFilter
> ---
>
> Key: HADOOP-16314
> URL: https://issues.apache.org/jira/browse/HADOOP-16314
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Reporter: Eric Yang
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16314-001.patch, Hadoop Web Security.xlsx, 
> scan.txt
>
>
> In the enclosed spreadsheet, it shows the list of web applications deployed 
> by Hadoop, and filters applied to each entry point.
> Hadoop web protocol impersonation has been inconsistent.  Most of entry point 
> do not support ?doAs parameter.  This creates problem for secure gateway like 
> Knox to proxy Hadoop web interface on behave of the end user.  When the 
> receiving end does not check for ?doAs flag, web interface would be accessed 
> using proxy user credential.  This can lead to all kind of security holes 
> using path traversal to exploit Hadoop. 
> In HADOOP-16287, ProxyUserAuthenticationFilter is proposed as solution to 
> solve the web impersonation problem.  This task is to track changes required 
> in Hadoop code base to apply authentication filter globally for each of the 
> web service port.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16325) Add ability to run python test and build docker in docker in start-build-env.sh

2019-05-24 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16847656#comment-16847656
 ] 

Eric Yang commented on HADOOP-16325:


[~arp] Yes, running the commands can serve as a test in the build environment.  The existing unit test scripts already cover this area, which is why no new test was written.

To test the docker build, run:

{code}mvn clean install -Pdist,docker{code}

To verify that pytest works, the HDDS-1458 patch can serve as a test case:

{code}mvn clean verify -Pit{code}

> Add ability to run python test and build docker in docker in 
> start-build-env.sh
> ---
>
> Key: HADOOP-16325
> URL: https://issues.apache.org/jira/browse/HADOOP-16325
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: HADOOP-16325.001.patch, HADOOP-16325.002.patch, 
> HADOOP-16325.003.patch
>
>
> Ozone uses docker-compose, pytest and blockade to simulate network failure.  
> It would be great to have ability to run these integration test tools in the 
> developer docker environment.
> Ozone and YARN have optional profiles to build docker images using -Pdocker.  
> It would be a good addition to have ability to build docker image inside the 
> developer docker environment as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16325) Add ability to run python test and build docker in docker in start-build-env.sh

2019-05-23 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HADOOP-16325:
---
Summary: Add ability to run python test and build docker in docker in 
start-build-env.sh  (was: Add ability to run pytthon test and build docker in 
docker in start-build-env.sh)

> Add ability to run python test and build docker in docker in 
> start-build-env.sh
> ---
>
> Key: HADOOP-16325
> URL: https://issues.apache.org/jira/browse/HADOOP-16325
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: HADOOP-16325.001.patch, HADOOP-16325.002.patch, 
> HADOOP-16325.003.patch
>
>
> Ozone uses docker-compose, pytest and blockade to simulate network failure.  
> It would be great to have ability to run these integration test tools in the 
> developer docker environment.
> Ozone and YARN have optional profiles to build docker images using -Pdocker.  
> It would be a good addition to have ability to build docker image inside the 
> developer docker environment as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16287) KerberosAuthenticationHandler Trusted Proxy Support for Knox

2019-05-23 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16846799#comment-16846799
 ] 

Eric Yang edited comment on HADOOP-16287 at 5/23/19 3:40 PM:
-

I just committed this to trunk.  
Thank you [~Prabhu Joseph] for the patch.
Thank you [~lmccay] for the review.


was (Author: eyang):
I just committed this to trunk.  Thank you [~Prabhu Joseph].

> KerberosAuthenticationHandler Trusted Proxy Support for Knox
> 
>
> Key: HADOOP-16287
> URL: https://issues.apache.org/jira/browse/HADOOP-16287
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: auth
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16287-001.patch, HADOOP-16287-002.patch, 
> HADOOP-16287-004.patch, HADOOP-16287-005.patch, HADOOP-16287-006.patch, 
> HADOOP-16287-007.patch, HADOOP-16827-003.patch
>
>
> Knox passes doAs with end user while accessing RM, WebHdfs Rest Api. 
> Currently KerberosAuthenticationHandler sets the remote user to Knox. Need 
> Trusted Proxy Support by reading doAs query parameter.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16287) KerberosAuthenticationHandler Trusted Proxy Support for Knox

2019-05-23 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HADOOP-16287:
---
   Resolution: Fixed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

I just committed this to trunk.  Thank you [~Prabhu Joseph].

> KerberosAuthenticationHandler Trusted Proxy Support for Knox
> 
>
> Key: HADOOP-16287
> URL: https://issues.apache.org/jira/browse/HADOOP-16287
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: auth
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16287-001.patch, HADOOP-16287-002.patch, 
> HADOOP-16287-004.patch, HADOOP-16287-005.patch, HADOOP-16287-006.patch, 
> HADOOP-16287-007.patch, HADOOP-16827-003.patch
>
>
> Knox passes doAs with end user while accessing RM, WebHdfs Rest Api. 
> Currently KerberosAuthenticationHandler sets the remote user to Knox. Need 
> Trusted Proxy Support by reading doAs query parameter.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16325) Add ability to run pytthon test and build docker in docker in start-build-env.sh

2019-05-22 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HADOOP-16325:
---
Attachment: HADOOP-16325.003.patch

> Add ability to run pytthon test and build docker in docker in 
> start-build-env.sh
> 
>
> Key: HADOOP-16325
> URL: https://issues.apache.org/jira/browse/HADOOP-16325
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: HADOOP-16325.001.patch, HADOOP-16325.002.patch, 
> HADOOP-16325.003.patch
>
>
> Ozone uses docker-compose, pytest and blockade to simulate network failure.  
> It would be great to have ability to run these integration test tools in the 
> developer docker environment.
> Ozone and YARN have optional profiles to build docker images using -Pdocker.  
> It would be a good addition to have ability to build docker image inside the 
> developer docker environment as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16325) Add ability to run pytthon test and build docker in docker in start-build-env.sh

2019-05-22 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HADOOP-16325:
---
Attachment: HADOOP-16325.002.patch

> Add ability to run pytthon test and build docker in docker in 
> start-build-env.sh
> 
>
> Key: HADOOP-16325
> URL: https://issues.apache.org/jira/browse/HADOOP-16325
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: HADOOP-16325.001.patch, HADOOP-16325.002.patch
>
>
> Ozone uses docker-compose, pytest and blockade to simulate network failure.  
> It would be great to have ability to run these integration test tools in the 
> developer docker environment.
> Ozone and YARN have optional profiles to build docker images using -Pdocker.  
> It would be a good addition to have ability to build docker image inside the 
> developer docker environment as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16325) Add ability to run pytthon test and build docker in docker in start-build-env.sh

2019-05-22 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HADOOP-16325:
---
Status: Patch Available  (was: Open)

> Add ability to run pytthon test and build docker in docker in 
> start-build-env.sh
> 
>
> Key: HADOOP-16325
> URL: https://issues.apache.org/jira/browse/HADOOP-16325
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: HADOOP-16325.001.patch
>
>
> Ozone uses docker-compose, pytest and blockade to simulate network failure.  
> It would be great to have ability to run these integration test tools in the 
> developer docker environment.
> Ozone and YARN have optional profiles to build docker images using -Pdocker.  
> It would be a good addition to have ability to build docker image inside the 
> developer docker environment as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16325) Add ability to run pytthon test and build docker in docker in start-build-env.sh

2019-05-22 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HADOOP-16325:
---
Attachment: HADOOP-16325.001.patch

> Add ability to run pytthon test and build docker in docker in 
> start-build-env.sh
> 
>
> Key: HADOOP-16325
> URL: https://issues.apache.org/jira/browse/HADOOP-16325
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: HADOOP-16325.001.patch
>
>
> Ozone uses docker-compose, pytest and blockade to simulate network failure.  
> It would be great to have ability to run these integration test tools in the 
> developer docker environment.
> Ozone and YARN have optional profiles to build docker images using -Pdocker.  
> It would be a good addition to have ability to build docker image inside the 
> developer docker environment as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-16325) Add ability to run pytthon test and build docker in docker in start-build-env.sh

2019-05-22 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang reassigned HADOOP-16325:
--

Assignee: Eric Yang

> Add ability to run pytthon test and build docker in docker in 
> start-build-env.sh
> 
>
> Key: HADOOP-16325
> URL: https://issues.apache.org/jira/browse/HADOOP-16325
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>
> Ozone uses docker-compose, pytest and blockade to simulate network failure.  
> It would be great to have ability to run these integration test tools in the 
> developer docker environment.
> Ozone and YARN have optional profiles to build docker images using -Pdocker.  
> It would be a good addition to have ability to build docker image inside the 
> developer docker environment as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16325) Add ability to run pytthon test and build docker in docker in start-build-env.sh

2019-05-22 Thread Eric Yang (JIRA)
Eric Yang created HADOOP-16325:
--

 Summary: Add ability to run pytthon test and build docker in 
docker in start-build-env.sh
 Key: HADOOP-16325
 URL: https://issues.apache.org/jira/browse/HADOOP-16325
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Eric Yang


Ozone uses docker-compose, pytest and blockade to simulate network failures.  It would be great to have the ability to run these integration test tools in the developer docker environment.

Ozone and YARN have optional profiles to build docker images using -Pdocker.  It would be a good addition to be able to build docker images inside the developer docker environment as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16214) Kerberos name implementation in Hadoop does not accept principals with more than two components

2019-05-21 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16845038#comment-16845038
 ] 

Eric Yang commented on HADOOP-16214:


Would any PMC member like to help resolve the tie on this issue?  Thanks

> Kerberos name implementation in Hadoop does not accept principals with more 
> than two components
> ---
>
> Key: HADOOP-16214
> URL: https://issues.apache.org/jira/browse/HADOOP-16214
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auth
>Reporter: Issac Buenrostro
>Priority: Major
> Attachments: Add-service-freeipa.png, HADOOP-16214.001.patch, 
> HADOOP-16214.002.patch, HADOOP-16214.003.patch, HADOOP-16214.004.patch, 
> HADOOP-16214.005.patch, HADOOP-16214.006.patch, HADOOP-16214.007.patch, 
> HADOOP-16214.008.patch, HADOOP-16214.009.patch, HADOOP-16214.010.patch, 
> HADOOP-16214.011.patch, HADOOP-16214.012.patch, HADOOP-16214.013.patch
>
>
> org.apache.hadoop.security.authentication.util.KerberosName is in charge of 
> converting a Kerberos principal to a user name in Hadoop for all of the 
> services requiring authentication.
> Although the Kerberos spec 
> ([https://web.mit.edu/kerberos/krb5-1.5/krb5-1.5.4/doc/krb5-user/What-is-a-Kerberos-Principal_003f.html])
>  allows for an arbitrary number of components in the principal, the Hadoop 
> implementation will throw a "Malformed Kerberos name:" error if the principal 
> has more than two components (because the regex can only read serviceName and 
> hostName).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-16314) Make sure all end point URL is covered by the same AuthenticationFilter

2019-05-17 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang reassigned HADOOP-16314:
--

Assignee: Prabhu Joseph

> Make sure all end point URL is covered by the same AuthenticationFilter
> ---
>
> Key: HADOOP-16314
> URL: https://issues.apache.org/jira/browse/HADOOP-16314
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Reporter: Eric Yang
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16314-001.patch, Hadoop Web Security.xlsx, 
> scan.txt
>
>
> In the enclosed spreadsheet, it shows the list of web applications deployed 
> by Hadoop, and filters applied to each entry point.
> Hadoop web protocol impersonation has been inconsistent.  Most of entry point 
> do not support ?doAs parameter.  This creates problem for secure gateway like 
> Knox to proxy Hadoop web interface on behave of the end user.  When the 
> receiving end does not check for ?doAs flag, web interface would be accessed 
> using proxy user credential.  This can lead to all kind of security holes 
> using path traversal to exploit Hadoop. 
> In HADOOP-16287, ProxyUserAuthenticationFilter is proposed as solution to 
> solve the web impersonation problem.  This task is to track changes required 
> in Hadoop code base to apply authentication filter globally for each of the 
> web service port.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16092) Move the source of hadoop/ozone containers to the same repository

2019-05-17 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16842799#comment-16842799
 ] 

Eric Yang commented on HADOOP-16092:


{quote}The release process of older versions (eg. create hadoop 2.7.3 image is 
not addressed).{quote}

I keep getting stuck on this point.  Can you elaborate on the creation of a Hadoop 2.7.3 image?  Do you mean creating a Docker image for the Hadoop 2.7.3 release?  For the official Hadoop 2.7.3 release, that ship has sailed; there is no re-release of a shipped version.  An unreleased version like 2.7.8 could be generated from a maven docker submodule via a backport from Hadoop trunk.  Or do you mean creating an Ozone image that works with Hadoop 2.7.3?  If it's the latter, then bind mount the Hadoop directory into the Ozone container.  This can be done for fun, but it is not production grade because there is no upper layer to manage orchestration and distribution of the Ozone image on a Hadoop 2.7.3 cluster.

{quote}The option to update the underlying operating system of the containers 
are not addressed.{quote}

In the Dockerfile, we can manage OS package versions using apt-get install docker-ce=5:18.09.6~3-0~ubuntu-xenial.  The Dockerfile is version controlled in the Hadoop source code, which means we can pin accurate package version dependencies to keep the image up to date.  If a company wants to use a different version of curl, they can simply change the Dockerfile to their specific version for their internal builds.  They can also run "yum update" inside a running container if they want to hot patch it.  If I am not addressing your concern the right way, please elaborate on the questions.
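A small sketch of both options; the package names and version strings are illustrative, not prescriptions:

{code}
# Pin an exact package version at image build time (run from the Dockerfile):
apt-get install -y docker-ce=5:18.09.6~3-0~ubuntu-xenial

# Hot patch a running container without rebuilding the image
# (assumes a yum-based image, per the comment above):
docker exec -it <container-id> yum update -y
{code}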

{quote}It worked well. You can test it:{quote}

A reaction to my complaint does not mean the current arrangement is working.  I shouldn't have had to find out that the image was not tagged and released before the release announcement (May 10th) went out.

{code}$ docker image inspect apache/ozone:0.4.0
[
{
"Id": 
"sha256:2bb4cc4480eff377a6305f61ec9ca340904f95ff16c13d825599ad04fb709ede",
"RepoTags": [
"apache/ozone:0.4.0"
],
"RepoDigests": [

"apache/ozone@sha256:31ccba3675507a182f35648172780dc48605da9fd5910cfdd5a41a594ea36874"
],
"Parent": "",
"Comment": "",
"Created": "2019-05-15T13:27:20.926054735Z",
...
{code}

> Move the source of hadoop/ozone containers to the same repository
> -
>
> Key: HADOOP-16092
> URL: https://issues.apache.org/jira/browse/HADOOP-16092
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Gabor Bota
>Priority: Major
>
> This is proposed by [~eyang] in 
> [this|https://lists.apache.org/thread.html/33ac54bdeacb4beb023ebd452464603aaffa095bd104cb43c22f484e@%3Chdfs-dev.hadoop.apache.org%3E]
>  mailing thread.
> bq. Hadoop community can decide what is best for Hadoop.  My preference is to 
> remove ozone from source tree naming, if Ozone is intended to be subproject 
> of Hadoop for long period of time.  This enables Hadoop community to host 
> docker images for various subproject without having to check out several 
> source tree to trigger a grand build
> As of now the source of  hadoop docker images are stored in the hadoop git 
> repository (docker-* branches) for hadoop and in hadoop-docker-ozone git 
> repository for ozone (all branches).
> As it's discussed in HDDS-851 the biggest challenge to solve here is the 
> mapping between git branches and dockerhub tags. It's not possible to use the 
> captured part of a github branch.
> For example it's not possible to define a rule to build all the ozone-(.*) 
> branches and use a tag $1 for it. Without this support we need to create a 
> new mapping for all the releases manually (with the help of the INFRA).
> Note: HADOOP-16091 can solve this problem as it doesn't require branch 
> mapping any more.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16312) Remove dumb-init from hadoop-runner image

2019-05-17 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16842649#comment-16842649
 ] 

Eric Yang edited comment on HADOOP-16312 at 5/17/19 10:21 PM:
--

{quote}Do you suggest to keep dumb-init and use exec?{quote}

Dumb-init only works one way, pushing the container into background execution, and the current process listing looks like this on my system:

{code}
hadoop   1  0.0  0.0188 4 ?Ss   22:11   0:00 
/usr/local/bin/dumb-init -- /opt/starter.sh /opt/hadoop/bin/ozone datanode 
PATH=/usr/local/sbin:/usr/
hadoop   6  0.0  0.0  11680  1500 ?Ss   22:11   0:00 bash 
/opt/starter.sh /opt/hadoop/bin/ozone datanode 
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/us
hadoop  13  8.0  3.6 4685828 369912 ?  Sl   22:11   0:11  \_ 
/usr/lib/jvm/jre//bin/java -Dproc_datanode -Djava.net.preferIPv4Stack=true 
-Dlog4j.configurationF
{code}

The bash process between dumb-init and java absorbs all signal communication between dumb-init and java, so it is not working as intended.  In addition, to improve the debugging experience in YARN, Kitematic, and K8s, it is better to start the execution in the foreground.  This allows log aggregators to collect the script output.  If a user really wants to daemonize the container, they can use the docker -d option explicitly.

This means we should not use dumb-init when bash -c with set -e covers what we are trying to accomplish with dumb-init.


was (Author: eyang):
{quote}Do you suggest to keep dumb-init and use exec?{quote}

Dumb-init only works one way, pushing the container into background execution.  To improve the debugging experience in YARN, Kitematic, and K8s, it is better to start the execution in the foreground.  This allows log aggregators to collect the script output.  If a user really wants to daemonize the container, they can use the docker -d option explicitly.

> Remove dumb-init from hadoop-runner image
> -
>
> Key: HADOOP-16312
> URL: https://issues.apache.org/jira/browse/HADOOP-16312
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Priority: Major
>
> This issue is reported by [~eyang] in HDDS-1495.
> I think it's better to discuss under a separated issue as it's unrelated to 
> HDDS-1495.
> The original problem description from [~eyang]
> {quote}Dumb-init  is one way to always run contaized program in the 
> background and respawn the program when program fails. This is poor man’s 
> solution for keeping program alive.
> Cluster management software like Kubernetes or YARN have additional policy 
> and logic to start the same docker container on a different node. Therefore, 
> Dumb-init is not recommended for future Hadoop daemons instead allow cluster 
> management software to make decision where to start the container. Dumb-init 
> for demonize docker container will be removed, and change to use 
> entrypoint.sh Docker provides -d flag to demonize foreground process. Most of 
> the management system built on top of Docker, (ie. Kitematic, Apache YARN, 
> and Kubernetes) integrates with Docker container at foreground to  aggregate 
> stdout and stderr output of the containerized program.
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16312) Remove dumb-init from hadoop-runner image

2019-05-17 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16842649#comment-16842649
 ] 

Eric Yang commented on HADOOP-16312:


{quote}Do you suggest to keep dumb-init and use exec?{quote}

Dumb-init only works one way, pushing the container into background execution.  To improve the debugging experience in YARN, Kitematic, and K8s, it is better to start the execution in the foreground.  This allows log aggregators to collect the script output.  If a user really wants to daemonize the container, they can use the docker -d option explicitly.

> Remove dumb-init from hadoop-runner image
> -
>
> Key: HADOOP-16312
> URL: https://issues.apache.org/jira/browse/HADOOP-16312
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Priority: Major
>
> This issue is reported by [~eyang] in HDDS-1495.
> I think it's better to discuss under a separated issue as it's unrelated to 
> HDDS-1495.
> The original problem description from [~eyang]
> {quote}Dumb-init  is one way to always run contaized program in the 
> background and respawn the program when program fails. This is poor man’s 
> solution for keeping program alive.
> Cluster management software like Kubernetes or YARN have additional policy 
> and logic to start the same docker container on a different node. Therefore, 
> Dumb-init is not recommended for future Hadoop daemons instead allow cluster 
> management software to make decision where to start the container. Dumb-init 
> for demonize docker container will be removed, and change to use 
> entrypoint.sh Docker provides -d flag to demonize foreground process. Most of 
> the management system built on top of Docker, (ie. Kitematic, Apache YARN, 
> and Kubernetes) integrates with Docker container at foreground to  aggregate 
> stdout and stderr output of the containerized program.
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16314) Make sure all end point URL is covered by the same AuthenticationFilter

2019-05-15 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HADOOP-16314:
---
Component/s: security

> Make sure all end point URL is covered by the same AuthenticationFilter
> ---
>
> Key: HADOOP-16314
> URL: https://issues.apache.org/jira/browse/HADOOP-16314
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Reporter: Eric Yang
>Priority: Major
> Attachments: Hadoop Web Security.xlsx, scan.txt
>
>
> In the enclosed spreadsheet, it shows the list of web applications deployed 
> by Hadoop, and filters applied to each entry point.
> Hadoop web protocol impersonation has been inconsistent.  Most of entry point 
> do not support ?doAs parameter.  This creates problem for secure gateway like 
> Knox to proxy Hadoop web interface on behave of the end user.  When the 
> receiving end does not check for ?doAs flag, web interface would be accessed 
> using proxy user credential.  This can lead to all kind of security holes 
> using path traversal to exploit Hadoop. 
> In HADOOP-16287, ProxyUserAuthenticationFilter is proposed as solution to 
> solve the web impersonation problem.  This task is to track changes required 
> in Hadoop code base to apply authentication filter globally for each of the 
> web service port.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16314) Make sure all end point URL is covered by the same AuthenticationFilter

2019-05-15 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HADOOP-16314:
---
Attachment: scan.txt

> Make sure all end point URL is covered by the same AuthenticationFilter
> ---
>
> Key: HADOOP-16314
> URL: https://issues.apache.org/jira/browse/HADOOP-16314
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Priority: Major
> Attachments: Hadoop Web Security.xlsx, scan.txt
>
>
> In the enclosed spreadsheet, it shows the list of web applications deployed 
> by Hadoop, and filters applied to each entry point.
> Hadoop web protocol impersonation has been inconsistent.  Most of entry point 
> do not support ?doAs parameter.  This creates problem for secure gateway like 
> Knox to proxy Hadoop web interface on behave of the end user.  When the 
> receiving end does not check for ?doAs flag, web interface would be accessed 
> using proxy user credential.  This can lead to all kind of security holes 
> using path traversal to exploit Hadoop. 
> In HADOOP-16287, ProxyUserAuthenticationFilter is proposed as solution to 
> solve the web impersonation problem.  This task is to track changes required 
> in Hadoop code base to apply authentication filter globally for each of the 
> web service port.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16314) Make sure all end point URL is covered by the same AuthenticationFilter

2019-05-15 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HADOOP-16314:
---
Attachment: Hadoop Web Security.xlsx

> Make sure all end point URL is covered by the same AuthenticationFilter
> ---
>
> Key: HADOOP-16314
> URL: https://issues.apache.org/jira/browse/HADOOP-16314
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Priority: Major
> Attachments: Hadoop Web Security.xlsx
>
>
> In the enclosed spreadsheet, it shows the list of web applications deployed 
> by Hadoop, and filters applied to each entry point.
> Hadoop web protocol impersonation has been inconsistent.  Most of entry point 
> do not support ?doAs parameter.  This creates problem for secure gateway like 
> Knox to proxy Hadoop web interface on behave of the end user.  When the 
> receiving end does not check for ?doAs flag, web interface would be accessed 
> using proxy user credential.  This can lead to all kind of security holes 
> using path traversal to exploit Hadoop. 
> In HADOOP-16287, ProxyUserAuthenticationFilter is proposed as solution to 
> solve the web impersonation problem.  This task is to track changes required 
> in Hadoop code base to apply authentication filter globally for each of the 
> web service port.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16314) Make sure all end point URL is covered by the same AuthenticationFilter

2019-05-15 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HADOOP-16314:
---
Issue Type: Sub-task  (was: Improvement)
Parent: HADOOP-16095

> Make sure all end point URL is covered by the same AuthenticationFilter
> ---
>
> Key: HADOOP-16314
> URL: https://issues.apache.org/jira/browse/HADOOP-16314
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Priority: Major
>
> In the enclosed spreadsheet, it shows the list of web applications deployed 
> by Hadoop, and filters applied to each entry point.
> Hadoop web protocol impersonation has been inconsistent.  Most of entry point 
> do not support ?doAs parameter.  This creates problem for secure gateway like 
> Knox to proxy Hadoop web interface on behave of the end user.  When the 
> receiving end does not check for ?doAs flag, web interface would be accessed 
> using proxy user credential.  This can lead to all kind of security holes 
> using path traversal to exploit Hadoop. 
> In HADOOP-16287, ProxyUserAuthenticationFilter is proposed as solution to 
> solve the web impersonation problem.  This task is to track changes required 
> in Hadoop code base to apply authentication filter globally for each of the 
> web service port.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16287) KerberosAuthenticationHandler Trusted Proxy Support for Knox

2019-05-15 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HADOOP-16287:
---
Issue Type: Sub-task  (was: New Feature)
Parent: HADOOP-16095

> KerberosAuthenticationHandler Trusted Proxy Support for Knox
> 
>
> Key: HADOOP-16287
> URL: https://issues.apache.org/jira/browse/HADOOP-16287
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: auth
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16287-001.patch, HADOOP-16287-002.patch, 
> HADOOP-16287-004.patch, HADOOP-16287-005.patch, HADOOP-16287-006.patch, 
> HADOOP-16287-007.patch, HADOOP-16827-003.patch
>
>
> Knox passes doAs with end user while accessing RM, WebHdfs Rest Api. 
> Currently KerberosAuthenticationHandler sets the remote user to Knox. Need 
> Trusted Proxy Support by reading doAs query parameter.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16095) Support impersonation for AuthenticationFilter

2019-05-15 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16840542#comment-16840542
 ] 

Eric Yang commented on HADOOP-16095:


Patch 004 is the original patch that was posted to the Hadoop security mailing list on Feb 11, 2019.  The patch covers a new AuthenticationFilter that enables impersonation at the web protocol level.  It also covers the changes to apply the AuthenticationFilter globally to the HDFS and YARN applications.  The core filter was refined in HADOOP-16287.  The application of the filter is filed as a separate issue, HADOOP-16314, to ensure all entry points are covered.

> Support impersonation for AuthenticationFilter
> --
>
> Key: HADOOP-16095
> URL: https://issues.apache.org/jira/browse/HADOOP-16095
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: HADOOP-16095.004.patch
>
>
> External services or YARN service may need to call into WebHDFS or YARN REST 
> API on behave of the user using web protocols. It would be good to support 
> impersonation mechanism in AuthenticationFilter or similar extensions. The 
> general design is similar to UserGroupInformation.doAs in RPC layer.
> The calling service credential is verified as a proxy user coming from a 
> trusted host verifying Hadoop proxy user ACL on the server side. If proxy 
> user ACL allows proxy user to become doAs user. HttpRequest object will 
> report REMOTE_USER as doAs user. This feature enables web application logic 
> to be written with minimal changes to call Hadoop API with 
> UserGroupInformation.doAs() wrapper.
> h2. HTTP Request
> A few possible options:
> 1. Using query parameter to pass doAs user:
> {code:java}
> POST /service?doAs=foobar
> Authorization: [proxy user Kerberos token]
> {code}
> 2. Use HTTP Header to pass doAs user:
> {code:java}
> POST /service
> Authorization: [proxy user Kerberos token]
> x-hadoop-doas: foobar
> {code}
> h2. HTTP Response
> 403 - Forbidden (Including impersonation is not allowed)
> h2. Proxy User ACL requirement
> Proxy user kerberos token maps to a service principal, such as 
> yarn/host1.example.com. The host part of the credential and HTTP request 
> origin are both validated with *hadoop.proxyuser.yarn.hosts* ACL. doAs user 
> group membership or identity is checked with either 
> *hadoop.proxyuser.yarn.groups* or *hadoop.proxyuser.yarn.users*. This governs 
> the caller is coming from authorized host and belong to authorized group.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16095) Support impersonation for AuthenticationFilter

2019-05-15 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HADOOP-16095:
---
Attachment: HADOOP-16095.004.patch

> Support impersonation for AuthenticationFilter
> --
>
> Key: HADOOP-16095
> URL: https://issues.apache.org/jira/browse/HADOOP-16095
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: HADOOP-16095.004.patch
>
>
> External services or YARN service may need to call into WebHDFS or YARN REST 
> API on behave of the user using web protocols. It would be good to support 
> impersonation mechanism in AuthenticationFilter or similar extensions. The 
> general design is similar to UserGroupInformation.doAs in RPC layer.
> The calling service credential is verified as a proxy user coming from a 
> trusted host verifying Hadoop proxy user ACL on the server side. If proxy 
> user ACL allows proxy user to become doAs user. HttpRequest object will 
> report REMOTE_USER as doAs user. This feature enables web application logic 
> to be written with minimal changes to call Hadoop API with 
> UserGroupInformation.doAs() wrapper.
> h2. HTTP Request
> A few possible options:
> 1. Using query parameter to pass doAs user:
> {code:java}
> POST /service?doAs=foobar
> Authorization: [proxy user Kerberos token]
> {code}
> 2. Use HTTP Header to pass doAs user:
> {code:java}
> POST /service
> Authorization: [proxy user Kerberos token]
> x-hadoop-doas: foobar
> {code}
> h2. HTTP Response
> 403 - Forbidden (Including impersonation is not allowed)
> h2. Proxy User ACL requirement
> Proxy user kerberos token maps to a service principal, such as 
> yarn/host1.example.com. The host part of the credential and HTTP request 
> origin are both validated with *hadoop.proxyuser.yarn.hosts* ACL. doAs user 
> group membership or identity is checked with either 
> *hadoop.proxyuser.yarn.groups* or *hadoop.proxyuser.yarn.users*. This governs 
> the caller is coming from authorized host and belong to authorized group.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16287) KerberosAuthenticationHandler Trusted Proxy Support for Knox

2019-05-15 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16840528#comment-16840528
 ] 

Eric Yang commented on HADOOP-16287:


+1 on patch 007 to include the new ProxyUserAuthenticationFilter.  For this to 
work globally in a web application such as YARN UI or HDFS UI, we must ensure 
that the same filter mechanism applies to all endpoints.  I opened HADOOP-16314 
to track the required changes in application code.
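As a rough smoke test for that goal (host and port are placeholders; 8088 is the default RM web port), every endpoint behind the same filter should reject an unauthenticated request once Kerberos is enabled:

{code}
# With the authentication filter applied globally, each path should return 401
# for a request that carries no credentials.
for path in /conf /jmx /stacks /logs /ws/v1/cluster/info; do
  code=$(curl -s -o /dev/null -w '%{http_code}' "http://rm-host:8088${path}")
  echo "${path} -> ${code}"
done
{code}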

> KerberosAuthenticationHandler Trusted Proxy Support for Knox
> 
>
> Key: HADOOP-16287
> URL: https://issues.apache.org/jira/browse/HADOOP-16287
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: auth
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16287-001.patch, HADOOP-16287-002.patch, 
> HADOOP-16287-004.patch, HADOOP-16287-005.patch, HADOOP-16287-006.patch, 
> HADOOP-16287-007.patch, HADOOP-16827-003.patch
>
>
> Knox passes doAs with end user while accessing RM, WebHdfs Rest Api. 
> Currently KerberosAuthenticationHandler sets the remote user to Knox. Need 
> Trusted Proxy Support by reading doAs query parameter.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16314) Make sure all end point URL is covered by the same AuthenticationFilter

2019-05-15 Thread Eric Yang (JIRA)
Eric Yang created HADOOP-16314:
--

 Summary: Make sure all end point URL is covered by the same 
AuthenticationFilter
 Key: HADOOP-16314
 URL: https://issues.apache.org/jira/browse/HADOOP-16314
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Eric Yang


The enclosed spreadsheet shows the list of web applications deployed by Hadoop and the filters applied to each entry point.

Hadoop web protocol impersonation has been inconsistent.  Most entry points do not support the ?doAs parameter.  This creates a problem for a secure gateway like Knox that proxies the Hadoop web interfaces on behalf of the end user.  When the receiving end does not check the ?doAs flag, the web interface is accessed with the proxy user's credentials.  This can lead to all kinds of security holes, including path traversal exploits against Hadoop.

In HADOOP-16287, ProxyUserAuthenticationFilter is proposed as the solution to the web impersonation problem.  This task tracks the changes required in the Hadoop code base to apply the authentication filter globally on each web service port.
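For illustration, a trusted-proxy request with the filter in place would look roughly like this (the host is a placeholder, and it assumes the calling principal passes the hadoop.proxyuser.* host and group checks):

{code}
# SPNEGO-authenticate with the proxy service's own Kerberos credentials and
# ask to act on behalf of end user "foobar" via the doAs query parameter.
curl --negotiate -u : "http://rm-host:8088/ws/v1/cluster/apps?doAs=foobar"
{code}

The REMOTE_USER seen by the application would then be foobar rather than the proxy principal.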



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16092) Move the source of hadoop/ozone containers to the same repository

2019-05-14 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16839669#comment-16839669
 ] 

Eric Yang commented on HADOOP-16092:


[~elek] you are misquoting me.  My preference is to remove ozone from the 
source tree naming.  This means there is no source repository prefixed or 
suffixed with ozone.  My message has always been the same, *build docker images 
inline*.  There is only one flag difference for existing developers to 
trigger/untrigger the docker build.  I never advocated for a separate source 
code repository for docker images, but asked you and the community to show me 
the release flow of using a separate source code repository for the docker 
image build.

As I predicted, the separate source code repository release model does not 
work.  We did not build and release hadoop-docker-ozone for the Ozone 0.4.0 
release.  The docker-compose.yaml file must be updated with the newer version 
and the source voted on.  None of this happened.  Once again, this has proven 
that a separate source code repository for the docker image build doesn't work 
while the majority of the Ozone source is in the Hadoop repository.  The 
current implementation and ideas certainly did not come from me.

*It is possible to move the Ozone code into the Apache Incubator and propose it 
as a new incubation project to get a complete separation from Hadoop.  However, 
putting on my Apache member hat, the Ozone community has not grown to the point 
where it can function as an independent project at this time.*

Quote from [Marton's own 
message|https://lists.apache.org/thread.html/ca3a8f37084b2562526384d8d7b2e647f4edc3c145a9dc41f04c23d7@%3Chdfs-dev.hadoop.apache.org%3E]:
  You insisted on a separated branch or a separate repository.  
You may have taken my message too literally with your own train of thought 
about keeping ozone versioning separate from Hadoop versioning.

My logic is that trunk code will branch to ozone-0.4, and trunk moves to 
0.5.0-SNAPSHOT.  Docker images for 0.4.1 will be released from the ozone-0.4 
branch because maven, docker, and git are all tagged with the same version 
prefix.  Hadoop can have its own docker module, backported to branch-2.x if we 
want to release 2.7.8 with a hadoop docker image.  That is a separate work item 
from Ozone.

Ozone versioning can remain separated from Hadoop versioning by ensuring that 
the hadoop-ozone project has its own 0.5.0-SNAPSHOT version in trunk.  In the 
ozone-0.4 branch, the next version is 0.4.1-SNAPSHOT for building the inline 
docker image.  [~elek] does this address your concerns?
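
The alignment described above could look roughly like this (branch, version, 
and image names are illustrative, not an agreed release procedure):

{code}
# Cut the release branch from trunk; trunk then moves on to 0.5.0-SNAPSHOT.
git checkout -b ozone-0.4 trunk

# On the release branch, set the next development version to 0.4.1-SNAPSHOT.
mvn versions:set -DnewVersion=0.4.1-SNAPSHOT

# The inline docker build produces an image tagged with the same version
# prefix, so git branch, maven version, and docker tag stay in sync.
docker tag apache/ozone:0.4.1-SNAPSHOT apache/ozone:0.4.1
{code}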



> Move the source of hadoop/ozone containers to the same repository
> -
>
> Key: HADOOP-16092
> URL: https://issues.apache.org/jira/browse/HADOOP-16092
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Gabor Bota
>Priority: Major
>
> This is proposed by [~eyang] in 
> [this|https://lists.apache.org/thread.html/33ac54bdeacb4beb023ebd452464603aaffa095bd104cb43c22f484e@%3Chdfs-dev.hadoop.apache.org%3E]
>  mailing thread.
> bq. Hadoop community can decide what is best for Hadoop.  My preference is to 
> remove ozone from source tree naming, if Ozone is intended to be subproject 
> of Hadoop for long period of time.  This enables Hadoop community to host 
> docker images for various subproject without having to check out several 
> source tree to trigger a grand build
> As of now the sources of the hadoop docker images are stored in the hadoop git 
> repository (docker-* branches) for hadoop and in the hadoop-docker-ozone git 
> repository for ozone (all branches).
> As discussed in HDDS-851, the biggest challenge to solve here is the 
> mapping between git branches and dockerhub tags. It's not possible to use the 
> captured part of a github branch.
> For example, it's not possible to define a rule to build all the ozone-(.*) 
> branches and use a tag $1 for it. Without this support we need to create a 
> new mapping for all the releases manually (with the help of INFRA).
> Note: HADOOP-16091 can solve this problem as it doesn't require branch 
> mapping any more.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16092) Move the source of hadoop/ozone containers to the same repository

2019-05-13 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16838948#comment-16838948
 ] 

Eric Yang commented on HADOOP-16092:


We don't need a hadoop-docker repository.  We can build the docker image inline 
right from hadoop trunk code.  In HDDS-1495, there is a maven submodule set up 
to build the docker image inline for Ozone.  The build flow builds the jar 
files and tarball, followed by the docker image.  The container tag name and 
hadoop branch name are based on the version number of the release.  There is no 
obstacle that I see to executing the inline docker build from hadoop trunk 
code.  Any concerns [~elek]?
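
For example, assuming the docker image is wired in as a maven submodule behind 
a build profile (the profile name below is hypothetical; see HDDS-1495 for the 
actual wiring), the whole flow runs from one checkout:

{code}
# Build jars and the dist tarball, then the docker image, in a single pass.
mvn clean install -DskipTests -Pdist,docker -Dtar
{code}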

> Move the source of hadoop/ozone containers to the same repository
> -
>
> Key: HADOOP-16092
> URL: https://issues.apache.org/jira/browse/HADOOP-16092
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Gabor Bota
>Priority: Major
>
> This is proposed by [~eyang] in 
> [this|https://lists.apache.org/thread.html/33ac54bdeacb4beb023ebd452464603aaffa095bd104cb43c22f484e@%3Chdfs-dev.hadoop.apache.org%3E]
>  mailing thread.
> bq. Hadoop community can decide what is best for Hadoop.  My preference is to 
> remove ozone from source tree naming, if Ozone is intended to be subproject 
> of Hadoop for long period of time.  This enables Hadoop community to host 
> docker images for various subproject without having to check out several 
> source tree to trigger a grand build
> As of now the sources of the hadoop docker images are stored in the hadoop git 
> repository (docker-* branches) for hadoop and in the hadoop-docker-ozone git 
> repository for ozone (all branches).
> As discussed in HDDS-851, the biggest challenge to solve here is the 
> mapping between git branches and dockerhub tags. It's not possible to use the 
> captured part of a github branch.
> For example, it's not possible to define a rule to build all the ozone-(.*) 
> branches and use a tag $1 for it. Without this support we need to create a 
> new mapping for all the releases manually (with the help of INFRA).
> Note: HADOOP-16091 can solve this problem as it doesn't require branch 
> mapping any more.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16287) KerberosAuthenticationHandler Trusted Proxy Support for Knox

2019-05-13 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1683#comment-1683
 ] 

Eric Yang commented on HADOOP-16287:


[~Prabhu Joseph] Thank you for the patch.  I think we also need documentation 
on how to configure this in core-site.xml.  Can you add some documentation in 
hadoop-common-project/hadoop-common/src/site/markdown/HttpAuthentication.md?  
Thanks
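
A rough sketch of what such documentation could show for core-site.xml (the 
initializer class name and host below are placeholders; the exact class name 
comes from this patch):

{code}
<!-- Enable a proxy-user-aware authentication filter on the HTTP endpoints.
     The class name here is illustrative only. -->
<property>
  <name>hadoop.http.filter.initializers</name>
  <value>org.apache.hadoop.security.ProxyUserAuthenticationFilterInitializer</value>
</property>
<!-- Allow the knox service user to impersonate end users via ?doAs. -->
<property>
  <name>hadoop.proxyuser.knox.hosts</name>
  <value>knox-gateway.example.com</value>
</property>
<property>
  <name>hadoop.proxyuser.knox.groups</name>
  <value>*</value>
</property>
{code}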

> KerberosAuthenticationHandler Trusted Proxy Support for Knox
> 
>
> Key: HADOOP-16287
> URL: https://issues.apache.org/jira/browse/HADOOP-16287
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: auth
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16287-001.patch, HADOOP-16287-002.patch, 
> HADOOP-16287-004.patch, HADOOP-16287-005.patch, HADOOP-16827-003.patch
>
>
> Knox passes doAs with the end user while accessing the RM and WebHDFS REST APIs. 
> Currently KerberosAuthenticationHandler sets the remote user to Knox. Trusted 
> proxy support is needed by reading the doAs query parameter.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16312) Remove dumb-init from hadoop-runner image

2019-05-13 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16838676#comment-16838676
 ] 

Eric Yang commented on HADOOP-16312:


{quote}Would you be so kind Eric Yang to explain it more details? Can you 
please give me an example how scm process is respawned in the ozone compose 
clusters (where hadoop-runner is used together with dumb-init).{quote}

That statement is flawed.  It does not respawn the program.  Instead, I meant 
reparenting orphaned child processes and keeping them alive or reaping them.  It 
is possible to use:

{code}
CMD ["/bin/bash", "-c", "set -e && /opt/apache/ozone/bin/ozone"] 
{code}

to accomplish the same.  Killing the container will result in SIGKILL on 
orphaned child processes, leaving a cleaner system.  We can discuss whether it 
is ok to kill Ozone threads with SIGKILL, with some clean up strategy for empty 
or corrupted files, or whether we want to code the logic into signal handling 
for Ozone threads.  I don't think we can do much to manage some JVM native 
threads, and the current starter.sh will result in a process tree that looks 
like this:

{code}
/usr/bin/dumb-init
 +--- /bin/bash
       +--- java ...
{code}

Bash will not forward signals to Java.  Instead, the proper usage is:

{code}
exec java ...
{code}

The process tree will look like:

{code}
/usr/bin/dumb-init
 +--- java ...
{code}

This will ensure the signal handling is correct.
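
A minimal entrypoint sketch of this pattern (path and arguments are 
illustrative):

{code}
#!/bin/bash
# Fail fast on errors, then replace the shell with the ozone process so that
# signals delivered to the container (e.g. SIGTERM) reach the JVM directly.
set -e
exec /opt/apache/ozone/bin/ozone "$@"
{code}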

> Remove dumb-init from hadoop-runner image
> -
>
> Key: HADOOP-16312
> URL: https://issues.apache.org/jira/browse/HADOOP-16312
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Priority: Major
>
> This issue is reported by [~eyang] in HDDS-1495.
> I think it's better to discuss under a separated issue as it's unrelated to 
> HDDS-1495.
> The original problem description from [~eyang]
> {quote}Dumb-init is one way to always run a containerized program in the 
> background and respawn the program when it fails. This is a poor man’s 
> solution for keeping a program alive.
> Cluster management software like Kubernetes or YARN has additional policy 
> and logic to start the same docker container on a different node. Therefore, 
> dumb-init is not recommended for future Hadoop daemons; instead, allow the 
> cluster management software to decide where to start the container. Dumb-init 
> for daemonizing the docker container will be removed and changed to use 
> entrypoint.sh. Docker provides the -d flag to daemonize a foreground process. 
> Most of the management systems built on top of Docker (i.e. Kitematic, Apache 
> YARN, and Kubernetes) integrate with the Docker container in the foreground to 
> aggregate the stdout and stderr output of the containerized program.
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16214) Kerberos name implementation in Hadoop does not accept principals with more than two components

2019-05-10 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16837459#comment-16837459
 ] 

Eric Yang edited comment on HADOOP-16214 at 5/10/19 9:58 PM:
-

{quote}The auth_to_local rules are and always have served as a whitelist for 
authorization.{quote}

In 
[HADOOP-16023|https://issues.apache.org/jira/browse/HADOOP-16023?focusedCommentId=16737461=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16737461],
 [~bolke] stated that system auth_to_local rules do not necessarily need to map 
to an existing user.  This is the true behavior of MIT Kerberos.  While I 
appreciate your zealous defense of the existing code, it is not the only way 
for authorization to work.  I believe the current patch will work and is 
backward compatible; please do point out if it is not backward compatible.  We 
don't need to debate the right way of using auth_to_local here because 
HADOOP-15996 has already been reviewed and committed.  [~lmc...@apache.org] has 
also asked you to review it, on which you did not comment.  Now the issue is 
supporting multiple components to match MIT Kerberos behavior.  Do you see any 
bug in the patch that should be addressed, other than the philosophical 
difference of view? 


was (Author: eyang):
{quote}The auth_to_local rules are and always have served as a whitelist for 
authorization.{quote}

In 
[HADOOP-16023|https://issues.apache.org/jira/browse/HADOOP-16023?focusedCommentId=16737461=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16737461],
 [~bolke] stated that system auth_to_local rules do not necessarily need to map 
to an existing user.  This is the true behavior of MIT Kerberos.  While I 
appreciate your zealous defense of the existing code, it is not the only way 
for authorization to work.  I believe the current patch will work and is 
backward compatible; please do point out if it is not backward compatible.  We 
don't need to debate HADOOP-16023 here because it has already been reviewed and 
committed.  Now the issue is supporting multiple components to match MIT 
Kerberos behavior.  Do you see any bug in the patch that should be addressed, 
other than the philosophical difference of view? 

> Kerberos name implementation in Hadoop does not accept principals with more 
> than two components
> ---
>
> Key: HADOOP-16214
> URL: https://issues.apache.org/jira/browse/HADOOP-16214
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auth
>Reporter: Issac Buenrostro
>Priority: Major
> Attachments: Add-service-freeipa.png, HADOOP-16214.001.patch, 
> HADOOP-16214.002.patch, HADOOP-16214.003.patch, HADOOP-16214.004.patch, 
> HADOOP-16214.005.patch, HADOOP-16214.006.patch, HADOOP-16214.007.patch, 
> HADOOP-16214.008.patch, HADOOP-16214.009.patch, HADOOP-16214.010.patch, 
> HADOOP-16214.011.patch, HADOOP-16214.012.patch, HADOOP-16214.013.patch
>
>
> org.apache.hadoop.security.authentication.util.KerberosName is in charge of 
> converting a Kerberos principal to a user name in Hadoop for all of the 
> services requiring authentication.
> Although the Kerberos spec 
> ([https://web.mit.edu/kerberos/krb5-1.5/krb5-1.5.4/doc/krb5-user/What-is-a-Kerberos-Principal_003f.html])
>  allows for an arbitrary number of components in the principal, the Hadoop 
> implementation will throw a "Malformed Kerberos name:" error if the principal 
> has more than two components (because the regex can only read serviceName and 
> hostName).
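
For illustration (realm and principal are hypothetical), MIT Kerberos accepts 
an auth_to_local rule keyed on three components, while Hadoop's KerberosName 
rejects such a principal before any rule is even evaluated:

{code}
# Three-component principal, e.g. as issued by FreeIPA:
#   hdfs/datanode/host1.example.com@EXAMPLE.COM
# MIT-style rule that would map it to the short name "hdfs":
RULE:[3:$1@$0](hdfs@EXAMPLE\.COM)s/@.*//
DEFAULT
# Stock Hadoop fails earlier with "Malformed Kerberos name" because its regex
# only recognizes serviceName and hostName.
{code}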



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


