[
https://issues.apache.org/jira/browse/HADOOP-16214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16819548#comment-16819548
]
Eric Yang edited comment on HADOOP-16214 at 4/16/19 10:36 PM:
--------------------------------------------------------------
[~daryn] daryn/@REALM and daryn/ipv6-host@REALM are legal principals. They are
not a regression; they should have been allowed all along. As long as the
hostname field is not populated, they do not introduce security problems to
Hadoop. The DEFAULT rule does not apply to multi-component principals, as
stated in the Kerberos man page. If you are referring to using auth_to_local
as firewall rules to fend off users, and calling that a regression, I have
warned you before that auth_to_local is a utility function, not a firewall
rule. What happens to people who have FreeIPA installed and can resolve id
user//[email protected] at the OS level? Hadoop's rule mechanism
can only map this user to another name, with potential for conflict, instead
of respecting the hierarchical user namespace. Proxy user was designed for the
ACL purpose, and you did not listen. This is the reason we have to circle back
to the CVE discussion for the Hadoop DEFAULT rule: you are defending the wrong
idea.
Real enterprises have been running Kerberos for three decades, and there are
schemas built on top of Kerberos conventions. Without fixing these
incompatibilities, Hadoop cannot work in real enterprises. Please don't take
this personally. We are trying to fix problems here; please keep it civil.
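To make the DEFAULT-rule point above concrete: per the krb5.conf man page, the DEFAULT rule only maps principals that have a single component and are in the default realm. A minimal sketch of that behavior (the realm value and the function name are illustrative, not Hadoop or MIT code):

```python
# Hedged sketch of the krb5 DEFAULT auth_to_local rule (per the krb5.conf man
# page): it applies only to single-component principals in the default realm.
# DEFAULT_REALM and default_rule are illustrative names, not Hadoop code.
DEFAULT_REALM = "EXAMPLE.COM"

def default_rule(principal):
    name, _, realm = principal.partition("@")
    components = name.split("/")
    if realm == DEFAULT_REALM and len(components) == 1:
        return components[0]
    return None  # DEFAULT does not apply; another rule must handle it

print(default_rule("daryn@EXAMPLE.COM"))       # single component: maps to daryn
print(default_rule("daryn/host@EXAMPLE.COM"))  # multi-component: DEFAULT skips it
```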
was (Author: eyang):
[~daryn] daryn/@REALM and daryn/ipv6-host@REALM are legal principals. They are
not a regression; they should have been allowed. As long as the hostname field
is not populated, they do not introduce security problems to Hadoop. The
DEFAULT rule does not apply to multi-component principals, as stated in the
Kerberos man page. I don't understand your statement about a regression.
Real enterprises have been running Kerberos for three decades, and there are
schemas built on top of Kerberos conventions. Without fixing these
incompatibilities, Hadoop cannot work in real enterprises. Please don't take
this personally. We are trying to fix a problem here; please keep it civil.
> Kerberos name implementation in Hadoop does not accept principals with more
> than two components
> -----------------------------------------------------------------------------------------------
>
> Key: HADOOP-16214
> URL: https://issues.apache.org/jira/browse/HADOOP-16214
> Project: Hadoop Common
> Issue Type: Bug
> Components: auth
> Reporter: Issac Buenrostro
> Priority: Major
> Attachments: HADOOP-16214.001.patch, HADOOP-16214.002.patch,
> HADOOP-16214.003.patch, HADOOP-16214.004.patch, HADOOP-16214.005.patch,
> HADOOP-16214.006.patch, HADOOP-16214.007.patch, HADOOP-16214.008.patch,
> HADOOP-16214.009.patch, HADOOP-16214.010.patch, HADOOP-16214.011.patch
>
>
> org.apache.hadoop.security.authentication.util.KerberosName is in charge of
> converting a Kerberos principal to a user name in Hadoop for all of the
> services requiring authentication.
> Although the Kerberos spec
> ([https://web.mit.edu/kerberos/krb5-1.5/krb5-1.5.4/doc/krb5-user/What-is-a-Kerberos-Principal_003f.html])
> allows for an arbitrary number of components in the principal, the Hadoop
> implementation will throw a "Malformed Kerberos name:" error if the principal
> has more than two components (because the regex can only read serviceName and
> hostName).
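The two-component limitation described above can be sketched with a parser of the same shape (serviceName, optional hostName, realm); the pattern below is illustrative of the behavior described in the issue, not Hadoop's exact source:

```python
import re

# Illustrative two-component parser: serviceName, optional /hostName, @realm.
# This mirrors the limitation described in the issue; it is not Hadoop's
# exact regex.
PARSER = re.compile(r"([^/@]*)(/([^/@]*))?@([^/@]*)")

def parse(principal):
    m = PARSER.fullmatch(principal)
    if m is None:
        # Same error message as quoted in the issue description.
        raise ValueError("Malformed Kerberos name: " + principal)
    return m.group(1), m.group(3), m.group(4)

print(parse("daryn@REALM"))       # one component parses fine
print(parse("daryn/host@REALM"))  # two components parse fine
# A third component has nowhere to go, so parsing fails:
# parse("daryn/c1/c2@REALM")  ->  ValueError
```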
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)